| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2208.05017
|
M Charity
|
M Charity, Julian Togelius
|
Aesthetic Bot: Interactively Evolving Game Maps on Twitter
| null | null | null | null |
cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the implementation of the Aesthetic Bot, an automated
Twitter account that posts images of small game maps that are either user-made
or generated from an evolutionary system. The bot then prompts users to vote
via a poll posted in the image's thread for the most aesthetically pleasing
map. This creates a rating system that allows for direct interaction with the
bot in a way that is integrated seamlessly into a user's regularly updated
Twitter content feed. Upon conclusion of each voting round, the bot learns
from the distribution of votes for each map to emulate user preferences for
design and visual aesthetic in order to generate maps that would win future
vote pairings. We discuss the ongoing results and emerging behaviors that have
occurred since the release of this system from both the bot's generation of
game maps and the participating Twitter users.
|
[
{
"created": "Tue, 9 Aug 2022 19:44:47 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Aug 2022 15:43:51 GMT",
"version": "v2"
}
] |
2022-08-25
|
[
[
"Charity",
"M",
""
],
[
"Togelius",
"Julian",
""
]
] |
This paper describes the implementation of the Aesthetic Bot, an automated Twitter account that posts images of small game maps that are either user-made or generated from an evolutionary system. The bot then prompts users to vote via a poll posted in the image's thread for the most aesthetically pleasing map. This creates a rating system that allows for direct interaction with the bot in a way that is integrated seamlessly into a user's regularly updated Twitter content feed. Upon conclusion of each voting round, the bot learns from the distribution of votes for each map to emulate user preferences for design and visual aesthetic in order to generate maps that would win future vote pairings. We discuss the ongoing results and emerging behaviors that have occurred since the release of this system from both the bot's generation of game maps and the participating Twitter users.
|
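A minimal sketch of the vote-driven learning loop this abstract describes: after a poll closes, vote shares update a per-map fitness that the evolver can select on. The names and the update rule are illustrative assumptions, not the bot's actual code.

```python
def update_fitness(fitness, votes):
    """Credit each map with its share of the closed poll's votes so the
    evolutionary generator favors styles that users prefer."""
    total = sum(votes.values()) or 1              # guard against an empty poll
    for map_id, count in votes.items():
        fitness[map_id] = fitness.get(map_id, 0.0) + count / total
    return fitness

fitness = update_fitness({}, {"map_a": 37, "map_b": 12})  # hypothetical poll result
parent = max(fitness, key=fitness.get)                    # seed the next round
```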
2309.10561
|
R\'obert Lakatos
|
Robert Lakatos, Peter Pollner, Andras Hajdu, Tamas Joo
|
A multimodal deep learning architecture for smoking detection with a
small data approach
| null | null |
10.3389/frai.2024.1326050
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Introduction: Covert tobacco advertisements often raise regulatory measures.
This paper shows that artificial intelligence, particularly deep learning,
has great potential for detecting hidden advertising and allows unbiased,
reproducible, and fair quantification of tobacco-related media content.
Methods: We propose an integrated text and image processing model based on deep
learning, generative methods, and human reinforcement, which can detect smoking
cases in both textual and visual formats, even with little available training
data. Results: Our model can achieve 74\% accuracy for images and 98\% for
text. Furthermore, our system integrates the possibility of expert intervention
in the form of human reinforcement. Conclusions: Using the pre-trained
multimodal, image, and text processing models available through deep learning
makes it possible to detect smoking in different media even with little
training data.
|
[
{
"created": "Tue, 19 Sep 2023 12:15:06 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Lakatos",
"Robert",
""
],
[
"Pollner",
"Peter",
""
],
[
"Hajdu",
"Andras",
""
],
[
"Joo",
"Tamas",
""
]
] |
Introduction: Covert tobacco advertisements often raise regulatory measures. This paper shows that artificial intelligence, particularly deep learning, has great potential for detecting hidden advertising and allows unbiased, reproducible, and fair quantification of tobacco-related media content. Methods: We propose an integrated text and image processing model based on deep learning, generative methods, and human reinforcement, which can detect smoking cases in both textual and visual formats, even with little available training data. Results: Our model can achieve 74\% accuracy for images and 98\% for text. Furthermore, our system integrates the possibility of expert intervention in the form of human reinforcement. Conclusions: Using the pre-trained multimodal, image, and text processing models available through deep learning makes it possible to detect smoking in different media even with little training data.
|
2307.05923
|
Kosuke Tatsumura
|
Kosuke Tatsumura, Ryo Hidaka, Jun Nakayama, Tomoya Kashimata, and
Masaya Yamasaki
|
Pairs-trading System using Quantum-inspired Combinatorial Optimization
Accelerator for Optimal Path Search in Market Graphs
|
11 pages, 8 figures
|
IEEE Access 11, pp. 104406 - 104416 (2023)
|
10.1109/ACCESS.2023.3316727
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pairs-trading is a trading strategy that involves matching a long position
with a short position in two stocks aiming at market-neutral profits. While a
typical pairs-trading system monitors the prices of two statistically
correlated stocks for detecting a temporary divergence, monitoring and
analyzing the prices of more stocks would potentially lead to finding more
trading opportunities. Here we report a stock pairs-trading system that finds
trading opportunities for any two stocks in an $N$-stock universe using a
combinatorial optimization accelerator based on a quantum-inspired algorithm
called simulated bifurcation. The trading opportunities are detected through
solving an optimal path search problem in an $N$-node directed graph with edge
weights corresponding to the products of instantaneous price differences and
statistical correlation factors between two stocks. The accelerator is an
Ising machine and operates consecutively to find multiple opportunities in a
market situation, avoiding duplicate detections with a tabu search technique.
It has been demonstrated in the Tokyo Stock Exchange that the FPGA
(field-programmable gate array)-based trading system has a sufficiently low
latency (33 $\mu$s for $N$=15 or 210 pairs) to execute the pairs-trading
strategy based on optimal path search in market graphs.
|
[
{
"created": "Wed, 12 Jul 2023 05:41:39 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Tatsumura",
"Kosuke",
""
],
[
"Hidaka",
"Ryo",
""
],
[
"Nakayama",
"Jun",
""
],
[
"Kashimata",
"Tomoya",
""
],
[
"Yamasaki",
"Masaya",
""
]
] |
Pairs-trading is a trading strategy that involves matching a long position with a short position in two stocks aiming at market-neutral profits. While a typical pairs-trading system monitors the prices of two statistically correlated stocks for detecting a temporary divergence, monitoring and analyzing the prices of more stocks would potentially lead to finding more trading opportunities. Here we report a stock pairs-trading system that finds trading opportunities for any two stocks in an $N$-stock universe using a combinatorial optimization accelerator based on a quantum-inspired algorithm called simulated bifurcation. The trading opportunities are detected through solving an optimal path search problem in an $N$-node directed graph with edge weights corresponding to the products of instantaneous price differences and statistical correlation factors between two stocks. The accelerator is an Ising machine and operates consecutively to find multiple opportunities in a market situation, avoiding duplicate detections with a tabu search technique. It has been demonstrated in the Tokyo Stock Exchange that the FPGA (field-programmable gate array)-based trading system has a sufficiently low latency (33 $\mu$s for $N$=15 or 210 pairs) to execute the pairs-trading strategy based on optimal path search in market graphs.
|
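The market-graph construction in this abstract is concrete enough to sketch: each directed edge weight couples the instantaneous price difference of a pair with its statistical correlation. A hedged illustration with synthetic data follows; the optimal path search itself runs on an Ising machine in the paper and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
prices = rng.uniform(90, 110, size=N)       # instantaneous prices of N stocks
returns = rng.normal(size=(250, N))         # return history for correlations
corr = np.corrcoef(returns, rowvar=False)   # N x N statistical correlation factors

# Directed edge weight i -> j: price divergence scaled by pair correlation.
weights = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            weights[i, j] = (prices[i] - prices[j]) * corr[i, j]

i, j = np.unravel_index(np.argmax(weights), weights.shape)
print(f"strongest single edge: short {i}, long {j} (weight {weights[i, j]:.3f})")
```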
1803.11353
|
Yiluan Guo
|
Yiluan Guo, Ngai-Man Cheung
|
Efficient and Deep Person Re-Identification using Multi-Level Similarity
|
To appear in CVPR2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Person Re-Identification (ReID) requires comparing two images of a person
captured under different conditions. Existing work based on neural networks
often computes the similarity of feature maps from one single convolutional
layer. In this work, we propose an efficient, end-to-end fully convolutional
Siamese network that computes the similarities at multiple levels. We
demonstrate that multi-level similarity can improve the accuracy considerably
using low-complexity network structures in the ReID problem. Specifically, first,
we use several convolutional layers to extract the features of two input
images. Then, we propose the Convolution Similarity Network to compute the
similarity score maps for the inputs. We use spatial transformer networks
(STNs) to determine spatial attention. We propose to apply efficient depth-wise
convolution to compute the similarity. The proposed Convolution Similarity
Networks can be inserted into different convolutional layers to extract visual
similarities at different levels. Furthermore, we use an improved ranking loss
to further improve the performance. Our work is the first to propose to compute
visual similarities at low, middle and high levels for ReID. With extensive
experiments and analysis, we demonstrate that our system, compact yet
effective, can achieve competitive results with much smaller model size and
computational complexity.
|
[
{
"created": "Fri, 30 Mar 2018 06:18:28 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Apr 2018 03:06:54 GMT",
"version": "v2"
}
] |
2018-04-03
|
[
[
"Guo",
"Yiluan",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] |
Person Re-Identification (ReID) requires comparing two images of a person captured under different conditions. Existing work based on neural networks often computes the similarity of feature maps from one single convolutional layer. In this work, we propose an efficient, end-to-end fully convolutional Siamese network that computes the similarities at multiple levels. We demonstrate that multi-level similarity can improve the accuracy considerably using low-complexity network structures in the ReID problem. Specifically, first, we use several convolutional layers to extract the features of two input images. Then, we propose the Convolution Similarity Network to compute the similarity score maps for the inputs. We use spatial transformer networks (STNs) to determine spatial attention. We propose to apply efficient depth-wise convolution to compute the similarity. The proposed Convolution Similarity Networks can be inserted into different convolutional layers to extract visual similarities at different levels. Furthermore, we use an improved ranking loss to further improve the performance. Our work is the first to propose to compute visual similarities at low, middle and high levels for ReID. With extensive experiments and analysis, we demonstrate that our system, compact yet effective, can achieve competitive results with much smaller model size and computational complexity.
|
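The depth-wise convolution similarity idea lends itself to a short sketch: one image's pooled features act as per-channel kernels convolved over the other's feature map. Shapes and the pooling step are assumptions for illustration, not the paper's exact Convolution Similarity Network.

```python
import torch
import torch.nn.functional as F

fa = torch.randn(1, 32, 24, 12)   # feature map of image A: (batch, C, H, W)
fb = torch.randn(1, 32, 24, 12)   # feature map of image B

# Pool image B into a small per-channel template and convolve it depth-wise
# (groups=C) over image A's features to obtain a similarity score map.
kernel = F.adaptive_avg_pool2d(fb, (3, 3)).squeeze(0).unsqueeze(1)  # (32, 1, 3, 3)
sim_map = F.conv2d(fa, kernel, padding=1, groups=32)                # (1, 32, 24, 12)
score = sim_map.mean()            # scalar similarity at this level
```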
2204.03641
|
Liming Jiang
|
Shuai Yang, Liming Jiang, Ziwei Liu, Chen Change Loy
|
Unsupervised Image-to-Image Translation with Generative Prior
|
CVPR 2022. Code: https://github.com/williamyang1991/GP-UNIT Project
page: https://www.mmlab-ntu.com/project/gpunit/
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised image-to-image translation aims to learn the translation between
two visual domains without paired data. Despite the recent progress in image
translation models, it remains challenging to build mappings between complex
domains with drastic visual discrepancies. In this work, we present a novel
framework, Generative Prior-guided UNsupervised Image-to-image Translation
(GP-UNIT), to improve the overall quality and applicability of the translation
algorithm. Our key insight is to leverage the generative prior from pre-trained
class-conditional GANs (e.g., BigGAN) to learn rich content correspondences
across various domains. We propose a novel coarse-to-fine scheme: we first
distill the generative prior to capture a robust coarse-level content
representation that can link objects at an abstract semantic level, based on
which fine-level content features are adaptively learned for more accurate
multi-level content correspondences. Extensive experiments demonstrate the
superiority of our versatile framework over state-of-the-art methods in robust,
high-quality and diversified translations, even for challenging and distant
domains.
|
[
{
"created": "Thu, 7 Apr 2022 17:59:23 GMT",
"version": "v1"
}
] |
2022-04-08
|
[
[
"Yang",
"Shuai",
""
],
[
"Jiang",
"Liming",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Loy",
"Chen Change",
""
]
] |
Unsupervised image-to-image translation aims to learn the translation between two visual domains without paired data. Despite the recent progress in image translation models, it remains challenging to build mappings between complex domains with drastic visual discrepancies. In this work, we present a novel framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), to improve the overall quality and applicability of the translation algorithm. Our key insight is to leverage the generative prior from pre-trained class-conditional GANs (e.g., BigGAN) to learn rich content correspondences across various domains. We propose a novel coarse-to-fine scheme: we first distill the generative prior to capture a robust coarse-level content representation that can link objects at an abstract semantic level, based on which fine-level content features are adaptively learned for more accurate multi-level content correspondences. Extensive experiments demonstrate the superiority of our versatile framework over state-of-the-art methods in robust, high-quality and diversified translations, even for challenging and distant domains.
|
2111.11992
|
Yi Ding
|
Yi Ding, Alex Rich, Mason Wang, Noah Stier, Matthew Turk, Pradeep Sen,
Tobias H\"ollerer
|
Sparse Fusion for Multimodal Transformers
|
11 pages, 4 figures, 5 tables, Yi Ding and Alex Rich contributed
equally
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal classification is a core task in human-centric machine learning.
We observe that information is highly complementary across modalities, thus
unimodal information can be drastically sparsified prior to multimodal fusion
without loss of accuracy. To this end, we present Sparse Fusion Transformers
(SFT), a novel multimodal fusion method for transformers that performs
comparably to existing state-of-the-art methods while having greatly reduced
memory footprint and computation cost. Key to our idea is a sparse-pooling
block that reduces unimodal token sets prior to cross-modality modeling.
Evaluations are conducted on multiple multimodal benchmark datasets for a wide
range of classification tasks. State-of-the-art performance is obtained on
multiple benchmarks under similar experiment conditions, while reporting up to
six-fold reduction in computational cost and memory requirements. Extensive
ablation studies showcase the benefits of combining sparsification and
multimodal learning over naive approaches. This paves the way for enabling
multimodal learning on low-resource devices.
|
[
{
"created": "Tue, 23 Nov 2021 16:43:49 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Nov 2021 21:53:12 GMT",
"version": "v2"
}
] |
2021-11-29
|
[
[
"Ding",
"Yi",
""
],
[
"Rich",
"Alex",
""
],
[
"Wang",
"Mason",
""
],
[
"Stier",
"Noah",
""
],
[
"Turk",
"Matthew",
""
],
[
"Sen",
"Pradeep",
""
],
[
"Höllerer",
"Tobias",
""
]
] |
Multimodal classification is a core task in human-centric machine learning. We observe that information is highly complementary across modalities, thus unimodal information can be drastically sparsified prior to multimodal fusion without loss of accuracy. To this end, we present Sparse Fusion Transformers (SFT), a novel multimodal fusion method for transformers that performs comparably to existing state-of-the-art methods while having greatly reduced memory footprint and computation cost. Key to our idea is a sparse-pooling block that reduces unimodal token sets prior to cross-modality modeling. Evaluations are conducted on multiple multimodal benchmark datasets for a wide range of classification tasks. State-of-the-art performance is obtained on multiple benchmarks under similar experiment conditions, while reporting up to six-fold reduction in computational cost and memory requirements. Extensive ablation studies showcase the benefits of combining sparsification and multimodal learning over naive approaches. This paves the way for enabling multimodal learning on low-resource devices.
|
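A hedged sketch of the sparse-pooling idea: shrink each modality's token set before cross-modal fusion so the fused sequence stays short. The paper's block is learned; the norm-based top-k scoring below is a stand-in assumption.

```python
import torch

def sparse_pool(tokens, k):
    """Keep the k highest-norm tokens of one modality: (B, T, D) -> (B, k, D)."""
    idx = tokens.norm(dim=-1).topk(k, dim=1).indices  # (B, k)
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))

img_tokens = torch.randn(2, 196, 256)    # e.g. ViT patch tokens
txt_tokens = torch.randn(2, 64, 256)     # e.g. text tokens
fused_in = torch.cat([sparse_pool(img_tokens, 16), sparse_pool(txt_tokens, 8)], dim=1)
print(fused_in.shape)                    # torch.Size([2, 24, 256])
```

A transformer fusing this 24-token sequence does far less attention work than one fusing the original 260 tokens, which is where the claimed cost and memory savings come from.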
1610.00580
|
Jacob Abernethy
|
Jacob Abernethy (University of Michigan), Cyrus Anderson (University
of Michigan), Chengyu Dai (University of Michigan), Arya Farahi (University
of Michigan), Linh Nguyen (University of Michigan), Adam Rauh (University of
Michigan), Eric Schwartz (University of Michigan), Wenbo Shen (University of
Michigan), Guangsha Shi (University of Michigan), Jonathan Stroud (University
of Michigan), Xinyu Tan (University of Michigan), Jared Webb (University of
Michigan), Sheng Yang (University of Michigan)
|
Flint Water Crisis: Data-Driven Risk Assessment Via Residential Water
Testing
|
Presented at the Data For Good Exchange 2016
| null | null | null |
cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recovery from the Flint Water Crisis has been hindered by uncertainty in both
the water testing process and the causes of contamination. In this work, we
develop an ensemble of predictive models to assess the risk of lead
contamination in individual homes and neighborhoods. To train these models, we
utilize a wide range of data sources, including voluntary residential water
tests, historical records, and city infrastructure data. Additionally, we use
our models to identify the most prominent factors that contribute to a high
risk of lead contamination. In this analysis, we find that lead service lines
are not the only factor that is predictive of the risk of lead contamination of
water. These results could be used to guide the long-term recovery efforts in
Flint, minimize the immediate damages, and improve resource-allocation
decisions for similar water infrastructure crises.
|
[
{
"created": "Fri, 30 Sep 2016 14:31:11 GMT",
"version": "v1"
}
] |
2016-10-04
|
[
[
"Abernethy",
"Jacob",
"",
"University of Michigan"
],
[
"Anderson",
"Cyrus",
"",
"University\n of Michigan"
],
[
"Dai",
"Chengyu",
"",
"University of Michigan"
],
[
"Farahi",
"Arya",
"",
"University\n of Michigan"
],
[
"Nguyen",
"Linh",
"",
"University of Michigan"
],
[
"Rauh",
"Adam",
"",
"University of\n Michigan"
],
[
"Schwartz",
"Eric",
"",
"University of Michigan"
],
[
"Shen",
"Wenbo",
"",
"University of\n Michigan"
],
[
"Shi",
"Guangsha",
"",
"University of Michigan"
],
[
"Stroud",
"Jonathan",
"",
"University\n of Michigan"
],
[
"Tan",
"Xinyu",
"",
"University of Michigan"
],
[
"Webb",
"Jared",
"",
"University of\n Michigan"
],
[
"Yang",
"Sheng",
"",
"University of Michigan"
]
] |
Recovery from the Flint Water Crisis has been hindered by uncertainty in both the water testing process and the causes of contamination. In this work, we develop an ensemble of predictive models to assess the risk of lead contamination in individual homes and neighborhoods. To train these models, we utilize a wide range of data sources, including voluntary residential water tests, historical records, and city infrastructure data. Additionally, we use our models to identify the most prominent factors that contribute to a high risk of lead contamination. In this analysis, we find that lead service lines are not the only factor that is predictive of the risk of lead contamination of water. These results could be used to guide the long-term recovery efforts in Flint, minimize the immediate damages, and improve resource-allocation decisions for similar water infrastructure crises.
|
2104.05744
|
Tarik A. Rashid
|
Sagar Chhetri, Abeer Alsadoon, Thair Al Dala'in, P. W. C. Prasad,
Tarik A. Rashid, Angelika Maag
|
Deep Learning for Vision-Based Fall Detection System: Enhanced Optical
Dynamic Flow
|
16 pages
|
Computational Intelligence, 2020
|
10.1111/coin.12428
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Accurate fall detection for the assistance of older people is crucial to
reduce incidents of death or injury due to falls. Vision-based fall detection
systems have shown significant results in detecting falls, yet numerous
challenges remain to be resolved. Deep learning has changed the landscape of
vision-based systems, such as action recognition, but it has not been
successfully applied to vision-based fall detection due to the large amounts
of computation power and sample training data required. This research proposes
a vision-based fall detection system that improves detection accuracy in
complex environments, such as changing lighting conditions in a room, and
increases the performance of video-image pre-processing. The proposed system
consists of an Enhanced Dynamic Optical Flow technique that encodes the
temporal data of optical flow videos by the method of rank pooling, thereby
improving the processing time of fall detection and the classification
accuracy under dynamic lighting conditions. The experimental results showed
that classification accuracy improved by around 3% and processing time by 40
to 50 ms. The proposed system concentrates on decreasing the processing time
of fall detection and improving classification accuracy. Meanwhile, it
provides a mechanism for summarizing a video into a single image using a
dynamic optical flow technique, which helps to increase the performance of
image pre-processing steps.
|
[
{
"created": "Thu, 18 Mar 2021 08:14:25 GMT",
"version": "v1"
}
] |
2021-04-14
|
[
[
"Chhetri",
"Sagar",
""
],
[
"Alsadoon",
"Abeer",
""
],
[
"in",
"Thair Al Dala",
""
],
[
"Prasad",
"P. W. C.",
""
],
[
"Rashid",
"Tarik A.",
""
],
[
"Maag",
"Angelika",
""
]
] |
Accurate fall detection for the assistance of older people is crucial to reduce incidents of death or injury due to falls. Vision-based fall detection systems have shown significant results in detecting falls, yet numerous challenges remain to be resolved. Deep learning has changed the landscape of vision-based systems, such as action recognition, but it has not been successfully applied to vision-based fall detection due to the large amounts of computation power and sample training data required. This research proposes a vision-based fall detection system that improves detection accuracy in complex environments, such as changing lighting conditions in a room, and increases the performance of video-image pre-processing. The proposed system consists of an Enhanced Dynamic Optical Flow technique that encodes the temporal data of optical flow videos by the method of rank pooling, thereby improving the processing time of fall detection and the classification accuracy under dynamic lighting conditions. The experimental results showed that classification accuracy improved by around 3% and processing time by 40 to 50 ms. The proposed system concentrates on decreasing the processing time of fall detection and improving classification accuracy. Meanwhile, it provides a mechanism for summarizing a video into a single image using a dynamic optical flow technique, which helps to increase the performance of image pre-processing steps.
|
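Rank pooling is the piece of this pipeline compact enough to sketch. The approximate rank-pooling ("dynamic image") coefficients below follow the standard published formulation; assuming they match this paper's Enhanced Dynamic Optical Flow is a hedge, not a fact from the abstract.

```python
import numpy as np

def dynamic_image(frames):
    """Approximate rank pooling: collapse a (T, H, W) stack of optical-flow
    frames into one summary image whose pixels encode temporal order."""
    T = frames.shape[0]
    harm = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, T + 1))))
    alphas = np.array([2 * (T - t + 1) - (T + 1) * (harm[T] - harm[t - 1])
                       for t in range(1, T + 1)])
    return np.tensordot(alphas, frames, axes=1)

flow = np.random.rand(30, 64, 64)   # placeholder optical-flow frames
summary = dynamic_image(flow)       # single image fed to the classifier
```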
1905.09084
|
Martin Eker{\aa}
|
Martin Eker{\aa}
|
Revisiting Shor's quantum algorithm for computing general discrete
logarithms
|
The pre-print has been updated with an extended heuristic that
better captures the probability distribution for small $\varsigma$, and that
reduces to the original heuristic for $B_\eta = 0$. Associated updates have
been made to the post-processing, to support searching over $\eta$ when
$B_\eta > 0$. Various other associated updates, and improvements, additions
and minor fixes, have been made
| null | null | null |
cs.CR quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We heuristically show that Shor's algorithm for computing general discrete
logarithms achieves an expected success probability of approximately 60% to 82%
in a single run when modified to enable efficient implementation with the
semi-classical Fourier transform. By slightly increasing the number of group
operations that are evaluated quantumly and performing a single limited search
in the classical post-processing, or by performing two limited searches in the
post-processing, we show how the algorithm can be further modified to achieve a
success probability that heuristically exceeds 99% in a single run. We provide
concrete heuristic estimates of the success probability of the modified
algorithm, as a function of the group order $r$, the size of the search space
in the classical post-processing, and the additional number of group operations
evaluated quantumly. In the limit as $r \rightarrow \infty$, we heuristically
show that the success probability tends to one. In analogy with our earlier
works, we show how the modified quantum algorithm may be heuristically
simulated classically when the logarithm $d$ and $r$ are both known.
Furthermore, we heuristically show how slightly better tradeoffs may be
achieved, compared to our earlier works, if $r$ is known when computing $d$. We
generalize our heuristic to cover some of our earlier works, and compare it to
the non-heuristic analyses in those works.
|
[
{
"created": "Wed, 22 May 2019 11:47:38 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 14:29:43 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Mar 2023 13:23:21 GMT",
"version": "v3"
}
] |
2023-03-07
|
[
[
"Ekerå",
"Martin",
""
]
] |
We heuristically show that Shor's algorithm for computing general discrete logarithms achieves an expected success probability of approximately 60% to 82% in a single run when modified to enable efficient implementation with the semi-classical Fourier transform. By slightly increasing the number of group operations that are evaluated quantumly and performing a single limited search in the classical post-processing, or by performing two limited searches in the post-processing, we show how the algorithm can be further modified to achieve a success probability that heuristically exceeds 99% in a single run. We provide concrete heuristic estimates of the success probability of the modified algorithm, as a function of the group order $r$, the size of the search space in the classical post-processing, and the additional number of group operations evaluated quantumly. In the limit as $r \rightarrow \infty$, we heuristically show that the success probability tends to one. In analogy with our earlier works, we show how the modified quantum algorithm may be heuristically simulated classically when the logarithm $d$ and $r$ are both known. Furthermore, we heuristically show how slightly better tradeoffs may be achieved, compared to our earlier works, if $r$ is known when computing $d$. We generalize our heuristic to cover some of our earlier works, and compare it to the non-heuristic analyses in those works.
|
2302.05629
|
Xunyu Zhu
|
Xunyu Zhu, Jian Li, Yong Liu, Weiping Wang
|
Improving Differentiable Architecture Search via Self-Distillation
|
Accepted by Neural Networks
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Differentiable Architecture Search (DARTS) is a simple yet efficient Neural
Architecture Search (NAS) method. During the search stage, DARTS trains a
supernet by jointly optimizing architecture parameters and network parameters.
During the evaluation stage, DARTS discretizes the supernet to derive the
optimal architecture based on architecture parameters. However, recent research
has shown that during the training process, the supernet tends to converge
towards sharp minima rather than flat minima. This is evidenced by the higher
sharpness of the loss landscape of the supernet, which ultimately leads to a
performance gap between the supernet and the optimal architecture. In this
paper, we propose Self-Distillation Differentiable Neural Architecture Search
(SD-DARTS) to alleviate the discretization gap. We utilize self-distillation to
distill knowledge from previous steps of the supernet to guide its training in
the current step, effectively reducing the sharpness of the supernet's loss and
bridging the performance gap between the supernet and the optimal architecture.
Furthermore, we introduce the concept of voting teachers, where multiple
previous supernets are selected as teachers, and their output probabilities are
aggregated through voting to obtain the final teacher prediction. Experimental
results on real datasets demonstrate the advantages of our novel
self-distillation-based NAS method compared to state-of-the-art alternatives.
|
[
{
"created": "Sat, 11 Feb 2023 08:58:55 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Sep 2023 07:09:55 GMT",
"version": "v2"
}
] |
2023-09-04
|
[
[
"Zhu",
"Xunyu",
""
],
[
"Li",
"Jian",
""
],
[
"Liu",
"Yong",
""
],
[
"Wang",
"Weiping",
""
]
] |
Differentiable Architecture Search (DARTS) is a simple yet efficient Neural Architecture Search (NAS) method. During the search stage, DARTS trains a supernet by jointly optimizing architecture parameters and network parameters. During the evaluation stage, DARTS discretizes the supernet to derive the optimal architecture based on architecture parameters. However, recent research has shown that during the training process, the supernet tends to converge towards sharp minima rather than flat minima. This is evidenced by the higher sharpness of the loss landscape of the supernet, which ultimately leads to a performance gap between the supernet and the optimal architecture. In this paper, we propose Self-Distillation Differentiable Neural Architecture Search (SD-DARTS) to alleviate the discretization gap. We utilize self-distillation to distill knowledge from previous steps of the supernet to guide its training in the current step, effectively reducing the sharpness of the supernet's loss and bridging the performance gap between the supernet and the optimal architecture. Furthermore, we introduce the concept of voting teachers, where multiple previous supernets are selected as teachers, and their output probabilities are aggregated through voting to obtain the final teacher prediction. Experimental results on real datasets demonstrate the advantages of our novel self-distillation-based NAS method compared to state-of-the-art alternatives.
|
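A small sketch of the voting-teachers distillation loss: soften and average the predictions of earlier supernet snapshots, then penalize the current step's divergence from that vote. Averaging softened probabilities is one plausible reading of "voting"; the temperature and names are assumptions.

```python
import torch
import torch.nn.functional as F

def voting_teacher_loss(student_logits, teacher_logits_list, T=2.0):
    """Distill the averaged ('voted') softened predictions of earlier
    supernet snapshots into the current training step."""
    vote = torch.stack([F.softmax(t / T, dim=-1)
                        for t in teacher_logits_list]).mean(dim=0)
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1), vote,
                    reduction="batchmean") * T * T

student = torch.randn(8, 10)                        # current supernet outputs
teachers = [torch.randn(8, 10) for _ in range(3)]   # snapshots from earlier steps
loss = voting_teacher_loss(student, teachers)
```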
2402.10196
|
Lingbo Mo
|
Lingbo Mo, Zeyi Liao, Boyuan Zheng, Yu Su, Chaowei Xiao, Huan Sun
|
A Trembling House of Cards? Mapping Adversarial Attacks against Language
Agents
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Language agents powered by large language models (LLMs) have seen exploding
development. Their capability of using language as a vehicle for thought and
communication lends an incredible level of flexibility and versatility. People
have quickly capitalized on this capability to connect LLMs to a wide range of
external components and environments: databases, tools, the Internet, robotic
embodiment, etc. Many believe an unprecedentedly powerful automation technology
is emerging. However, new automation technologies come with new safety risks,
especially for intricate systems like language agents. There is a surprisingly
large gap between the speed and scale of their development and deployment and
our understanding of their safety risks. Are we building a house of cards? In
this position paper, we present the first systematic effort in mapping
adversarial attacks against language agents. We first present a unified
conceptual framework for agents with three major components: Perception, Brain,
and Action. Under this framework, we present a comprehensive discussion and
propose 12 potential attack scenarios against different components of an agent,
covering different attack strategies (e.g., input manipulation, adversarial
demonstrations, jailbreaking, backdoors). We also draw connections to
successful attack strategies previously applied to LLMs. We emphasize the
urgency to gain a thorough understanding of language agent risks before their
widespread deployment.
|
[
{
"created": "Thu, 15 Feb 2024 18:51:32 GMT",
"version": "v1"
}
] |
2024-02-16
|
[
[
"Mo",
"Lingbo",
""
],
[
"Liao",
"Zeyi",
""
],
[
"Zheng",
"Boyuan",
""
],
[
"Su",
"Yu",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Sun",
"Huan",
""
]
] |
Language agents powered by large language models (LLMs) have seen exploding development. Their capability of using language as a vehicle for thought and communication lends an incredible level of flexibility and versatility. People have quickly capitalized on this capability to connect LLMs to a wide range of external components and environments: databases, tools, the Internet, robotic embodiment, etc. Many believe an unprecedentedly powerful automation technology is emerging. However, new automation technologies come with new safety risks, especially for intricate systems like language agents. There is a surprisingly large gap between the speed and scale of their development and deployment and our understanding of their safety risks. Are we building a house of cards? In this position paper, we present the first systematic effort in mapping adversarial attacks against language agents. We first present a unified conceptual framework for agents with three major components: Perception, Brain, and Action. Under this framework, we present a comprehensive discussion and propose 12 potential attack scenarios against different components of an agent, covering different attack strategies (e.g., input manipulation, adversarial demonstrations, jailbreaking, backdoors). We also draw connections to successful attack strategies previously applied to LLMs. We emphasize the urgency to gain a thorough understanding of language agent risks before their widespread deployment.
|
2212.05662
|
Dongju Kang
|
Dongju Kang, Doeun Kang, Sumin Hwangbo, Haider Niaz, Won Bo Lee, J.
Jay Liu, Jonggeol Na
|
Optimal Planning of Hybrid Energy Storage Systems using Curtailed
Renewable Energy through Deep Reinforcement Learning
|
30 pages, 8 figures
| null | null | null |
cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Energy management systems (EMS) are becoming increasingly important in order
to utilize the continuously growing curtailed renewable energy. Promising
energy storage systems (ESS), such as batteries and green hydrogen, should be
employed to maximize the efficiency of energy stakeholders. However, optimal
decision-making, i.e., planning the leveraging between different strategies, is
confronted with the complexity and uncertainties of large-scale problems. Here,
we propose a sophisticated deep reinforcement learning (DRL) methodology with a
policy-based algorithm to realize the real-time optimal ESS planning under the
curtailed renewable energy uncertainty. A quantitative performance comparison
proved that the DRL agent outperforms the scenario-based stochastic
optimization (SO) algorithm, even with a wide action and observation space.
Owing to the uncertainty rejection capability of the DRL, we could confirm a
robust performance under a large uncertainty of the curtailed renewable
energy, maximizing net profit while keeping the system stable. Action-mapping was
performed for visually assessing the action taken by the DRL agent according to
the state. The corresponding results confirmed that the DRL agent learns in a
way similar to how a human expert would act, suggesting reliable application of the
proposed methodology.
|
[
{
"created": "Mon, 12 Dec 2022 02:24:50 GMT",
"version": "v1"
}
] |
2022-12-13
|
[
[
"Kang",
"Dongju",
""
],
[
"Kang",
"Doeun",
""
],
[
"Hwangbo",
"Sumin",
""
],
[
"Niaz",
"Haider",
""
],
[
"Lee",
"Won Bo",
""
],
[
"Liu",
"J. Jay",
""
],
[
"Na",
"Jonggeol",
""
]
] |
Energy management systems (EMS) are becoming increasingly important in order to utilize the continuously growing curtailed renewable energy. Promising energy storage systems (ESS), such as batteries and green hydrogen, should be employed to maximize the efficiency of energy stakeholders. However, optimal decision-making, i.e., planning the leveraging between different strategies, is confronted with the complexity and uncertainties of large-scale problems. Here, we propose a sophisticated deep reinforcement learning (DRL) methodology with a policy-based algorithm to realize the real-time optimal ESS planning under the curtailed renewable energy uncertainty. A quantitative performance comparison proved that the DRL agent outperforms the scenario-based stochastic optimization (SO) algorithm, even with a wide action and observation space. Owing to the uncertainty rejection capability of the DRL, we could confirm a robust performance under a large uncertainty of the curtailed renewable energy, maximizing net profit while keeping the system stable. Action-mapping was performed for visually assessing the action taken by the DRL agent according to the state. The corresponding results confirmed that the DRL agent learns in a way similar to how a human expert would act, suggesting reliable application of the proposed methodology.
|
2210.06341
|
Surya Kant Sahu
|
Surya Kant Sahu
|
TaskMix: Data Augmentation for Meta-Learning of Spoken Intent
Understanding
|
Accepted at Findings of AACL-IJCNLP 2022
| null | null | null |
cs.CL cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Meta-Learning has emerged as a research direction to better transfer
knowledge from related tasks to unseen but related tasks. However,
Meta-Learning requires many training tasks to learn representations that
transfer well to unseen tasks; otherwise, it leads to overfitting, and the
performance degenerates to worse than Multi-task Learning. We show that a
state-of-the-art data augmentation method worsens this problem of overfitting
when the task diversity is low. We propose a simple method, TaskMix, which
synthesizes new tasks by linearly interpolating existing tasks. We compare
TaskMix against many baselines on an in-house multilingual intent
classification dataset of N-Best ASR hypotheses derived from real-life
human-machine telephony utterances and two datasets derived from MTOP. We show
that TaskMix outperforms baselines, alleviates overfitting when task diversity
is low, and does not degrade performance even when it is high.
|
[
{
"created": "Mon, 26 Sep 2022 00:37:40 GMT",
"version": "v1"
}
] |
2022-10-13
|
[
[
"Sahu",
"Surya Kant",
""
]
] |
Meta-Learning has emerged as a research direction to better transfer knowledge from related tasks to unseen but related tasks. However, Meta-Learning requires many training tasks to learn representations that transfer well to unseen tasks; otherwise, it leads to overfitting, and the performance degenerates to worse than Multi-task Learning. We show that a state-of-the-art data augmentation method worsens this problem of overfitting when the task diversity is low. We propose a simple method, TaskMix, which synthesizes new tasks by linearly interpolating existing tasks. We compare TaskMix against many baselines on an in-house multilingual intent classification dataset of N-Best ASR hypotheses derived from real-life human-machine telephony utterances and two datasets derived from MTOP. We show that TaskMix outperforms baselines, alleviates overfitting when task diversity is low, and does not degrade performance even when it is high.
|
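Since TaskMix is described as linear interpolation of existing tasks, a mixup-style sketch captures it directly. Feature shapes, the Beta-distributed mixing weight, and the toy task generator are assumptions for illustration.

```python
import numpy as np

def task_mix(task_a, task_b, alpha=0.5, seed=None):
    """Synthesize a new meta-training task by linearly interpolating the
    features and one-hot label distributions of two existing tasks."""
    lam = np.random.default_rng(seed).beta(alpha, alpha)
    return {"x": lam * task_a["x"] + (1 - lam) * task_b["x"],
            "y": lam * task_a["y"] + (1 - lam) * task_b["y"]}

rng = np.random.default_rng(0)
make_task = lambda: {"x": rng.random((16, 128)),              # utterance features
                     "y": np.eye(4)[rng.integers(0, 4, 16)]}  # one-hot intents
new_task = task_mix(make_task(), make_task(), seed=0)         # extra task diversity
```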
2302.09875
|
Han-Dong Lim
|
Han-Dong Lim and Donghwan Lee
|
Backstepping Temporal Difference Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Off-policy learning ability is an important feature of reinforcement learning
(RL) for practical applications. However, even one of the most elementary RL
algorithms, temporal-difference (TD) learning, is known to suffer from
divergence issues when the off-policy scheme is used together with linear
function approximation. To overcome the divergent behavior, several off-policy
TD-learning algorithms, including gradient-TD learning (GTD) and TD-learning
with correction (TDC), have been developed. In this work, we provide
a unified view of such algorithms from a purely control-theoretic perspective,
and propose a new convergent algorithm. Our method relies on the backstepping
technique, which is widely used in nonlinear control theory. Finally,
convergence of the proposed algorithm is experimentally verified in
environments where the standard TD-learning is known to be unstable.
|
[
{
"created": "Mon, 20 Feb 2023 10:06:49 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Feb 2023 00:51:47 GMT",
"version": "v2"
}
] |
2023-03-01
|
[
[
"Lim",
"Han-Dong",
""
],
[
"Lee",
"Donghwan",
""
]
] |
Off-policy learning ability is an important feature of reinforcement learning (RL) for practical applications. However, even one of the most elementary RL algorithms, temporal-difference (TD) learning, is known to suffer from divergence issues when the off-policy scheme is used together with linear function approximation. To overcome the divergent behavior, several off-policy TD-learning algorithms, including gradient-TD learning (GTD) and TD-learning with correction (TDC), have been developed. In this work, we provide a unified view of such algorithms from a purely control-theoretic perspective, and propose a new convergent algorithm. Our method relies on the backstepping technique, which is widely used in nonlinear control theory. Finally, convergence of the proposed algorithm is experimentally verified in environments where the standard TD-learning is known to be unstable.
|
2107.05090
|
Somali Chaterji
|
Shikhar Suryavansh (1), Abu Benna (2), Chris Guest (2), Somali
Chaterji (3) ((1) Cisco Systems, USA (2) Beaconchain, Canada, (3) Purdue
University, USA)
|
Ambrosia: Reduction in Data Transfer from Sensor to Server for Increased
Lifetime of IoT Sensor Nodes
|
13 pages, 7 figures, Nature Scientific Reports
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Data transmission accounts for significant energy consumption in wireless
sensor networks where streaming data is generated by the sensors. This impedes
their use in many settings, including livestock monitoring over large pastures
(which forms our target application). We present Ambrosia, a lightweight
protocol that utilizes a window-based timeseries forecasting mechanism for data
reduction. Ambrosia employs a configurable error threshold to ensure that the
accuracy of end applications is unaffected by the data transfer reduction.
Experimental evaluations using LoRa and BLE on a real livestock monitoring
deployment demonstrate a 60% reduction in data transmission and a 2X
increase in battery lifetime.
|
[
{
"created": "Sun, 11 Jul 2021 17:07:38 GMT",
"version": "v1"
}
] |
2021-07-13
|
[
[
"Suryavansh",
"Shikhar",
""
],
[
"Benna",
"Abu",
""
],
[
"Guest",
"Chris",
""
],
[
"Chaterji",
"Somali",
""
]
] |
Data transmission accounts for significant energy consumption in wireless sensor networks where streaming data is generated by the sensors. This impedes their use in many settings, including livestock monitoring over large pastures (which forms our target application). We present Ambrosia, a lightweight protocol that utilizes a window-based timeseries forecasting mechanism for data reduction. Ambrosia employs a configurable error threshold to ensure that the accuracy of end applications is unaffected by the data transfer reduction. Experimental evaluations using LoRa and BLE on a real livestock monitoring deployment demonstrate a 60% reduction in data transmission and a 2X increase in battery lifetime.
|
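The protocol's core decision rule can be sketched in a few lines: forecast the next reading from a sliding window and transmit only when the actual reading deviates beyond the configured error threshold. The window-mean forecaster below is a placeholder assumption for the paper's timeseries model; a real node would run the same forecaster server-side to fill in suppressed samples.

```python
from collections import deque

def should_transmit(history, actual, threshold):
    """Send a reading only when it deviates from the window forecast by more
    than the configured error threshold."""
    forecast = sum(history) / len(history) if history else actual
    return abs(actual - forecast) > threshold

window = deque(maxlen=8)
sent = 0
for reading in [20.1, 20.2, 20.1, 20.3, 25.9, 26.0, 26.1]:  # synthetic sensor data
    if should_transmit(window, reading, threshold=1.0):
        sent += 1        # in a real node: radio transmission over LoRa/BLE
    window.append(reading)
print(f"transmitted {sent} of 7 readings")
```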
2009.11838
|
Khaled Abedrabboh
|
Khaled Abedrabboh, Matthias Pilz, Zaid Al-Fagih, Othman S. Al-Fagih,
Jean-Christophe Nebel, Luluwah Al-Fagih
|
Game theory to enhance stock management of Personal Protective Equipment
(PPE) during the COVID-19 outbreak
|
22 pages, 7 figures, published in PLOS ONE
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0246110
|
PLOS ONE 16(2): e0246110 (2021)
|
10.1371/journal.pone.0246110
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Since the outbreak of the COVID-19 pandemic, many healthcare facilities have
suffered from shortages in medical resources, particularly in Personal
Protective Equipment (PPE). In this paper, we propose a game-theoretic approach
to schedule PPE orders among healthcare facilities. In this PPE game, each
independent healthcare facility optimises its own storage utilisation in order
to keep its PPE cost at a minimum. Such a model can reduce peak demand
considerably when applied to a variable PPE consumption profile. Experiments
conducted for NHS England regions using actual data confirm that the challenge
of securing PPE supply during disasters such as COVID-19 can be eased if proper
stock management procedures are adopted. These procedures can include early
stockpiling, increasing storage capacities and implementing measures that can
prolong the time period between successive infection waves, such as social
distancing measures. Simulation results suggest that the provision of PPE
dedicated storage space can be a viable solution to avoid straining PPE supply
chains in case a second wave of COVID-19 infections occurs.
|
[
{
"created": "Thu, 24 Sep 2020 17:36:13 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Sep 2020 09:43:06 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Feb 2021 08:08:35 GMT",
"version": "v3"
}
] |
2021-02-03
|
[
[
"Abedrabboh",
"Khaled",
""
],
[
"Pilz",
"Matthias",
""
],
[
"Al-Fagih",
"Zaid",
""
],
[
"Al-Fagih",
"Othman S.",
""
],
[
"Nebel",
"Jean-Christophe",
""
],
[
"Al-Fagih",
"Luluwah",
""
]
] |
Since the outbreak of the COVID-19 pandemic, many healthcare facilities have suffered from shortages in medical resources, particularly in Personal Protective Equipment (PPE). In this paper, we propose a game-theoretic approach to schedule PPE orders among healthcare facilities. In this PPE game, each independent healthcare facility optimises its own storage utilisation in order to keep its PPE cost at a minimum. Such a model can reduce peak demand considerably when applied to a variable PPE consumption profile. Experiments conducted for NHS England regions using actual data confirm that the challenge of securing PPE supply during disasters such as COVID-19 can be eased if proper stock management procedures are adopted. These procedures can include early stockpiling, increasing storage capacities and implementing measures that can prolong the time period between successive infection waves, such as social distancing measures. Simulation results suggest that the provision of PPE dedicated storage space can be a viable solution to avoid straining PPE supply chains in case a second wave of COVID-19 infections occurs.
|
2309.00023
|
Enneng Yang
|
Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing
Guo, Xingwei Wang, and Dacheng Tao
|
Continual Learning From a Stream of APIs
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual learning (CL) aims to learn new tasks without forgetting previous
tasks. However, existing CL methods require a large amount of raw data, which
is often unavailable due to copyright considerations and privacy risks.
Instead, stakeholders usually release pre-trained machine learning models as a
service (MLaaS), which users can access via APIs. This paper considers two
practical-yet-novel CL settings: data-efficient CL (DECL-APIs) and data-free CL
(DFCL-APIs), which achieve CL from a stream of APIs with partial or no raw
data. Performing CL under these two new settings faces several challenges:
unavailable full raw data, unknown model parameters, heterogeneous models of
arbitrary architecture and scale, and catastrophic forgetting of previous APIs.
To overcome these issues, we propose a novel data-free cooperative continual
distillation learning framework that distills knowledge from a stream of APIs
into a CL model by generating pseudo data, just by querying APIs. Specifically,
our framework includes two cooperative generators and one CL model, forming
their training as an adversarial game. We first use the CL model and the
current API as fixed discriminators to train generators via a derivative-free
method. Generators adversarially generate hard and diverse synthetic data to
maximize the response gap between the CL model and the API. Next, we train the
CL model by minimizing the gap between the responses of the CL model and the
black-box API on synthetic data, to transfer the API's knowledge to the CL
model. Furthermore, we propose a new regularization term based on network
similarity to prevent catastrophic forgetting of previous APIs. Our method
performs comparably to classic CL with full raw data on MNIST and SVHN in
the DFCL-APIs setting. In the DECL-APIs setting, our method achieves 0.97x,
0.75x and 0.69x performance of classic CL on CIFAR10, CIFAR100, and
MiniImageNet.
|
[
{
"created": "Thu, 31 Aug 2023 11:16:00 GMT",
"version": "v1"
}
] |
2023-09-04
|
[
[
"Yang",
"Enneng",
""
],
[
"Wang",
"Zhenyi",
""
],
[
"Shen",
"Li",
""
],
[
"Yin",
"Nan",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Guo",
"Guibing",
""
],
[
"Wang",
"Xingwei",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Continual learning (CL) aims to learn new tasks without forgetting previous tasks. However, existing CL methods require a large amount of raw data, which is often unavailable due to copyright considerations and privacy risks. Instead, stakeholders usually release pre-trained machine learning models as a service (MLaaS), which users can access via APIs. This paper considers two practical-yet-novel CL settings: data-efficient CL (DECL-APIs) and data-free CL (DFCL-APIs), which achieve CL from a stream of APIs with partial or no raw data. Performing CL under these two new settings faces several challenges: unavailable full raw data, unknown model parameters, heterogeneous models of arbitrary architecture and scale, and catastrophic forgetting of previous APIs. To overcome these issues, we propose a novel data-free cooperative continual distillation learning framework that distills knowledge from a stream of APIs into a CL model by generating pseudo data, just by querying APIs. Specifically, our framework includes two cooperative generators and one CL model, forming their training as an adversarial game. We first use the CL model and the current API as fixed discriminators to train generators via a derivative-free method. Generators adversarially generate hard and diverse synthetic data to maximize the response gap between the CL model and the API. Next, we train the CL model by minimizing the gap between the responses of the CL model and the black-box API on synthetic data, to transfer the API's knowledge to the CL model. Furthermore, we propose a new regularization term based on network similarity to prevent catastrophic forgetting of previous APIs. Our method performs comparably to classic CL with full raw data on MNIST and SVHN in the DFCL-APIs setting. In the DECL-APIs setting, our method achieves 0.97x, 0.75x and 0.69x performance of classic CL on CIFAR10, CIFAR100, and MiniImageNet.
|
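A hedged sketch of the distillation step at the heart of this framework: query the black-box API on generated pseudo data and minimize the response gap. The adversarial, derivative-free generator training and the network-similarity regularizer are omitted; all components below are toy stand-ins, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def distill_from_api(cl_model, api, generator, z_dim=64, steps=100, lr=1e-3):
    """Pull the CL model's responses toward a black-box API's responses on
    generator-made pseudo data (no raw data, no API gradients)."""
    opt = torch.optim.Adam(cl_model.parameters(), lr=lr)
    for _ in range(steps):
        x = generator(torch.randn(32, z_dim)).detach()  # pseudo data batch
        with torch.no_grad():
            target = F.softmax(api(x), dim=-1)          # responses are all we get
        loss = F.kl_div(F.log_softmax(cl_model(x), dim=-1), target,
                        reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()

# toy stand-ins: pretend `api` sits behind an MLaaS endpoint
generator = torch.nn.Linear(64, 128)
api = torch.nn.Linear(128, 10)
cl_model = torch.nn.Linear(128, 10)
distill_from_api(cl_model, api, generator)
```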
2206.04316
|
Huishuai Zhang
|
Huishuai Zhang and Da Yu and Yiping Lu and Di He
|
Adversarial Noises Are Linearly Separable for (Nearly) Random Neural
Networks
|
13 pages
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial examples, which are usually generated for specific inputs with a
specific model, are ubiquitous for neural networks. In this paper we unveil a
surprising property of adversarial noises when they are put together, i.e.,
adversarial noises crafted by one-step gradient methods are linearly separable
if equipped with the corresponding labels. We theoretically prove this property
for a two-layer network with randomly initialized entries and the neural
tangent kernel setup where the parameters are not far from initialization. The
proof idea is to show the label information can be efficiently backpropagated
to the input while keeping the linear separability. Our theory and experimental
evidence further show that the linear classifier trained with the adversarial
noises of the training data can well classify the adversarial noises of the
test data, indicating that adversarial noises actually inject a distributional
perturbation to the original data distribution. Furthermore, we empirically
demonstrate that the adversarial noises may become less linearly separable when
the above conditions are compromised while they are still much easier to
classify than original features.
|
[
{
"created": "Thu, 9 Jun 2022 07:26:46 GMT",
"version": "v1"
}
] |
2022-06-10
|
[
[
"Zhang",
"Huishuai",
""
],
[
"Yu",
"Da",
""
],
[
"Lu",
"Yiping",
""
],
[
"He",
"Di",
""
]
] |
Adversarial examples, which are usually generated for specific inputs with a specific model, are ubiquitous for neural networks. In this paper we unveil a surprising property of adversarial noises when they are put together, i.e., adversarial noises crafted by one-step gradient methods are linearly separable if equipped with the corresponding labels. We theoretically prove this property for a two-layer network with randomly initialized entries and the neural tangent kernel setup where the parameters are not far from initialization. The proof idea is to show the label information can be efficiently backpropagated to the input while keeping the linear separability. Our theory and experimental evidence further show that the linear classifier trained with the adversarial noises of the training data can well classify the adversarial noises of the test data, indicating that adversarial noises actually inject a distributional perturbation to the original data distribution. Furthermore, we empirically demonstrate that the adversarial noises may become less linearly separable when the above conditions are compromised while they are still much easier to classify than original features.
|
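The central claim invites a direct numerical check: craft one-step gradient noises for a randomly initialized network, then fit a linear classifier on (noise, label) pairs. Everything below is a synthetic-data illustration, not the paper's experimental setup.

```python
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(100, 256), torch.nn.ReLU(),
                          torch.nn.Linear(256, 10))      # (nearly) random network

x = torch.randn(512, 100, requires_grad=True)            # synthetic "inputs"
y = torch.randint(0, 10, (512,))
F.cross_entropy(net(x), y).backward()
noise = 0.1 * x.grad.sign()                              # one-step (FGSM-style) noises

clf = LogisticRegression(max_iter=1000)                  # plain linear classifier
clf.fit(noise.numpy(), y.numpy())
print("linear-model accuracy on the noises:", clf.score(noise.numpy(), y.numpy()))
```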
1205.2766
|
Rui Ferreira
|
Rui Ferreira, Roberto Grossi, Andrea Marino, Nadia Pisanti, Romeo
Rizzi and Gustavo Sacomoto
|
Optimal Listing of Cycles and st-Paths in Undirected Graphs
|
12 Pages, 7 Page Appendix
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first optimal algorithm for the classical problem of listing
all the cycles in an undirected graph. We exploit their properties so that the
total cost is the time taken to read the input graph plus the time to list the
output, namely, the edges in each of the cycles. The algorithm uses a reduction
to the problem of listing all the paths from a vertex s to a vertex t which we
also solve optimally.
|
[
{
"created": "Sat, 12 May 2012 11:12:10 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jul 2012 13:10:33 GMT",
"version": "v2"
}
] |
2012-07-06
|
[
[
"Ferreira",
"Rui",
""
],
[
"Grossi",
"Roberto",
""
],
[
"Marino",
"Andrea",
""
],
[
"Pisanti",
"Nadia",
""
],
[
"Rizzi",
"Romeo",
""
],
[
"Sacomoto",
"Gustavo",
""
]
] |
We present the first optimal algorithm for the classical problem of listing all the cycles in an undirected graph. We exploit their properties so that the total cost is the time taken to read the input graph plus the time to list the output, namely, the edges in each of the cycles. The algorithm uses a reduction to the problem of listing all the paths from a vertex s to a vertex t which we also solve optimally.
|
cs/0701144
|
Andreas U. Schmidt
|
Nicolai Kuntze and Andreas U. Schmidt
|
Trusted Ticket Systems and Applications
|
Accepted full research paper at IFIP sec2007, Sandton, South Africa,
14-16 May 2007
| null | null | null |
cs.CR
| null |
Trusted Computing is a security base technology that will perhaps be
ubiquitous in a few years in personal computers and mobile devices alike.
Despite its neutrality with respect to applications, it has raised some privacy
concerns. We show that trusted computing can be applied for service access
control in a manner protecting users' privacy. We construct a ticket system --
a concept which is at the heart of Identity Management -- relying solely on the
capabilities of the trusted platform module and the standards specified by the
Trusted Computing Group. Two examples show how it can be used for pseudonymous
and protected service access.
|
[
{
"created": "Tue, 23 Jan 2007 14:26:20 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Kuntze",
"Nicolai",
""
],
[
"Schmidt",
"Andreas U.",
""
]
] |
Trusted Computing is a security base technology that will perhaps be ubiquitous in a few years in personal computers and mobile devices alike. Despite its neutrality with respect to applications, it has raised some privacy concerns. We show that trusted computing can be applied for service access control in a manner protecting users' privacy. We construct a ticket system -- a concept which is at the heart of Identity Management -- relying solely on the capabilities of the trusted platform module and the standards specified by the Trusted Computing Group. Two examples show how it can be used for pseudonymous and protected service access.
|
2101.01944
|
Uwe Egbert Wolter
|
Uwe Wolter
|
Logics of First-Order Constraints -- A Category Independent Approach
|
23 pages, presented at the 8th Conference on Algebra and Coalgebra in
Computer Science (CALCO 2019), London, UK, June 3-6, 2019
| null | null | null |
cs.LO cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Reflecting our experiences in areas like Algebraic Specifications, Abstract
Model Theory, Graph Transformations, and Model Driven Software Engineering
(MDSE), we present a general, category independent approach to Logics of
First-Order Constraints (LFOC). Traditional First-Order Logic, Description
Logic and the sketch framework are discussed as examples. We use the concept of
institution [Diaconescu08,GoguenBurstall92] as a guideline to describe LFOC's.
The main result states that any choice of the six parameters we are going to
describe gives us a corresponding "institution of constraints" at hand. The
"presentations" for an institution of constraints can be characterized as
"first-order sketches". As a corresponding variant of the "sketch-entailments"
in [Makkai97], we finally introduce "sketch rules" to equip LFOC's with the
necessary expressive power.
|
[
{
"created": "Wed, 6 Jan 2021 09:55:43 GMT",
"version": "v1"
}
] |
2021-01-07
|
[
[
"Wolter",
"Uwe",
""
]
] |
Reflecting our experiences in areas like Algebraic Specifications, Abstract Model Theory, Graph Transformations, and Model Driven Software Engineering (MDSE), we present a general, category independent approach to Logics of First-Order Constraints (LFOC). Traditional First-Order Logic, Description Logic and the sketch framework are discussed as examples. We use the concept of institution [Diaconescu08,GoguenBurstall92] as a guideline to describe LFOC's. The main result states that any choice of the six parameters we are going to describe gives us a corresponding "institution of constraints" at hand. The "presentations" for an institution of constraints can be characterized as "first-order sketches". As a corresponding variant of the "sketch-entailments" in [Makkai97], we finally introduce "sketch rules" to equip LFOC's with the necessary expressive power.
|
1905.05087
|
Zhiqiang Gong
|
Zhiqiang Gong and Ping Zhong and Weidong Hu and Zixuan Xiao and Xuping
Yin
|
A novel statistical metric learning for hyperspectral image
classification
|
Submitted to Whispers2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a novel statistical metric learning is developed for
spectral-spatial classification of the hyperspectral image. First, the standard
variance of the samples of each class in each batch is used to decrease the
intra-class variance within each class. Then, the distances between the means
of different classes are used to penalize the inter-class variance of the
training samples. Finally, the standard variance between the means of different
classes is added as an additional diversity term to repulse different classes
from each other. Experiments have been conducted on two real-world
hyperspectral image datasets, and the results show the effectiveness of the
proposed statistical metric learning.
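
One possible reading of the three statistical terms as a differentiable loss is sketched below; the helper name, the signs, and the unit weights are our assumptions for illustration, not the authors' implementation.

import torch

def statistical_metric_loss(feats, labels, a=1.0, b=1.0, c=1.0):
    # Assumes every class in the batch has at least two samples.
    classes = labels.unique()
    means = torch.stack([feats[labels == k].mean(0) for k in classes])
    # 1) intra-class term: variance of samples around their class mean
    intra = torch.stack([feats[labels == k].var(0).mean() for k in classes]).mean()
    # 2) inter-class term: average distance between class means (maximized)
    m = len(classes)
    inter = torch.cdist(means, means)[~torch.eye(m, dtype=torch.bool)].mean()
    # 3) diversity term: variance between the class means (maximized)
    diversity = means.var(0).mean()
    return a * intra - b * inter - c * diversity

feats = torch.randn(32, 64, requires_grad=True)
labels = torch.randint(0, 4, (32,))
statistical_metric_loss(feats, labels).backward()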
|
[
{
"created": "Mon, 13 May 2019 15:28:15 GMT",
"version": "v1"
}
] |
2019-05-14
|
[
[
"Gong",
"Zhiqiang",
""
],
[
"Zhong",
"Ping",
""
],
[
"Hu",
"Weidong",
""
],
[
"Xiao",
"Zixuan",
""
],
[
"Yin",
"Xuping",
""
]
] |
In this paper, a novel statistical metric learning is developed for spectral-spatial classification of the hyperspectral image. First, the standard variance of the samples of each class in each batch is used to decrease the intra-class variance within each class. Then, the distances between the means of different classes are used to penalize the inter-class variance of the training samples. Finally, the standard variance between the means of different classes is added as an additional diversity term to repulse different classes from each other. Experiments have been conducted on two real-world hyperspectral image datasets, and the results show the effectiveness of the proposed statistical metric learning.
|
2101.08937
|
Jin Young Shin
|
Jin young Shin, Cheolhyeong Kim, Hyung Ju Hwang
|
Prior Preference Learning from Experts:Designing a Reward with Active
Inference
|
This paper is accepted to Neurocomputing
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active inference may be defined as Bayesian modeling of a brain with a
biologically plausible model of the agent. Its primary idea relies on the free
energy principle and the prior preference of the agent. An agent will choose an
action that leads to its prior preference for a future observation. In this
paper, we claim that active inference can be interpreted using reinforcement
learning (RL) algorithms and find a theoretical connection between them. We
extend the concept of expected free energy (EFE), which is a core quantity in
active inference, and claim that EFE can be treated as a negative value
function. Motivated by the concept of prior preference and a theoretical
connection, we propose a simple but novel method for learning a prior
preference from experts. This illustrates that the problem of inverse RL can
be approached from a new perspective of active inference. Experimental results
of prior preference learning show the possibility of active inference with
EFE-based rewards and its application to an inverse RL problem.
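
Reading negative EFE as a value function suggests rewarding observations by their log prior preference; the toy below (categorical observations, smoothed counts, all names invented) merely illustrates how a prior preference learned from expert data could define such a reward.

import numpy as np

expert_obs = np.array([3, 3, 2, 3, 1, 3, 2])  # observations seen in expert data
n_obs = 4
counts = np.bincount(expert_obs, minlength=n_obs) + 1e-3   # smoothed counts
prior_pref = counts / counts.sum()            # learned prior preference p(o)

def reward(o):
    # Negative-EFE-style reward: log prior preference of the outcome.
    return np.log(prior_pref[o])

print([round(reward(o), 3) for o in range(n_obs)])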
|
[
{
"created": "Fri, 22 Jan 2021 04:03:45 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 05:02:02 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Dec 2021 04:50:21 GMT",
"version": "v3"
}
] |
2021-12-14
|
[
[
"Shin",
"Jin young",
""
],
[
"Kim",
"Cheolhyeong",
""
],
[
"Hwang",
"Hyung Ju",
""
]
] |
Active inference may be defined as Bayesian modeling of a brain with a biologically plausible model of the agent. Its primary idea relies on the free energy principle and the prior preference of the agent. An agent will choose an action that leads to its prior preference for a future observation. In this paper, we claim that active inference can be interpreted using reinforcement learning (RL) algorithms and find a theoretical connection between them. We extend the concept of expected free energy (EFE), which is a core quantity in active inference, and claim that EFE can be treated as a negative value function. Motivated by the concept of prior preference and a theoretical connection, we propose a simple but novel method for learning a prior preference from experts. This illustrates that the problem of inverse RL can be approached from a new perspective of active inference. Experimental results of prior preference learning show the possibility of active inference with EFE-based rewards and its application to an inverse RL problem.
|
1702.03767
|
Patrick O. Glauner
|
Patrick Glauner, Angelo Migliosi, Jorge Meira, Petko Valtchev, Radu
State, Franck Bettinger
|
Is Big Data Sufficient for a Reliable Detection of Non-Technical Losses?
|
Proceedings of the 19th International Conference on Intelligent
System Applications to Power Systems (ISAP 2017)
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-technical losses (NTL) occur during the distribution of electricity in
power grids and include, but are not limited to, electricity theft and faulty
meters. In emerging countries, they may range up to 40% of the total
electricity distributed. In order to detect NTLs, machine learning methods are
used that learn irregular consumption patterns from customer data and
inspection results. The Big Data paradigm followed in modern machine learning
reflects the desire of deriving better conclusions from simply analyzing more
data, without the necessity of looking at theory and models. However, the
sample of inspected customers may be biased, i.e. it does not represent the
population of all customers. As a consequence, machine learning models trained
on these inspection results are biased as well and therefore lead to unreliable
predictions of whether customers cause NTL or not. In machine learning, this
issue is called covariate shift and has not been addressed in the literature on
NTL detection yet. In this work, we present a novel framework for quantifying
and visualizing covariate shift. We apply it to a commercial data set from
Brazil that consists of 3.6M customers and 820K inspection results. We show
that some features have a stronger covariate shift than others, making
predictions less reliable. In particular, previous inspections were focused on
certain neighborhoods or customer classes and were not sufficiently
spread among the population of customers. This framework is about to be
deployed in a commercial product for NTL detection.
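
A common way to quantify covariate shift of this kind, shown below as a generic sketch (not necessarily the paper's exact framework), is a domain classifier that tries to tell inspected customers apart from the population; a cross-validated AUC near 0.5 indicates little detectable shift.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_inspected = rng.normal(0.5, 1.0, size=(500, 8))   # biased inspection sample
X_population = rng.normal(0.0, 1.0, size=(500, 8))  # sample of all customers
X = np.vstack([X_inspected, X_population])
d = np.array([1] * 500 + [0] * 500)                 # domain labels

auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X, d, cv=5, scoring="roc_auc").mean()
print(f"domain-classifier AUC: {auc:.2f} (0.5 = no detectable shift)")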
|
[
{
"created": "Mon, 13 Feb 2017 13:33:47 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jul 2017 04:35:45 GMT",
"version": "v2"
}
] |
2017-07-26
|
[
[
"Glauner",
"Patrick",
""
],
[
"Migliosi",
"Angelo",
""
],
[
"Meira",
"Jorge",
""
],
[
"Valtchev",
"Petko",
""
],
[
"State",
"Radu",
""
],
[
"Bettinger",
"Franck",
""
]
] |
Non-technical losses (NTL) occur during the distribution of electricity in power grids and include, but are not limited to, electricity theft and faulty meters. In emerging countries, they may range up to 40% of the total electricity distributed. In order to detect NTLs, machine learning methods are used that learn irregular consumption patterns from customer data and inspection results. The Big Data paradigm followed in modern machine learning reflects the desire of deriving better conclusions from simply analyzing more data, without the necessity of looking at theory and models. However, the sample of inspected customers may be biased, i.e. it does not represent the population of all customers. As a consequence, machine learning models trained on these inspection results are biased as well and therefore lead to unreliable predictions of whether customers cause NTL or not. In machine learning, this issue is called covariate shift and has not been addressed in the literature on NTL detection yet. In this work, we present a novel framework for quantifying and visualizing covariate shift. We apply it to a commercial data set from Brazil that consists of 3.6M customers and 820K inspection results. We show that some features have a stronger covariate shift than others, making predictions less reliable. In particular, previous inspections were focused on certain neighborhoods or customer classes and were not sufficiently spread among the population of customers. This framework is about to be deployed in a commercial product for NTL detection.
|
1806.01041
|
Francisco Crespo Mr
|
Francisco Crespo, Estefan\'ia Mart\'in
|
Applications for mobile devices focused on support for the autism spectrum
disorder population and/or people in their immediate environment in their
daily lives: a systematic and practical review from a Spanish-speaking
perspective
|
16 pages, 8 figures, 2 tables
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present study reviews scientific publications on applications focused on
autism, most of them developed for communication, social behavior, and
learning, which coincides with what is observed in a digital market that
practically lacks scientific validation. The study also found only 135 such
applications with a Spanish version available (in a practical sense),
developed mostly for the daily life of an autistic person and/or people in
their immediate environment. Using these applications yields positive results
in terms of learning and the permanent adoption of behaviors and skills, but
it is necessary to deepen research and further develop applications focused on
leisure, resources for parents and professionals, and support for the needs of
autistic adults.
|
[
{
"created": "Mon, 4 Jun 2018 10:53:57 GMT",
"version": "v1"
}
] |
2018-06-05
|
[
[
"Crespo",
"Francisco",
""
],
[
"Martín",
"Estefanía",
""
]
] |
The present study reviews scientific publications on applications focused on autism, most of them developed for communication, social behavior, and learning, which coincides with what is observed in a digital market that practically lacks scientific validation. The study also found only 135 such applications with a Spanish version available (in a practical sense), developed mostly for the daily life of an autistic person and/or people in their immediate environment. Using these applications yields positive results in terms of learning and the permanent adoption of behaviors and skills, but it is necessary to deepen research and further develop applications focused on leisure, resources for parents and professionals, and support for the needs of autistic adults.
|
2204.05576
|
Yuan Tian
|
Yuan Tian, Klaus-Rudolf Kladny, Qin Wang, Zhiwu Huang, Olga Fink
|
Multi-agent Actor-Critic with Time Dynamical Opponent Model
| null | null | null | null |
cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In multi-agent reinforcement learning, multiple agents learn simultaneously
while interacting with a common environment and each other. Since the agents
adapt their policies during learning, not only does the behavior of a single
agent become non-stationary, but so does the environment as perceived by the
agent.
This renders it particularly challenging to perform policy improvement. In this
paper, we propose to exploit the fact that the agents seek to improve their
expected cumulative reward and introduce a novel \textit{Time Dynamical
Opponent Model} (TDOM) to encode the knowledge that the opponent policies tend
to improve over time. We motivate TDOM theoretically by deriving a lower bound
of the log objective of an individual agent and further propose
\textit{Multi-Agent Actor-Critic with Time Dynamical Opponent Model} (TDOM-AC).
We evaluate the proposed TDOM-AC on a differential game and the Multi-agent
Particle Environment. We show empirically that TDOM achieves superior opponent
behavior prediction during test time. The proposed TDOM-AC methodology
outperforms state-of-the-art Actor-Critic methods on the performed experiments
in cooperative and \textbf{especially} in mixed cooperative-competitive
environments. TDOM-AC results in more stable training and faster convergence.
|
[
{
"created": "Tue, 12 Apr 2022 07:16:15 GMT",
"version": "v1"
}
] |
2022-04-13
|
[
[
"Tian",
"Yuan",
""
],
[
"Kladny",
"Klaus-Rudolf",
""
],
[
"Wang",
"Qin",
""
],
[
"Huang",
"Zhiwu",
""
],
[
"Fink",
"Olga",
""
]
] |
In multi-agent reinforcement learning, multiple agents learn simultaneously while interacting with a common environment and each other. Since the agents adapt their policies during learning, not only does the behavior of a single agent become non-stationary, but so does the environment as perceived by the agent. This renders it particularly challenging to perform policy improvement. In this paper, we propose to exploit the fact that the agents seek to improve their expected cumulative reward and introduce a novel \textit{Time Dynamical Opponent Model} (TDOM) to encode the knowledge that the opponent policies tend to improve over time. We motivate TDOM theoretically by deriving a lower bound of the log objective of an individual agent and further propose \textit{Multi-Agent Actor-Critic with Time Dynamical Opponent Model} (TDOM-AC). We evaluate the proposed TDOM-AC on a differential game and the Multi-agent Particle Environment. We show empirically that TDOM achieves superior opponent behavior prediction during test time. The proposed TDOM-AC methodology outperforms state-of-the-art Actor-Critic methods on the performed experiments in cooperative and \textbf{especially} in mixed cooperative-competitive environments. TDOM-AC results in more stable training and faster convergence.
|
1707.01089
|
Ingo Weber
|
Christopher Klinkm\"uller and Ingo Weber
|
Control Flow Information Analysis in Process Model Matching Techniques
| null | null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online Appendix to: "Analyzing Control Flow Information to Improve the
Effectiveness of Process Model Matching Techniques" by the same authors.
|
[
{
"created": "Tue, 4 Jul 2017 04:09:21 GMT",
"version": "v1"
}
] |
2017-07-06
|
[
[
"Klinkmüler",
"Christopher",
""
],
[
"Weber",
"Ingo",
""
]
] |
Online Appendix to: "Analyzing Control Flow Information to Improve the Effectiveness of Process Model Matching Techniques" by the same authors.
|
1201.2084
|
Rafi Muhammad
|
Mehwish Aziz, Muhammad Rafi
|
Sentence based semantic similarity measure for blog-posts
|
6th International Conference on Digital Content, Multimedia
Technology and its Applications (IDC), 2010
| null | null | null |
cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blogs - online digital diary-like applications on Web 2.0 - have opened a new
and easy way for every Internet user to voice opinions, thoughts, and likes
and dislikes to the world. The blogosphere is without doubt the largest
user-generated content repository, full of knowledge. The potential of this
knowledge is still to be explored. Knowledge discovery from this new genre is
quite difficult and challenging, as it is totally different from other popular
genres of web applications like the World Wide Web (WWW). Blog posts, unlike
web documents, are small in size, thus lack context, and contain relaxed
grammatical structures. Hence, standard text similarity measures fail to
provide good results. In this paper, the specialized requirements for
comparing a pair of blog posts are thoroughly investigated. Based on this, we
propose a novel algorithm for a sentence-oriented semantic similarity measure
of a pair of blog posts. We applied this algorithm to a subset of the
political blogosphere of Pakistan, to cluster the blogs on different issues of
the political realm and to identify the influential bloggers.
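
A generic sentence-oriented similarity can be sketched as a symmetric best-match average of TF-IDF cosine scores between the sentences of two posts; the sketch below illustrates only the general idea and is not the authors' algorithm.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def post_similarity(post_a, post_b):
    sa = [s for s in post_a.split(".") if s.strip()]
    sb = [s for s in post_b.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit(sa + sb)
    sim = cosine_similarity(tfidf.transform(sa), tfidf.transform(sb))
    # symmetric average of each sentence's best match in the other post
    return 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())

a = "The election was rigged. Turnout was low in rural districts."
b = "Observers claim the election was unfair. Rural turnout dropped sharply."
print(round(post_similarity(a, b), 3))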
|
[
{
"created": "Tue, 10 Jan 2012 15:33:32 GMT",
"version": "v1"
}
] |
2012-01-11
|
[
[
"Aziz",
"Mehwish",
""
],
[
"Rafi",
"Muhammad",
""
]
] |
Blogs - online digital diary-like applications on Web 2.0 - have opened a new and easy way for every Internet user to voice opinions, thoughts, and likes and dislikes to the world. The blogosphere is without doubt the largest user-generated content repository, full of knowledge. The potential of this knowledge is still to be explored. Knowledge discovery from this new genre is quite difficult and challenging, as it is totally different from other popular genres of web applications like the World Wide Web (WWW). Blog posts, unlike web documents, are small in size, thus lack context, and contain relaxed grammatical structures. Hence, standard text similarity measures fail to provide good results. In this paper, the specialized requirements for comparing a pair of blog posts are thoroughly investigated. Based on this, we propose a novel algorithm for a sentence-oriented semantic similarity measure of a pair of blog posts. We applied this algorithm to a subset of the political blogosphere of Pakistan, to cluster the blogs on different issues of the political realm and to identify the influential bloggers.
|
2204.02342
|
Lea Matlekovic
|
Lea Matlekovic and Peter Schneider-Kamp
|
From Monolith to Microservices: Software Architecture for Autonomous UAV
Infrastructure Inspection
|
11th International Conference on Cloud Computing: Services and
Architecture (CLOUD 2022)
| null |
10.5121/csit.2022.120622
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Linear-infrastructure Mission Control (LiMiC) is an application for
autonomous Unmanned Aerial Vehicle (UAV) infrastructure inspection mission
planning developed in monolithic software architecture. The application
calculates routes along the infrastructure based on the users' inputs, the
number of UAVs participating in the mission, and UAVs' locations. LiMiC1.0 is
the latest application version migrated from monolith to microservices,
continuously integrated, and deployed using DevOps tools to facilitate future
feature development, enable better traffic management, and improve the route
calculation processing time. Processing time was improved by refactoring the
route calculation algorithm into services, scaling them in the Kubernetes
cluster, and enabling asynchronous communication in between. In this paper, we
discuss the differences between the monolith and microservice architecture to
justify our decision for migration. We describe the methodology for the
application's migration and implementation processes, technologies we use for
continuous integration and deployment, and we present the microservices'
improved performance results compared with those of the monolithic
application.
|
[
{
"created": "Tue, 5 Apr 2022 16:57:14 GMT",
"version": "v1"
}
] |
2022-04-06
|
[
[
"Matlekovic",
"Lea",
""
],
[
"Schneider-Kamp",
"Peter",
""
]
] |
Linear-infrastructure Mission Control (LiMiC) is an application for autonomous Unmanned Aerial Vehicle (UAV) infrastructure inspection mission planning developed in monolithic software architecture. The application calculates routes along the infrastructure based on the users' inputs, the number of UAVs participating in the mission, and UAVs' locations. LiMiC1.0 is the latest application version migrated from monolith to microservices, continuously integrated, and deployed using DevOps tools to facilitate future feature development, enable better traffic management, and improve the route calculation processing time. Processing time was improved by refactoring the route calculation algorithm into services, scaling them in the Kubernetes cluster, and enabling asynchronous communication in between. In this paper, we discuss the differences between the monolith and microservice architecture to justify our decision for migration. We describe the methodology for the application's migration and implementation processes, technologies we use for continuous integration and deployment, and we present the microservices' improved performance results compared with those of the monolithic application.
|
1812.03632
|
Nazmus Saquib
|
Shoumik Sharar Chowdhury, Nazmus Saquib, Niamat Zawad, Manash Kumar
Mandal, Syed Haque
|
Statement networks: a power structure narrative as depicted by
newspapers
|
Presented at NeurIPS 2018 Workshop on Machine Learning for the
Developing World
| null | null | null |
cs.CY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We report a data mining pipeline and subsequent analysis to understand the
core periphery power structure created in three national newspapers in
Bangladesh, as depicted by statements made by people appearing in the news.
Statements made by one actor about another actor can be considered a form of
public conversation. Named entity recognition techniques can be used to create
a temporal actor network from such conversations, which shows some unique
structure, and reveals much room for improvement in news reporting and also the
top actors' conversation preferences. Our results indicate there is a presence
of cliquishness between powerful political leaders when it comes to their
appearance in the news. We also show how these cohesive cores form through
the news articles, and how, over a decade, news cycles change the actors
belonging to
these groups.
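
A minimal sketch of such a statement network, assuming (speaker, target, year) triples have already been extracted with named entity recognition; the k-core of the undirected view is one proxy for the cohesive core described above. Names and data are invented.

import networkx as nx

statements = [  # (speaker, target, year); NER would produce these in practice
    ("A", "B", 2010), ("B", "A", 2010), ("A", "C", 2011),
    ("C", "B", 2011), ("B", "C", 2012), ("D", "A", 2012),
]
G = nx.DiGraph()
for speaker, target, year in statements:
    G.add_edge(speaker, target, year=year)

# A dense k-core of the undirected view hints at the cohesive core.
core = nx.k_core(G.to_undirected(), k=2)
print("core actors:", sorted(core.nodes))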
|
[
{
"created": "Mon, 10 Dec 2018 05:40:54 GMT",
"version": "v1"
}
] |
2018-12-11
|
[
[
"Chowdhury",
"Shoumik Sharar",
""
],
[
"Saquib",
"Nazmus",
""
],
[
"Zawad",
"Niamat",
""
],
[
"Mandal",
"Manash Kumar",
""
],
[
"Haque",
"Syed",
""
]
] |
We report a data mining pipeline and subsequent analysis to understand the core periphery power structure created in three national newspapers in Bangladesh, as depicted by statements made by people appearing in the news. Statements made by one actor about another actor can be considered a form of public conversation. Named entity recognition techniques can be used to create a temporal actor network from such conversations, which shows some unique structure, and reveals much room for improvement in news reporting and also the top actors' conversation preferences. Our results indicate there is a presence of cliquishness between powerful political leaders when it comes to their appearance in the news. We also show how these cohesive cores form through the news articles, and how, over a decade, news cycles change the actors belonging to these groups.
|
1907.06554
|
Mohammad Aliannejadi
|
Mohammad Aliannejadi and Hamed Zamani and Fabio Crestani and W. Bruce
Croft
|
Asking Clarifying Questions in Open-Domain Information-Seeking
Conversations
|
To appear in SIGIR 2019
| null |
10.1145/3331184.3331265
| null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Users often fail to formulate their complex information needs in a single
query. As a consequence, they may need to scan multiple result pages or
reformulate their queries, which may be a frustrating experience.
Alternatively, systems can improve user satisfaction by proactively asking
questions of the users to clarify their information needs. Asking clarifying
questions is especially important in conversational systems since they can only
return a limited number of (often only one) result(s). In this paper, we
formulate the task of asking clarifying questions in open-domain
information-seeking conversational systems. To this end, we propose an offline
evaluation methodology for the task and collect a dataset, called Qulac,
through crowdsourcing. Our dataset is built on top of the TREC Web Track
2009-2012 data and consists of over 10K question-answer pairs for 198 TREC
topics with 762 facets. Our experiments on an oracle model demonstrate that
asking only one good question leads to over 170% retrieval performance
improvement in terms of P@1, which clearly demonstrates the potential impact of
the task. We further propose a retrieval framework consisting of three
components: question retrieval, question selection, and document retrieval. In
particular, our question selection model takes into account the original query
and previous question-answer interactions while selecting the next question.
Our model significantly outperforms competitive baselines. To foster research
in this area, we have made Qulac publicly available.
|
[
{
"created": "Mon, 15 Jul 2019 15:45:37 GMT",
"version": "v1"
}
] |
2019-07-16
|
[
[
"Aliannejadi",
"Mohammad",
""
],
[
"Zamani",
"Hamed",
""
],
[
"Crestani",
"Fabio",
""
],
[
"Croft",
"W. Bruce",
""
]
] |
Users often fail to formulate their complex information needs in a single query. As a consequence, they may need to scan multiple result pages or reformulate their queries, which may be a frustrating experience. Alternatively, systems can improve user satisfaction by proactively asking questions of the users to clarify their information needs. Asking clarifying questions is especially important in conversational systems since they can only return a limited number of (often only one) result(s). In this paper, we formulate the task of asking clarifying questions in open-domain information-seeking conversational systems. To this end, we propose an offline evaluation methodology for the task and collect a dataset, called Qulac, through crowdsourcing. Our dataset is built on top of the TREC Web Track 2009-2012 data and consists of over 10K question-answer pairs for 198 TREC topics with 762 facets. Our experiments on an oracle model demonstrate that asking only one good question leads to over 170% retrieval performance improvement in terms of P@1, which clearly demonstrates the potential impact of the task. We further propose a retrieval framework consisting of three components: question retrieval, question selection, and document retrieval. In particular, our question selection model takes into account the original query and previous question-answer interactions while selecting the next question. Our model significantly outperforms competitive baselines. To foster research in this area, we have made Qulac publicly available.
|
2305.03888
|
Yujin Huang
|
Zijian Wang, Shuo Huang, Yujin Huang, Helei Cui
|
Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning
|
Accepted to AsiaCCS Workshop on Secure and Trustworthy Deep Learning
Systems (SecTL 2023)
| null | null | null |
cs.CR cs.AI cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, on-device deep learning has gained attention as a means of
developing affordable deep learning applications for mobile devices. However,
on-device models are constrained by limited energy and computation resources.
Meanwhile, a poisoning attack known as sponge poisoning has been developed.
This attack involves feeding the model with poisoned examples to increase the
energy consumption during inference. As previous work focuses on server
hardware accelerators, in this work we extend the sponge poisoning
attack to an on-device scenario to evaluate the vulnerability of mobile device
processors. We present an on-device sponge poisoning attack pipeline to
simulate the streaming and consistent inference scenario to bridge the
knowledge gap in the on-device setting. Our exclusive experimental analysis
with processors and on-device networks shows that sponge poisoning attacks can
effectively pollute the modern processor with its built-in accelerator. We
analyze the impact of different factors in the sponge poisoning algorithm and
highlight the need for improved defense mechanisms to prevent such attacks on
on-device deep learning applications.
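
A hedged sketch of what a sponge objective can look like: alongside the task loss, training rewards dense (non-zero) activations, which raises energy use on sparsity-exploiting accelerators. The smooth L0 proxy and all hyperparameters below are assumptions for illustration, not the paper's exact algorithm.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
acts = []
model[1].register_forward_hook(lambda m, i, o: acts.append(o))

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
lam, sigma = 1.0, 1e-4                     # assumed hyperparameters

acts.clear()
loss_task = nn.functional.cross_entropy(model(x), y)
# Smooth L0 proxy: fraction of effectively non-zero activations.
density = (acts[0] ** 2 / (acts[0] ** 2 + sigma)).mean()
loss = loss_task - lam * density           # minimizing rewards dense activations
loss.backward()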
|
[
{
"created": "Sat, 6 May 2023 01:20:30 GMT",
"version": "v1"
},
{
"created": "Thu, 11 May 2023 09:31:06 GMT",
"version": "v2"
}
] |
2023-05-12
|
[
[
"Wang",
"Zijian",
""
],
[
"Huang",
"Shuo",
""
],
[
"Huang",
"Yujin",
""
],
[
"Cui",
"Helei",
""
]
] |
In recent years, on-device deep learning has gained attention as a means of developing affordable deep learning applications for mobile devices. However, on-device models are constrained by limited energy and computation resources. Meanwhile, a poisoning attack known as sponge poisoning has been developed. This attack involves feeding the model with poisoned examples to increase the energy consumption during inference. As previous work focuses on server hardware accelerators, in this work we extend the sponge poisoning attack to an on-device scenario to evaluate the vulnerability of mobile device processors. We present an on-device sponge poisoning attack pipeline to simulate the streaming and consistent inference scenario to bridge the knowledge gap in the on-device setting. Our exclusive experimental analysis with processors and on-device networks shows that sponge poisoning attacks can effectively pollute the modern processor with its built-in accelerator. We analyze the impact of different factors in the sponge poisoning algorithm and highlight the need for improved defense mechanisms to prevent such attacks on on-device deep learning applications.
|
2210.11416
|
Jason Wei
|
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William
Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert
Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha
Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter,
Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew
Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam
Roberts, Denny Zhou, Quoc V. Le, Jason Wei
|
Scaling Instruction-Finetuned Language Models
|
Public checkpoints:
https://huggingface.co/docs/transformers/model_doc/flan-t5
| null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Finetuning language models on a collection of datasets phrased as
instructions has been shown to improve model performance and generalization to
unseen tasks. In this paper we explore instruction finetuning with a particular
focus on (1) scaling the number of tasks, (2) scaling the model size, and (3)
finetuning on chain-of-thought data. We find that instruction finetuning with
the above aspects dramatically improves performance on a variety of model
classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and
evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For
instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM
540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves
state-of-the-art performance on several benchmarks, such as 75.2% on five-shot
MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong
few-shot performance even compared to much larger models, such as PaLM 62B.
Overall, instruction finetuning is a general method for improving the
performance and usability of pretrained language models.
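
The public checkpoints linked in the comments field can be loaded with the standard Hugging Face transformers API, for example:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

inputs = tokenizer("Answer yes or no: is the sky green?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))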
|
[
{
"created": "Thu, 20 Oct 2022 16:58:32 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Oct 2022 17:46:04 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Nov 2022 09:44:42 GMT",
"version": "v3"
},
{
"created": "Wed, 23 Nov 2022 02:11:56 GMT",
"version": "v4"
},
{
"created": "Tue, 6 Dec 2022 21:39:48 GMT",
"version": "v5"
}
] |
2022-12-08
|
[
[
"Chung",
"Hyung Won",
""
],
[
"Hou",
"Le",
""
],
[
"Longpre",
"Shayne",
""
],
[
"Zoph",
"Barret",
""
],
[
"Tay",
"Yi",
""
],
[
"Fedus",
"William",
""
],
[
"Li",
"Yunxuan",
""
],
[
"Wang",
"Xuezhi",
""
],
[
"Dehghani",
"Mostafa",
""
],
[
"Brahma",
"Siddhartha",
""
],
[
"Webson",
"Albert",
""
],
[
"Gu",
"Shixiang Shane",
""
],
[
"Dai",
"Zhuyun",
""
],
[
"Suzgun",
"Mirac",
""
],
[
"Chen",
"Xinyun",
""
],
[
"Chowdhery",
"Aakanksha",
""
],
[
"Castro-Ros",
"Alex",
""
],
[
"Pellat",
"Marie",
""
],
[
"Robinson",
"Kevin",
""
],
[
"Valter",
"Dasha",
""
],
[
"Narang",
"Sharan",
""
],
[
"Mishra",
"Gaurav",
""
],
[
"Yu",
"Adams",
""
],
[
"Zhao",
"Vincent",
""
],
[
"Huang",
"Yanping",
""
],
[
"Dai",
"Andrew",
""
],
[
"Yu",
"Hongkun",
""
],
[
"Petrov",
"Slav",
""
],
[
"Chi",
"Ed H.",
""
],
[
"Dean",
"Jeff",
""
],
[
"Devlin",
"Jacob",
""
],
[
"Roberts",
"Adam",
""
],
[
"Zhou",
"Denny",
""
],
[
"Le",
"Quoc V.",
""
],
[
"Wei",
"Jason",
""
]
] |
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
|
2007.01755
|
Bin-Bin Gao
|
Bin-Bin Gao, Hong-Yu Zhou
|
Learning to Discover Multi-Class Attentional Regions for Multi-Label
Image Recognition
|
13 pages, Accepted by IEEE TIP (5-Jun-2021)
| null |
10.1109/TIP.2021.3088605
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-label image recognition is a practical and challenging task compared to
single-label image classification. However, previous works may be suboptimal
because of a great number of object proposals or complex attentional region
generation modules. In this paper, we propose a simple but efficient two-stream
framework to recognize multi-category objects from global image to local
regions, similar to how human beings perceive objects. To bridge the gap
between global and local streams, we propose a multi-class attentional region
module which aims to make the number of attentional regions as small as
possible and keep the diversity of these regions as high as possible. Our
method can efficiently and effectively recognize multi-class objects with an
affordable computation cost and a parameter-free region localization module.
Over three benchmarks on multi-label image classification, we create new
state-of-the-art results with a single model only using image semantics without
label dependency. In addition, the effectiveness of the proposed method is
extensively demonstrated under different factors such as global pooling
strategy, input size and network architecture. Code has been made available
at~\url{https://github.com/gaobb/MCAR}.
|
[
{
"created": "Fri, 3 Jul 2020 15:22:46 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2020 16:43:01 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Jun 2021 08:27:59 GMT",
"version": "v3"
}
] |
2021-07-21
|
[
[
"Gao",
"Bin-Bin",
""
],
[
"Zhou",
"Hong-Yu",
""
]
] |
Multi-label image recognition is a practical and challenging task compared to single-label image classification. However, previous works may be suboptimal because of a great number of object proposals or complex attentional region generation modules. In this paper, we propose a simple but efficient two-stream framework to recognize multi-category objects from global image to local regions, similar to how human beings perceive objects. To bridge the gap between global and local streams, we propose a multi-class attentional region module which aims to make the number of attentional regions as small as possible and keep the diversity of these regions as high as possible. Our method can efficiently and effectively recognize multi-class objects with an affordable computation cost and a parameter-free region localization module. Over three benchmarks on multi-label image classification, we create new state-of-the-art results with a single model only using image semantics without label dependency. In addition, the effectiveness of the proposed method is extensively demonstrated under different factors such as global pooling strategy, input size and network architecture. Code has been made available at~\url{https://github.com/gaobb/MCAR}.
|
2407.18413
|
Daniel Szelogowski
|
Daniel Szelogowski
|
Simulation of Neural Responses to Classical Music Using Organoid
Intelligence Methods
|
10 pages, 9 figures
| null | null | null |
cs.NE cs.AI cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Music is a complex auditory stimulus capable of eliciting significant changes
in brain activity, influencing cognitive processes such as memory, attention,
and emotional regulation. However, the underlying mechanisms of music-induced
cognitive processes remain largely unknown. Organoid intelligence and deep
learning models show promise for simulating and analyzing these neural
responses to classical music, an area significantly unexplored in computational
neuroscience. Hence, we present the PyOrganoid library, an innovative tool that
facilitates the simulation of organoid learning models, integrating
sophisticated machine learning techniques with biologically inspired organoid
simulations. Our study features the development of the Pianoid model, a "deep
organoid learning" model that utilizes a Bidirectional LSTM network to predict
EEG responses based on audio features from classical music recordings. This
model demonstrates the feasibility of using computational methods to replicate
complex neural processes, providing valuable insights into music perception and
cognition. Likewise, our findings emphasize the utility of synthetic models in
neuroscience research and highlight the PyOrganoid library's potential as a
versatile tool for advancing studies in neuroscience and artificial
intelligence.
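
A plain PyTorch stand-in for the described idea, assuming log-mel-style audio features and a fixed number of EEG channels; the PyOrganoid library's own API is not reproduced here, and all sizes are invented.

import torch
import torch.nn as nn

class AudioToEEG(nn.Module):
    def __init__(self, n_audio_feats=40, hidden=128, n_eeg_channels=32):
        super().__init__()
        self.lstm = nn.LSTM(n_audio_feats, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_eeg_channels)

    def forward(self, x):          # x: (batch, time, n_audio_feats)
        out, _ = self.lstm(x)
        return self.head(out)      # per-timestep EEG-channel predictions

model = AudioToEEG()
audio = torch.randn(4, 100, 40)    # batch of audio-feature sequences
print(model(audio).shape)          # torch.Size([4, 100, 32])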
|
[
{
"created": "Thu, 25 Jul 2024 22:11:30 GMT",
"version": "v1"
}
] |
2024-07-29
|
[
[
"Szelogowski",
"Daniel",
""
]
] |
Music is a complex auditory stimulus capable of eliciting significant changes in brain activity, influencing cognitive processes such as memory, attention, and emotional regulation. However, the underlying mechanisms of music-induced cognitive processes remain largely unknown. Organoid intelligence and deep learning models show promise for simulating and analyzing these neural responses to classical music, an area significantly unexplored in computational neuroscience. Hence, we present the PyOrganoid library, an innovative tool that facilitates the simulation of organoid learning models, integrating sophisticated machine learning techniques with biologically inspired organoid simulations. Our study features the development of the Pianoid model, a "deep organoid learning" model that utilizes a Bidirectional LSTM network to predict EEG responses based on audio features from classical music recordings. This model demonstrates the feasibility of using computational methods to replicate complex neural processes, providing valuable insights into music perception and cognition. Likewise, our findings emphasize the utility of synthetic models in neuroscience research and highlight the PyOrganoid library's potential as a versatile tool for advancing studies in neuroscience and artificial intelligence.
|
2407.09346
|
Lester Phillip Violeta
|
Lester Phillip Violeta, Taketo Akama
|
A Preliminary Investigation on Flexible Singing Voice Synthesis Through
Decomposed Framework with Inferrable Features
|
Preliminary investigations
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We investigate the feasibility of a singing voice synthesis (SVS) system by
using a decomposed framework to improve flexibility in generating singing
voices. Owing to data-driven approaches, SVS performs a music
score-to-waveform mapping; however, this direct mapping limits control, such
as only being able to synthesize in the languages or with the singers present
in the labeled singing
datasets. As collecting large singing datasets labeled with music scores is an
expensive task, we investigate an alternative approach by decomposing the SVS
system and inferring different singing voice features. We decompose the SVS
system into three-stage modules of linguistic, pitch contour, and synthesis, in
which singing voice features such as linguistic content, F0, voiced/unvoiced,
singer embeddings, and loudness are directly inferred from audio. Through this
decomposed framework, we show that we can alleviate the labeled dataset
requirements, adapt to different languages or singers, and inpaint the lyrical
content of singing voices. Our investigations show that the framework has the
potential to reach state-of-the-art in SVS, even though the model has
additional functionality and improved flexibility. The comprehensive analysis
of our investigated framework's current capabilities sheds light on the ways
the research community can achieve a flexible and multifunctional SVS system.
|
[
{
"created": "Fri, 12 Jul 2024 15:22:23 GMT",
"version": "v1"
}
] |
2024-07-15
|
[
[
"Violeta",
"Lester Phillip",
""
],
[
"Akama",
"Taketo",
""
]
] |
We investigate the feasibility of a singing voice synthesis (SVS) system by using a decomposed framework to improve flexibility in generating singing voices. Owing to data-driven approaches, SVS performs a music score-to-waveform mapping; however, this direct mapping limits control, such as only being able to synthesize in the languages or with the singers present in the labeled singing datasets. As collecting large singing datasets labeled with music scores is an expensive task, we investigate an alternative approach by decomposing the SVS system and inferring different singing voice features. We decompose the SVS system into three-stage modules of linguistic, pitch contour, and synthesis, in which singing voice features such as linguistic content, F0, voiced/unvoiced, singer embeddings, and loudness are directly inferred from audio. Through this decomposed framework, we show that we can alleviate the labeled dataset requirements, adapt to different languages or singers, and inpaint the lyrical content of singing voices. Our investigations show that the framework has the potential to reach state-of-the-art in SVS, even though the model has additional functionality and improved flexibility. The comprehensive analysis of our investigated framework's current capabilities sheds light on the ways the research community can achieve a flexible and multifunctional SVS system.
|
1607.00715
|
Sebastian Sardina
|
Davide Aversa and Sebastian Sardina and Stavros Vassos
|
Path planning with Inventory-driven Jump-Point-Search
| null |
In Proceedings of the AAAI Conference on Artificial Intelligence
and Interactive Digital Entertainment (AIIDE), pp. 2-8, 2015
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many navigational domains the traversability of cells is conditioned on
the path taken. This is often the case in video-games, in which a character may
need to acquire a certain object (i.e., a key or a flying suit) to be able to
traverse specific locations (e.g., doors or high walls). In order for
non-player characters to handle such scenarios we present invJPS, an
"inventory-driven" pathfinding approach based on the highly successful
grid-based Jump-Point-Search (JPS) algorithm. We show, formally and
experimentally, that the invJPS preserves JPS's optimality guarantees and its
symmetry breaking advantages in inventory-based variants of game maps.
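
The core idea can be sketched with a search whose state pairs a grid cell with the current inventory, so a door becomes traversable only after its key is collected. The breadth-first sketch below illustrates the state space only; it is not the paper's invJPS algorithm, and the map is invented.

from collections import deque

grid = ["S.k",     # S = start, k = key
        "##.",
        "TD."]     # T = target behind door D; # = wall
R, C = len(grid), len(grid[0])

def passable(cell, inv):
    ch = grid[cell[0]][cell[1]]
    return ch != "#" and (ch != "D" or "key" in inv)

def reachable(start, goal):
    q = deque([(start, frozenset())])
    seen = {(start, frozenset())}
    while q:
        cell, inv = q.popleft()
        if cell == goal:
            return True
        if grid[cell[0]][cell[1]] == "k":      # pick up the key here
            inv = inv | {"key"}
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < R and 0 <= nxt[1] < C and passable(nxt, inv):
                if (nxt, inv) not in seen:
                    seen.add((nxt, inv))
                    q.append((nxt, inv))
    return False

print(reachable((0, 0), (2, 0)))   # True: detour via the key opens the door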
|
[
{
"created": "Mon, 4 Jul 2016 01:13:32 GMT",
"version": "v1"
}
] |
2016-07-05
|
[
[
"Aversa",
"Davide",
""
],
[
"Sardina",
"Sebastian",
""
],
[
"Vassos",
"Stavros",
""
]
] |
In many navigational domains the traversability of cells is conditioned on the path taken. This is often the case in video-games, in which a character may need to acquire a certain object (i.e., a key or a flying suit) to be able to traverse specific locations (e.g., doors or high walls). In order for non-player characters to handle such scenarios we present invJPS, an "inventory-driven" pathfinding approach based on the highly successful grid-based Jump-Point-Search (JPS) algorithm. We show, formally and experimentally, that the invJPS preserves JPS's optimality guarantees and its symmetry breaking advantages in inventory-based variants of game maps.
|
1705.10614
|
Mahesh Babu Vaddi
|
Mahesh Babu Vaddi and B. Sundar Rajan
|
Near-Optimal Vector Linear Index Codes For Single Unicast Index Coding
Problems with Symmetric Neighboring Interference
|
14 pages, 8 figures and 3 tables. arXiv admin note: substantial text
overlap with arXiv:1705.05060, arXiv:1705.03192
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A single unicast index coding problem (SUICP) with symmetric neighboring
interference (SNI) has an equal number of $K$ messages and $K$ receivers, the
$k$th receiver $R_{k}$ wanting the $k$th message $x_{k}$ and having the
side-information $\mathcal{K}_{k}=(\mathcal{I}_{k} \cup x_{k})^c,$ where
${I}_k= \{x_{k-U},\dots,x_{k-2},x_{k-1}\}\cup\{x_{k+1},
x_{k+2},\dots,x_{k+D}\}$ is the interference with $D$ messages after and $U$
messages before its desired message. Maleki, Cadambe and Jafar obtained the
capacity of this single unicast index coding problem with symmetric neighboring
interference (SUICP-SNI) with $K$ tending to infinity and Blasiak, Kleinberg
and Lubetzky for the special case of $(D=U=1)$ with $K$ being finite. In our
previous work, we proved the capacity of SUICP-SNI for arbitrary $K$ and $D$
with $U=\text{gcd}(K,D+1)-1$. This paper deals with near-optimal linear code
construction for SUICP-SNI with arbitrary $K,U$ and $D.$ For SUICP-SNI with
arbitrary $K,U$ and $D$, we define a set of $2$-tuples such that for every
$(a,b)$ in that set the rate $D+1+\frac{a}{b}$ is achieved by using vector
linear index codes over every field. We prove that the set
$\mathcal{\mathbf{S}}$ consists of $(a,b)$ such that the rates of the
constructed vector linear index codes are at most $\frac{K~\text{mod}~(D+1)}{\left \lfloor
\frac{K}{D+1} \right \rfloor}$ away from a known lower bound on broadcast rate
of SUICP-SNI. The three known results on the exact capacity of the SUICP-SNI
are recovered as special cases of our results. Also, we give a low complexity
decoding procedure for the proposed vector linear index codes for the
SUICP-SNI.
|
[
{
"created": "Sun, 28 May 2017 10:20:09 GMT",
"version": "v1"
}
] |
2017-05-31
|
[
[
"Vaddi",
"Mahesh Babu",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
A single unicast index coding problem (SUICP) with symmetric neighboring interference (SNI) has an equal number of $K$ messages and $K$ receivers, the $k$th receiver $R_{k}$ wanting the $k$th message $x_{k}$ and having the side-information $\mathcal{K}_{k}=(\mathcal{I}_{k} \cup x_{k})^c,$ where ${I}_k= \{x_{k-U},\dots,x_{k-2},x_{k-1}\}\cup\{x_{k+1}, x_{k+2},\dots,x_{k+D}\}$ is the interference with $D$ messages after and $U$ messages before its desired message. Maleki, Cadambe and Jafar obtained the capacity of this single unicast index coding problem with symmetric neighboring interference (SUICP-SNI) with $K$ tending to infinity and Blasiak, Kleinberg and Lubetzky for the special case of $(D=U=1)$ with $K$ being finite. In our previous work, we proved the capacity of SUICP-SNI for arbitrary $K$ and $D$ with $U=\text{gcd}(K,D+1)-1$. This paper deals with near-optimal linear code construction for SUICP-SNI with arbitrary $K,U$ and $D.$ For SUICP-SNI with arbitrary $K,U$ and $D$, we define a set of $2$-tuples such that for every $(a,b)$ in that set the rate $D+1+\frac{a}{b}$ is achieved by using vector linear index codes over every field. We prove that the set $\mathcal{\mathbf{S}}$ consists of $(a,b)$ such that the rates of the constructed vector linear index codes are at most $\frac{K~\text{mod}~(D+1)}{\left \lfloor \frac{K}{D+1} \right \rfloor}$ away from a known lower bound on broadcast rate of SUICP-SNI. The three known results on the exact capacity of the SUICP-SNI are recovered as special cases of our results. Also, we give a low complexity decoding procedure for the proposed vector linear index codes for the SUICP-SNI.
|
1110.0021
|
Dirk Beyer
|
Sven Apel and Hendrik Speidel and Philipp Wendler and Alexander von
Rhein and Dirk Beyer
|
Feature-Aware Verification
|
12 pages, 9 figures, 1 table
| null | null |
Technical Report, Number MIP-1105, University of Passau, Germany
|
cs.SE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A software product line is a set of software products that are distinguished
in terms of features (i.e., end-user--visible units of behavior). Feature
interactions ---situations in which the combination of features leads to
emergent and possibly critical behavior--- are a major source of failures in
software product lines. We explore how feature-aware verification can improve
the automatic detection of feature interactions in software product lines.
Feature-aware verification uses product-line verification techniques and
supports the specification of feature properties along with the features in
separate and composable units. It integrates the technique of variability
encoding to verify a product line without generating and checking a possibly
exponential number of feature combinations. We developed the tool suite
SPLverifier for feature-aware verification, which is based on standard
model-checking technology. We applied it to an e-mail system that incorporates
domain knowledge of AT&T. We found that feature interactions can be detected
automatically based on specifications that have only feature-local knowledge,
and that variability encoding significantly improves the verification
performance when proving the absence of interactions.
|
[
{
"created": "Fri, 30 Sep 2011 20:46:35 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Apel",
"Sven",
""
],
[
"Speidel",
"Hendrik",
""
],
[
"Wendler",
"Philipp",
""
],
[
"von Rhein",
"Alexander",
""
],
[
"Beyer",
"Dirk",
""
]
] |
A software product line is a set of software products that are distinguished in terms of features (i.e., end-user--visible units of behavior). Feature interactions ---situations in which the combination of features leads to emergent and possibly critical behavior--- are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only feature-local knowledge, and that variability encoding significantly improves the verification performance when proving the absence of interactions.
|
1612.08170
|
Benjamin Berkels
|
Benjamin Berkels and Benedikt Wirth
|
Joint denoising and distortion correction of atomic scale scanning
transmission electron microscopy images
| null | null |
10.1088/1361-6420/aa7b94
| null |
cs.CV physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, modern electron microscopes deliver images at atomic scale. The
precise atomic structure encodes information about material properties. Thus,
an important ingredient in the image analysis is to locate the centers of the
atoms shown in micrographs as precisely as possible. Here, we consider scanning
transmission electron microscopy (STEM), which acquires data in a rastering
pattern, pixel by pixel. Due to this rastering combined with the magnification
to atomic scale, movements of the specimen even at the nanometer scale lead to
random image distortions that make precise atom localization difficult. Given a
series of STEM images, we derive a Bayesian method that jointly estimates the
distortion in each image and reconstructs the underlying atomic grid of the
material by fitting the atom bumps with suitable bump functions. The resulting
highly non-convex minimization problems are solved numerically with a trust
region approach. Well-posedness of the reconstruction method and the model
behavior for faster and faster rastering are investigated using variational
techniques. The performance of the method is finally evaluated on both
synthetic and real experimental data.
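
The innermost ingredient, fitting a bump function to locate one atom center, can be sketched as a least-squares fit of a 2D Gaussian; the paper's joint multi-bump, multi-image estimation with distortion parameters is beyond this toy, and all sizes and values below are invented.

import numpy as np
from scipy.optimize import least_squares

yy, xx = np.mgrid[0:21, 0:21]
def bump(p):                                 # isotropic 2D Gaussian bump
    cx, cy, w, h = p
    return h * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * w ** 2))

true = (10.3, 9.7, 2.0, 1.0)                 # center x/y, width, height
img = bump(true) + 0.05 * np.random.randn(21, 21)   # noisy observation

res = least_squares(lambda p: (bump(p) - img).ravel(), x0=(10, 10, 3, 0.5))
print("estimated atom center:", res.x[:2])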
|
[
{
"created": "Sat, 24 Dec 2016 12:37:17 GMT",
"version": "v1"
}
] |
2017-09-13
|
[
[
"Berkels",
"Benjamin",
""
],
[
"Wirth",
"Benedikt",
""
]
] |
Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Well-posedness of the reconstruction method and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.
|
1601.06719
|
Guiying Li
|
Guiying Li, Junlong Liu, Chunhui Jiang, Liangpeng Zhang, Minlong Lin,
and Ke Tang
|
Relief R-CNN : Utilizing Convolutional Features for Fast Object
Detection
|
9 pages, 2 figures, accepted by ISNN 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
R-CNN style methods are among the state-of-the-art object detection methods;
they consist of region proposal generation and deep CNN classification.
However, the proposal generation phase in this paradigm is usually
time-consuming, which increases the overall detection time in testing. This
paper suggests that the value discrepancies among features in
deep convolutional feature maps contain plenty of useful spatial information,
and proposes a simple approach to extract the information for fast region
proposal generation in testing. The proposed method, namely Relief R-CNN
(R2-CNN), adopts a novel region proposal generator in a trained R-CNN style
model. The new generator directly generates proposals from convolutional
features by some simple rules, thus resulting in a much faster proposal
generation speed and a lower demand for computation resources. Empirical studies
show that R2-CNN could achieve the fastest detection speed with comparable
accuracy among all the compared algorithms in testing.
|
[
{
"created": "Mon, 25 Jan 2016 18:53:14 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Mar 2016 13:10:55 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Sep 2016 11:52:59 GMT",
"version": "v3"
},
{
"created": "Wed, 26 Apr 2017 07:12:14 GMT",
"version": "v4"
}
] |
2017-04-27
|
[
[
"Li",
"Guiying",
""
],
[
"Liu",
"Junlong",
""
],
[
"Jiang",
"Chunhui",
""
],
[
"Zhang",
"Liangpeng",
""
],
[
"Lin",
"Minlong",
""
],
[
"Tang",
"Ke",
""
]
] |
R-CNN style methods are among the state-of-the-art object detection methods; they consist of region proposal generation and deep CNN classification. However, the proposal generation phase in this paradigm is usually time-consuming, which increases the overall detection time in testing. This paper suggests that the value discrepancies among features in deep convolutional feature maps contain plenty of useful spatial information, and proposes a simple approach to extract the information for fast region proposal generation in testing. The proposed method, namely Relief R-CNN (R2-CNN), adopts a novel region proposal generator in a trained R-CNN style model. The new generator directly generates proposals from convolutional features by some simple rules, thus resulting in a much faster proposal generation speed and a lower demand for computation resources. Empirical studies show that R2-CNN could achieve the fastest detection speed with comparable accuracy among all the compared algorithms in testing.
|
2302.12198
|
Jiadi Cui
|
Jiadi Cui and S\"oren Schwertfeger
|
CP+: Camera Poses Augmentation with Large-scale LiDAR Maps
| null | null |
10.1109/RCAR54675.2022.9872176
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale colored point clouds have many advantages in navigation or scene
display. Relying on cameras and LiDARs, which are now widely used in
reconstruction tasks, it is possible to obtain such colored point clouds.
However, the information from these two kinds of sensors is not well fused in
many existing frameworks, resulting in poor colorization results, thus
resulting in inaccurate camera poses and damaged point colorization results. We
propose a novel framework called Camera Pose Augmentation (CP+) to improve the
camera poses and align them directly with the LiDAR-based point cloud. Initial
coarse camera poses are given by LiDAR-Inertial or LiDAR-Inertial-Visual
Odometry with approximate extrinsic parameters and time synchronization. The
key steps to improve the alignment of the images consist of selecting a point
cloud corresponding to a region of interest in each camera view, extracting
reliable edge features from this point cloud, and deriving 2D-3D line
correspondences which are used towards iterative minimization of the
re-projection error.
|
[
{
"created": "Thu, 23 Feb 2023 17:49:53 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Feb 2023 07:18:15 GMT",
"version": "v2"
}
] |
2023-02-28
|
[
[
"Cui",
"Jiadi",
""
],
[
"Schwertfeger",
"Sören",
""
]
] |
Large-scale colored point clouds have many advantages in navigation or scene display. Relying on cameras and LiDARs, which are now widely used in reconstruction tasks, it is possible to obtain such colored point clouds. However, the information from these two kinds of sensors is not well fused in many existing frameworks, resulting in inaccurate camera poses and, in turn, degraded point colorization. We propose a novel framework called Camera Pose Augmentation (CP+) to improve the camera poses and align them directly with the LiDAR-based point cloud. Initial coarse camera poses are given by LiDAR-Inertial or LiDAR-Inertial-Visual Odometry with approximate extrinsic parameters and time synchronization. The key steps to improve the alignment of the images consist of selecting a point cloud corresponding to a region of interest in each camera view, extracting reliable edge features from this point cloud, and deriving 2D-3D line correspondences which are used towards iterative minimization of the re-projection error.
|
1201.1223
|
Paul Vitanyi
|
P. M. B. Vitanyi (National Research Center for Mathematics and
Computer Science in the Netherlands (CWI))
|
Turing Machines and Understanding Computational Complexity
|
9 pages, 1 figure, LaTeX. To appear in: Alan Turing - His Work and
Impact, Elsevier
|
In: S. Barry Cooper, Jan van Leeuwen (eds.), "Alan Turing: His
Work and Impact", Elsevier, Amsterdam, London, New York, Tokyo, 2013,
pp.57-63
| null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the Turing Machine, list some of its many influences on the
theory of computation and complexity of computations, and illustrate its
importance.
|
[
{
"created": "Thu, 5 Jan 2012 17:24:33 GMT",
"version": "v1"
}
] |
2013-08-26
|
[
[
"Vitanyi",
"P. M. B.",
"",
"National Research Center for Mathematics and\n Computer Science in the Netherlands"
]
] |
We describe the Turing Machine, list some of its many influences on the theory of computation and complexity of computations, and illustrate its importance.
|
1605.02210
|
Adrian Onet
|
Adrian Onet
|
Inference-based semantics in Data Exchange
| null | null | null | null |
cs.DB cs.CC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data Exchange is an old problem that was first studied from a theoretical
point of view only in 2003. Since then, many approaches have been considered
for the language describing the relationship between the source and the
target schema. These approaches focus on what makes a target instance a
"good" solution for data-exchange. In this paper we propose the
inference-based semantics, which solves many certain-answer anomalies
existing in current data-exchange semantics. To this end, we introduce a new
mapping language between the source and the target schema based on annotated
bidirectional dependencies (abd) and, consequently, define the semantics for
this new language. It is shown
that the ABD-semantics can properly represent the inference-based semantics,
for any source-to-target mappings. We discovered three dichotomy results under
the new semantics for solution-existence, solution-check and UCQ evaluation
problems. These results rely on two factors describing the annotation used in
the mappings (density and cardinality). Finally, we also investigate the
certain-answers evaluation problem under ABD-semantics and discover many
tractable classes for non-UCQ queries even for a subclass of CQ with negation.
|
[
{
"created": "Sat, 7 May 2016 16:28:40 GMT",
"version": "v1"
}
] |
2016-05-10
|
[
[
"Onet",
"Adrian",
""
]
] |
Data Exchange is an old problem that was first studied from a theoretical point of view only in 2003. Since then, many approaches have been considered for the language describing the relationship between the source and the target schema. These approaches focus on what makes a target instance a "good" solution for data-exchange. In this paper we propose the inference-based semantics, which solves many certain-answer anomalies existing in current data-exchange semantics. To this end, we introduce a new mapping language between the source and the target schema based on annotated bidirectional dependencies (abd) and, consequently, define the semantics for this new language. It is shown that the ABD-semantics can properly represent the inference-based semantics, for any source-to-target mappings. We discovered three dichotomy results under the new semantics for solution-existence, solution-check and UCQ evaluation problems. These results rely on two factors describing the annotation used in the mappings (density and cardinality). Finally, we also investigate the certain-answers evaluation problem under ABD-semantics and discover many tractable classes for non-UCQ queries even for a subclass of CQ with negation.
|
2008.11662
|
Ujjal Kr Dutta
|
Rajdeep Hazra Banerjee, Abhinav Ravi, Ujjal Kr Dutta
|
Attr2Style: A Transfer Learning Approach for Inferring Fashion Styles
via Apparel Attributes
|
In Annual Conference on Innovative Applications of Artificial
Intelligence (IAAI), colocated with AAAI Conference on Artificial
Intelligence (AAAI) 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Popular fashion e-commerce platforms mostly provide details about low-level
attributes of an apparel item (e.g., neck type, dress length, collar type) on
their product detail pages. However, customers usually prefer to buy apparel
based on their style information, or simply put, occasion (e.g.,
party/sports/casual wear). Application of a supervised image-captioning model
to generate
style-based image captions is limited because obtaining ground-truth
annotations in the form of style-based captions is difficult. This is because
annotating style-based captions requires a certain amount of fashion domain
expertise, and also adds to the costs and manual effort. In contrast,
low-level attribute-based annotations are much more easily available. To
address this issue, we propose a transfer-learning based image captioning model
that is trained on a source dataset with sufficient attribute-based
ground-truth captions, and used to predict style-based captions on a target
dataset. The target dataset has only a limited amount of images with
style-based ground-truth captions. The main motivation of our approach comes
from the fact that most often there are correlations among the low-level
attributes and the higher-level styles for an apparel item. We leverage this
fact and train our model in an encoder-decoder based framework using an
attention mechanism. In particular, the encoder of the model is first trained
on the
source dataset to obtain latent representations capturing the low-level
attributes. The trained model is fine-tuned to generate style-based captions
for the target dataset. To highlight the effectiveness of our method, we
qualitatively and quantitatively demonstrate that the captions generated by our
approach are close to the actual style information for the evaluated apparel. A
Proof Of Concept for our model is under pilot at Myntra where it is exposed to
some internal users for feedback.
|
[
{
"created": "Wed, 26 Aug 2020 16:42:21 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Dec 2020 12:03:48 GMT",
"version": "v2"
}
] |
2020-12-14
|
[
[
"Banerjee",
"Rajdeep Hazra",
""
],
[
"Ravi",
"Abhinav",
""
],
[
"Dutta",
"Ujjal Kr",
""
]
] |
Popular fashion e-commerce platforms mostly provide details about low-level attributes of an apparel item (e.g., neck type, dress length, collar type) on their product detail pages. However, customers usually prefer to buy apparel based on their style information, or simply put, occasion (e.g., party/sports/casual wear). Application of a supervised image-captioning model to generate style-based image captions is limited because obtaining ground-truth annotations in the form of style-based captions is difficult. This is because annotating style-based captions requires a certain amount of fashion domain expertise, and also adds to the costs and manual effort. In contrast, low-level attribute-based annotations are much more easily available. To address this issue, we propose a transfer-learning based image captioning model that is trained on a source dataset with sufficient attribute-based ground-truth captions, and used to predict style-based captions on a target dataset. The target dataset has only a limited amount of images with style-based ground-truth captions. The main motivation of our approach comes from the fact that most often there are correlations among the low-level attributes and the higher-level styles for an apparel item. We leverage this fact and train our model in an encoder-decoder based framework using an attention mechanism. In particular, the encoder of the model is first trained on the source dataset to obtain latent representations capturing the low-level attributes. The trained model is fine-tuned to generate style-based captions for the target dataset. To highlight the effectiveness of our method, we qualitatively and quantitatively demonstrate that the captions generated by our approach are close to the actual style information for the evaluated apparel. A Proof Of Concept for our model is under pilot at Myntra where it is exposed to some internal users for feedback.
|
2201.10015
|
Reza Maalek
|
Reza Maalek and Shahrokh Maalek
|
Automatic Recognition and Digital Documentation of Cultural Heritage
Hemispherical Domes using Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Advancements in optical metrology have enabled documentation of dense 3D
point clouds of cultural heritage sites. For large-scale and continuous
digital documentation, processing of dense 3D point clouds becomes
computationally cumbersome, and often requires additional hardware for data
management, increasing the time, cost, and complexity of projects. To this
end, this
manuscript presents an original approach to generate fast and reliable semantic
digital models of heritage hemispherical domes using only two images. New
closed formulations were derived to establish the relationships between spheres
and their projected ellipses onto images, which fostered the development of a
new automatic framework for as-built generation of spheres. The effectiveness
of the proposed method was evaluated under both laboratory and real-world
datasets. The results revealed that the proposed method achieved as-built
modeling accuracy of around 6mm, while improving the computation time by a
factor of 7, when compared to established point cloud processing methods.
|
[
{
"created": "Tue, 25 Jan 2022 00:14:04 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Apr 2022 12:12:35 GMT",
"version": "v2"
}
] |
2022-04-07
|
[
[
"Maalek",
"Reza",
""
],
[
"Maalek",
"Shahrokh",
""
]
] |
Advancements in optical metrology have enabled documentation of dense 3D point clouds of cultural heritage sites. For large-scale and continuous digital documentation, processing of dense 3D point clouds becomes computationally cumbersome, and often requires additional hardware for data management, increasing the time, cost, and complexity of projects. To this end, this manuscript presents an original approach to generate fast and reliable semantic digital models of heritage hemispherical domes using only two images. New closed formulations were derived to establish the relationships between spheres and their projected ellipses onto images, which fostered the development of a new automatic framework for as-built generation of spheres. The effectiveness of the proposed method was evaluated under both laboratory and real-world datasets. The results revealed that the proposed method achieved as-built modeling accuracy of around 6mm, while improving the computation time by a factor of 7, when compared to established point cloud processing methods.
|
2212.12240
|
Jingyang Zhao
|
Jingyang Zhao and Mingyu Xiao
|
Practical Algorithms with Guaranteed Approximation Ratio for TTP with
Maximum Tour Length Two
|
arXiv admin note: substantial text overlap with arXiv:2108.13060
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The Traveling Tournament Problem (TTP) is a hard but interesting sports
scheduling problem inspired by Major League Baseball, which is to design a
double round-robin schedule such that each pair of teams plays one game in each
other's home venue, minimizing the total distance traveled by all $n$ teams
($n$ is even). In this paper, we consider TTP-2, i.e., TTP under the constraint
that at most two consecutive home games or away games are allowed for each
team. We propose practical algorithms for TTP-2 with improved approximation
ratios. Due to the different structural properties of the problem, all known
algorithms for TTP-2 are different for $n/2$ being odd and even, and our
algorithms are also different for these two cases. For even $n/2$, our
approximation ratio is $1+3/n$, improving the previous result of $1+4/n$. For
odd $n/2$, our approximation ratio is $1+5/n$, improving the previous result of
$3/2+6/n$. In practice, our algorithms are easy to implement. Experiments on
well-known benchmark sets show that our algorithms beat previously known
solutions for all instances with an average improvement of $5.66\%$.
|
[
{
"created": "Fri, 23 Dec 2022 10:30:20 GMT",
"version": "v1"
}
] |
2022-12-26
|
[
[
"Zhao",
"Jingyang",
""
],
[
"Xiao",
"Mingyu",
""
]
] |
The Traveling Tournament Problem (TTP) is a hard but interesting sports scheduling problem inspired by Major League Baseball, which is to design a double round-robin schedule such that each pair of teams plays one game in each other's home venue, minimizing the total distance traveled by all $n$ teams ($n$ is even). In this paper, we consider TTP-2, i.e., TTP under the constraint that at most two consecutive home games or away games are allowed for each team. We propose practical algorithms for TTP-2 with improved approximation ratios. Due to the different structural properties of the problem, all known algorithms for TTP-2 are different for $n/2$ being odd and even, and our algorithms are also different for these two cases. For even $n/2$, our approximation ratio is $1+3/n$, improving the previous result of $1+4/n$. For odd $n/2$, our approximation ratio is $1+5/n$, improving the previous result of $3/2+6/n$. In practice, our algorithms are easy to implement. Experiments on well-known benchmark sets show that our algorithms beat previously known solutions for all instances with an average improvement of $5.66\%$.
|
1411.5739
|
Rahul Vaze
|
Ashwin Pananjady, Vivek Kumar Bagaria, Rahul Vaze
|
The Online Disjoint Set Cover Problem and its Applications
|
To appear in IEEE INFOCOM 2015
| null |
10.1109/INFOCOM.2015.7218497
| null |
cs.DS cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a universe $U$ of $n$ elements and a collection of subsets
$\mathcal{S}$ of $U$, the maximum disjoint set cover problem (DSCP) is to
partition $\mathcal{S}$ into as many set covers as possible, where a set cover
is defined as a collection of subsets whose union is $U$. We consider the
online DSCP, in which the subsets arrive one by one (possibly in an order
chosen by an adversary), and must be irrevocably assigned to some partition on
arrival with the objective of minimizing the competitive ratio. The competitive
ratio of an online DSCP algorithm $A$ is defined as the maximum ratio of the
number of disjoint set covers obtained by the optimal offline algorithm to the
number of disjoint set covers obtained by $A$ across all inputs. We propose an
online algorithm for solving the DSCP with competitive ratio $\ln n$. We then
show a lower bound of $\Omega(\sqrt{\ln n})$ on the competitive ratio for any
online DSCP algorithm. The online disjoint set cover problem has wide-ranging
applications in practice, including the online crowd-sourcing problem, the
online coverage lifetime maximization problem in wireless sensor networks, and
in online resource allocation problems.
|
[
{
"created": "Fri, 21 Nov 2014 01:52:19 GMT",
"version": "v1"
}
] |
2016-11-18
|
[
[
"Pananjady",
"Ashwin",
""
],
[
"Bagaria",
"Vivek Kumar",
""
],
[
"Vaze",
"Rahul",
""
]
] |
Given a universe $U$ of $n$ elements and a collection of subsets $\mathcal{S}$ of $U$, the maximum disjoint set cover problem (DSCP) is to partition $\mathcal{S}$ into as many set covers as possible, where a set cover is defined as a collection of subsets whose union is $U$. We consider the online DSCP, in which the subsets arrive one by one (possibly in an order chosen by an adversary), and must be irrevocably assigned to some partition on arrival with the objective of minimizing the competitive ratio. The competitive ratio of an online DSCP algorithm $A$ is defined as the maximum ratio of the number of disjoint set covers obtained by the optimal offline algorithm to the number of disjoint set covers obtained by $A$ across all inputs. We propose an online algorithm for solving the DSCP with competitive ratio $\ln n$. We then show a lower bound of $\Omega(\sqrt{\ln n})$ on the competitive ratio for any online DSCP algorithm. The online disjoint set cover problem has wide-ranging applications in practice, including the online crowd-sourcing problem, the online coverage lifetime maximization problem in wireless sensor networks, and in online resource allocation problems.
|
2010.13170
|
Ce Jin
|
Ce Jin, Jelani Nelson, Kewen Wu
|
An Improved Sketching Algorithm for Edit Distance
|
Appeared in STACS 2021. Fixed the title to match the conference
version
| null |
10.4230/LIPIcs.STACS.2021.45
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide improved upper bounds for the simultaneous sketching complexity of
edit distance. Consider two parties, Alice with input $x\in\Sigma^n$ and Bob
with input $y\in\Sigma^n$, that share public randomness and are given a promise
that the edit distance $\mathsf{ed}(x,y)$ between their two strings is at most
some given value $k$. Alice must send a message $sx$ and Bob must send $sy$ to
a third party Charlie, who does not know the inputs but shares the same public
randomness and also knows $k$. Charlie must output $\mathsf{ed}(x,y)$ precisely
as well as a sequence of $\mathsf{ed}(x,y)$ edits required to transform $x$
into $y$. The goal is to minimize the lengths $|sx|, |sy|$ of the messages
sent.
The protocol of Belazzougui and Zhang (FOCS 2016), building upon the random
walk method of Chakraborty, Goldenberg, and Kouck\'y (STOC 2016), achieves a
maximum message length of $\tilde O(k^8)$ bits, where $\tilde O(\cdot)$ hides
$\mathrm{poly}(\log n)$ factors. In this work we build upon Belazzougui and
Zhang's protocol and provide an improved analysis demonstrating that a slight
modification of their construction achieves a bound of $\tilde O(k^3)$.
|
[
{
"created": "Sun, 25 Oct 2020 17:35:05 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Dec 2020 03:50:06 GMT",
"version": "v2"
},
{
"created": "Sun, 2 May 2021 09:00:34 GMT",
"version": "v3"
}
] |
2021-05-04
|
[
[
"Jin",
"Ce",
""
],
[
"Nelson",
"Jelani",
""
],
[
"Wu",
"Kewen",
""
]
] |
We provide improved upper bounds for the simultaneous sketching complexity of edit distance. Consider two parties, Alice with input $x\in\Sigma^n$ and Bob with input $y\in\Sigma^n$, that share public randomness and are given a promise that the edit distance $\mathsf{ed}(x,y)$ between their two strings is at most some given value $k$. Alice must send a message $sx$ and Bob must send $sy$ to a third party Charlie, who does not know the inputs but shares the same public randomness and also knows $k$. Charlie must output $\mathsf{ed}(x,y)$ precisely as well as a sequence of $\mathsf{ed}(x,y)$ edits required to transform $x$ into $y$. The goal is to minimize the lengths $|sx|, |sy|$ of the messages sent. The protocol of Belazzougui and Zhang (FOCS 2016), building upon the random walk method of Chakraborty, Goldenberg, and Kouck\'y (STOC 2016), achieves a maximum message length of $\tilde O(k^8)$ bits, where $\tilde O(\cdot)$ hides $\mathrm{poly}(\log n)$ factors. In this work we build upon Belazzougui and Zhang's protocol and provide an improved analysis demonstrating that a slight modification of their construction achieves a bound of $\tilde O(k^3)$.
|
2305.05335
|
Sougata Saha
|
Sougata Saha, Rohini Srihari
|
Rudolf Christoph Eucken at SemEval-2023 Task 4: An Ensemble Approach for
Identifying Human Values from Arguments
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The subtle human values we acquire through life experiences govern our
thoughts and get reflected in our speech. They play an integral part in
capturing the essence of our individuality, making it imperative to identify
such values in computational systems that mimic human actions. Computational
argumentation is a field that deals with the argumentation capabilities of
humans and can benefit from identifying such values. Motivated by that, we
present an ensemble approach for detecting human values from argument text. Our
ensemble comprises three models: (i) an entailment-based model for determining
the human values based on their descriptions, (ii) a Roberta-based classifier
that predicts the set of human values from an argument, and (iii) a
Roberta-based classifier that predicts a reduced set of human values from an
argument. We
experiment with different ways of combining the models and report our results.
Furthermore, our best combination achieves an overall F1 score of 0.48 on the
main test set.
|
[
{
"created": "Tue, 9 May 2023 10:54:34 GMT",
"version": "v1"
}
] |
2023-05-10
|
[
[
"Saha",
"Sougata",
""
],
[
"Srihari",
"Rohini",
""
]
] |
The subtle human values we acquire through life experiences govern our thoughts and get reflected in our speech. They play an integral part in capturing the essence of our individuality, making it imperative to identify such values in computational systems that mimic human actions. Computational argumentation is a field that deals with the argumentation capabilities of humans and can benefit from identifying such values. Motivated by that, we present an ensemble approach for detecting human values from argument text. Our ensemble comprises three models: (i) an entailment-based model for determining the human values based on their descriptions, (ii) a Roberta-based classifier that predicts the set of human values from an argument, and (iii) a Roberta-based classifier that predicts a reduced set of human values from an argument. We experiment with different ways of combining the models and report our results. Furthermore, our best combination achieves an overall F1 score of 0.48 on the main test set.
|
2111.15301
|
Jaafar Elmirghani
|
Abrar S. Alhazmi, Sanaa H. Mohamed, T. E. H. El-Gorashi, and
Jaafar M. H. Elmirghani
|
Optical Wireless Systems for Spine and Leaf Data Center Downlinks
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The continually growing traffic demands created by advanced technologies in
5G and 6G systems, which offer intensive services such as IoT and virtual
reality applications, have led to significant performance expectations of
data center networks (DCNs). More specifically, DCNs are
expected to meet high bandwidth connectivity, high throughput, low latency, and
high scalability requirements. However, the current wired DCN architectures
introduce large cabling requirements and limit the ability to reconfigure data
centres as they expand. To that end, wireless technologies such as Optical
Wireless Communication (OWC) have been proposed as a viable and cost-effective
solution to meet the aforementioned requirements. This paper proposes the use
of Infrared (IR) OWC systems that employ Wavelength Division Multiplexing (WDM)
to enhance the DCN communication in the downlink direction; i.e. from Access
Points (APs) in the ceiling, connected to spine switches, to receivers attached
to the top of the racks representing leaf switches. The proposed systems
utilize Angle Diversity Transmitters (ADTs) mounted on the room ceiling to
facilitate inter-rack communication. Two different optical receiver types are
considered, namely Angle Diversity Receivers (ADRs) and Wide Field-of-View
Receivers (WFOVR). The simulation (i.e. channel modeling) results show that
our proposed data center links achieve good data rates of up to 15 Gbps.
|
[
{
"created": "Tue, 30 Nov 2021 11:26:24 GMT",
"version": "v1"
}
] |
2021-12-01
|
[
[
"Alhazmi",
"Abrar S.",
""
],
[
"and",
"Sanaa H. Mohamed",
""
],
[
"El-Gorashi",
"T. E. H.",
""
],
[
"Elmirghani",
"Jaafar M. H.",
""
]
] |
The continually growing traffic demands created by advanced technologies in 5G and 6G systems, which offer intensive services such as IoT and virtual reality applications, have led to significant performance expectations of data center networks (DCNs). More specifically, DCNs are expected to meet high bandwidth connectivity, high throughput, low latency, and high scalability requirements. However, the current wired DCN architectures introduce large cabling requirements and limit the ability to reconfigure data centres as they expand. To that end, wireless technologies such as Optical Wireless Communication (OWC) have been proposed as a viable and cost-effective solution to meet the aforementioned requirements. This paper proposes the use of Infrared (IR) OWC systems that employ Wavelength Division Multiplexing (WDM) to enhance the DCN communication in the downlink direction; i.e. from Access Points (APs) in the ceiling, connected to spine switches, to receivers attached to the top of the racks representing leaf switches. The proposed systems utilize Angle Diversity Transmitters (ADTs) mounted on the room ceiling to facilitate inter-rack communication. Two different optical receiver types are considered, namely Angle Diversity Receivers (ADRs) and Wide Field-of-View Receivers (WFOVR). The simulation (i.e. channel modeling) results show that our proposed data center links achieve good data rates of up to 15 Gbps.
|
2302.08950
|
Vinicius Ribeiro
|
Vinicius Ribeiro, Yiteng Huang, Yuan Shangguan, Zhaojun Yang, Li Wan,
Ming Sun
|
Handling the Alignment for Wake Word Detection: A Comparison Between
Alignment-Based, Alignment-Free and Hybrid Approaches
|
Accepted to Interspeech 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Wake word detection exists in most intelligent homes and portable devices. It
offers these devices the ability to "wake up" when summoned at a low cost of
power and computing. This paper focuses on understanding alignment's role in
developing a wake-word system that answers a generic phrase. We discuss three
approaches. The first is alignment-based, where the model is trained with
frame-wise cross-entropy. The second is alignment-free, where the model is
trained with CTC. The third, proposed by us, is a hybrid solution in which the
model is trained with a small set of aligned data and then tuned with a
sizeable unaligned dataset. We compare the three approaches and evaluate the
impact of the different aligned-to-unaligned ratios for hybrid training. Our
results show that the alignment-free system performs better than the
alignment-based for the target operating point, and with a small fraction of
the data (20%), we can train a model that complies with our initial
constraints.
|
[
{
"created": "Fri, 17 Feb 2023 15:33:47 GMT",
"version": "v1"
},
{
"created": "Mon, 22 May 2023 14:44:21 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Jun 2023 15:04:41 GMT",
"version": "v3"
}
] |
2023-06-08
|
[
[
"Ribeiro",
"Vinicius",
""
],
[
"Huang",
"Yiteng",
""
],
[
"Shangguan",
"Yuan",
""
],
[
"Yang",
"Zhaojun",
""
],
[
"Wan",
"Li",
""
],
[
"Sun",
"Ming",
""
]
] |
Wake word detection exists in most intelligent homes and portable devices. It offers these devices the ability to "wake up" when summoned at a low cost of power and computing. This paper focuses on understanding alignment's role in developing a wake-word system that answers a generic phrase. We discuss three approaches. The first is alignment-based, where the model is trained with frame-wise cross-entropy. The second is alignment-free, where the model is trained with CTC. The third, proposed by us, is a hybrid solution in which the model is trained with a small set of aligned data and then tuned with a sizeable unaligned dataset. We compare the three approaches and evaluate the impact of the different aligned-to-unaligned ratios for hybrid training. Our results show that the alignment-free system performs better than the alignment-based for the target operating point, and with a small fraction of the data (20%), we can train a model that complies with our initial constraints.
|
1910.06266
|
Dinesh Verma
|
D. Verma, S. Calo
|
Using AI/ML to gain situational understanding from passive network
observations
|
Presented at AAAI FSS-19: Artificial Intelligence in Government and
Public Sector, Arlington, Virginia, USA
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The data available in the network traffic from any Government building
contains a significant amount of information. An analysis of the traffic can
yield insights and situational understanding about what is happening in the
building. However, the use of traditional network packet inspection, either
deep or shallow, is useful for only a limited understanding of the
environment, with applicability limited to some aspects of network and
security management. If we use AI/ML based techniques to understand the
network traffic, we can gain significant insights which increase our
situational awareness of what is happening in the environment. At IBM, we
have created a system which uses a combination of network domain knowledge
and machine learning techniques to convert network traffic into actionable
insights about the on-premise environment. These insights include
characterization of the communicating devices, discovering unauthorized
devices that may violate policy requirements, identifying hidden components
and vulnerability points, detecting leakage of sensitive information, and
identifying the presence of people and devices. In this paper, we will
describe the overall design of this system, the major use-cases that have
been identified for it, and the lessons learnt when deploying this system
for some of those use-cases.
|
[
{
"created": "Mon, 14 Oct 2019 16:46:33 GMT",
"version": "v1"
}
] |
2019-10-15
|
[
[
"Verma",
"D.",
""
],
[
"Calo",
"S.",
""
]
] |
The data available in the network traffic from any Government building contains a significant amount of information. An analysis of the traffic can yield insights and situational understanding about what is happening in the building. However, the use of traditional network packet inspection, either deep or shallow, is useful for only a limited understanding of the environment, with applicability limited to some aspects of network and security management. If we use AI/ML based techniques to understand the network traffic, we can gain significant insights which increase our situational awareness of what is happening in the environment. At IBM, we have created a system which uses a combination of network domain knowledge and machine learning techniques to convert network traffic into actionable insights about the on-premise environment. These insights include characterization of the communicating devices, discovering unauthorized devices that may violate policy requirements, identifying hidden components and vulnerability points, detecting leakage of sensitive information, and identifying the presence of people and devices. In this paper, we will describe the overall design of this system, the major use-cases that have been identified for it, and the lessons learnt when deploying this system for some of those use-cases.
|
1801.02261
|
Avi Ben-Cohen
|
Avi Ben-Cohen, Eyal Klang, Michal Marianne Amitai, Jacob Goldberger,
Hayit Greenspan
|
Anatomical Data Augmentation For CNN based Pixel-wise Classification
|
To be presented at IEEE ISBI 2018
| null |
10.1109/ISBI.2018.8363762
| null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we propose a method for anatomical data augmentation that is
based on using slices of computed tomography (CT) examinations that are
adjacent to labeled slices as another resource of labeled data for training the
network. The extended labeled data is used to train a U-net network for a
pixel-wise classification into different hepatic lesions and normal liver
tissues. Our dataset contains CT examinations from 140 patients with 333 CT
images annotated by an expert radiologist. We tested our approach and compared
it to the conventional training process. Results indicate superiority of our
method. Using the anatomical data augmentation we achieved an improvement of 3%
in the success rate, 5% in the classification accuracy, and 4% in Dice.
|
[
{
"created": "Sun, 7 Jan 2018 23:00:02 GMT",
"version": "v1"
}
] |
2018-07-24
|
[
[
"Ben-Cohen",
"Avi",
""
],
[
"Klang",
"Eyal",
""
],
[
"Amitai",
"Michal Marianne",
""
],
[
"Goldberger",
"Jacob",
""
],
[
"Greenspan",
"Hayit",
""
]
] |
In this work we propose a method for anatomical data augmentation that is based on using slices of computed tomography (CT) examinations that are adjacent to labeled slices as another resource of labeled data for training the network. The extended labeled data is used to train a U-net network for a pixel-wise classification into different hepatic lesions and normal liver tissues. Our dataset contains CT examinations from 140 patients with 333 CT images annotated by an expert radiologist. We tested our approach and compared it to the conventional training process. Results indicate superiority of our method. Using the anatomical data augmentation we achieved an improvement of 3% in the success rate, 5% in the classification accuracy, and 4% in Dice.
|
2309.05281
|
Shentong Mo
|
Shentong Mo, Weiguo Pian, Yapeng Tian
|
Class-Incremental Grouping Network for Continual Audio-Visual Learning
|
ICCV 2023. arXiv admin note: text overlap with arXiv:2303.17056
| null | null | null |
cs.CV cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual learning is a challenging problem in which models need to be
trained on non-stationary data across sequential tasks for class-incremental
learning. While previous methods have focused on using either regularization or
rehearsal-based frameworks to alleviate catastrophic forgetting in image
classification, they are limited to a single modality and cannot learn compact
class-aware cross-modal representations for continual audio-visual learning. To
address this gap, we propose a novel class-incremental grouping network (CIGN)
that can learn category-wise semantic features to achieve continual
audio-visual learning. Our CIGN leverages learnable audio-visual class tokens
and audio-visual grouping to continually aggregate class-aware features.
Additionally, it utilizes class tokens distillation and continual grouping to
prevent forgetting parameters learned from previous tasks, thereby improving
the model's ability to capture discriminative audio-visual categories. We
conduct extensive experiments on VGGSound-Instruments, VGGSound-100, and
VGG-Sound Sources benchmarks. Our experimental results demonstrate that the
CIGN achieves state-of-the-art audio-visual class-incremental learning
performance. Code is available at https://github.com/stoneMo/CIGN.
|
[
{
"created": "Mon, 11 Sep 2023 07:36:16 GMT",
"version": "v1"
}
] |
2023-09-12
|
[
[
"Mo",
"Shentong",
""
],
[
"Pian",
"Weiguo",
""
],
[
"Tian",
"Yapeng",
""
]
] |
Continual learning is a challenging problem in which models need to be trained on non-stationary data across sequential tasks for class-incremental learning. While previous methods have focused on using either regularization or rehearsal-based frameworks to alleviate catastrophic forgetting in image classification, they are limited to a single modality and cannot learn compact class-aware cross-modal representations for continual audio-visual learning. To address this gap, we propose a novel class-incremental grouping network (CIGN) that can learn category-wise semantic features to achieve continual audio-visual learning. Our CIGN leverages learnable audio-visual class tokens and audio-visual grouping to continually aggregate class-aware features. Additionally, it utilizes class tokens distillation and continual grouping to prevent forgetting parameters learned from previous tasks, thereby improving the model's ability to capture discriminative audio-visual categories. We conduct extensive experiments on VGGSound-Instruments, VGGSound-100, and VGG-Sound Sources benchmarks. Our experimental results demonstrate that the CIGN achieves state-of-the-art audio-visual class-incremental learning performance. Code is available at https://github.com/stoneMo/CIGN.
|
2002.04185
|
Casey Chu
|
Casey Chu, Kentaro Minami, Kenji Fukumizu
|
Smoothness and Stability in GANs
|
ICLR 2020
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative adversarial networks, or GANs, commonly display unstable behavior
during training. In this work, we develop a principled theoretical framework
for understanding the stability of various types of GANs. In particular, we
derive conditions that guarantee eventual stationarity of the generator when it
is trained with gradient descent, conditions that must be satisfied by the
divergence that is minimized by the GAN and the generator's architecture. We
find that existing GAN variants satisfy some, but not all, of these conditions.
Using tools from convex analysis, optimal transport, and reproducing kernels,
we construct a GAN that fulfills these conditions simultaneously. In the
process, we explain and clarify the need for various existing GAN stabilization
techniques, including Lipschitz constraints, gradient penalties, and smooth
activation functions.
|
[
{
"created": "Tue, 11 Feb 2020 03:08:28 GMT",
"version": "v1"
}
] |
2020-02-12
|
[
[
"Chu",
"Casey",
""
],
[
"Minami",
"Kentaro",
""
],
[
"Fukumizu",
"Kenji",
""
]
] |
Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.
|
2307.06304
|
Mostafa Dehghani
|
Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek,
Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert
Geirhos, Ibrahim Alabdulmohsin, Avital Oliver, Piotr Padlewski, Alexey
Gritsenko, Mario Lu\v{c}i\'c, Neil Houlsby
|
Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and
Resolution
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ubiquitous and demonstrably suboptimal choice of resizing images to a
fixed resolution before processing them with computer vision models has not yet
been successfully challenged. However, models such as the Vision Transformer
(ViT) offer flexible sequence-based modeling, and hence varying input sequence
lengths. We take advantage of this with NaViT (Native Resolution ViT) which
uses sequence packing during training to process inputs of arbitrary
resolutions and aspect ratios. Alongside flexible model usage, we demonstrate
improved training efficiency for large-scale supervised and contrastive
image-text pretraining. NaViT can be efficiently transferred to standard tasks
such as image and video classification, object detection, and semantic
segmentation and leads to improved results on robustness and fairness
benchmarks. At inference time, the input resolution flexibility can be used to
smoothly navigate the test-time cost-performance trade-off. We believe that
NaViT marks a departure from the standard, CNN-designed, input and modelling
pipeline used by most computer vision models, and represents a promising
direction for ViTs.
|
[
{
"created": "Wed, 12 Jul 2023 17:01:03 GMT",
"version": "v1"
}
] |
2023-07-13
|
[
[
"Dehghani",
"Mostafa",
""
],
[
"Mustafa",
"Basil",
""
],
[
"Djolonga",
"Josip",
""
],
[
"Heek",
"Jonathan",
""
],
[
"Minderer",
"Matthias",
""
],
[
"Caron",
"Mathilde",
""
],
[
"Steiner",
"Andreas",
""
],
[
"Puigcerver",
"Joan",
""
],
[
"Geirhos",
"Robert",
""
],
[
"Alabdulmohsin",
"Ibrahim",
""
],
[
"Oliver",
"Avital",
""
],
[
"Padlewski",
"Piotr",
""
],
[
"Gritsenko",
"Alexey",
""
],
[
"Lučić",
"Mario",
""
],
[
"Houlsby",
"Neil",
""
]
] |
The ubiquitous and demonstrably suboptimal choice of resizing images to a fixed resolution before processing them with computer vision models has not yet been successfully challenged. However, models such as the Vision Transformer (ViT) offer flexible sequence-based modeling, and hence varying input sequence lengths. We take advantage of this with NaViT (Native Resolution ViT) which uses sequence packing during training to process inputs of arbitrary resolutions and aspect ratios. Alongside flexible model usage, we demonstrate improved training efficiency for large-scale supervised and contrastive image-text pretraining. NaViT can be efficiently transferred to standard tasks such as image and video classification, object detection, and semantic segmentation and leads to improved results on robustness and fairness benchmarks. At inference time, the input resolution flexibility can be used to smoothly navigate the test-time cost-performance trade-off. We believe that NaViT marks a departure from the standard, CNN-designed, input and modelling pipeline used by most computer vision models, and represents a promising direction for ViTs.
|
2208.01150
|
Matthew McDermott
|
Matthew McDermott and Jason Rife
|
Mitigating Shadows in Lidar Scan Matching using Spherical Voxels
| null | null |
10.1109/LRA.2022.3216987
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose an approach to mitigate shadowing errors in Lidar
scan matching, by introducing a preprocessing step based on spherical gridding.
Because the grid aligns with the Lidar beam, it is relatively easy to eliminate
shadow edges which cause systematic errors in Lidar scan matching. As we show
through simulation, our proposed algorithm provides better results than
ground-plane removal, the most common existing strategy for shadow mitigation.
Unlike ground plane removal, our method applies to arbitrary terrains (e.g.
shadows on urban walls, shadows in hilly terrain) while retaining key Lidar
points on the ground that are critical for estimating changes in height, pitch,
and roll. Our preprocessing algorithm can be used with a range of scan-matching
methods; however, for voxel-based scan matching methods, it provides additional
benefits by reducing computation costs and more evenly distributing Lidar
points among voxels.
|
[
{
"created": "Mon, 1 Aug 2022 21:44:51 GMT",
"version": "v1"
}
] |
2022-10-25
|
[
[
"McDermott",
"Matthew",
""
],
[
"Rife",
"Jason",
""
]
] |
In this paper we propose an approach to mitigate shadowing errors in Lidar scan matching, by introducing a preprocessing step based on spherical gridding. Because the grid aligns with the Lidar beam, it is relatively easy to eliminate shadow edges which cause systematic errors in Lidar scan matching. As we show through simulation, our proposed algorithm provides better results than ground-plane removal, the most common existing strategy for shadow mitigation. Unlike ground plane removal, our method applies to arbitrary terrains (e.g. shadows on urban walls, shadows in hilly terrain) while retaining key Lidar points on the ground that are critical for estimating changes in height, pitch, and roll. Our preprocessing algorithm can be used with a range of scan-matching methods; however, for voxel-based scan matching methods, it provides additional benefits by reducing computation costs and more evenly distributing Lidar points among voxels.
|
2101.07622
|
Chang Sun
|
Chang Sun
|
Knowledge Graph for Microdata of Statistics Netherlands
| null | null | null | null |
cs.DL cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Statistics Netherlands (CBS) hosts a huge amount of data not only at the
statistical level but also at the individual level. With the development of
data science technologies, more and more researchers request to conduct their
research using high-quality individual data from CBS (called CBS Microdata)
or by combining them with other data sources. Making good use of these data
for research and scientific purposes can tremendously benefit society as a
whole.
However, CBS Microdata has been collected and maintained in different ways by
different departments in and out of CBS. The representation, quality, and
metadata of the datasets are not sufficiently harmonized. The project converts
the
descriptions of all CBS microdata sets into one knowledge graph with
comprehensive metadata in Dutch and English using text mining and semantic web
technologies. Researchers can easily query the metadata, explore the relations
among multiple datasets, and find the needed variables. For example, if a
researcher searches a dataset about "Age at Death" in the Health and Well-being
category, all information related to this dataset will appear including
keywords and variable names. The "Age at Death" dataset has a keyword,
"Death". This keyword leads to other datasets such as "Date of Death", "Cause
of Death", and "Production statistics Health and welfare" from the Population,
Business, and Health and well-being categories. This will tremendously save
time and costs not only for data requesters but also for data maintainers.
|
[
{
"created": "Tue, 19 Jan 2021 13:54:57 GMT",
"version": "v1"
}
] |
2021-01-20
|
[
[
"Sun",
"Chang",
""
]
] |
Statistics Netherlands (CBS) hosts a huge amount of data not only at the statistical level but also at the individual level. With the development of data science technologies, more and more researchers request to conduct their research using high-quality individual data from CBS (called CBS Microdata) or by combining them with other data sources. Making good use of these data for research and scientific purposes can tremendously benefit society as a whole. However, CBS Microdata has been collected and maintained in different ways by different departments in and out of CBS. The representation, quality, and metadata of the datasets are not sufficiently harmonized. The project converts the descriptions of all CBS microdata sets into one knowledge graph with comprehensive metadata in Dutch and English using text mining and semantic web technologies. Researchers can easily query the metadata, explore the relations among multiple datasets, and find the needed variables. For example, if a researcher searches a dataset about "Age at Death" in the Health and Well-being category, all information related to this dataset will appear including keywords and variable names. The "Age at Death" dataset has a keyword, "Death". This keyword leads to other datasets such as "Date of Death", "Cause of Death", and "Production statistics Health and welfare" from the Population, Business, and Health and well-being categories. This will tremendously save time and costs not only for data requesters but also for data maintainers.
|
2001.03898
|
Yao Zhang
|
Yao Zhang, Daniel Jarrett, Mihaela van der Schaar
|
Stepwise Model Selection for Sequence Prediction via Deep Kernel
Learning
| null |
Proceedings of the 23rd International Conference on Artificial
Intelligence and Statistics (AISTATS) 2020
| null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An essential problem in automated machine learning (AutoML) is that of model
selection. A unique challenge in the sequential setting is the fact that the
optimal model itself may vary over time, depending on the distribution of
features and labels available up to each point in time. In this paper, we
propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of
model selection in this setting. This is accomplished by treating the
performance at each time step as its own black-box function. In order to solve
the resulting multiple black-box function optimization problem jointly and
efficiently, we exploit potential correlations among black-box functions using
deep kernel learning (DKL). To the best of our knowledge, we are the first to
formulate the problem of stepwise model selection (SMS) for sequence
prediction, and to design and demonstrate an efficient joint-learning algorithm
for this purpose. Using multiple real-world datasets, we verify that our
proposed method outperforms both standard BO and multi-objective BO algorithms
on a variety of sequence prediction tasks.
|
[
{
"created": "Sun, 12 Jan 2020 09:42:19 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Feb 2020 13:54:16 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Feb 2020 11:46:09 GMT",
"version": "v3"
}
] |
2020-02-17
|
[
[
"Zhang",
"Yao",
""
],
[
"Jarrett",
"Daniel",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] |
An essential problem in automated machine learning (AutoML) is that of model selection. A unique challenge in the sequential setting is the fact that the optimal model itself may vary over time, depending on the distribution of features and labels available up to each point in time. In this paper, we propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting. This is accomplished by treating the performance at each time step as its own black-box function. In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions using deep kernel learning (DKL). To the best of our knowledge, we are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose. Using multiple real-world datasets, we verify that our proposed method outperforms both standard BO and multi-objective BO algorithms on a variety of sequence prediction tasks.
|
2301.04452
|
Liel Leman
|
Gabriella Chouraqui and Liron Cohen and Gil Einziger and Liel Leman
|
Uncertainty Estimation based on Geometric Separation
|
Submitted to JMLR. arXiv admin note: substantial text overlap with
arXiv:2206.11562
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In machine learning, accurately predicting the probability that a specific
input is correct is crucial for risk management. This process, known as
uncertainty (or confidence) estimation, is particularly important in
mission-critical applications such as autonomous driving. In this work, we put
forward a novel geometric-based approach for improving uncertainty estimations
in machine learning models. Our approach involves using the geometric distance
of the current input from existing training inputs as a signal for estimating
uncertainty, and then calibrating this signal using standard post-hoc
techniques. We demonstrate that our method leads to more accurate uncertainty
estimations than recently proposed approaches through extensive evaluation on a
variety of datasets and models. Additionally, we optimize our approach so that
it can be implemented on large datasets in near real-time applications, making
it suitable for time-sensitive scenarios.
|
[
{
"created": "Wed, 11 Jan 2023 13:19:24 GMT",
"version": "v1"
}
] |
2023-01-12
|
[
[
"Chouraqui",
"Gabriella",
""
],
[
"Cohen",
"Liron",
""
],
[
"Einziger",
"Gil",
""
],
[
"Leman",
"Liel",
""
]
] |
In machine learning, accurately predicting the probability that a specific input is correct is crucial for risk management. This process, known as uncertainty (or confidence) estimation, is particularly important in mission-critical applications such as autonomous driving. In this work, we put forward a novel geometric-based approach for improving uncertainty estimations in machine learning models. Our approach involves using the geometric distance of the current input from existing training inputs as a signal for estimating uncertainty, and then calibrating this signal using standard post-hoc techniques. We demonstrate that our method leads to more accurate uncertainty estimations than recently proposed approaches through extensive evaluation on a variety of datasets and models. Additionally, we optimize our approach so that it can be implemented on large datasets in near real-time applications, making it suitable for time-sensitive scenarios.
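As a rough illustration of the idea (our own minimal sketch; the paper's exact signal and calibrator may differ), the snippet below uses the mean distance to the k nearest training inputs as the raw signal and calibrates it post hoc with logistic regression against whether the classifier was actually correct on held-out data.

```python
# Distance-to-training-data as an uncertainty signal, calibrated post hoc.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_te, y_te, test_size=0.5,
                                                random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
nn = NearestNeighbors(n_neighbors=5).fit(X_tr)   # index of training inputs

def distance_signal(X):
    d, _ = nn.kneighbors(X)          # distances to 5 nearest training points
    return d.mean(axis=1, keepdims=True)

# Calibrate: does the raw distance predict whether the classifier is right?
correct_cal = (clf.predict(X_cal) == y_cal).astype(int)
calibrator = LogisticRegression().fit(distance_signal(X_cal), correct_cal)

conf = calibrator.predict_proba(distance_signal(X_test))[:, 1]
print("mean confidence on correct:  %.3f"
      % conf[clf.predict(X_test) == y_test].mean())
print("mean confidence on mistakes: %.3f"
      % conf[clf.predict(X_test) != y_test].mean())
```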
|
1411.6361
|
Baptiste Wicht
|
Baptiste Wicht, Roberto A. Vitillo, Dehao Chen, David Levinthal
|
Hardware Counted Profile-Guided Optimization
|
10 pages
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Profile-Guided Optimization (PGO) is an excellent means to improve the
performance of a compiled program. Indeed, the execution path data it provides
helps the compiler to generate better code and better cacheline packing.
At the time of this writing, compilers only support instrumentation-based
PGO. This proved effective for optimizing programs. However, few projects use
it, due to its complicated dual-compilation model and its high overhead. Our
solution of sampling Hardware Performance Counters overcomes these drawbacks. In
this paper, we propose a PGO solution for GCC by sampling Last Branch Record
(LBR) events and using debug symbols to recreate source locations of binary
instructions.
By using LBR-Sampling, the generated profiles are very accurate. This
solution achieved an average of 83% of the gains obtained with
instrumentation-based PGO and 93% on C++ benchmarks only. The profiling
overhead is only 1.06% on average whereas instrumentation incurs a 16% overhead
on average.
|
[
{
"created": "Mon, 24 Nov 2014 07:01:31 GMT",
"version": "v1"
}
] |
2014-11-25
|
[
[
"Wicht",
"Baptiste",
""
],
[
"Vitillo",
"Roberto A.",
""
],
[
"Chen",
"Dehao",
""
],
[
"Levinthal",
"David",
""
]
] |
Profile-Guided Optimization (PGO) is an excellent means to improve the performance of a compiled program. Indeed, the execution path data it provides helps the compiler to generate better code and better cacheline packing. At the time of this writing, compilers only support instrumentation-based PGO. This proved effective for optimizing programs. However, few projects use it, due to its complicated dual-compilation model and its high overhead. Our solution of sampling Hardware Performance Counters overcomes these drawbacks. In this paper, we propose a PGO solution for GCC by sampling Last Branch Record (LBR) events and using debug symbols to recreate source locations of binary instructions. By using LBR-Sampling, the generated profiles are very accurate. This solution achieved an average of 83% of the gains obtained with instrumentation-based PGO and 93% on C++ benchmarks only. The profiling overhead is only 1.06% on average whereas instrumentation incurs a 16% overhead on average.
|
2208.09333
|
Pedro Reviriego
|
Pedro Reviriego and Elena Merino-G\'omez
|
Text to Image Generation: Leaving no Language Behind
| null | null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
One of the latest applications of Artificial Intelligence (AI) is to generate
images from natural language descriptions. These generators are now becoming
available and achieve impressive results that have been used, for example, on
the front cover of magazines. As the input to the generators is in the form of a
natural language text, a question that arises immediately is how these models
behave when the input is written in different languages. In this paper we
perform an initial exploration of how the performance of three popular
text-to-image generators depends on the language. The results show that there
is a significant performance degradation when using languages other than
English, especially for languages that are not widely used. This observation
leads us to discuss different alternatives on how text-to-image generators can
be improved so that performance is consistent across different languages. This
is fundamental to ensure that this new technology can be used by non-native
English speakers and to preserve linguistic diversity.
|
[
{
"created": "Fri, 19 Aug 2022 13:24:56 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Nov 2022 10:53:35 GMT",
"version": "v2"
}
] |
2022-11-18
|
[
[
"Reviriego",
"Pedro",
""
],
[
"Merino-Gómez",
"Elena",
""
]
] |
One of the latest applications of Artificial Intelligence (AI) is to generate images from natural language descriptions. These generators are now becoming available and achieve impressive results that have been used, for example, on the front cover of magazines. As the input to the generators is in the form of a natural language text, a question that arises immediately is how these models behave when the input is written in different languages. In this paper we perform an initial exploration of how the performance of three popular text-to-image generators depends on the language. The results show that there is a significant performance degradation when using languages other than English, especially for languages that are not widely used. This observation leads us to discuss different alternatives on how text-to-image generators can be improved so that performance is consistent across different languages. This is fundamental to ensure that this new technology can be used by non-native English speakers and to preserve linguistic diversity.
|
2307.01168
|
Vitor Fortes Rey
|
Vitor Fortes Rey, Dominique Nshimyimana, Paul Lukowicz
|
Don't freeze: Finetune encoders for better Self-Supervised HAR
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently self-supervised learning has been proposed in the field of human
activity recognition as a solution to the labelled data availability problem.
The idea is that by using pretext tasks such as reconstruction or
contrastive predictive coding, useful representations can be learned and then
used for classification. Those approaches follow the pretrain, freeze
and fine-tune procedure. In this paper we will show how a simple change - not
freezing the representation - leads to substantial performance gains across
pretext tasks. The improvement was found in all four investigated datasets and
across all four pretext tasks and is inversely proportional to the amount of
labelled data. Moreover, the effect is present whether the pretext task is
carried out on the Capture24 dataset or directly on unlabelled data of the target
dataset.
|
[
{
"created": "Mon, 3 Jul 2023 17:23:34 GMT",
"version": "v1"
}
] |
2023-07-04
|
[
[
"Rey",
"Vitor Fortes",
""
],
[
"Nshimyimana",
"Dominique",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
Recently self-supervised learning has been proposed in the field of human activity recognition as a solution to the labelled data availability problem. The idea is that by using pretext tasks such as reconstruction or contrastive predictive coding, useful representations can be learned and then used for classification. Those approaches follow the pretrain, freeze and fine-tune procedure. In this paper we will show how a simple change - not freezing the representation - leads to substantial performance gains across pretext tasks. The improvement was found in all four investigated datasets and across all four pretext tasks and is inversely proportional to the amount of labelled data. Moreover, the effect is present whether the pretext task is carried out on the Capture24 dataset or directly on unlabelled data of the target dataset.
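A minimal PyTorch sketch of the comparison being made (the encoder, data, and sizes below are placeholders, not the paper's models): the two runs differ only in whether the pretrained encoder's parameters receive gradients during fine-tuning.

```python
# Frozen vs. finetuned encoder: the single flag below is the paper's "simple change".
import torch
import torch.nn as nn

def make_model():
    encoder = nn.Sequential(nn.Conv1d(3, 16, 5), nn.ReLU(),
                            nn.AdaptiveAvgPool1d(1), nn.Flatten())
    head = nn.Linear(16, 6)          # e.g. 6 activity classes
    return encoder, head

def train(freeze_encoder: bool):
    torch.manual_seed(0)
    encoder, head = make_model()     # assume encoder weights are pretrained
    if freeze_encoder:
        for p in encoder.parameters():
            p.requires_grad = False  # the "pretrain, freeze, fine-tune" baseline
    params = [p for p in list(encoder.parameters()) + list(head.parameters())
              if p.requires_grad]
    opt = torch.optim.Adam(params, lr=1e-3)
    x = torch.randn(64, 3, 100)      # placeholder sensor windows
    y = torch.randint(0, 6, (64,))
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(encoder(x)), y)
        loss.backward()
        opt.step()
    return loss.item()

print("frozen encoder loss:   ", train(freeze_encoder=True))
print("finetuned encoder loss:", train(freeze_encoder=False))
```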
|
1803.02549
|
Hyoyoung Jung
|
Hyoyoung Jung, Jaewook Kang, Tae Seok Lee, Suil Kim, Kiseon Kim
|
An iALM-ICA-based Anti-Jamming DS-CDMA Receiver for LMS Systems
|
IEEE Transactions on Aerospace and Electronic Systems, "accepted"
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a land mobile satellite communication system using spread
spectrum techniques where the uplink is exposed to MT jamming attacks, and the
downlink is corrupted by multi-path fading channels. We propose an
anti-jamming receiver, which exploits inherent low-dimensionality of the
received signal model, by formulating a robust principal component analysis
(Robust PCA)-based recovery problem. Simulation results verify that the
proposed receiver outperforms the conventional receiver for a reasonable rank
of the jamming signal.
|
[
{
"created": "Wed, 7 Mar 2018 07:25:36 GMT",
"version": "v1"
}
] |
2018-03-08
|
[
[
"Jung",
"Hyoyoung",
""
],
[
"Kang",
"Jaewook",
""
],
[
"Lee",
"Tae Seok",
""
],
[
"Kim",
"Suil",
""
],
[
"Kim",
"Kiseon",
""
]
] |
We consider a land mobile satellite communication system using spread spectrum techniques where the uplink is exposed to MT jamming attacks, and the downlink is corrupted by multi-path fading channels. We propose an anti-jamming receiver, which exploits inherent low-dimensionality of the received signal model, by formulating a robust principal component analysis (Robust PCA)-based recovery problem. Simulation results verify that the proposed receiver outperforms the conventional receiver for a reasonable rank of the jamming signal.
|
1908.05243
|
Morteza Banagar
|
Morteza Banagar and Harpreet S. Dhillon
|
Performance Characterization of Canonical Mobility Models in Drone
Cellular Networks
|
Journal submission. A part of this paper will be presented at IEEE
Globecom 2019. It is available at arXiv:1905.00972
| null |
10.1109/TWC.2020.2988633
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we characterize the performance of several canonical mobility
models in a drone cellular network in which drone base stations (DBSs) serve
user equipments (UEs) on the ground. In particular, we consider the following
four mobility models: (i) straight line (SL), (ii) random stop (RS), (iii)
random walk (RW), and (iv) random waypoint (RWP), among which the SL mobility
model is inspired by the simulation models used by the third generation
partnership project (3GPP) for the placement and trajectory of drones, while
the other three are well-known canonical models (or their variants) that offer
a useful balance between realism and tractability. Assuming the
nearest-neighbor association policy, we consider two service models for the
UEs: (i) UE independent model (UIM), and (ii) UE dependent model (UDM). While
the serving DBS follows the same mobility model as the other DBSs in the UIM,
it is assumed to fly towards the UE of interest in the UDM and hover above its
location after reaching there. The main contribution of this paper is a unified
approach to characterize the point process of DBSs for all the mobility and
service models. Using this, we provide exact mathematical expressions for the
average received rate and the session rate as seen by the typical UE. Further,
using tools from calculus of variations, we concretely demonstrate that the
simple SL mobility model provides a lower bound on the performance of other
general mobility models (including the ones in which drones follow curved
trajectories) as long as the movement of each drone in these models is
independent and identically distributed (i.i.d.). To the best of our knowledge,
this is the first work that provides a rigorous analysis of key canonical
mobility models for an infinite drone cellular network and establishes useful
connections between them.
|
[
{
"created": "Wed, 14 Aug 2019 17:08:51 GMT",
"version": "v1"
}
] |
2021-01-26
|
[
[
"Banagar",
"Morteza",
""
],
[
"Dhillon",
"Harpreet S.",
""
]
] |
In this paper, we characterize the performance of several canonical mobility models in a drone cellular network in which drone base stations (DBSs) serve user equipments (UEs) on the ground. In particular, we consider the following four mobility models: (i) straight line (SL), (ii) random stop (RS), (iii) random walk (RW), and (iv) random waypoint (RWP), among which the SL mobility model is inspired by the simulation models used by the third generation partnership project (3GPP) for the placement and trajectory of drones, while the other three are well-known canonical models (or their variants) that offer a useful balance between realism and tractability. Assuming the nearest-neighbor association policy, we consider two service models for the UEs: (i) UE independent model (UIM), and (ii) UE dependent model (UDM). While the serving DBS follows the same mobility model as the other DBSs in the UIM, it is assumed to fly towards the UE of interest in the UDM and hover above its location after reaching there. The main contribution of this paper is a unified approach to characterize the point process of DBSs for all the mobility and service models. Using this, we provide exact mathematical expressions for the average received rate and the session rate as seen by the typical UE. Further, using tools from calculus of variations, we concretely demonstrate that the simple SL mobility model provides a lower bound on the performance of other general mobility models (including the ones in which drones follow curved trajectories) as long as the movement of each drone in these models is independent and identically distributed (i.i.d.). To the best of our knowledge, this is the first work that provides a rigorous analysis of key canonical mobility models for an infinite drone cellular network and establishes useful connections between them.
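To make two of the four models concrete, here is a toy simulation (our own sketch, not the paper's analytical machinery) of straight-line (SL) and random-waypoint (RWP) drone trajectories in a square region, reporting the distance from a ground UE at the origin to the nearest drone after the run.

```python
# Toy SL and RWP mobility: drones move in a 2D box; the UE sits at the origin.
import numpy as np

rng = np.random.default_rng(1)
n_drones, steps, speed, box = 50, 200, 1.0, 100.0

def straight_line():
    pos = rng.uniform(-box, box, (n_drones, 2))
    ang = rng.uniform(0, 2 * np.pi, n_drones)        # fixed random heading
    vel = speed * np.c_[np.cos(ang), np.sin(ang)]
    out = []
    for _ in range(steps):
        pos = pos + vel                              # constant-velocity motion
        out.append(pos.copy())
    return out

def random_waypoint():
    pos = rng.uniform(-box, box, (n_drones, 2))
    dst = rng.uniform(-box, box, (n_drones, 2))      # current waypoints
    out = []
    for _ in range(steps):
        d = dst - pos
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        arrived = dist[:, 0] < speed
        dst[arrived] = rng.uniform(-box, box, (arrived.sum(), 2))  # new waypoint
        pos = pos + speed * d / np.maximum(dist, 1e-9)
        out.append(pos.copy())
    return out

for name, traj in [("SL", straight_line()), ("RWP", random_waypoint())]:
    d_final = np.linalg.norm(traj[-1], axis=1)       # distances to UE at origin
    print(f"{name}: nearest-drone distance after {steps} steps = {d_final.min():.1f}")
```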
|
1501.05390
|
Victor Pan
|
Victor Y. Pan and Liang Zhao
|
Real Polynomial Root-finding by Means of Matrix and Polynomial
Iterations
|
24 pages 12 tables. arXiv admin note: substantial text overlap with
arXiv:1404.6817
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Univariate polynomial root-finding is a classical subject, still important
for modern computing. Frequently one seeks just the real roots of a polynomial
with real coefficients. They can be approximated at a low computational cost if
the polynomial has no nonreal roots, but for high degree polynomials, nonreal
roots are typically much more numerous than the real ones. The challenge has
been known for a long time, and the subject has been intensively studied.
Nevertheless, we produce some novel ideas and techniques and obtain dramatic
acceleration of the known algorithms. In order to achieve our progress we
exploit the correlation between the computations with matrices and polynomials,
randomized matrix computations, and complex plane geometry, extend the
techniques of the matrix sign iterations, and use the structure of the
companion matrix of the input polynomial. The results of our extensive tests
with benchmark polynomials and random matrices are quite encouraging. In
particular in our tests the number of iterations required for convergence of
our algorithms grew very slowly (if at all) as we increased the degree of the
univariate input polynomials and the dimension of the input matrices from 64 to
1024.
|
[
{
"created": "Thu, 22 Jan 2015 04:30:09 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Aug 2015 04:01:28 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Apr 2017 15:56:28 GMT",
"version": "v3"
}
] |
2017-04-14
|
[
[
"Pan",
"Victor Y.",
""
],
[
"Zhao",
"Liang",
""
]
] |
Univariate polynomial root-finding is a classical subject, still important for modern computing. Frequently one seeks just the real roots of a polynomial with real coefficients. They can be approximated at a low computational cost if the polynomial has no nonreal roots, but for high degree polynomials, nonreal roots are typically much more numerous than the real ones. The challenge has been known for a long time, and the subject has been intensively studied. Nevertheless, we produce some novel ideas and techniques and obtain dramatic acceleration of the known algorithms. In order to achieve our progress we exploit the correlation between the computations with matrices and polynomials, randomized matrix computations, and complex plane geometry, extend the techniques of the matrix sign iterations, and use the structure of the companion matrix of the input polynomial. The results of our extensive tests with benchmark polynomials and random matrices are quite encouraging. In particular in our tests the number of iterations required for convergence of our algorithms grew very slowly (if at all) as we increased the degree of the univariate input polynomials and the dimension of the input matrices from 64 to 1024.
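The companion-matrix connection mentioned above is easy to demonstrate (this illustrates the classical fact the paper builds on, not its accelerated algorithm): the roots of a monic polynomial are the eigenvalues of its companion matrix, from which the real roots can be filtered.

```python
# Real roots of a polynomial via eigenvalues of its companion matrix.
import numpy as np

def companion(coeffs):
    """Companion matrix of a polynomial given highest-degree-first coefficients."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                       # make monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
    C[:, -1] = -c[:0:-1]               # last column from the coefficients
    return C

# p(x) = (x - 1)(x - 2)(x^2 + 1): real roots 1 and 2, nonreal roots +-i.
p = np.polymul(np.polymul([1, -1], [1, -2]), [1, 0, 1])
eigs = np.linalg.eigvals(companion(p))
real_roots = np.sort(eigs[np.abs(eigs.imag) < 1e-9].real)
print(real_roots)                      # -> [1. 2.]
```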
|
2108.12238
|
Ling Chen
|
Ling Chen, Jiahui Xu, Binqing Wu, Yuntao Qian, Zhenhong Du, Yansheng
Li, Yongjun Zhang
|
Group-Aware Graph Neural Network for Nationwide City Air Quality
Forecasting
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of air pollution threatens public health. Air quality forecasting
can provide the air quality index hours or even days later, which can help the
public to prevent air pollution in advance. Previous works focus on citywide
air quality forecasting and cannot solve nationwide city forecasting problem,
whose difficulties lie in capturing the latent dependencies between
geographically distant but highly correlated cities. In this paper, we propose
the group-aware graph neural network (GAGNN), a hierarchical model for
nationwide city air quality forecasting. The model constructs a city graph and
a city group graph to model the spatial and latent dependencies between cities,
respectively. GAGNN introduces a differentiable grouping network to discover the
latent dependencies among cities and generate city groups. Based on the
generated city groups, a group correlation encoding module is introduced to
learn the correlations between them, which can effectively capture the
dependencies between city groups. After the graph construction, GAGNN
implements a message passing mechanism to model the dependencies between
cities and city groups. The evaluation experiments on a Chinese city air quality dataset
indicate that our GAGNN outperforms existing forecasting models.
|
[
{
"created": "Fri, 27 Aug 2021 12:37:56 GMT",
"version": "v1"
}
] |
2021-08-30
|
[
[
"Chen",
"Ling",
""
],
[
"Xu",
"Jiahui",
""
],
[
"Wu",
"Binqing",
""
],
[
"Qian",
"Yuntao",
""
],
[
"Du",
"Zhenhong",
""
],
[
"Li",
"Yansheng",
""
],
[
"Zhang",
"Yongjun",
""
]
] |
The problem of air pollution threatens public health. Air quality forecasting can provide the air quality index hours or even days later, which can help the public to prevent air pollution in advance. Previous works focus on citywide air quality forecasting and cannot solve the nationwide city forecasting problem, whose difficulties lie in capturing the latent dependencies between geographically distant but highly correlated cities. In this paper, we propose the group-aware graph neural network (GAGNN), a hierarchical model for nationwide city air quality forecasting. The model constructs a city graph and a city group graph to model the spatial and latent dependencies between cities, respectively. GAGNN introduces a differentiable grouping network to discover the latent dependencies among cities and generate city groups. Based on the generated city groups, a group correlation encoding module is introduced to learn the correlations between them, which can effectively capture the dependencies between city groups. After the graph construction, GAGNN implements a message passing mechanism to model the dependencies between cities and city groups. The evaluation experiments on a Chinese city air quality dataset indicate that our GAGNN outperforms existing forecasting models.
|
1506.04352
|
Zhe Wang
|
Zhe Wang, Kai Hu, Baolin Yin
|
Internet Traffic Matrix Structural Analysis Based on Multi-Resolution
RPCA
|
18 pages, in Chinese. This unpublished manuscript is an improvement
on our previous papers in references [12] and [13]
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet traffic matrix plays a significant role in network operation and
management; therefore, the structural analysis of the traffic matrix, which
decomposes different traffic components of this high-dimensional traffic
dataset, is quite valuable to some network applications. In this study, based
on the Robust Principal Component Analysis (RPCA) theory, a novel traffic
matrix structural analysis approach named Multi-Resolution RPCA is created,
which utilizes the wavelet multi-resolution analysis. Firstly, we build the
Multi-Resolution Traffic Matrix Decomposition Model (MR-TMDM), which
characterizes the smoothness of the deterministic traffic by its wavelet
coefficients. Secondly, based on this model, we improve the Stable Principal
Component Pursuit (SPCP), propose a new traffic matrix decomposition method
named SPCP-MRC with Multi-Resolution Constraints, and design its numerical
algorithm. Specifically, we give and prove the closed-form solution to a
sub-problem in the algorithm. Lastly, we evaluate different traffic
decomposition methods by multiple groups of simulated traffic matrices
containing different kinds of anomalies and distinct noise levels. It is
demonstrated that SPCP-MRC, compared with other methods, achieves more accurate
and more reasonable traffic decompositions.
|
[
{
"created": "Sun, 14 Jun 2015 05:12:56 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jun 2015 06:43:46 GMT",
"version": "v2"
}
] |
2015-06-29
|
[
[
"Wang",
"Zhe",
""
],
[
"Hu",
"Kai",
""
],
[
"Yin",
"Baolin",
""
]
] |
The Internet traffic matrix plays a significant role in network operation and management; therefore, the structural analysis of the traffic matrix, which decomposes different traffic components of this high-dimensional traffic dataset, is quite valuable to some network applications. In this study, based on the Robust Principal Component Analysis (RPCA) theory, a novel traffic matrix structural analysis approach named Multi-Resolution RPCA is created, which utilizes the wavelet multi-resolution analysis. Firstly, we build the Multi-Resolution Traffic Matrix Decomposition Model (MR-TMDM), which characterizes the smoothness of the deterministic traffic by its wavelet coefficients. Secondly, based on this model, we improve the Stable Principal Component Pursuit (SPCP), propose a new traffic matrix decomposition method named SPCP-MRC with Multi-Resolution Constraints, and design its numerical algorithm. Specifically, we give and prove the closed-form solution to a sub-problem in the algorithm. Lastly, we evaluate different traffic decomposition methods by multiple groups of simulated traffic matrices containing different kinds of anomalies and distinct noise levels. It is demonstrated that SPCP-MRC, compared with other methods, achieves more accurate and more reasonable traffic decompositions.
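For context, the decomposition at the heart of such methods can be sketched with the standard principal component pursuit baseline (not the proposed SPCP-MRC): split a traffic matrix M into a low-rank part L (routine traffic) and a sparse part S (anomalies) by minimizing ||L||_* + lambda*||S||_1 subject to L + S = M, via a simple ADMM loop.

```python
# Generic Robust PCA via principal component pursuit with ADMM.
import numpy as np

def shrink(X, tau):                      # soft-thresholding (updates S)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                         # singular value thresholding (updates L)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, lam=None, mu=None, iters=300):
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # standard default
    mu = mu or (m * n) / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)         # dual ascent on L + S = M
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 80))
sparse = (rng.random((60, 80)) < 0.05) * 10.0      # 5% large "anomalies"
L, S = rpca(low_rank + sparse)
print("low-rank relative error:",
      np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))
```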
|
2204.13952
|
Xiaoqing Fan
|
Xiaoqing Fan, Ge Li, Dingquan Li, Yurui Ren, Wei Gao, Thomas H. Li
|
Deep Geometry Post-Processing for Decompressed Point Clouds
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Point cloud compression plays a crucial role in reducing the huge cost of
data storage and transmission. However, distortions can be introduced into the
decompressed point clouds due to quantization. In this paper, we propose a
novel learning-based post-processing method to enhance the decompressed point
clouds. Specifically, a voxelized point cloud is first divided into small
cubes. Then, a 3D convolutional network is proposed to predict the occupancy
probability for each location of a cube. We leverage both local and global
contexts by generating multi-scale probabilities. These probabilities are
progressively summed to predict the results in a coarse-to-fine manner.
Finally, we obtain the geometry-refined point clouds based on the predicted
probabilities. Different from previous methods, we deal with decompressed point
clouds with a huge variety of distortions using a single model. Experimental
results show that the proposed method can significantly improve the quality of
the decompressed point clouds, achieving 9.30dB BDPSNR gain on three
representative datasets on average.
|
[
{
"created": "Fri, 29 Apr 2022 08:57:03 GMT",
"version": "v1"
}
] |
2022-05-02
|
[
[
"Fan",
"Xiaoqing",
""
],
[
"Li",
"Ge",
""
],
[
"Li",
"Dingquan",
""
],
[
"Ren",
"Yurui",
""
],
[
"Gao",
"Wei",
""
],
[
"Li",
"Thomas H.",
""
]
] |
Point cloud compression plays a crucial role in reducing the huge cost of data storage and transmission. However, distortions can be introduced into the decompressed point clouds due to quantization. In this paper, we propose a novel learning-based post-processing method to enhance the decompressed point clouds. Specifically, a voxelized point cloud is first divided into small cubes. Then, a 3D convolutional network is proposed to predict the occupancy probability for each location of a cube. We leverage both local and global contexts by generating multi-scale probabilities. These probabilities are progressively summed to predict the results in a coarse-to-fine manner. Finally, we obtain the geometry-refined point clouds based on the predicted probabilities. Different from previous methods, we deal with decompressed point clouds with a huge variety of distortions using a single model. Experimental results show that the proposed method can significantly improve the quality of the decompressed point clouds, achieving 9.30dB BDPSNR gain on three representative datasets on average.
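A minimal PyTorch sketch of the core component described above (layer sizes and depths are placeholders, not the paper's network): a small 3D CNN maps a voxelized cube from a decompressed point cloud to per-voxel occupancy probabilities, which are then thresholded to obtain the refined geometry.

```python
# Per-voxel occupancy prediction for a voxelized point-cloud cube.
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),   # logits per voxel
        )

    def forward(self, cube):                  # cube: (B, 1, D, H, W) in {0, 1}
        return torch.sigmoid(self.net(cube))  # occupancy probability

model = OccupancyNet()
noisy_cube = (torch.rand(2, 1, 32, 32, 32) < 0.1).float()  # placeholder input
prob = model(noisy_cube)
refined = (prob > 0.5).float()                # keep voxels deemed occupied
print(prob.shape, refined.sum().item())
```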
|
1701.06828
|
Nidhi Rastogi
|
Nidhi Rastogi, Marie Joan Kristine Gloria and James Hendler
|
Security and Privacy of performing Data Analytics in the cloud - A
three-way handshake of Technology, Policy, and Management
|
28 pages, 3 figures, Journal of Information Privacy
|
Journal of Information Policy 5 (2015): 129-154
| null | null |
cs.DC cs.CY cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The cloud platform came into existence primarily to accelerate IT delivery
and to promote innovation. To this point, it has largely lived up to the
expectations of technologists, businesses and customers. The service aspect of
this technology has paved the road for a faster set up of infrastructure and
related goals for both startups and established organizations. This has further
led to quicker delivery of many user-friendly applications to the market while
proving to be a commercially viable option to companies with limited resources.
On the technology front, the creation and adoption of this ecosystem has
allowed easy collection of massive data from various sources at one place,
where the place is sometimes referred to as just the cloud. Efficient data mining
can be performed on raw data to extract potentially useful information, which
was not possible at this scale before. Targeted advertising is a common example
that can help businesses. Despite these promising offerings, concerns around
security and privacy of user information suppressed wider acceptance and an
all-encompassing deployment of the cloud platform. In this paper, we discuss
security and privacy concerns that occur due to data changing hands between a
cloud service provider (CSP) and the primary cloud user - the data collector -
from the content generator. We offer solutions that encompass technology,
policy and sound management of the cloud service, asserting that this approach
has the potential to provide a holistic solution.
|
[
{
"created": "Tue, 24 Jan 2017 11:59:28 GMT",
"version": "v1"
}
] |
2017-01-25
|
[
[
"Rastogi",
"Nidhi",
""
],
[
"Gloria",
"Marie Joan Kristine",
""
],
[
"Hendler",
"James",
""
]
] |
The cloud platform came into existence primarily to accelerate IT delivery and to promote innovation. To this point, it has largely lived up to the expectations of technologists, businesses and customers. The service aspect of this technology has paved the road for a faster set up of infrastructure and related goals for both startups and established organizations. This has further led to quicker delivery of many user-friendly applications to the market while proving to be a commercially viable option to companies with limited resources. On the technology front, the creation and adoption of this ecosystem has allowed easy collection of massive data from various sources at one place, where the place is sometimes referred to as just the cloud. Efficient data mining can be performed on raw data to extract potentially useful information, which was not possible at this scale before. Targeted advertising is a common example that can help businesses. Despite these promising offerings, concerns around security and privacy of user information suppressed wider acceptance and an all-encompassing deployment of the cloud platform. In this paper, we discuss security and privacy concerns that occur due to data changing hands between a cloud service provider (CSP) and the primary cloud user - the data collector - from the content generator. We offer solutions that encompass technology, policy and sound management of the cloud service, asserting that this approach has the potential to provide a holistic solution.
|
1312.1121
|
Jan Palczewski
|
Anna Palczewska and Jan Palczewski and Richard Marchese Robinson and
Daniel Neagu
|
Interpreting random forest classification models using a feature
contribution method
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Model interpretation is one of the key aspects of the model evaluation
process. The explanation of the relationship between model variables and
outputs is relatively easy for statistical models, such as linear regressions,
thanks to the availability of model parameters and their statistical
significance. For "black box" models, such as random forest, this information
is hidden inside the model structure. This work presents an approach for
computing feature contributions for random forest classification models. It
allows for the determination of the influence of each variable on the model
prediction for an individual instance. By analysing feature contributions for a
training dataset, the most significant variables can be determined and their
typical contribution towards predictions made for individual classes, i.e.,
class-specific feature contribution "patterns", are discovered. These patterns
represent a standard behaviour of the model and allow for an additional
assessment of the model reliability for new data. Interpretation of feature
contributions for two UCI benchmark datasets shows the potential of the
proposed methodology. The robustness of results is demonstrated through an
extensive analysis of feature contributions calculated for a large number of
generated random forest models.
|
[
{
"created": "Wed, 4 Dec 2013 11:57:53 GMT",
"version": "v1"
}
] |
2013-12-05
|
[
[
"Palczewska",
"Anna",
""
],
[
"Palczewski",
"Jan",
""
],
[
"Robinson",
"Richard Marchese",
""
],
[
"Neagu",
"Daniel",
""
]
] |
Model interpretation is one of the key aspects of the model evaluation process. The explanation of the relationship between model variables and outputs is relatively easy for statistical models, such as linear regressions, thanks to the availability of model parameters and their statistical significance. For "black box" models, such as random forest, this information is hidden inside the model structure. This work presents an approach for computing feature contributions for random forest classification models. It allows for the determination of the influence of each variable on the model prediction for an individual instance. By analysing feature contributions for a training dataset, the most significant variables can be determined and their typical contribution towards predictions made for individual classes, i.e., class-specific feature contribution "patterns", are discovered. These patterns represent a standard behaviour of the model and allow for an additional assessment of the model reliability for new data. Interpretation of feature contributions for two UCI benchmark datasets shows the potential of the proposed methodology. The robustness of results is demonstrated through an extensive analysis of feature contributions calculated for a large number of generated random forest models.
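The mechanism can be sketched for scikit-learn forests as follows (a minimal re-implementation of the general idea, not the authors' code): walking each tree's root-to-leaf path for one instance, the change in the node's class distribution at every split is credited to the feature split on, and the per-tree results are averaged over the forest; the bias plus the summed contributions then reconstructs the forest's predicted probabilities.

```python
# Per-instance feature contributions for a random forest classifier.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def tree_contributions(tree, x):
    t = tree.tree_
    contrib = np.zeros((x.shape[0], t.value.shape[2]))     # (features, classes)
    value = lambda n: t.value[n, 0] / t.value[n, 0].sum()  # class proportions
    node = 0
    while t.children_left[node] != -1:        # descend until a leaf
        f = t.feature[node]
        nxt = (t.children_left[node] if x[f] <= t.threshold[node]
               else t.children_right[node])
        contrib[f] += value(nxt) - value(node)  # credit the split feature
        node = nxt
    return value(0), contrib                  # (bias, contributions)

x = X[0]
bias = np.zeros(3); contrib = np.zeros((4, 3))
for est in rf.estimators_:                    # average over the forest
    b, c = tree_contributions(est, x)
    bias += b / len(rf.estimators_)
    contrib += c / len(rf.estimators_)

# bias + summed contributions reconstructs the forest's predicted probabilities
print(np.allclose(bias + contrib.sum(axis=0), rf.predict_proba([x])[0]))
```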
|
1812.11346
|
Fotis Savva
|
Fotis Savva, Christos Anagnostopoulos, Peter Triantafillou
|
Explaining Aggregates for Exploratory Analytics
|
13 pages
| null |
10.1109/BigData.2018.8621953
| null |
cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analysts wishing to explore multivariate data spaces typically pose queries
involving selection operators, i.e., range or radius queries, which define data
subspaces of possible interest and then use aggregation functions, the results
of which determine their exploratory analytics interests. However, such
aggregate query (AQ) results are simple scalars and as such, convey limited
information about the queried subspaces for exploratory analysis. We address
this shortcoming, aiding analysts to explore and understand data subspaces, by
contributing a novel explanation mechanism coined XAXA: eXplaining Aggregates
for eXploratory Analytics. XAXA's novel AQ explanations are represented using
functions obtained by a three-fold joint optimization problem. Explanations
assume the form of a set of parametric piecewise-linear functions acquired
through a statistical learning model. A key feature of the proposed solution is
that model training is performed by only monitoring AQs and their answers
on-line. In XAXA, explanations for future AQs can be computed without any
database (DB) access and can be used to further explore the queried data
subspaces, without issuing any more queries to the DB. We evaluate the
explanation accuracy and efficiency of XAXA through theoretically grounded
metrics over real-world and synthetic datasets and query workloads.
|
[
{
"created": "Sat, 29 Dec 2018 11:43:32 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Mar 2020 17:04:57 GMT",
"version": "v2"
}
] |
2020-03-13
|
[
[
"Savva",
"Fotis",
""
],
[
"Anagnostopoulos",
"Christos",
""
],
[
"Triantafillou",
"Peter",
""
]
] |
Analysts wishing to explore multivariate data spaces typically pose queries involving selection operators, i.e., range or radius queries, which define data subspaces of possible interest and then use aggregation functions, the results of which determine their exploratory analytics interests. However, such aggregate query (AQ) results are simple scalars and as such, convey limited information about the queried subspaces for exploratory analysis. We address this shortcoming, aiding analysts to explore and understand data subspaces, by contributing a novel explanation mechanism coined XAXA: eXplaining Aggregates for eXploratory Analytics. XAXA's novel AQ explanations are represented using functions obtained by a three-fold joint optimization problem. Explanations assume the form of a set of parametric piecewise-linear functions acquired through a statistical learning model. A key feature of the proposed solution is that model training is performed by only monitoring AQs and their answers on-line. In XAXA, explanations for future AQs can be computed without any database (DB) access and can be used to further explore the queried data subspaces, without issuing any more queries to the DB. We evaluate the explanation accuracy and efficiency of XAXA through theoretically grounded metrics over real-world and synthetic datasets and query workloads.
|
2007.09206
|
Daniel Garijo
|
Daniel Garijo and Maximiliano Osorio
|
OBA: An Ontology-Based Framework for Creating REST APIs for Knowledge
Graphs
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, Semantic Web technologies have been increasingly adopted by
researchers, industry and public institutions to describe and link data on the
Web, create web annotations and consume large knowledge graphs like Wikidata
and DBPedia. However, there is still a knowledge gap between ontology
engineers, who design, populate and create knowledge graphs; and web
developers, who need to understand, access and query these knowledge graphs but
are not familiar with ontologies, RDF or SPARQL. In this paper we describe the
Ontology-Based APIs framework (OBA), our approach to automatically create REST
APIs from ontologies while following RESTful API best practices. Given an
ontology (or ontology network) OBA uses standard technologies familiar to web
developers (OpenAPI Specification, JSON) and combines them with W3C standards
(OWL, JSON-LD frames and SPARQL) to create maintainable APIs with
documentation, unit tests, automated validation of resources and clients (in
Python, Javascript, etc.) for non-Semantic Web experts to access the contents
of a target knowledge graph. We showcase OBA with three examples that
illustrate the capabilities of the framework for different ontologies.
|
[
{
"created": "Fri, 17 Jul 2020 19:46:18 GMT",
"version": "v1"
}
] |
2020-07-21
|
[
[
"Garijo",
"Daniel",
""
],
[
"Osorio",
"Maximiliano",
""
]
] |
In recent years, Semantic Web technologies have been increasingly adopted by researchers, industry and public institutions to describe and link data on the Web, create web annotations and consume large knowledge graphs like Wikidata and DBPedia. However, there is still a knowledge gap between ontology engineers, who design, populate and create knowledge graphs; and web developers, who need to understand, access and query these knowledge graphs but are not familiar with ontologies, RDF or SPARQL. In this paper we describe the Ontology-Based APIs framework (OBA), our approach to automatically create REST APIs from ontologies while following RESTful API best practices. Given an ontology (or ontology network) OBA uses standard technologies familiar to web developers (OpenAPI Specification, JSON) and combines them with W3C standards (OWL, JSON-LD frames and SPARQL) to create maintainable APIs with documentation, unit tests, automated validation of resources and clients (in Python, Javascript, etc.) for non-Semantic Web experts to access the contents of a target knowledge graph. We showcase OBA with three examples that illustrate the capabilities of the framework for different ontologies.
|
2210.10637
|
Kai Sun
|
Kai Sun
|
Digital Asset Valuation: A Study on Domain Names, Email Addresses, and
NFTs
| null | null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing works on valuing digital assets on the Internet typically focus on a
single asset class. To promote the development of automated valuation
techniques, preferably those that are generally applicable to multiple asset
classes, we construct DASH, the first Digital Asset Sales History dataset that
features multiple digital asset classes spanning from classical to
blockchain-based ones. Consisting of 280K transactions of domain names
(DASH_DN), email addresses (DASH_EA), and non-fungible token (NFT)-based
identifiers (DASH_NFT), such as Ethereum Name Service names, DASH advances the
field in several aspects: the subsets DASH_DN, DASH_EA, and DASH_NFT are the
largest freely accessible domain name transaction dataset, the only publicly
available email address transaction dataset, and the first NFT transaction
dataset that focuses on identifiers, respectively.
We build strong conventional feature-based models as the baselines for DASH.
We next explore deep learning models based on fine-tuning pre-trained language
models, which have not yet been explored for digital asset valuation in the
previous literature. We find that the vanilla fine-tuned model already performs
reasonably well, outperforming all but the best-performing baselines. We
further propose improvements to make the model more aware of the time
sensitivity of transactions and the popularity of assets. Experimental results
show that our improved model consistently outperforms all the other models
across all asset classes on DASH.
|
[
{
"created": "Thu, 6 Oct 2022 12:59:06 GMT",
"version": "v1"
}
] |
2022-10-20
|
[
[
"Sun",
"Kai",
""
]
] |
Existing works on valuing digital assets on the Internet typically focus on a single asset class. To promote the development of automated valuation techniques, preferably those that are generally applicable to multiple asset classes, we construct DASH, the first Digital Asset Sales History dataset that features multiple digital asset classes spanning from classical to blockchain-based ones. Consisting of 280K transactions of domain names (DASH_DN), email addresses (DASH_EA), and non-fungible token (NFT)-based identifiers (DASH_NFT), such as Ethereum Name Service names, DASH advances the field in several aspects: the subsets DASH_DN, DASH_EA, and DASH_NFT are the largest freely accessible domain name transaction dataset, the only publicly available email address transaction dataset, and the first NFT transaction dataset that focuses on identifiers, respectively. We build strong conventional feature-based models as the baselines for DASH. We next explore deep learning models based on fine-tuning pre-trained language models, which have not yet been explored for digital asset valuation in the previous literature. We find that the vanilla fine-tuned model already performs reasonably well, outperforming all but the best-performing baselines. We further propose improvements to make the model more aware of the time sensitivity of transactions and the popularity of assets. Experimental results show that our improved model consistently outperforms all the other models across all asset classes on DASH.
|
2008.03124
|
Md Obaidul Hossen
|
Md Obaidul Hossen, Yang Zhang, Hesam Fathi Moghadam, Yue Zhang,
Michael Dayringer, Muhannad S Bakir
|
Design Space Exploration of Power Delivery For Advanced Packaging
Technologies
| null | null | null | null |
cs.AR eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a design space exploration of power delivery networks is
performed for multi-chip 2.5-D and 3-D IC technologies. The focus of the paper
is the effective placement of the voltage regulator modules (VRMs) for power
supply noise (PSN) suppression. Multiple on-package VRM configurations have
been analyzed and compared. Additionally, 3D IC chip-on-VRM and
backside-of-the-package VRM configurations are studied. From the PSN
perspective, the 3D IC chip-on-VRM case suppresses the PSN the most even with
high current density hotspots. The paper also studies the impact of different
parameters such as VRM-chip distance on the package, on-chip decoupling
capacitor density, etc. on the PSN.
|
[
{
"created": "Fri, 10 Jul 2020 17:56:02 GMT",
"version": "v1"
}
] |
2020-08-10
|
[
[
"Hossen",
"Md Obaidul",
""
],
[
"Zhang",
"Yang",
""
],
[
"Moghadam",
"Hesam Fathi",
""
],
[
"Zhang",
"Yue",
""
],
[
"Dayringer",
"Michael",
""
],
[
"Bakir",
"Muhannad S",
""
]
] |
In this paper, a design space exploration of power delivery networks is performed for multi-chip 2.5-D and 3-D IC technologies. The focus of the paper is the effective placement of the voltage regulator modules (VRMs) for power supply noise (PSN) suppression. Multiple on-package VRM configurations have been analyzed and compared. Additionally, 3D IC chip-on-VRM and backside-of-the-package VRM configurations are studied. From the PSN perspective, the 3D IC chip-on-VRM case suppresses the PSN the most even with high current density hotspots. The paper also studies the impact of different parameters such as VRM-chip distance on the package, on-chip decoupling capacitor density, etc. on the PSN.
|
2405.08645
|
Boqi Chen
|
Boqi Chen, Krist\'of Marussy, Oszk\'ar Semer\'ath, Gunter Mussbacher,
D\'aniel Varr\'o
|
Certifying Robustness of Graph Convolutional Networks for Node
Perturbation with Polyhedra Abstract Interpretation
| null | null | null | null |
cs.LG cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Graph convolutional neural networks (GCNs) are powerful tools for learning
graph-based knowledge representations from training data. However, they are
vulnerable to small perturbations in the input graph, which makes them
susceptible to input faults or adversarial attacks. This poses a significant
problem for GCNs intended to be used in critical applications, which need to
provide certifiably robust services even in the presence of adversarial
perturbations. We propose an improved GCN robustness certification technique
for node classification in the presence of node feature perturbations. We
introduce a novel polyhedra-based abstract interpretation approach to tackle
specific challenges of graph data and provide tight upper and lower bounds for
the robustness of the GCN. Experiments show that our approach simultaneously
improves the tightness of robustness bounds as well as the runtime performance
of certification. Moreover, our method can be used during training to further
improve the robustness of GCNs.
|
[
{
"created": "Tue, 14 May 2024 14:21:55 GMT",
"version": "v1"
}
] |
2024-05-15
|
[
[
"Chen",
"Boqi",
""
],
[
"Marussy",
"Kristóf",
""
],
[
"Semeráth",
"Oszkár",
""
],
[
"Mussbacher",
"Gunter",
""
],
[
"Varró",
"Dániel",
""
]
] |
Graph convolutional neural networks (GCNs) are powerful tools for learning graph-based knowledge representations from training data. However, they are vulnerable to small perturbations in the input graph, which makes them susceptible to input faults or adversarial attacks. This poses a significant problem for GCNs intended to be used in critical applications, which need to provide certifiably robust services even in the presence of adversarial perturbations. We propose an improved GCN robustness certification technique for node classification in the presence of node feature perturbations. We introduce a novel polyhedra-based abstract interpretation approach to tackle specific challenges of graph data and provide tight upper and lower bounds for the robustness of the GCN. Experiments show that our approach simultaneously improves the tightness of robustness bounds as well as the runtime performance of certification. Moreover, our method can be used during training to further improve the robustness of GCNs.
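To show the flavor of such certification (using plain interval arithmetic, a coarser abstraction than the paper's polyhedra domain), the sketch below propagates elementwise feature bounds through a single linear GCN layer H = A_hat X W and checks whether the worst-case class margin of each node can flip under an L-infinity perturbation of the features.

```python
# Interval-bound certification of one linear GCN layer against feature noise.
import numpy as np

rng = np.random.default_rng(0)
A_hat = np.array([[.5, .5, 0], [1/3, 1/3, 1/3], [0, .5, .5]])  # normalized adj
X = rng.normal(size=(3, 4))          # node features
W = rng.normal(size=(4, 2))          # layer weights, 2 classes
eps = 0.1                            # L-inf feature perturbation budget

def interval_matmul(lo, hi, M):
    # Bounds of (lo..hi) @ M, splitting M by sign.
    pos, neg = np.maximum(M, 0), np.minimum(M, 0)
    return lo @ pos + hi @ neg, hi @ pos + lo @ neg

lo, hi = X - eps, X + eps
lo, hi = interval_matmul(lo, hi, W)          # bounds on X @ W
lo, hi = A_hat @ lo, A_hat @ hi              # A_hat >= 0 preserves the bounds
for v in range(3):
    pred = np.argmax(A_hat @ X @ W, axis=1)[v]
    other = 1 - pred
    margin_lo = lo[v, pred] - hi[v, other]   # sound worst-case class margin
    print(f"node {v}: certified robust = {margin_lo > 0}")
```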
|
2406.13629
|
Zhepei Wei
|
Zhepei Wei, Wei-Lin Chen, Yu Meng
|
InstructRAG: Instructing Retrieval-Augmented Generation with Explicit
Denoising
|
Code: https://github.com/weizhepei/InstructRAG
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Retrieval-augmented generation (RAG) has shown promising potential to enhance
the accuracy and factuality of language models (LMs). However, imperfect
retrievers or noisy corpora can introduce misleading or even erroneous
information to the retrieved contents, posing a significant challenge to the
generation quality. Existing RAG methods typically address this challenge by
directly predicting final answers despite potentially noisy inputs, resulting
in an implicit denoising process that is difficult to interpret and verify. On
the other hand, the acquisition of explicit denoising supervision is often
costly, involving significant human efforts. In this work, we propose
InstructRAG, where LMs explicitly learn the denoising process through
self-synthesized rationales -- First, we instruct the LM to explain how the
ground-truth answer is derived from retrieved documents. Then, these rationales
can be used either as demonstrations for in-context learning of explicit
denoising or as supervised fine-tuning data to train the model. Compared to
standard RAG approaches, InstructRAG requires no additional supervision, allows
for easier verification of the predicted answers, and effectively improves
generation accuracy. Experiments show InstructRAG consistently outperforms
existing RAG methods in both training-free and trainable scenarios, achieving a
relative improvement of 8.3% over the best baseline method on average across
five knowledge-intensive benchmarks. Extensive analysis indicates that
InstructRAG scales well with increased numbers of retrieved documents and
consistently exhibits robust denoising ability even in out-of-domain datasets,
demonstrating strong generalizability.
|
[
{
"created": "Wed, 19 Jun 2024 15:25:29 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Wei",
"Zhepei",
""
],
[
"Chen",
"Wei-Lin",
""
],
[
"Meng",
"Yu",
""
]
] |
Retrieval-augmented generation (RAG) has shown promising potential to enhance the accuracy and factuality of language models (LMs). However, imperfect retrievers or noisy corpora can introduce misleading or even erroneous information to the retrieved contents, posing a significant challenge to the generation quality. Existing RAG methods typically address this challenge by directly predicting final answers despite potentially noisy inputs, resulting in an implicit denoising process that is difficult to interpret and verify. On the other hand, the acquisition of explicit denoising supervision is often costly, involving significant human efforts. In this work, we propose InstructRAG, where LMs explicitly learn the denoising process through self-synthesized rationales -- First, we instruct the LM to explain how the ground-truth answer is derived from retrieved documents. Then, these rationales can be used either as demonstrations for in-context learning of explicit denoising or as supervised fine-tuning data to train the model. Compared to standard RAG approaches, InstructRAG requires no additional supervision, allows for easier verification of the predicted answers, and effectively improves generation accuracy. Experiments show InstructRAG consistently outperforms existing RAG methods in both training-free and trainable scenarios, achieving a relative improvement of 8.3% over the best baseline method on average across five knowledge-intensive benchmarks. Extensive analysis indicates that InstructRAG scales well with increased numbers of retrieved documents and consistently exhibits robust denoising ability even in out-of-domain datasets, demonstrating strong generalizability.
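The rationale-generation step can be pictured with a small prompt-construction sketch (the wording and the `llm` placeholder below are ours, not the paper's exact prompts): given a question, retrieved documents, and the ground-truth answer, the LM is asked to explain the derivation while flagging noisy documents.

```python
# Building a rationale-elicitation prompt for explicit denoising.
def rationale_prompt(question, documents, answer):
    docs = "\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(documents))
    return (
        f"{docs}\n\n"
        f"Question: {question}\n"
        f"The correct answer is: {answer}\n"
        "Explain step by step how the answer can be derived from the "
        "documents above, noting which documents are irrelevant or noisy."
    )

def llm(prompt: str) -> str:          # placeholder for any LM call
    return "Document 2 states ...; Documents 1 and 3 are unrelated; hence ..."

rationale = llm(rationale_prompt(
    "Who wrote The Selfish Gene?",
    ["The Blind Watchmaker is a 1986 book.",
     "The Selfish Gene is a 1976 book by Richard Dawkins.",
     "Gene expression is the process by which ..."],
    "Richard Dawkins"))
print(rationale)  # becomes an in-context demonstration or fine-tuning target
```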
|
1101.2288
|
Feng Liu
|
Feng Liu, Chung Chan, Ying Jun (Angela) Zhang
|
On the Degree of Freedom for Multi-Source Multi-Destination Wireless
Network with Multi-layer Relays
|
15 pages, 2 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The degree of freedom (DoF) region provides an approximation of the capacity
region in the high signal-to-noise ratio (SNR) regime, while sum DoF gives the
scaling factor. In this correspondence, we analyse the DoF region and sum DoF
for unicast layered multi-hop relay wireless networks with an arbitrary number
of source/destination/relay nodes, an arbitrary number of hops and an
arbitrary number of antennas at each node. The result is valid for quite a few
message topologies. We reveal the limitation on the capacity of a multi-hop
network due to the concatenation structure and show the similarity with a
capacitor network. From the analysis of the bound gap and optimality
condition, the ultimate capacity of a multi-hop network is shown to be
strictly inferior to that of a single-hop network. A linear scaling law can be
established when the number of hops is fixed. At the cost of channel state
information at transmitters (CSIT) for each
component single-hop network, our achievable scheme avoids routing and
simplifies scheduling.
|
[
{
"created": "Wed, 12 Jan 2011 08:04:17 GMT",
"version": "v1"
}
] |
2015-03-17
|
[
  [
    "Liu",
    "Feng",
    ""
  ],
  [
    "Chan",
    "Chung",
    ""
  ],
  [
    "Zhang",
    "Ying Jun (Angela)",
    ""
  ]
] |
The degree of freedom (DoF) region provides an approximation of the capacity region in the high signal-to-noise ratio (SNR) regime, while sum DoF gives the scaling factor. In this correspondence, we analyse the DoF region and sum DoF for unicast layered multi-hop relay wireless networks with an arbitrary number of source/destination/relay nodes, an arbitrary number of hops and an arbitrary number of antennas at each node. The result is valid for quite a few message topologies. We reveal the limitation on the capacity of a multi-hop network due to the concatenation structure and show the similarity with a capacitor network. From the analysis of the bound gap and optimality condition, the ultimate capacity of a multi-hop network is shown to be strictly inferior to that of a single-hop network. A linear scaling law can be established when the number of hops is fixed. At the cost of channel state information at transmitters (CSIT) for each component single-hop network, our achievable scheme avoids routing and simplifies scheduling.
|
1610.01795
|
Mohamad Ivan Fanany
|
Ines Heidieni Ikasari, Vina Ayumi, Mohamad Ivan Fanany, Sidik Mulyono
|
Multiple Regularizations Deep Learning for Paddy Growth Stages
Classification from LANDSAT-8
|
11 pages
| null | null | null |
cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study uses remote sensing technology, which can provide fast and
spatially detailed information about the condition of the earth's surface. The
study area was in Karawang District, lying in the northern part of West Java,
Indonesia. We address paddy growth stages classification using LANDSAT-8 image
data obtained from multi-sensor remote sensing images taken from October 2015
to August 2016. This study pursues a fast and accurate classification of paddy
growth stages by employing multiple regularizations on some deep learning
methods such as DNN (Deep Neural Networks) and 1-D CNN (1-D Convolutional
Neural Networks). The used regularizations are Fast Dropout, Dropout, and
Batch Normalization. To evaluate the effectiveness, we also compared our
method with other machine learning methods such as Logistic Regression, SVM,
Random Forest, and XGBoost. The data used are seven bands of LANDSAT-8
spectral data samples that correspond to paddy growth stages data obtained
from the i-Sky (eye in the sky) Innovation system. The growth stages are
determined based on the paddy crop phenology profile from a time series of
LANDSAT-8 images. The classification
results show that MLP using multiple regularization Dropout and Batch
Normalization achieves the highest accuracy for this dataset.
|
[
{
"created": "Thu, 6 Oct 2016 09:46:08 GMT",
"version": "v1"
}
] |
2016-10-07
|
[
[
"Ikasari",
"Ines Heidieni",
""
],
[
"Ayumi",
"Vina",
""
],
[
"Fanany",
"Mohamad Ivan",
""
],
[
"Mulyono",
"Sidik",
""
]
] |
This study uses remote sensing technology, which can provide fast and spatially detailed information about the condition of the earth's surface. The study area was in Karawang District, lying in the northern part of West Java, Indonesia. We address paddy growth stages classification using LANDSAT-8 image data obtained from multi-sensor remote sensing images taken from October 2015 to August 2016. This study pursues a fast and accurate classification of paddy growth stages by employing multiple regularizations on some deep learning methods such as DNN (Deep Neural Networks) and 1-D CNN (1-D Convolutional Neural Networks). The used regularizations are Fast Dropout, Dropout, and Batch Normalization. To evaluate the effectiveness, we also compared our method with other machine learning methods such as Logistic Regression, SVM, Random Forest, and XGBoost. The data used are seven bands of LANDSAT-8 spectral data samples that correspond to paddy growth stages data obtained from the i-Sky (eye in the sky) Innovation system. The growth stages are determined based on the paddy crop phenology profile from a time series of LANDSAT-8 images. The classification results show that MLP using multiple regularization Dropout and Batch Normalization achieves the highest accuracy for this dataset.
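A minimal sketch of the kind of regularized network the study reports as most accurate (placeholder data; layer sizes and the number of growth stages are assumptions): seven LANDSAT-8 band values in, a growth-stage class out, with Batch Normalization and Dropout between hidden layers.

```python
# MLP with Dropout + Batch Normalization for band-vector classification.
import torch
import torch.nn as nn

n_bands, n_stages = 7, 4             # 7 spectral bands; 4 stages (assumed)
model = nn.Sequential(
    nn.Linear(n_bands, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, n_stages),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(256, n_bands)        # placeholder band reflectances
y = torch.randint(0, n_stages, (256,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```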
|
2406.08446
|
Yuling Gu
|
Yuling Gu, Oyvind Tafjord, Bailey Kuehl, Dany Haddad, Jesse Dodge,
Hannaneh Hajishirzi
|
OLMES: A Standard for Language Model Evaluations
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Progress in AI is often demonstrated by new models claiming improved
performance on tasks measuring model capabilities. Evaluating language models
in particular is challenging, as small changes to how a model is evaluated on a
task can lead to large changes in measured performance. There is no common
standard setup, so different models are evaluated on the same tasks in
different ways, leading to claims about which models perform best not being
reproducible. We propose OLMES, a completely documented, practical, open
standard for reproducible LLM evaluations. In developing this standard, we
identify and review the varying factors in evaluation practices adopted by the
community - such as details of prompt formatting, choice of in-context
examples, probability normalizations, and task formulation. In particular,
OLMES supports meaningful comparisons between smaller base models that require
the unnatural "cloze" formulation of multiple-choice questions against larger
models that can utilize the original formulation. OLMES includes
well-considered recommendations guided by results from existing literature as
well as new experiments investigating open questions.
|
[
{
"created": "Wed, 12 Jun 2024 17:37:09 GMT",
"version": "v1"
}
] |
2024-06-13
|
[
[
"Gu",
"Yuling",
""
],
[
"Tafjord",
"Oyvind",
""
],
[
"Kuehl",
"Bailey",
""
],
[
"Haddad",
"Dany",
""
],
[
"Dodge",
"Jesse",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
Progress in AI is often demonstrated by new models claiming improved performance on tasks measuring model capabilities. Evaluating language models in particular is challenging, as small changes to how a model is evaluated on a task can lead to large changes in measured performance. There is no common standard setup, so different models are evaluated on the same tasks in different ways, leading to claims about which models perform best not being reproducible. We propose OLMES, a completely documented, practical, open standard for reproducible LLM evaluations. In developing this standard, we identify and review the varying factors in evaluation practices adopted by the community - such as details of prompt formatting, choice of in-context examples, probability normalizations, and task formulation. In particular, OLMES supports meaningful comparisons between smaller base models that require the unnatural "cloze" formulation of multiple-choice questions against larger models that can utilize the original formulation. OLMES includes well-considered recommendations guided by results from existing literature as well as new experiments investigating open questions.
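
One of the evaluation details mentioned here, probability normalization for multiple-choice answers, can be illustrated in a few lines. The sketch below ranks answer options from precomputed token log-probabilities under two common schemes from the literature; the function and scheme names are illustrative assumptions, not the OLMES specification itself.

def pick_answer(options, mode="per_char"):
    """Rank multiple-choice options from precomputed token log-probs.

    options: list of (answer_text, [token_logprobs]) pairs, e.g. from
    scoring each answer continuation under the language model.
    """
    scores = []
    for text, token_logprobs in options:
        total = sum(token_logprobs)          # raw log-probability
        if mode == "raw":
            score = total
        elif mode == "per_char":             # length normalization
            score = total / max(len(text), 1)
        else:
            raise ValueError(mode)
        scores.append(score)
    return max(range(len(options)), key=lambda i: scores[i])

# Example: three answer options with hypothetical token log-probs.
opts = [("Paris", [-1.2, -0.3]),
        ("London", [-1.5, -0.4, -0.2]),
        ("Berlin", [-2.0, -0.9])]
print(pick_answer(opts, mode="per_char"))

Length normalization matters because longer answers accumulate more negative log-probability; which scheme is used can flip the predicted option, which is exactly the kind of variation a standard like OLMES pins down.
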
|
2112.14569
|
Ivan P Yamshchikov
|
Vladislav Mosin, Igor Samenko, Alexey Tikhonov, Borislav Kozlovskii,
Ivan P. Yamshchikov
|
Fine-Tuning Transformers: Vocabulary Transfer
| null | null |
10.1016/j.artint.2023.103860
| null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers are responsible for the vast majority of recent advances in
natural language processing. The majority of practical natural language
processing applications of these models are typically enabled through transfer
learning. This paper studies if corpus-specific tokenization used for
fine-tuning improves the resulting performance of the model. Through a series
of experiments, we demonstrate that such tokenization combined with the
initialization and fine-tuning strategy for the vocabulary tokens speeds up the
transfer and boosts the performance of the fine-tuned model. We call this
aspect of transfer facilitation vocabulary transfer.
|
[
{
"created": "Wed, 29 Dec 2021 14:22:42 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Dec 2022 19:42:06 GMT",
"version": "v2"
}
] |
2024-02-02
|
[
[
"Mosin",
"Vladislav",
""
],
[
"Samenko",
"Igor",
""
],
[
"Tikhonov",
"Alexey",
""
],
[
"Kozlovskii",
"Borislav",
""
],
[
"Yamshchikov",
"Ivan P.",
""
]
] |
Transformers are responsible for the vast majority of recent advances in natural language processing. The majority of practical natural language processing applications of these models are typically enabled through transfer learning. This paper studies if corpus-specific tokenization used for fine-tuning improves the resulting performance of the model. Through a series of experiments, we demonstrate that such tokenization combined with the initialization and fine-tuning strategy for the vocabulary tokens speeds up the transfer and boosts the performance of the fine-tuned model. We call this aspect of transfer facilitation vocabulary transfer.
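
A common heuristic for this kind of vocabulary transfer is to initialize each new token's embedding from the old-vocabulary embeddings of its decomposition under the original tokenizer. The sketch below illustrates that idea with plain numpy and a character-level "old tokenizer"; the paper's exact initialization strategy may differ, so treat this as an assumption-labeled illustration.

import numpy as np

def init_new_embeddings(new_vocab, old_vocab, old_emb, old_tokenize):
    """Initialize embeddings for a new vocabulary by averaging the old
    embeddings of each new token's decomposition under the old tokenizer."""
    dim = old_emb.shape[1]
    new_emb = np.zeros((len(new_vocab), dim))
    for i, token in enumerate(new_vocab):
        pieces = old_tokenize(token)                 # old-vocab subtokens
        ids = [old_vocab[p] for p in pieces if p in old_vocab]
        if ids:
            new_emb[i] = old_emb[ids].mean(axis=0)   # average of the pieces
        else:
            new_emb[i] = np.random.normal(0, 0.02, dim)  # fallback init
    return new_emb

# Hypothetical example: old vocab of characters, new corpus-specific vocab.
old_vocab = {c: i for i, c in enumerate("abcdefgh")}
old_emb = np.random.normal(0, 0.02, (len(old_vocab), 16))
new_vocab = ["abc", "de", "fgh"]
emb = init_new_embeddings(new_vocab, old_vocab, old_emb, list)
print(emb.shape)  # (3, 16)
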
|
1802.08236
|
Xin Jin
|
Xin Jin, Xiaozhou Li, Haoyu Zhang, Nate Foster, Jeongkeun Lee, Robert
Soule, Changhoon Kim, Ion Stoica
|
NetChain: Scale-Free Sub-RTT Coordination (Extended Version)
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coordination services are a fundamental building block of modern cloud
systems, providing critical functionalities like configuration management and
distributed locking. The major challenge is to achieve low latency and high
throughput while providing strong consistency and fault-tolerance. Traditional
server-based solutions require multiple round-trip times (RTTs) to process a
query. This paper presents NetChain, a new approach that provides scale-free
sub-RTT coordination in datacenters. NetChain exploits recent advances in
programmable switches to store data and process queries entirely in the network
data plane. This eliminates the query processing at coordination servers and
cuts the end-to-end latency to as little as half of an RTT---clients only
experience processing delay from their own software stack plus network delay,
which in a datacenter setting is typically much smaller. We design new
protocols and algorithms based on chain replication to guarantee strong
consistency and to efficiently handle switch failures. We implement a prototype
with four Barefoot Tofino switches and four commodity servers. Evaluation
results show that compared to traditional server-based solutions like
ZooKeeper, our prototype provides orders of magnitude higher throughput and
lower latency, and handles failures gracefully.
|
[
{
"created": "Thu, 22 Feb 2018 18:46:39 GMT",
"version": "v1"
}
] |
2018-02-23
|
[
[
"Jin",
"Xin",
""
],
[
"Li",
"Xiaozhou",
""
],
[
"Zhang",
"Haoyu",
""
],
[
"Foster",
"Nate",
""
],
[
"Lee",
"Jeongkeun",
""
],
[
"Soule",
"Robert",
""
],
[
"Kim",
"Changhoon",
""
],
[
"Stoica",
"Ion",
""
]
] |
Coordination services are a fundamental building block of modern cloud systems, providing critical functionalities like configuration management and distributed locking. The major challenge is to achieve low latency and high throughput while providing strong consistency and fault-tolerance. Traditional server-based solutions require multiple round-trip times (RTTs) to process a query. This paper presents NetChain, a new approach that provides scale-free sub-RTT coordination in datacenters. NetChain exploits recent advances in programmable switches to store data and process queries entirely in the network data plane. This eliminates the query processing at coordination servers and cuts the end-to-end latency to as little as half of an RTT---clients only experience processing delay from their own software stack plus network delay, which in a datacenter setting is typically much smaller. We design new protocols and algorithms based on chain replication to guarantee strong consistency and to efficiently handle switch failures. We implement a prototype with four Barefoot Tofino switches and four commodity servers. Evaluation results show that compared to traditional server-based solutions like ZooKeeper, our prototype provides orders of magnitude higher throughput and lower latency, and handles failures gracefully.
|
1911.07643
|
Miguel Suau
|
Miguel Suau, Jinke He, Elena Congeduti, Rolf A.N. Starre, Aleksander
Czechowski, Frans A. Oliehoek
|
Influence-aware Memory Architectures for Deep Reinforcement Learning
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to its perceptual limitations, an agent may have too little information
about the state of the environment to act optimally. In such cases, it is
important to keep track of the observation history to uncover hidden state.
Recent deep reinforcement learning methods use recurrent neural networks (RNN)
to memorize past observations. However, these models are expensive to train and
have convergence difficulties, especially when dealing with high dimensional
input spaces. In this paper, we propose influence-aware memory (IAM), a
theoretically inspired memory architecture that tries to alleviate the training
difficulties by restricting the input of the recurrent layers to those
variables that influence the hidden state information. Moreover, as opposed to
standard RNNs, in which every piece of information used for estimating Q values
is inevitably fed back into the network for the next prediction, our model
allows information to flow without being necessarily stored in the RNN's
internal memory. Results indicate that, by letting the recurrent layers focus
on a small fraction of the observation variables while processing the rest of
the information with a feedforward neural network, we can outperform standard
recurrent architectures both in training speed and policy performance. This
approach also reduces runtime and obtains better scores than methods that stack
multiple observations to remove partial observability.
|
[
{
"created": "Mon, 18 Nov 2019 13:54:25 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Nov 2019 20:50:00 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Aug 2020 09:00:18 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Feb 2021 18:26:14 GMT",
"version": "v4"
}
] |
2021-02-18
|
[
[
"Suau",
"Miguel",
""
],
[
"He",
"Jinke",
""
],
[
"Congeduti",
"Elena",
""
],
[
"Starre",
"Rolf A. N.",
""
],
[
"Czechowski",
"Aleksander",
""
],
[
"Oliehoek",
"Frans A.",
""
]
] |
Due to its perceptual limitations, an agent may have too little information about the state of the environment to act optimally. In such cases, it is important to keep track of the observation history to uncover hidden state. Recent deep reinforcement learning methods use recurrent neural networks (RNN) to memorize past observations. However, these models are expensive to train and have convergence difficulties, especially when dealing with high dimensional input spaces. In this paper, we propose influence-aware memory (IAM), a theoretically inspired memory architecture that tries to alleviate the training difficulties by restricting the input of the recurrent layers to those variables that influence the hidden state information. Moreover, as opposed to standard RNNs, in which every piece of information used for estimating Q values is inevitably fed back into the network for the next prediction, our model allows information to flow without being necessarily stored in the RNN's internal memory. Results indicate that, by letting the recurrent layers focus on a small fraction of the observation variables while processing the rest of the information with a feedforward neural network, we can outperform standard recurrent architectures both in training speed and policy performance. This approach also reduces runtime and obtains better scores than methods that stack multiple observations to remove partial observability.
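
The architectural idea can be sketched compactly: route only a designated subset of observation variables through the recurrent layer, pass the rest through a feedforward branch, and concatenate the two before the Q-value head. In the sketch below, the layer sizes and the choice of which indices feed the RNN are hypothetical.

import torch
import torch.nn as nn

class InfluenceAwareMemory(nn.Module):
    """Sketch of an IAM-style network: an RNN over a small subset of
    observation variables, a feedforward branch over the rest."""
    def __init__(self, obs_dim, rnn_idx, hidden=64, n_actions=4):
        super().__init__()
        self.rnn_idx = rnn_idx                      # indices fed to the RNN
        self.ff_idx = [i for i in range(obs_dim) if i not in rnn_idx]
        self.rnn = nn.GRU(len(rnn_idx), hidden, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(len(self.ff_idx), hidden), nn.ReLU())
        self.q_head = nn.Linear(2 * hidden, n_actions)

    def forward(self, obs, h=None):
        # obs: (batch, time, obs_dim); only a small slice enters the RNN.
        rnn_out, h = self.rnn(obs[..., self.rnn_idx], h)
        ff_out = self.ff(obs[..., self.ff_idx])     # bypasses the memory
        return self.q_head(torch.cat([rnn_out, ff_out], dim=-1)), h

net = InfluenceAwareMemory(obs_dim=10, rnn_idx=[0, 1])
q, h = net(torch.randn(8, 5, 10))
print(q.shape)  # (8, 5, 4)
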
|
1202.1490
|
Aravindh Krishnamoorthy
|
Aravindh Krishnamoorthy, Kenan Kocagoez
|
Singular Values using Cholesky Decomposition
| null | null | null | null |
cs.MS cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, two ways to compute singular values are presented, both of
which use Cholesky decomposition as their basic operation.
|
[
{
"created": "Tue, 7 Feb 2012 18:37:07 GMT",
"version": "v1"
}
] |
2015-03-20
|
[
[
"Krishnamoorthy",
"Aravindh",
""
],
[
"Kocagoez",
"Kenan",
""
]
] |
In this paper, two ways to compute singular values are presented, both of which use Cholesky decomposition as their basic operation.
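
One classical realization of this idea, shown as a sketch below (and not necessarily either of the paper's two methods), is the Cholesky LR iteration: repeatedly factor W = L L^T and form L^T L. The diagonal converges to the eigenvalues of A^T A, whose square roots are the singular values of A.

import numpy as np

def singular_values_cholesky(A, iters=100):
    """Cholesky LR iteration: W <- L^T L where W = L L^T.
    The diagonal of W converges to the eigenvalues of A^T A,
    whose square roots are the singular values of A."""
    W = A.T @ A                      # SPD if A has full column rank
    for _ in range(iters):
        L = np.linalg.cholesky(W)    # W = L @ L.T, L lower triangular
        W = L.T @ L                  # similarity transform, same eigenvalues
    return np.sqrt(np.sort(np.diag(W))[::-1])

A = np.random.rand(6, 4)
print(singular_values_cholesky(A))
print(np.linalg.svd(A, compute_uv=False))   # reference values
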
|
2105.14935
|
Dalila Tamzalit
|
Jean-Philippe Gouigoux, Dalila Tamzalit (IUT Nantes, LS2N), Joost
Noppen
|
Microservice Maturity of Organizations: towards an assessment framework
| null |
International Conference on Research Challenges in Information
Science, May 2021, Virtual, Cyprus. pp.523-540
|
10.1007/978-3-030-75018-3_34
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This early work aims to allow organizations to diagnose their capacity to
properly adopt microservices through initial milestones of a Microservice
Maturity Model (MiMMo). The objective is to prepare the way towards a general
framework to help companies and industries to determine their microservices
maturity. Organizations lean more and more on distributed web applications and
Line of Business software. This is particularly relevant during the current
Covid-19 crisis, where companies are even more challenged to offer their
services online, targeting a very high level of responsiveness in the face of
rapidly increasing and diverse demands. For this, microservices remain the most
suitable architectural style for application delivery. They allow agility not
only at the level of the technical application, as often considered, but across
the enterprise architecture as a whole, influencing the actual financial
business of the company. However, microservices adoption is highly risk-prone
and complex. Before they establish an appropriate migration plan, first and
foremost, companies must assess their degree of readiness to adopt
microservices. For this, MiMMo, a Microservice Maturity Model assessment
framework, is proposed to help companies assess their readiness for the
microservice architectural style, based on their actual situation. MiMMo
results from observations of and experience with about thirty organizations
writing software. It conceptualizes and generalizes the progression paths they
have followed to adopt microservices appropriately. Using the model, an
organization can evaluate itself in two dimensions and five maturity levels and
thus: (i) benchmark itself on its current use of microservices; (ii) project
the next steps it needs to achieve a higher maturity level; and (iii) analyze
how it has evolved and maintain global coherence between technical and business
stakes.
|
[
{
"created": "Mon, 31 May 2021 13:01:06 GMT",
"version": "v1"
}
] |
2021-06-01
|
[
[
"Gouigoux",
"Jean-Philippe",
"",
"IUT Nantes, LS2N"
],
[
"Tamzalit",
"Dalila",
"",
"IUT Nantes, LS2N"
],
[
"Noppen",
"Joost",
""
]
] |
This early work aims to allow organizations to diagnose their capacity to properly adopt microservices through initial milestones of a Microservice Maturity Model (MiMMo). The objective is to prepare the way towards a general framework to help companies and industries to determine their microservices maturity. Organizations lean more and more on distributed web applications and Line of Business software. This is particularly relevant during the current Covid-19 crisis, where companies are even more challenged to offer their services online, targeting a very high level of responsiveness in the face of rapidly increasing and diverse demands. For this, microservices remain the most suitable architectural style for application delivery. They allow agility not only at the level of the technical application, as often considered, but across the enterprise architecture as a whole, influencing the actual financial business of the company. However, microservices adoption is highly risk-prone and complex. Before they establish an appropriate migration plan, first and foremost, companies must assess their degree of readiness to adopt microservices. For this, MiMMo, a Microservice Maturity Model assessment framework, is proposed to help companies assess their readiness for the microservice architectural style, based on their actual situation. MiMMo results from observations of and experience with about thirty organizations writing software. It conceptualizes and generalizes the progression paths they have followed to adopt microservices appropriately. Using the model, an organization can evaluate itself in two dimensions and five maturity levels and thus: (i) benchmark itself on its current use of microservices; (ii) project the next steps it needs to achieve a higher maturity level; and (iii) analyze how it has evolved and maintain global coherence between technical and business stakes.
|
2203.05551
|
Stephen Whitelam
|
Stephen Whitelam, Isaac Tamblyn
|
Cellular automata can classify data by inducing trajectory phase
coexistence
| null | null |
10.1103/PhysRevE.108.014126
| null |
cs.NE cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that cellular automata can classify data by inducing a form of
dynamical phase coexistence. We use Monte Carlo methods to search for general
two-dimensional deterministic automata that classify images on the basis of
activity, the number of state changes that occur in a trajectory initiated from
the image. When the number of timesteps of the automaton is a trainable
parameter, the search scheme identifies automata that generate a population of
dynamical trajectories displaying high or low activity, depending on initial
conditions. Automata of this nature behave as nonlinear activation functions
with an output that is effectively binary, resembling an emergent version of a
spiking neuron.
|
[
{
"created": "Thu, 10 Mar 2022 18:57:27 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Apr 2022 18:33:15 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Jul 2022 22:33:05 GMT",
"version": "v3"
}
] |
2023-08-02
|
[
[
"Whitelam",
"Stephen",
""
],
[
"Tamblyn",
"Isaac",
""
]
] |
We show that cellular automata can classify data by inducing a form of dynamical phase coexistence. We use Monte Carlo methods to search for general two-dimensional deterministic automata that classify images on the basis of activity, the number of state changes that occur in a trajectory initiated from the image. When the number of timesteps of the automaton is a trainable parameter, the search scheme identifies automata that generate a population of dynamical trajectories displaying high or low activity, depending on initial conditions. Automata of this nature behave as nonlinear activation functions with an output that is effectively binary, resembling an emergent version of a spiking neuron.
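
A toy version of the activity measurement is straightforward: run a deterministic binary CA from an image and count state changes along the trajectory. The random totalistic rule and the classification threshold below are stand-ins for the automata and decision rule found by the paper's Monte Carlo search.

import numpy as np

rng = np.random.default_rng(0)
RULE = rng.integers(0, 2, size=(2, 9))   # next state given (state, #live neighbors)

def step(grid):
    # Sum of the 8 neighbors via periodic shifts.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return RULE[grid, n]

def activity(image, steps=20):
    """Number of state changes along the trajectory started from `image`."""
    grid, changes = image.copy(), 0
    for _ in range(steps):
        nxt = step(grid)
        changes += int((nxt != grid).sum())
        grid = nxt
    return changes

img = rng.integers(0, 2, size=(16, 16))
act = activity(img)
label = int(act > 16 * 16 * 20 // 2)     # hypothetical activity threshold
print(act, label)
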
|
2305.11361
|
David Liu
|
David Liu, Virginie Do, Nicolas Usunier, Maximilian Nickel
|
Group fairness without demographics using social networks
| null | null |
10.1145/3593013.3594091
| null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Group fairness is a popular approach to prevent unfavorable treatment of
individuals based on sensitive attributes such as race, gender, and disability.
However, the reliance of group fairness on access to discrete group information
raises several limitations and concerns, especially with regard to privacy,
intersectionality, and unforeseen biases. In this work, we propose a
"group-free" measure of fairness that does not rely on sensitive attributes
and, instead, is based on homophily in social networks, i.e., the common
property that individuals sharing similar attributes are more likely to be
connected. Our measure is group-free as it avoids recovering any form of group
memberships and uses only pairwise similarities between individuals to define
inequality in outcomes relative to the homophily structure in the network. We
theoretically justify our measure by showing it is commensurate with the notion
of additive decomposability in the economic inequality literature and also
bound the impact of non-sensitive confounding attributes. Furthermore, we apply
our measure to develop fair algorithms for classification, maximizing
information access, and recommender systems. Our experimental results show that
the proposed approach can reduce inequality among protected classes without
knowledge of sensitive attribute labels. We conclude with a discussion of the
limitations of our approach when applied in real-world settings.
|
[
{
"created": "Fri, 19 May 2023 00:45:55 GMT",
"version": "v1"
}
] |
2023-05-22
|
[
[
"Liu",
"David",
""
],
[
"Do",
"Virginie",
""
],
[
"Usunier",
"Nicolas",
""
],
[
"Nickel",
"Maximilian",
""
]
] |
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability. However, the reliance of group fairness on access to discrete group information raises several limitations and concerns, especially with regard to privacy, intersectionality, and unforeseen biases. In this work, we propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks, i.e., the common property that individuals sharing similar attributes are more likely to be connected. Our measure is group-free as it avoids recovering any form of group memberships and uses only pairwise similarities between individuals to define inequality in outcomes relative to the homophily structure in the network. We theoretically justify our measure by showing it is commensurate with the notion of additive decomposability in the economic inequality literature and also bound the impact of non-sensitive confounding attributes. Furthermore, we apply our measure to develop fair algorithms for classification, maximizing information access, and recommender systems. Our experimental results show that the proposed approach can reduce inequality among protected classes without knowledge of sensitive attribute labels. We conclude with a discussion of the limitations of our approach when applied in real-world settings.
|
1407.3208
|
Brian Ruttenberg
|
Brian E. Ruttenberg and Avi Pfeffer
|
Decision-Making with Complex Data Structures using Probabilistic
Programming
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing decision-theoretic reasoning frameworks such as decision networks
use simple data structures and processes. However, decisions are often made
based on complex data structures, such as social networks and protein
sequences, and rich processes involving those structures. We present a
framework for representing decision problems with complex data structures using
probabilistic programming, allowing probabilistic models to be created with
programming language constructs such as data structures and control flow. We
provide a way to use arbitrary data types with minimal effort from the user,
and an approximate decision-making algorithm that is effective even when the
information space is very large or infinite. Experimental results show our
algorithm working on problems with very large information spaces.
|
[
{
"created": "Fri, 11 Jul 2014 16:20:15 GMT",
"version": "v1"
}
] |
2014-07-14
|
[
[
"Ruttenberg",
"Brian E.",
""
],
[
"Pfeffer",
"Avi",
""
]
] |
Existing decision-theoretic reasoning frameworks such as decision networks use simple data structures and processes. However, decisions are often made based on complex data structures, such as social networks and protein sequences, and rich processes involving those structures. We present a framework for representing decision problems with complex data structures using probabilistic programming, allowing probabilistic models to be created with programming language constructs such as data structures and control flow. We provide a way to use arbitrary data types with minimal effort from the user, and an approximate decision-making algorithm that is effective even when the information space is very large or infinite. Experimental results show our algorithm working on problems with very large information spaces.
|
1904.00979
|
Yingwei Li
|
Yingwei Li, Song Bai, Cihang Xie, Zhenyu Liao, Xiaohui Shen and Alan
L. Yuille
|
Regional Homogeneity: Towards Learning Transferable Universal
Adversarial Perturbations Against Defenses
|
ECCV 2020. Project page:
https://github.com/LiYingwei/Regional-Homogeneity
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on learning transferable adversarial examples specifically
against defense models (models designed to defend against adversarial
attacks). In particular,
we show that a simple universal perturbation can fool a series of
state-of-the-art defenses.
Adversarial examples generated by existing attacks are generally hard to
transfer to defense models. We observe the property of regional homogeneity in
adversarial perturbations and suggest that the defenses are less robust to
regionally homogeneous perturbations. Therefore, we propose an effective
transforming paradigm and a customized gradient transformer module to transform
existing perturbations into regionally homogeneous ones. Without explicitly
forcing the perturbations to be universal, we observe that a well-trained
gradient transformer module tends to output input-independent gradients (hence
universal) benefiting from the under-fitting phenomenon. Thorough experiments
demonstrate that our work significantly outperforms the prior art attacking
algorithms (either image-dependent or universal ones) by an average improvement
of 14.0% when attacking 9 defenses in the transfer-based attack setting. In
addition to the cross-model transferability, we also verify that regionally
homogeneous perturbations can well transfer across different vision tasks
(attacking with the semantic segmentation task and testing on the object
detection task). The code is available here:
https://github.com/LiYingwei/Regional-Homogeneity.
|
[
{
"created": "Mon, 1 Apr 2019 17:31:02 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jul 2020 01:42:37 GMT",
"version": "v2"
}
] |
2020-08-03
|
[
[
"Li",
"Yingwei",
""
],
[
"Bai",
"Song",
""
],
[
"Xie",
"Cihang",
""
],
[
"Liao",
"Zhenyu",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Yuille",
"Alan L.",
""
]
] |
This paper focuses on learning transferable adversarial examples specifically against defense models (models designed to defend against adversarial attacks). In particular, we show that a simple universal perturbation can fool a series of state-of-the-art defenses. Adversarial examples generated by existing attacks are generally hard to transfer to defense models. We observe the property of regional homogeneity in adversarial perturbations and suggest that the defenses are less robust to regionally homogeneous perturbations. Therefore, we propose an effective transforming paradigm and a customized gradient transformer module to transform existing perturbations into regionally homogeneous ones. Without explicitly forcing the perturbations to be universal, we observe that a well-trained gradient transformer module tends to output input-independent gradients (hence universal) benefiting from the under-fitting phenomenon. Thorough experiments demonstrate that our work significantly outperforms the prior art attacking algorithms (either image-dependent or universal ones) by an average improvement of 14.0% when attacking 9 defenses in the transfer-based attack setting. In addition to the cross-model transferability, we also verify that regionally homogeneous perturbations can well transfer across different vision tasks (attacking with the semantic segmentation task and testing on the object detection task). The code is available here: https://github.com/LiYingwei/Regional-Homogeneity.
|
1404.6451
|
Krasimir Yordzhev
|
Krasimir Yordzhev
|
On an Algorithm for Isomorphism-Free Generations of Combinatorial
Objects
| null |
International Journal of Emerging Trends & Technology in Computer
Science (IJETTCS), Vol. 2, No. 6 (2013) 215-220
| null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, the concepts of semi-canonical and canonical binary matrices are
defined. An algorithm is described that solves the combinatorial problem of
finding the semi-canonical matrices in the set \Lambda_n^k, consisting of all
n\times n binary matrices having exactly k 1's in every row and every column,
without traversing all elements. The algorithm makes substantial use of bitwise
operations. This makes it easier to solve the problem of obtaining one
representative from every equivalence class with respect to the equivalence
relation on \Lambda_n^k introduced in the article. The latter problem is
equivalent to finding all canonical matrices in \Lambda_n^k.
|
[
{
"created": "Fri, 25 Apr 2014 15:14:48 GMT",
"version": "v1"
}
] |
2014-04-28
|
[
[
"Yordzhev",
"Krasimir",
""
]
] |
In this work, the concepts of semi-canonical and canonical binary matrices are defined. An algorithm is described that solves the combinatorial problem of finding the semi-canonical matrices in the set \Lambda_n^k, consisting of all n\times n binary matrices having exactly k 1's in every row and every column, without traversing all elements. The algorithm makes substantial use of bitwise operations. This makes it easier to solve the problem of obtaining one representative from every equivalence class with respect to the equivalence relation on \Lambda_n^k introduced in the article. The latter problem is equivalent to finding all canonical matrices in \Lambda_n^k.
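
Bitwise representation makes such checks cheap: each row (or column) packs into a machine integer, and ordering comparisons become integer comparisons. The checker below assumes, as one plausible reading of the definitions, that a matrix is semi-canonical when its rows and its columns, read as binary numbers, are in non-decreasing order; the paper's precise definition may differ.

def rows_as_ints(matrix):
    """Pack each binary row into an integer with shifts and ORs."""
    ints = []
    for row in matrix:
        v = 0
        for bit in row:
            v = (v << 1) | bit
        ints.append(v)
    return ints

def is_semi_canonical(matrix):
    """Assumed reading: rows and columns, viewed as binary numbers,
    are in non-decreasing order (the paper's definition may differ)."""
    rows = rows_as_ints(matrix)
    cols = rows_as_ints(list(zip(*matrix)))
    return all(a <= b for a, b in zip(rows, rows[1:])) and \
           all(a <= b for a, b in zip(cols, cols[1:]))

# A matrix in Lambda_4^2: exactly two 1's in every row and every column.
M = [(0, 0, 1, 1),
     (0, 0, 1, 1),
     (1, 1, 0, 0),
     (1, 1, 0, 0)]
print(is_semi_canonical(M))  # True
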
|
2403.11395
|
Alhassan Mumuni
|
Alhassan Mumuni and Fuseini Mumuni
|
Automated data processing and feature engineering for deep learning and
big data applications: a survey
|
Journal of Information and Intelligence (2024)
| null |
10.1016/j.jiixd.2024.01.002
| null |
cs.LG cs.AI cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
The modern approach to artificial intelligence (AI) aims to design algorithms
that learn directly from data. This approach has achieved impressive results
and has contributed significantly to the progress of AI, particularly in the
sphere of supervised deep learning. It has also simplified the design of
machine learning systems as the learning process is highly automated. However,
not all data processing tasks in conventional deep learning pipelines have been
automated. In most cases data has to be manually collected, preprocessed and
further extended through data augmentation before it can be effective for
training. Recently, special techniques for automating these tasks have emerged.
The automation of data processing tasks is driven by the need to utilize large
volumes of complex, heterogeneous data for machine learning and big data
applications. Today, end-to-end automated data processing systems based on
automated machine learning (AutoML) techniques are capable of taking raw data
and transforming them into useful features for Big Data tasks by automating all
intermediate processing stages. In this work, we present a thorough review of
approaches for automating data processing tasks in deep learning pipelines,
including automated data preprocessing--e.g., data cleaning, labeling, missing
data imputation, and categorical data encoding--as well as data augmentation
(including synthetic data generation using generative AI methods) and feature
engineering--specifically, automated feature extraction, feature construction
and feature selection. In addition to automating specific data processing
tasks, we discuss the use of AutoML methods and tools to simultaneously
optimize all stages of the machine learning pipeline.
|
[
{
"created": "Mon, 18 Mar 2024 01:07:48 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2024 09:36:27 GMT",
"version": "v2"
}
] |
2024-03-20
|
[
[
"Mumuni",
"Alhassan",
""
],
[
"Mumuni",
"Fuseini",
""
]
] |
The modern approach to artificial intelligence (AI) aims to design algorithms that learn directly from data. This approach has achieved impressive results and has contributed significantly to the progress of AI, particularly in the sphere of supervised deep learning. It has also simplified the design of machine learning systems as the learning process is highly automated. However, not all data processing tasks in conventional deep learning pipelines have been automated. In most cases data has to be manually collected, preprocessed and further extended through data augmentation before it can be effective for training. Recently, special techniques for automating these tasks have emerged. The automation of data processing tasks is driven by the need to utilize large volumes of complex, heterogeneous data for machine learning and big data applications. Today, end-to-end automated data processing systems based on automated machine learning (AutoML) techniques are capable of taking raw data and transforming them into useful features for Big Data tasks by automating all intermediate processing stages. In this work, we present a thorough review of approaches for automating data processing tasks in deep learning pipelines, including automated data preprocessing--e.g., data cleaning, labeling, missing data imputation, and categorical data encoding--as well as data augmentation (including synthetic data generation using generative AI methods) and feature engineering--specifically, automated feature extraction, feature construction and feature selection. In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.
|
2308.07970
|
Hanieh Rafiee
|
Hanieh Rafiee, Mojtaba Mahdavi, AhmadReza NaghshNilchi
|
Introducing a New Evaluation Criteria for EMD-Base Steganography Method
| null | null | null | null |
cs.CR cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Steganography is a technique for hiding the presence of secret communication;
it can be used when one of the communication elements is under the adversary's
control. The main measure for evaluating steganography methods at a given
capacity is security. Therefore, at a given capacity, reducing the number of
changes in the cover media yields higher embedding efficiency and thus greater
security for a steganography method. Security and capacity are usually in
conflict: increasing one leads to a decrease in the other. A single criterion
that represents security and capacity at the same time would be useful for
comparing steganography methods. EMD and related methods are a group of
steganography techniques that optimize the number of changes resulting from
embedding (security). The present paper aims to provide an evaluation
criterion for this group of steganography methods. In this study, after a
general review and comparison of EMD-based steganography techniques, we present
a method to compare them exactly, from the perspective of embedding efficiency.
First, a formula is presented to determine the value of embedding efficiency,
which indicates the effect of one or more changes on one or more pixels. The
results demonstrate that, compared to existing criteria, the proposed embedding
efficiency formula better reflects the performance of the methods when several
changes are made to a pixel. In the second step, we obtain an upper bound that
determines the best efficiency for each given capacity. Finally, based on the
introduced bound, another evaluation criterion for a better comparison of the
methods is presented.
|
[
{
"created": "Tue, 15 Aug 2023 18:17:16 GMT",
"version": "v1"
}
] |
2023-08-17
|
[
[
"Rafiee",
"Hanieh",
""
],
[
"Mahdavi",
"Mojtaba",
""
],
[
"NaghshNilchi",
"AhmadReza",
""
]
] |
Steganography is a technique for hiding the presence of secret communication; it can be used when one of the communication elements is under the adversary's control. The main measure for evaluating steganography methods at a given capacity is security. Therefore, at a given capacity, reducing the number of changes in the cover media yields higher embedding efficiency and thus greater security for a steganography method. Security and capacity are usually in conflict: increasing one leads to a decrease in the other. A single criterion that represents security and capacity at the same time would be useful for comparing steganography methods. EMD and related methods are a group of steganography techniques that optimize the number of changes resulting from embedding (security). The present paper aims to provide an evaluation criterion for this group of steganography methods. In this study, after a general review and comparison of EMD-based steganography techniques, we present a method to compare them exactly, from the perspective of embedding efficiency. First, a formula is presented to determine the value of embedding efficiency, which indicates the effect of one or more changes on one or more pixels. The results demonstrate that, compared to existing criteria, the proposed embedding efficiency formula better reflects the performance of the methods when several changes are made to a pixel. In the second step, we obtain an upper bound that determines the best efficiency for each given capacity. Finally, based on the introduced bound, another evaluation criterion for a better comparison of the methods is presented.
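
For context, the classical EMD scheme embeds one base-(2n+1) digit into a group of n pixels by changing at most one pixel by +/-1, which is where its high embedding efficiency comes from. The sketch below implements that scheme and the usual efficiency figure (embedded bits per expected change); it illustrates the family being evaluated, not the paper's proposed criterion.

import math

def emd_extract(pixels):
    """Extraction function f(p) = sum(i * p_i) mod (2n+1), 1-indexed."""
    n = len(pixels)
    return sum((i + 1) * p for i, p in enumerate(pixels)) % (2 * n + 1)

def emd_embed(pixels, digit):
    """Embed one base-(2n+1) digit by changing at most one pixel by +/-1."""
    n, m = len(pixels), 2 * len(pixels) + 1
    out = list(pixels)
    s = (digit - emd_extract(pixels)) % m
    if s == 0:
        return out                  # already encodes the digit
    if s <= n:
        out[s - 1] += 1             # +1 at position s adds s to f
    else:
        out[m - s - 1] -= 1         # -1 at position m-s subtracts m-s
    return out

n = 2                               # pixels per group
group, digit = [100, 101], 3
stego = emd_embed(group, digit)
assert emd_extract(stego) == digit
# Classical efficiency: log2(2n+1) bits per expected change.
bits = math.log2(2 * n + 1)
expected_changes = 2 * n / (2 * n + 1)   # digit leaves f unchanged w.p. 1/(2n+1)
print(stego, bits / expected_changes)
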
|
1902.06006
|
Noah Smith
|
Noah A. Smith
|
Contextual Word Representations: A Contextual Introduction
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This introduction aims to tell the story of how we put words into computers.
It is part of the story of the field of natural language processing (NLP), a
branch of artificial intelligence. It targets a wide audience with a basic
understanding of computer programming, but avoids a detailed mathematical
treatment, and it does not present any algorithms. It also does not focus on
any particular application of NLP such as translation, question answering, or
information extraction. The ideas presented here were developed by many
researchers over many decades, so the citations are not exhaustive but rather
direct the reader to a handful of papers that are, in the author's view,
seminal. After reading this document, you should have a general understanding
of word vectors (also known as word embeddings): why they exist, what problems
they solve, where they come from, how they have changed over time, and what
some of the open questions about them are. Readers already familiar with word
vectors are advised to skip to Section 5 for the discussion of the most recent
advance, contextual word vectors.
|
[
{
"created": "Fri, 15 Feb 2019 23:28:36 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Feb 2019 05:25:19 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Apr 2020 17:16:08 GMT",
"version": "v3"
}
] |
2020-04-20
|
[
[
"Smith",
"Noah A.",
""
]
] |
This introduction aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence. It targets a wide audience with a basic understanding of computer programming, but avoids a detailed mathematical treatment, and it does not present any algorithms. It also does not focus on any particular application of NLP such as translation, question answering, or information extraction. The ideas presented here were developed by many researchers over many decades, so the citations are not exhaustive but rather direct the reader to a handful of papers that are, in the author's view, seminal. After reading this document, you should have a general understanding of word vectors (also known as word embeddings): why they exist, what problems they solve, where they come from, how they have changed over time, and what some of the open questions about them are. Readers already familiar with word vectors are advised to skip to Section 5 for the discussion of the most recent advance, contextual word vectors.
|
1907.04931
|
Hanqing Zeng
|
Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan,
Viktor Prasanna
|
GraphSAINT: Graph Sampling Based Inductive Learning Method
|
Published at ICLR 2020; Code release:
github.com/GraphSAINT/GraphSAINT
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Convolutional Networks (GCNs) are powerful models for learning
representations of attributed graphs. To scale GCNs to large graphs,
state-of-the-art methods use various layer sampling techniques to alleviate the
"neighbor explosion" problem during minibatch training. We propose GraphSAINT,
a graph sampling based inductive learning method that improves training
efficiency and accuracy in a fundamentally different way. By changing
perspective, GraphSAINT constructs minibatches by sampling the training graph,
rather than the nodes or edges across GCN layers. In each iteration, a complete
GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number
of well-connected nodes in all layers. We further propose a normalization
technique to eliminate bias, and sampling algorithms for variance reduction.
Importantly, we can decouple the sampling from the forward and backward
propagation, and extend GraphSAINT with many architecture variants (e.g., graph
attention, jumping connection). GraphSAINT demonstrates superior performance in
both accuracy and training time on five large graphs, and achieves new
state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
|
[
{
"created": "Wed, 10 Jul 2019 21:11:13 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Sep 2019 08:36:31 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Dec 2019 23:58:33 GMT",
"version": "v3"
},
{
"created": "Sun, 16 Feb 2020 00:42:48 GMT",
"version": "v4"
}
] |
2020-02-18
|
[
[
"Zeng",
"Hanqing",
""
],
[
"Zhou",
"Hongkuan",
""
],
[
"Srivastava",
"Ajitesh",
""
],
[
"Kannan",
"Rajgopal",
""
],
[
"Prasanna",
"Viktor",
""
]
] |
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
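
The core training loop can be sketched quickly: sample a set of nodes, slice out the induced subgraph adjacency, and build the minibatch from it. The node sampler below is a minimal stand-in; the paper's actual samplers (node, edge, random-walk) and its bias-correcting normalization involve more bookkeeping than shown here.

import numpy as np
import scipy.sparse as sp

def sample_subgraph(adj, train_nodes, budget, rng):
    """Node sampler: draw `budget` training nodes and induce their subgraph.
    Returns the subgraph adjacency and the sampled node ids."""
    nodes = rng.choice(train_nodes, size=budget, replace=False)
    nodes.sort()
    sub_adj = adj[nodes, :][:, nodes]    # induced subgraph adjacency
    return sub_adj, nodes

rng = np.random.default_rng(0)
N = 1000
adj = sp.random(N, N, density=0.01, format="csr", random_state=0)
adj = ((adj + adj.T) > 0).astype(np.float32)   # symmetrize
train_nodes = np.arange(800)

for step in range(3):                    # each minibatch = one sampled subgraph
    sub_adj, nodes = sample_subgraph(adj, train_nodes, budget=128, rng=rng)
    # ... build a complete GCN on (sub_adj, features[nodes]) and take a step ...
    print(step, sub_adj.shape, sub_adj.nnz)
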
|
1708.05891
|
Mohamed Eldesouki
|
Mohamed Eldesouki, Younes Samih, Ahmed Abdelali, Mohammed Attia, Hamdy
Mubarak, Kareem Darwish, Kallmeyer Laura
|
Arabic Multi-Dialect Segmentation: bi-LSTM-CRF vs. SVM
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Arabic word segmentation is essential for a variety of NLP applications such
as machine translation and information retrieval. Segmentation entails breaking
words into their constituent stems, affixes and clitics. In this paper, we
compare two approaches for segmenting four major Arabic dialects using only
several thousand training examples for each dialect. The two approaches involve
posing the problem as a ranking problem, where an SVM ranker picks the best
segmentation, and as a sequence labeling problem, where a bi-LSTM RNN coupled
with CRF determines where best to segment words. We are able to achieve solid
segmentation results for all dialects using rather limited training data. We
also show that employing Modern Standard Arabic data for domain adaptation and
assuming context independence improve overall results.
|
[
{
"created": "Sat, 19 Aug 2017 19:52:36 GMT",
"version": "v1"
}
] |
2017-08-22
|
[
[
"Eldesouki",
"Mohamed",
""
],
[
"Samih",
"Younes",
""
],
[
"Abdelali",
"Ahmed",
""
],
[
"Attia",
"Mohammed",
""
],
[
"Mubarak",
"Hamdy",
""
],
[
"Darwish",
"Kareem",
""
],
[
"Laura",
"Kallmeyer",
""
]
] |
Arabic word segmentation is essential for a variety of NLP applications such as machine translation and information retrieval. Segmentation entails breaking words into their constituent stems, affixes and clitics. In this paper, we compare two approaches for segmenting four major Arabic dialects using only several thousand training examples for each dialect. The two approaches involve posing the problem as a ranking problem, where an SVM ranker picks the best segmentation, and as a sequence labeling problem, where a bi-LSTM RNN coupled with CRF determines where best to segment words. We are able to achieve solid segmentation results for all dialects using rather limited training data. We also show that employing Modern Standard Arabic data for domain adaptation and assuming context independence improve overall results.
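
For the sequence-labeling formulation, the workhorse is a character-level bi-LSTM tagger; a sketch follows. The CRF decoding layer the paper couples with the bi-LSTM is omitted here, and the vocabulary size and tag set (a simple begin/inside scheme) are hypothetical.

import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    """Character-level bi-LSTM tagger for word segmentation.
    Tags mark segment boundaries (e.g., B = begin segment, I = inside);
    a CRF decoding layer, as used in the paper, is omitted here."""
    def __init__(self, n_chars, n_tags=2, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)               # (batch, seq_len, n_tags) tag scores

model = BiLSTMSegmenter(n_chars=100)
chars = torch.randint(0, 100, (4, 12))   # batch of 4 words, 12 chars each
tags = torch.randint(0, 2, (4, 12))
loss = nn.CrossEntropyLoss()(model(chars).reshape(-1, 2), tags.reshape(-1))
loss.backward()
print(float(loss))
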
|
2306.15247
|
Wei-Kun Chen
|
Wei-Kun Chen, Zheyu Wu, Rui-Jin Zhang, Ya-Feng Liu, Yu-Hong Dai,
Zhi-Quan Luo
|
Towards Efficient Optimal Large-Scale Network Slicing: A Decomposition
Approach
|
13 pages, 11 figures, submitted for possible publication; for the
conference version, see arXiv:2306.15247v1
| null | null | null |
cs.IT eess.SP math.IT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the network slicing (NS) problem which attempts to map
multiple customized virtual network requests to a common shared network
infrastructure and allocate network resources to meet diverse service
requirements. This paper proposes an efficient decomposition algorithm for
globally solving the large-scale NP-hard NS problem. The proposed algorithm
decomposes the hard NS problem into two relatively easy function placement (FP)
and traffic routing (TR) subproblems and iteratively solves them, enabling
information feedback between the two, which makes it particularly suitable
for solving large-scale problems. Specifically, the FP subproblem is to place
service functions into cloud nodes in the network, and solving it can return a
function placement strategy based on which the TR subproblem is defined; and
the TR subproblem is to find paths connecting two nodes hosting two adjacent
functions in the network, and solving it can either verify that the solution of
the FP subproblem is an optimal solution of the original problem, or return a
valid inequality to the FP subproblem that cuts off the current infeasible
solution. The proposed algorithm is guaranteed to find the global solution of
the NS problem. By taking the special structure of the NS problem into
consideration, we successfully develop two families of valid inequalities that
make the proposed decomposition algorithm converge much more quickly and thus
become much more efficient. We demonstrate the effectiveness and efficiency of the
proposed valid inequalities and algorithm via numerical experiments.
|
[
{
"created": "Tue, 27 Jun 2023 07:00:02 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 03:41:52 GMT",
"version": "v2"
}
] |
2023-12-18
|
[
[
"Chen",
"Wei-Kun",
""
],
[
"Wu",
"Zheyu",
""
],
[
"Zhang",
"Rui-Jin",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Dai",
"Yu-Hong",
""
],
[
"Luo",
"Zhi-Quan",
""
]
] |
This paper considers the network slicing (NS) problem which attempts to map multiple customized virtual network requests to a common shared network infrastructure and allocate network resources to meet diverse service requirements. This paper proposes an efficient decomposition algorithm for globally solving the large-scale NP-hard NS problem. The proposed algorithm decomposes the hard NS problem into two relatively easy function placement (FP) and traffic routing (TR) subproblems and iteratively solves them, enabling information feedback between the two, which makes it particularly suitable for solving large-scale problems. Specifically, the FP subproblem is to place service functions into cloud nodes in the network, and solving it can return a function placement strategy based on which the TR subproblem is defined; and the TR subproblem is to find paths connecting two nodes hosting two adjacent functions in the network, and solving it can either verify that the solution of the FP subproblem is an optimal solution of the original problem, or return a valid inequality to the FP subproblem that cuts off the current infeasible solution. The proposed algorithm is guaranteed to find the global solution of the NS problem. By taking the special structure of the NS problem into consideration, we successfully develop two families of valid inequalities that make the proposed decomposition algorithm converge much more quickly and thus become much more efficient. We demonstrate the effectiveness and efficiency of the proposed valid inequalities and algorithm via numerical experiments.
|
2207.09622
|
Yun-Bin Zhao Y
|
Yun-Bin Zhao and Zhi-Quan Luo
|
Natural Thresholding Algorithms for Signal Recovery with Sparsity
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The algorithms based on the technique of optimal $k$-thresholding (OT) were
recently proposed for signal recovery, and they are very different from the
traditional family of hard thresholding methods. However, the computational
cost for OT-based algorithms remains high at the current stage of their
development. This stimulates the development of the so-called natural
thresholding (NT) algorithm and its variants in this paper. The family of NT
algorithms is developed through the first-order approximation of the so-called
regularized optimal $k$-thresholding model, and thus the computational cost for
this family of algorithms is significantly lower than that of the OT-based
algorithms. The guaranteed performance of NT-type algorithms for signal
recovery from noisy measurements is shown under the restricted isometry
property and the concavity of the objective function of the regularized optimal
$k$-thresholding model. Empirical results indicate that the NT-type algorithms
are robust and very comparable to several mainstream algorithms for sparse
signal recovery.
|
[
{
"created": "Wed, 20 Jul 2022 02:44:12 GMT",
"version": "v1"
}
] |
2022-07-21
|
[
[
"Zhao",
"Yun-Bin",
""
],
[
"Luo",
"Zhi-Quan",
""
]
] |
The algorithms based on the technique of optimal $k$-thresholding (OT) were recently proposed for signal recovery, and they are very different from the traditional family of hard thresholding methods. However, the computational cost for OT-based algorithms remains high at the current stage of their development. This stimulates the development of the so-called natural thresholding (NT) algorithm and its variants in this paper. The family of NT algorithms is developed through the first-order approximation of the so-called regularized optimal $k$-thresholding model, and thus the computational cost for this family of algorithms is significantly lower than that of the OT-based algorithms. The guaranteed performance of NT-type algorithms for signal recovery from noisy measurements is shown under the restricted isometry property and the concavity of the objective function of the regularized optimal $k$-thresholding model. Empirical results indicate that the NT-type algorithms are robust and very comparable to several mainstream algorithms for sparse signal recovery.
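
For context, the traditional hard-thresholding family that OT- and NT-type methods depart from is easy to state: after each gradient step, keep only the k largest-magnitude entries. The sketch below is that classical iterative hard thresholding baseline, not the paper's NT algorithm; the step size is chosen conservatively from the spectral norm.

import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(y, A, k, iters=300):
    """Classical iterative hard thresholding: x <- H_k(x + t * A^T (y - A x))."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + t * A.T @ (y - A @ x), k)
    return x

rng = np.random.default_rng(0)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized Gaussian measurements
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = iht(A @ x_true, A, k)
print(np.linalg.norm(x_hat - x_true))          # reconstruction error
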
|
1904.09529
|
Kevin Karsch
|
Mark A. Livingston, Zhuming Ai, Kevin Karsch, Gregory O. Gibson
|
User interface design for military AR applications
| null | null |
10.1007/s10055-010-0179-1
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing a user interface for military situation awareness presents
challenges for managing information in a useful and usable manner. We present
an integrated set of functions for the presentation of and interaction with
information for a mobile augmented reality application for military
applications. Our research has concentrated on four areas. We filter
information based on relevance to the user (in turn based on location),
evaluate methods for presenting information that represents entities occluded
from the user's view, enable interaction through a top-down map view metaphor
akin to current techniques used in the military, and facilitate collaboration
with other mobile users and/or a command center. In addition, we refined the
user interface architecture to conform to requirements from subject matter
experts. We discuss the lessons learned in our work and directions for future
research.
|
[
{
"created": "Sun, 21 Apr 2019 02:10:15 GMT",
"version": "v1"
}
] |
2019-04-23
|
[
[
"Livingston",
"Mark A.",
""
],
[
"Ai",
"Zhuming",
""
],
[
"Karsch",
"Kevin",
""
],
[
"Gibson",
"Gregory O.",
""
]
] |
Designing a user interface for military situation awareness presents challenges for managing information in a useful and usable manner. We present an integrated set of functions for the presentation of and interaction with information for a mobile augmented reality application for military applications. Our research has concentrated on four areas. We filter information based on relevance to the user (in turn based on location), evaluate methods for presenting information that represents entities occluded from the user's view, enable interaction through a top-down map view metaphor akin to current techniques used in the military, and facilitate collaboration with other mobile users and/or a command center. In addition, we refined the user interface architecture to conform to requirements from subject matter experts. We discuss the lessons learned in our work and directions for future research.
|
2305.14204
|
Andrea Sipos
|
Andrea Sipos and Nima Fazeli
|
MultiSCOPE: Disambiguating In-Hand Object Poses with Proprioception and
Tactile Feedback
|
Accepted to RSS 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a method for estimating in-hand object poses using
proprioception and tactile feedback from a bimanual robotic system. Our method
addresses the problem of reducing pose uncertainty through a sequence of
frictional contact interactions between the grasped objects. As part of our
method, we propose 1) a tool segmentation routine that facilitates contact
location and object pose estimation, 2) a loss that allows reasoning over
solution consistency between interactions, and 3) a loss to promote converging
to object poses and contact locations that explain the external force-torque
experienced by each arm. We demonstrate the efficacy of our method in a
task-based demonstration both in simulation and on a real-world bimanual
platform and show significant improvement in object pose estimation over single
interactions. Visit www.mmintlab.com/multiscope/ for code and videos.
|
[
{
"created": "Tue, 23 May 2023 16:24:17 GMT",
"version": "v1"
}
] |
2023-05-24
|
[
[
"Sipos",
"Andrea",
""
],
[
"Fazeli",
"Nima",
""
]
] |
In this paper, we propose a method for estimating in-hand object poses using proprioception and tactile feedback from a bimanual robotic system. Our method addresses the problem of reducing pose uncertainty through a sequence of frictional contact interactions between the grasped objects. As part of our method, we propose 1) a tool segmentation routine that facilitates contact location and object pose estimation, 2) a loss that allows reasoning over solution consistency between interactions, and 3) a loss to promote converging to object poses and contact locations that explain the external force-torque experienced by each arm. We demonstrate the efficacy of our method in a task-based demonstration both in simulation and on a real-world bimanual platform and show significant improvement in object pose estimation over single interactions. Visit www.mmintlab.com/multiscope/ for code and videos.
|