| column | type | constraints |
|---|---|---|
| id | string | 9–10 chars |
| submitter | string | 1–64 chars, nullable (⌀) |
| authors | string | 4–20.7k chars |
| title | string | 4–246 chars |
| comments | string | 1–523 chars, nullable (⌀) |
| journal-ref | string | 4–404 chars, nullable (⌀) |
| doi | string | 11–153 chars, nullable (⌀) |
| report-no | string | 2–254 chars, nullable (⌀) |
| categories | string | 5–98 chars |
| license | string | 9 classes |
| orig_abstract | string | 14–3.35k chars |
| versions | list | 1–60 items |
| update_date | string | 10 chars |
| authors_parsed | list | 1–1.35k items |
| abstract | string | 11–3.34k chars |
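The schema above can be checked mechanically. Below is a minimal sketch, assuming each row is available as a Python dict keyed by the column names; the length bounds are copied from the schema header, and `check_row` is a hypothetical helper, not part of any dataset tooling:

```python
# Fields marked with the null symbol in the schema above may be missing.
NULLABLE = {"submitter", "comments", "journal-ref", "doi", "report-no"}

# (min, max) character bounds for a few fields, taken from the schema header.
LENGTHS = {"id": (9, 10), "title": (4, 246), "update_date": (10, 10)}

def check_row(row: dict) -> list:
    """Return a list of schema violations for one metadata row."""
    problems = []
    for field, (lo, hi) in LENGTHS.items():
        value = row.get(field)
        if value is None:
            if field not in NULLABLE:
                problems.append(f"{field}: unexpected null")
        elif not lo <= len(value) <= hi:
            problems.append(f"{field}: length {len(value)} outside [{lo}, {hi}]")
    return problems

# Example row, abridged from the first record in the table.
row = {
    "id": "2211.15787",
    "title": "MuSFA: Improving Music Structural Function Analysis with Partially Labeled Data",
    "update_date": "2022-11-30",
}
print(check_row(row))  # → []
```

The same helper flags out-of-range values, e.g. a one-character `id` yields `["id: length 1 outside [9, 10]"]`.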
---
**id:** 2211.15787
**submitter:** Ju-Chiang Wang
**authors:** Ju-Chiang Wang, Jordan B. L. Smith, Yun-Ning Hung
**title:** MuSFA: Improving Music Structural Function Analysis with Partially Labeled Data
**comments:** ISMIR2022, LBD paper
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.SD eess.AS
**license:** http://creativecommons.org/licenses/by/4.0/
**abstract:** Music structure analysis (MSA) systems aim to segment a song recording into non-overlapping sections with useful labels. Previous MSA systems typically predict abstract labels in a post-processing step and require the full context of the song. By contrast, we recently proposed a supervised framework, called "Music Structural Function Analysis" (MuSFA), that models and predicts meaningful labels like 'verse' and 'chorus' directly from audio, without requiring the full context of a song. However, the performance of this system depends on the amount and quality of training data. In this paper, we propose to repurpose a public dataset, the HookTheory Lead Sheet Dataset (HLSD), to improve the performance. HLSD contains over 18K excerpts of music sections originally collected for studying automatic melody harmonization. We treat each excerpt as a partially labeled song and provide a label mapping, so that HLSD can be used together with other public datasets, such as SALAMI, RWC, and Isophonics. In cross-dataset evaluations, we find that including HLSD in training can improve state-of-the-art boundary detection and section labeling scores by ~3% and ~1%, respectively.
**versions:** v1 (Mon, 28 Nov 2022 21:48:45 GMT)
**update_date:** 2022-11-30
**authors_parsed:** [["Wang", "Ju-Chiang", ""], ["Smith", "Jordan B. L.", ""], ["Hung", "Yun-Ning", ""]]
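The `authors_parsed` value shown above stores each author as a `[last, first, suffix]` triple, with an empty string for a missing suffix. A short sketch reassembling display names from those triples (`display_name` is a hypothetical helper, not part of the dataset):

```python
def display_name(entry):
    """Turn a [last, first, suffix] triple into 'First Last Suffix'."""
    last, first, suffix = entry
    return " ".join(part for part in (first, last, suffix) if part)

# Triples copied from the first record's authors_parsed field.
authors_parsed = [
    ["Wang", "Ju-Chiang", ""],
    ["Smith", "Jordan B. L.", ""],
    ["Hung", "Yun-Ning", ""],
]
print(", ".join(display_name(a) for a in authors_parsed))
# → Ju-Chiang Wang, Jordan B. L. Smith, Yun-Ning Hung
```

The result matches the record's flat `authors` string, which is what makes the triples useful for re-ordering or de-duplicating names.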
---
**id:** 2405.15334
**submitter:** Jonathan Dong
**authors:** Shuya Lin, Yuxiong Wang, Jonathan Dong, and Shiguang Ni
**title:** Detection and Positive Reconstruction of Cognitive Distortion sentences: Mandarin Dataset and Evaluation
**comments:** null
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.CL
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** This research introduces a Positive Reconstruction Framework based on positive psychology theory. Overcoming negative thoughts can be challenging; our objective is to address and reframe them through positive reinterpretation. To tackle this challenge, a two-fold approach is necessary: identifying cognitive distortions and suggesting a positively reframed alternative while preserving the original thought's meaning. Recent studies have investigated the application of Natural Language Processing (NLP) models in English for each stage of this process. In this study, we emphasize the theoretical foundation of the Positive Reconstruction Framework, grounded in broaden-and-build theory. We provide a shared corpus containing 4001 instances for detecting cognitive distortions and 1900 instances for positive reconstruction in Mandarin. Leveraging recent NLP techniques, including transfer learning, fine-tuning pretrained networks, and prompt engineering, we demonstrate the effectiveness of automated tools for both tasks. In summary, our study contributes to multilingual positive reconstruction, highlighting the effectiveness of NLP in cognitive distortion detection and positive reconstruction.
**versions:** v1 (Fri, 24 May 2024 08:17:20 GMT)
**update_date:** 2024-05-27
**authors_parsed:** [["Lin", "Shuya", ""], ["Wang", "Yuxiong", ""], ["Dong", "Jonathan", ""], ["Ni", "Shiguang", ""]]
---
**id:** 2404.09833
**submitter:** Hongchi Xia
**authors:** Hongchi Xia, Zhi-Hao Lin, Wei-Chiu Ma, Shenlong Wang
**title:** Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video
**comments:** CVPR 2024. Project page (with code): https://video2game.github.io/
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.CV cs.AI
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** Creating high-quality and interactive virtual environments, such as games and simulators, often involves complex and costly manual modeling processes. In this paper, we present Video2Game, a novel approach that automatically converts videos of real-world scenes into realistic and interactive game environments. At the heart of our system are three core components: (i) a neural radiance fields (NeRF) module that effectively captures the geometry and visual appearance of the scene; (ii) a mesh module that distills the knowledge from NeRF for faster rendering; and (iii) a physics module that models the interactions and physical dynamics among the objects. By following the carefully designed pipeline, one can construct an interactable and actionable digital replica of the real world. We benchmark our system on both indoor and large-scale outdoor scenes. We show that we can not only produce highly realistic renderings in real time, but also build interactive games on top.
**versions:** v1 (Mon, 15 Apr 2024 14:32:32 GMT)
**update_date:** 2024-04-16
**authors_parsed:** [["Xia", "Hongchi", ""], ["Lin", "Zhi-Hao", ""], ["Ma", "Wei-Chiu", ""], ["Wang", "Shenlong", ""]]
---
**id:** 2301.05586
**submitter:** Bo Zhang
**authors:** Chuyi Li, Lulu Li, Yifei Geng, Hongliang Jiang, Meng Cheng, Bo Zhang, Zaidan Ke, Xiaoming Xu, Xiangxiang Chu
**title:** YOLOv6 v3.0: A Full-Scale Reloading
**comments:** Tech Report. arXiv admin note: text overlap with arXiv:2209.02976
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.CV
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** The YOLO community has been in high spirits since our first two releases! With the advent of Chinese New Year 2023, the Year of the Rabbit, we refurbish YOLOv6 with numerous novel enhancements to the network architecture and the training scheme. This release is identified as YOLOv6 v3.0. For a glimpse of performance, our YOLOv6-N hits 37.5% AP on the COCO dataset at a throughput of 1187 FPS tested with an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 45.0% AP at 484 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOv8-S, YOLOX-S, and PPYOLOE-S). Meanwhile, YOLOv6-M/L achieve better accuracy (50.0% and 52.8%, respectively) than other detectors at a similar inference speed. Additionally, with an extended backbone and neck design, our YOLOv6-L6 achieves state-of-the-art accuracy in real time. Extensive experiments are carefully conducted to validate the effectiveness of each improvement. Our code is made available at https://github.com/meituan/YOLOv6.
**versions:** v1 (Fri, 13 Jan 2023 14:46:46 GMT)
**update_date:** 2023-01-16
**authors_parsed:** [["Li", "Chuyi", ""], ["Li", "Lulu", ""], ["Geng", "Yifei", ""], ["Jiang", "Hongliang", ""], ["Cheng", "Meng", ""], ["Zhang", "Bo", ""], ["Ke", "Zaidan", ""], ["Xu", "Xiaoming", ""], ["Chu", "Xiangxiang", ""]]
---
**id:** 2302.06746
**submitter:** Ruokai Yin
**authors:** Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda
**title:** Workload-Balanced Pruning for Sparse Spiking Neural Networks
**comments:** 11 pages. Accepted to IEEE Transactions on Emerging Topics in Computational Intelligence (2024)
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.NE
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** Pruning for Spiking Neural Networks (SNNs) has emerged as a fundamental methodology for deploying deep SNNs on resource-constrained edge devices. Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, high weight sparsity brings a workload imbalance problem. Specifically, workload imbalance happens when different numbers of non-zero weights are assigned to hardware units running in parallel. This results in low hardware utilization and thus imposes longer latency and higher energy costs. In preliminary experiments, we show that sparse SNNs (~98% weight sparsity) can suffer utilization as low as ~59%. To alleviate the workload imbalance problem, we propose u-Ticket, where we monitor and adjust the weight connections of the SNN during Lottery Ticket Hypothesis (LTH) based pruning, thus guaranteeing that the final ticket achieves optimal utilization when deployed onto the hardware. Experiments indicate that u-Ticket can guarantee up to 100% hardware utilization, reducing latency by up to 76.9% and energy cost by up to 63.8% compared to the non-utilization-aware LTH method.
**versions:** v1 (Mon, 13 Feb 2023 23:18:47 GMT); v2 (Fri, 22 Mar 2024 19:53:37 GMT)
**update_date:** 2024-03-26
**authors_parsed:** [["Yin", "Ruokai", ""], ["Kim", "Youngeun", ""], ["Li", "Yuhang", ""], ["Moitra", "Abhishek", ""], ["Satpute", "Nitin", ""], ["Hambitzer", "Anna", ""], ["Panda", "Priyadarshini", ""]]
---
**id:** 2212.05830
**submitter:** Yachao Li Ph.D.
**authors:** Yachao Li, Junhui Li, Jing Jiang, Shimin Tao, Hao Yang and Min Zhang
**title:** P-Transformer: Towards Better Document-to-Document Neural Machine Translation
**comments:** Submitted to TASLP
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.CL
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** Directly training a document-to-document (Doc2Doc) neural machine translation (NMT) model via Transformer from scratch, especially on small datasets, usually fails to converge. Our dedicated probing tasks show that 1) both absolute and relative position information is gradually weakened or even vanishes by the upper encoder layers, and 2) the vanishing of absolute position information in the encoder output causes the training failure of Doc2Doc NMT. To alleviate this problem, we propose a position-aware Transformer (P-Transformer) to enhance both the absolute and relative position information in both self-attention and cross-attention. Specifically, we integrate absolute positional information, i.e., position embeddings, into the query-key pairs in both self-attention and cross-attention through a simple yet effective addition operation. Moreover, we also integrate relative position encoding in self-attention. The proposed P-Transformer utilizes sinusoidal position encoding and does not require any task-specific position embedding, segment embedding, or attention mechanism. With these methods, we build a Doc2Doc NMT model with P-Transformer, which ingests the source document and generates the complete target document in a sequence-to-sequence (seq2seq) manner. In addition, P-Transformer can be applied to seq2seq-based document-to-sentence (Doc2Sent) and sentence-to-sentence (Sent2Sent) translation. Extensive experimental results show that P-Transformer significantly outperforms strong baselines on 9 widely used document-level datasets covering 7 language pairs and small, middle, and large scales, achieving a new state-of-the-art. Experiments on discourse phenomena show that our Doc2Doc NMT models improve translation quality in both BLEU and discourse coherence. We make our code available on GitHub.
**versions:** v1 (Mon, 12 Dec 2022 11:19:05 GMT)
**update_date:** 2022-12-13
**authors_parsed:** [["Li", "Yachao", ""], ["Li", "Junhui", ""], ["Jiang", "Jing", ""], ["Tao", "Shimin", ""], ["Yang", "Hao", ""], ["Zhang", "Min", ""]]
---
**id:** 1610.09012
**submitter:** Kleinner Farias
**authors:** Kleinner Farias
**title:** Empirical Evaluation of Effort on Composing Design Models
**comments:** PhD thesis, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil, 2012 - Keywords: design models, UML, development effort, software modeling, aspect-oriented modeling, model merging, software merging, quality model, empirical studies
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.SE
**license:** http://creativecommons.org/licenses/by/4.0/
**abstract:** Model composition plays a central role in many software engineering activities, such as evolving models to add new features and reconciling conflicting design models developed in parallel by different development teams. As model composition is usually an error-prone and effort-consuming task, its potential benefits, such as gains in productivity, can be compromised. However, there is currently no empirical knowledge about the effort required to compose design models. Only the feedback of model composition evangelists is available, and it often diverges. Consequently, developers are unable to conduct any cost-effectiveness analysis or to identify, predict, or reduce composition effort. The inability to evaluate composition effort is due to three key problems. First, current evaluation frameworks do not consider fundamental concepts in model composition such as conflicts and inconsistencies. Second, researchers and developers do not know what factors can influence composition effort in practice. Third, practical knowledge about how such influential factors may affect developers' effort is severely lacking. In this context, the contributions of this thesis are threefold: (i) a quality model for supporting the evaluation of model composition effort, (ii) practical knowledge, derived from a family of quantitative and qualitative empirical studies, about model composition effort and its influential factors, and (iii) insight into how to evaluate model composition effort and tame the side effects of such influential factors.
**versions:** v1 (Thu, 27 Oct 2016 20:51:55 GMT)
**update_date:** 2016-10-31
**authors_parsed:** [["Farias", "Kleinner", ""]]
---
**id:** 0805.0202
**submitter:** Joao Marques-Silva
**authors:** Antonio Morgado, Joao Marques-Silva
**title:** A Pseudo-Boolean Solution to the Maximum Quartet Consistency Problem
**comments:** null
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.AI cs.LO
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** Determining the evolutionary history of given biological data is an important task in the biological sciences. Given a set of quartet topologies over a set of taxa, the Maximum Quartet Consistency (MQC) problem consists of computing a global phylogeny that satisfies the maximum number of quartets. A number of solutions have been proposed for the MQC problem, including Dynamic Programming, Constraint Programming, and more recently Answer Set Programming (ASP). ASP is currently the most efficient approach for optimally solving the MQC problem. This paper proposes encoding the MQC problem with pseudo-Boolean (PB) constraints. The use of PB allows the MQC problem to be solved with efficient PB solvers, and also allows different modeling approaches to be considered. Initial results are promising and suggest that PB can be an effective alternative for solving the MQC problem.
**versions:** v1 (Fri, 2 May 2008 10:06:27 GMT)
**update_date:** 2008-12-18
**authors_parsed:** [["Morgado", "Antonio", ""], ["Marques-Silva", "Joao", ""]]
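The `categories` value in the record above ("cs.AI cs.LO") is a single space-separated string; arXiv lists the primary category first. A minimal sketch splitting it into primary and cross-listed categories (the helper name is my own):

```python
def split_categories(categories: str):
    """Split the space-separated categories string.

    arXiv lists the primary category first; the rest are cross-lists.
    """
    labels = categories.split()
    return labels[0], labels[1:]

primary, cross = split_categories("cs.AI cs.LO")
print(primary, cross)  # → cs.AI ['cs.LO']
```

The same call handles longer values such as "cs.SI cs.CY cs.IR physics.soc-ph" from the last record.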
---
**id:** 2206.11335
**submitter:** Debesh Jha
**authors:** Debesh Jha, Ashish Rauniyar, Håvard D. Johansen, Dag Johansen, Michael A. Riegler, Pål Halvorsen, Ulas Bagci
**title:** Video Analytics in Elite Soccer: A Distributed Computing Perspective
**comments:** null
**journal-ref:** published IEEE SAM 2022
**doi:** null
**report-no:** null
**categories:** cs.MM
**license:** http://creativecommons.org/licenses/by/4.0/
**abstract:** Ubiquitous sensors and Internet of Things (IoT) technologies have revolutionized the sports industry, providing new methodologies for planning, effective coordination of training, and post-game match analysis. New methods, including machine learning and image and video processing, have been developed for performance evaluation, allowing the analyst to track the performance of a player in real time. Following FIFA's 2015 approval of electronic performance and tracking systems during games, performance data of a single player or the entire team may be collected using GPS-based wearables. Data from practice sessions outside the sporting arena is being collected in greater volume than ever before. Recognizing the significance of data in professional soccer, this paper presents video analytics, examines recent state-of-the-art literature on elite soccer, and summarizes existing real-time video analytics algorithms. We also discuss real-time crowdsourcing of the obtained data, tactical and technical performance, and distributed computing and its importance in video analytics, and propose a future research perspective.
**versions:** v1 (Wed, 22 Jun 2022 19:25:28 GMT)
**update_date:** 2022-06-24
**authors_parsed:** [["Jha", "Debesh", ""], ["Rauniyar", "Ashish", ""], ["Johansen", "Håvard D.", ""], ["Johansen", "Dag", ""], ["Riegler", "Michael A.", ""], ["Halvorsen", "Pål", ""], ["Bagci", "Ulas", ""]]
---
**id:** 2108.00439
**submitter:** Seongjin Choi
**authors:** Zhixiong Jin, Jiwon Kim, Hwasoo Yeo, Seongjin Choi
**title:** Transformer-based Map Matching Model with Limited Ground-Truth Data using Transfer-Learning Approach
**comments:** 25 pages, 9 figures, 4 tables
**journal-ref:** null
**doi:** 10.1016/j.trc.2022.103668
**report-no:** null
**categories:** cs.LG
**license:** http://creativecommons.org/licenses/by-nc-nd/4.0/
**abstract:** In many spatial trajectory-based applications, it is necessary to map raw trajectory data points onto road networks in digital maps, a process commonly referred to as map matching. While most previous map-matching methods have focused on rule-based algorithms, in this paper we consider the map-matching task from a data-driven perspective and propose a deep learning-based map-matching model. We build a Transformer-based map-matching model with a transfer-learning approach. We generate trajectory data to pre-train the Transformer model and then fine-tune the model with a limited amount of ground-truth data to minimize the model development cost and reduce the real-to-virtual gap. Three metrics (Average Hamming Distance, F-score, and BLEU) at two levels (point and segment level) are used to evaluate model performance. The results indicate that the proposed model outperforms existing models. Furthermore, we use the attention weights of the Transformer to visualize the map-matching process and examine how the model matches road segments correctly.
**versions:** v1 (Sun, 1 Aug 2021 11:51:11 GMT); v2 (Tue, 3 Aug 2021 01:06:50 GMT); v3 (Thu, 30 Sep 2021 12:53:58 GMT); v4 (Thu, 7 Oct 2021 04:08:31 GMT)
**update_date:** 2023-08-15
**authors_parsed:** [["Jin", "Zhixiong", ""], ["Kim", "Jiwon", ""], ["Yeo", "Hwasoo", ""], ["Choi", "Seongjin", ""]]
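The `versions` list in the record above pairs each version tag with a `created` timestamp such as "Sun, 1 Aug 2021 11:51:11 GMT". A sketch parsing those timestamps and selecting the most recent submission, assuming every entry uses this GMT format as in the rows shown:

```python
from datetime import datetime

# Format of the "created" field, e.g. "Sun, 1 Aug 2021 11:51:11 GMT".
FMT = "%a, %d %b %Y %H:%M:%S %Z"

# Two entries copied from the four-version record above.
versions = [
    {"created": "Sun, 1 Aug 2021 11:51:11 GMT", "version": "v1"},
    {"created": "Thu, 7 Oct 2021 04:08:31 GMT", "version": "v4"},
]

# Parse each timestamp and keep the most recent version.
latest = max(versions, key=lambda v: datetime.strptime(v["created"], FMT))
print(latest["version"])  # → v4
```

Note that `strptime` with `%Z` accepts "GMT" and yields a naive datetime, which is sufficient here because all timestamps in these rows share the same zone.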
---
**id:** 2407.11784
**submitter:** Daoyuan Chen
**authors:** Daoyuan Chen, Haibin Wang, Yilun Huang, Ce Ge, Yaliang Li, Bolin Ding, Jingren Zhou
**title:** Data-Juicer Sandbox: A Comprehensive Suite for Multimodal Data-Model Co-development
**comments:** 26 pages, 9 figures, 5 tables
**journal-ref:** null
**doi:** null
**report-no:** null
**categories:** cs.AI cs.CV cs.LG
**license:** http://arxiv.org/licenses/nonexclusive-distrib/1.0/
**abstract:** The emergence of large-scale multi-modal generative models has drastically advanced artificial intelligence, introducing unprecedented levels of performance and functionality. However, optimizing these models remains challenging due to historically isolated paths of model-centric and data-centric development, leading to suboptimal outcomes and inefficient resource utilization. In response, we present a novel sandbox suite tailored for integrated data-model co-development. This sandbox provides a comprehensive experimental platform, enabling rapid iteration and insight-driven refinement of both data and models. Our proposed "Probe-Analyze-Refine" workflow, validated through applications on state-of-the-art LLaVA-like and DiT-based models, yields significant performance boosts, such as topping the VBench leaderboard. We also uncover fruitful insights gleaned from exhaustive benchmarks, shedding light on the critical interplay between data quality, diversity, and model behavior. In the hope of fostering deeper understanding and future progress in multi-modal data and generative modeling, our codes, datasets, and models are maintained and accessible at https://github.com/modelscope/data-juicer/blob/main/docs/Sandbox.md.
**versions:** v1 (Tue, 16 Jul 2024 14:40:07 GMT)
**update_date:** 2024-07-17
**authors_parsed:** [["Chen", "Daoyuan", ""], ["Wang", "Haibin", ""], ["Huang", "Yilun", ""], ["Ge", "Ce", ""], ["Li", "Yaliang", ""], ["Ding", "Bolin", ""], ["Zhou", "Jingren", ""]]
1406.5946
|
Jan Broekaert
|
Lorena P\'erez-Garc\'ia, Jan Broekaert and Nicole Note
|
Is a `Wirikuta empowerment' of the Huichol measurable on the Internet?
|
14 pages, 1 figure, 4 graphs (submitted to journal)
|
Internet Research, 26, 5, 1269 - 1290 (2016)
|
10.1108/IntR-07-2014-0185
| null |
cs.SI cs.CY cs.IR physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current social and activist movements find the opportunity in social media to
effectively impact on the agenda of governing bodies and create `global'
perceptions -- it is often claimed. Content related to the social and activist
movements is online, to be accessed, supported or disputed and distributed from
virtually anywhere at any time, in the public sphere of the Internet. This
activity allows the enlargement of social movements and would increase the
empowerment in the concerned communities. The aim of this explorative study is
to assess whether the temporal evolution of the Normalised Web Distance (NWD)
--as defined by Cilibrasi & Vit\'anyi (2007)-- between identifying terms
concerning this activism could be used to measure the progress or decline of
social empowerment through the Internet. The NWD relies on the page count
number of single and joint queries, which in our study have been registered
using a freely available web browser (e.g. Google Search) providing a time
search window for temporal query results. To explore this meta-data technique,
we introduce the case of a perceived Wirikuta online movement, which originated
in Mexico with the aim to protect the Huichols' sacred land and water resources
from open mining projects for silver ore. We conducted a small scale Internet
study relating the key terms `Wirikuta', `Huichol', and `Wixarika' and their
co-occurrence with seven positive qualifiers (e.g. `sacred land'), five
negative qualifiers (e.g. `violence') and one neutral qualifier (`table') over
time, annually from 1994 till 2013. We confirm close semantic clustering over
time of traditional indigenous identity terms of the Huichol, and observe a
slight convergence of key terms to `mines' and less pronounced to `sacred land'
and a divergence with respect to `ancestors' indicating a complex image of a
tendency of empowerment.
|
[
{
"created": "Mon, 23 Jun 2014 15:32:14 GMT",
"version": "v1"
}
] |
2016-12-07
|
[
[
"Pérez-García",
"Lorena",
""
],
[
"Broekaert",
"Jan",
""
],
[
"Note",
"Nicole",
""
]
] |
Current social and activist movements find the opportunity in social media to effectively impact on the agenda of governing bodies and create `global' perceptions -- it is often claimed. Content related to the social and activist movements is online, to be accessed, supported or disputed and distributed from virtually anywhere at any time, in the public sphere of the Internet. This activity allows the enlargement of social movements and would increase the empowerment in the concerned communities. The aim of this explorative study is to assess whether the temporal evolution of the Normalised Web Distance (NWD) --as defined by Cilibrasi & Vit\'anyi (2007)-- between identifying terms concerning this activism could be used to measure the progress or decline of social empowerment through the Internet. The NWD relies on the page count number of single and joint queries, which in our study have been registered using a freely available web browser (e.g. Google Search) providing a time search window for temporal query results. To explore this meta-data technique, we introduce the case of a perceived Wirikuta online movement, which originated in Mexico with the aim to protect the Huichols' sacred land and water resources from open mining projects for silver ore. We conducted a small scale Internet study relating the key terms `Wirikuta', `Huichol', and `Wixarika' and their co-occurrence with seven positive qualifiers (e.g. `sacred land'), five negative qualifiers (e.g. `violence') and one neutral qualifier (`table') over time, annually from 1994 till 2013. We confirm close semantic clustering over time of traditional indigenous identity terms of the Huichol, and observe a slight convergence of key terms to `mines' and less pronounced to `sacred land' and a divergence with respect to `ancestors' indicating a complex image of a tendency of empowerment.
|
0805.2797
|
Mikl\'os Pint\'er
|
M. Pinter
|
Young's axiomatization of the Shapley value - a new proof
|
11 pages
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider Young (1985)'s characterization of the Shapley value, and give a
new proof of this axiomatization. Moreover, as applications of the new proof,
we show that Young (1985)'s axiomatization of the Shapley value works on
various well-known subclasses of TU games.
|
[
{
"created": "Mon, 19 May 2008 07:11:04 GMT",
"version": "v1"
},
{
"created": "Sun, 3 May 2009 09:53:25 GMT",
"version": "v2"
},
{
"created": "Sat, 10 Mar 2012 13:32:00 GMT",
"version": "v3"
}
] |
2012-03-13
|
[
[
"Pinter",
"M.",
""
]
] |
We consider Young (1985)'s characterization of the Shapley value, and give a new proof of this axiomatization. Moreover, as applications of the new proof, we show that Young (1985)'s axiomatization of the Shapley value works on various well-known subclasses of TU games.
|
1811.00212
|
Vipul Harsh
|
Vipul Harsh, Sangeetha Abdu Jyothi, Inderdeep Singh, P. Brighten
Godfrey
|
Expander Datacenters: From Theory to Practice
|
15 pages, 17 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has shown that expander-based data center topologies are robust
and can yield superior performance over Clos topologies. However, to achieve
these benefits, previous proposals use routing and transport schemes that
impede quick industry adoption. In this paper, we examine if expanders can be
effective for the technology and environments practical in today's data
centers, including the use of traditional protocols, at both small and large
scale while complying with common practices such as over-subscription. We study
bandwidth, latency and burst tolerance of topologies, highlighting pitfalls of
previous topology comparisons. We consider several other metrics of interest:
packet loss during failures, queue occupancy and topology degradation. Our
experiments show that expanders can realize 3x more throughput than an
equivalent fat tree, and 1.5x more throughput than an equivalent leaf-spine
topology, for a wide range of scenarios, with only traditional protocols. We
observe that expanders achieve lower flow completion times, are more resilient
to bursty load conditions like incast and outcast and degrade more gracefully
with increasing load. Our results are based on extensive simulations and
experiments on a hardware testbed with realistic topologies and real traffic
patterns.
|
[
{
"created": "Thu, 1 Nov 2018 03:52:15 GMT",
"version": "v1"
}
] |
2018-11-02
|
[
[
"Harsh",
"Vipul",
""
],
[
"Jyothi",
"Sangeetha Abdu",
""
],
[
"Singh",
"Inderdeep",
""
],
[
"Godfrey",
"P. Brighten",
""
]
] |
Recent work has shown that expander-based data center topologies are robust and can yield superior performance over Clos topologies. However, to achieve these benefits, previous proposals use routing and transport schemes that impede quick industry adoption. In this paper, we examine if expanders can be effective for the technology and environments practical in today's data centers, including the use of traditional protocols, at both small and large scale while complying with common practices such as over-subscription. We study bandwidth, latency and burst tolerance of topologies, highlighting pitfalls of previous topology comparisons. We consider several other metrics of interest: packet loss during failures, queue occupancy and topology degradation. Our experiments show that expanders can realize 3x more throughput than an equivalent fat tree, and 1.5x more throughput than an equivalent leaf-spine topology, for a wide range of scenarios, with only traditional protocols. We observe that expanders achieve lower flow completion times, are more resilient to bursty load conditions like incast and outcast and degrade more gracefully with increasing load. Our results are based on extensive simulations and experiments on a hardware testbed with realistic topologies and real traffic patterns.
|
1606.04308
|
J\"urgen Leitner
|
Donald G. Dansereau, Anders Eriksson and J\"urgen Leitner
|
Richardson-Lucy Deblurring for Moving Light Field Cameras
|
Paper accepted for oral presentation at LF4CV workshop at CVPR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We generalize Richardson-Lucy (RL) deblurring to 4-D light fields by
replacing the convolution steps with light field rendering of motion blur. The
method deals correctly with blur caused by 6-degree-of-freedom camera motion in
complex 3-D scenes, without performing depth estimation. We introduce a novel
regularization term that maintains parallax information in the light field
while reducing noise and ringing. We demonstrate the method operating
effectively on rendered scenes and scenes captured using an off-the-shelf light
field camera. An industrial robot arm provides repeatable and known
trajectories, allowing us to establish quantitative performance in complex 3-D
scenes. Qualitative and quantitative results confirm the effectiveness of the
method, including commonly occurring cases for which previously published
methods fail. We include mathematical proof that the algorithm converges to the
maximum-likelihood estimate of the unblurred scene under Poisson noise. We
expect extension to blind methods to be possible following the generalization
of 2-D Richardson-Lucy to blind deconvolution.
|
[
{
"created": "Tue, 14 Jun 2016 10:57:34 GMT",
"version": "v1"
},
{
"created": "Fri, 19 May 2017 08:20:54 GMT",
"version": "v2"
}
] |
2017-05-22
|
[
[
"Dansereau",
"Donald G.",
""
],
[
"Eriksson",
"Anders",
""
],
[
"Leitner",
"Jürgen",
""
]
] |
We generalize Richardson-Lucy (RL) deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation. We introduce a novel regularization term that maintains parallax information in the light field while reducing noise and ringing. We demonstrate the method operating effectively on rendered scenes and scenes captured using an off-the-shelf light field camera. An industrial robot arm provides repeatable and known trajectories, allowing us to establish quantitative performance in complex 3-D scenes. Qualitative and quantitative results confirm the effectiveness of the method, including commonly occurring cases for which previously published methods fail. We include mathematical proof that the algorithm converges to the maximum-likelihood estimate of the unblurred scene under Poisson noise. We expect extension to blind methods to be possible following the generalization of 2-D Richardson-Lucy to blind deconvolution.
|
2405.13427
|
Qiang Chen
|
Qiang Chen, Weizhong Yu, Feiping Nie, and Xuelong Li
|
Adaptive Fuzzy C-Means with Graph Embedding
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Fuzzy clustering algorithms can be roughly categorized into two main groups:
Fuzzy C-Means (FCM) based methods and mixture model based methods. However, for
almost all existing FCM based methods, how to automatically select proper
membership degree hyper-parameter values remains a challenging and unsolved
problem. Mixture model based methods, while circumventing the difficulty of
manually adjusting membership degree hyper-parameters inherent in FCM based
methods, often have a preference for specific distributions, such as the
Gaussian distribution. In this paper, we propose a novel FCM based clustering
model that is capable of automatically learning an appropriate membership
degree hyper-parameter value and handling data with non-Gaussian clusters.
Moreover, by removing the graph embedding regularization, the proposed FCM
model can degenerate into the simplified generalized Gaussian mixture model.
Therefore, the proposed FCM model can be also seen as the generalized Gaussian
mixture model with graph embedding. Extensive experiments are conducted on both
synthetic and real-world datasets to demonstrate the effectiveness of the
proposed model.
|
[
{
"created": "Wed, 22 May 2024 08:15:50 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Chen",
"Qiang",
""
],
[
"Yu",
"Weizhong",
""
],
[
"Nie",
"Feiping",
""
],
[
"Li",
"Xuelong",
""
]
] |
Fuzzy clustering algorithms can be roughly categorized into two main groups: Fuzzy C-Means (FCM) based methods and mixture model based methods. However, for almost all existing FCM based methods, how to automatically select proper membership degree hyper-parameter values remains a challenging and unsolved problem. Mixture model based methods, while circumventing the difficulty of manually adjusting membership degree hyper-parameters inherent in FCM based methods, often have a preference for specific distributions, such as the Gaussian distribution. In this paper, we propose a novel FCM based clustering model that is capable of automatically learning an appropriate membership degree hyper-parameter value and handling data with non-Gaussian clusters. Moreover, by removing the graph embedding regularization, the proposed FCM model can degenerate into the simplified generalized Gaussian mixture model. Therefore, the proposed FCM model can be also seen as the generalized Gaussian mixture model with graph embedding. Extensive experiments are conducted on both synthetic and real-world datasets to demonstrate the effectiveness of the proposed model.
|
2302.09325
|
Jie Li
|
Jie Li, Yi Liu, Xiaohu Tang, Yunghsiang S. Han, Bo Bai, and Gong Zhang
|
MDS Array Codes With (Near) Optimal Repair Bandwidth for All Admissible
Repair Degrees
|
Submitted to the IEEE Transactions on Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Abundant high-rate (n, k) minimum storage regenerating (MSR) codes have been
reported in the literature. However, most of them require contacting all the
surviving nodes during a node repair process, resulting in a repair degree of
d=n-1. In practical systems, it may not always be feasible to connect and
download data from all surviving nodes, as some nodes may be unavailable.
Therefore, there is a need for MSR code constructions with a repair degree of
d<n-1. Up to now, only a few (n, k) MSR code constructions with repair degree
d<n-1 have been reported, some of which have a large sub-packetization level, a
large
finite field, or restrictions on the repair degree d. In this paper, we propose
a new (n, k) MSR code construction that works for any repair degree d>k, and
has a smaller sub-packetization level or finite field than some existing
constructions. Additionally, in conjunction with a previous generic
transformation to reduce the sub-packetization level, we obtain an MDS array
code with a small sub-packetization level and $(1+\epsilon)$-optimal repair
bandwidth (i.e., $(1+\epsilon)$ times the optimal repair bandwidth) for repair
degree d=n-1. This code outperforms some existing ones in terms of either the
sub-packetization level or the field size.
|
[
{
"created": "Sat, 18 Feb 2023 13:11:57 GMT",
"version": "v1"
},
{
"created": "Sat, 27 May 2023 03:26:51 GMT",
"version": "v2"
}
] |
2023-05-30
|
[
[
"Li",
"Jie",
""
],
[
"Liu",
"Yi",
""
],
[
"Tang",
"Xiaohu",
""
],
[
"Han",
"Yunghsiang S.",
""
],
[
"Bai",
"Bo",
""
],
[
"Zhang",
"Gong",
""
]
] |
Abundant high-rate (n, k) minimum storage regenerating (MSR) codes have been reported in the literature. However, most of them require contacting all the surviving nodes during a node repair process, resulting in a repair degree of d=n-1. In practical systems, it may not always be feasible to connect and download data from all surviving nodes, as some nodes may be unavailable. Therefore, there is a need for MSR code constructions with a repair degree of d<n-1. Up to now, only a few (n, k) MSR code constructions with repair degree d<n-1 have been reported, some of which have a large sub-packetization level, a large finite field, or restrictions on the repair degree d. In this paper, we propose a new (n, k) MSR code construction that works for any repair degree d>k, and has a smaller sub-packetization level or finite field than some existing constructions. Additionally, in conjunction with a previous generic transformation to reduce the sub-packetization level, we obtain an MDS array code with a small sub-packetization level and $(1+\epsilon)$-optimal repair bandwidth (i.e., $(1+\epsilon)$ times the optimal repair bandwidth) for repair degree d=n-1. This code outperforms some existing ones in terms of either the sub-packetization level or the field size.
|
1612.02646
|
Federico Perazzi
|
Anna Khoreva, Federico Perazzi, Rodrigo Benenson, Bernt Schiele,
Alexander Sorkine-Hornung
|
Learning Video Object Segmentation from Static Images
|
Submitted to CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by recent advances of deep learning in instance segmentation and
object tracking, we introduce the video object segmentation problem as a
concept of guided instance segmentation. Our model proceeds on a per-frame
basis, guided
by the output of the previous frame towards the object of interest in the next
frame. We demonstrate that highly accurate object segmentation in videos can be
enabled by using a convnet trained with static images only. The key ingredient
of our approach is a combination of offline and online learning strategies,
where the former serves to produce a refined mask from the previous frame
estimate and the latter allows to capture the appearance of the specific object
instance. Our method can handle different types of input annotations: bounding
boxes and segments, as well as incorporate multiple annotated frames, making
the system suitable for diverse applications. We obtain competitive results on
three different datasets, independently from the type of input annotation.
|
[
{
"created": "Thu, 8 Dec 2016 13:59:18 GMT",
"version": "v1"
}
] |
2019-02-05
|
[
[
"Khoreva",
"Anna",
""
],
[
"Perazzi",
"Federico",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Sorkine-Hornung",
"Alexander",
""
]
] |
Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the video object segmentation problem as a concept of guided instance segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convnet trained with static images only. The key ingredient of our approach is a combination of offline and online learning strategies, where the former serves to produce a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations: bounding boxes and segments, as well as incorporate multiple annotated frames, making the system suitable for diverse applications. We obtain competitive results on three different datasets, independently from the type of input annotation.
|
2203.11556
|
Sahil Sidheekh
|
Sahil Sidheekh, Chris B. Dock, Tushar Jain, Radu Balan, Maneesh K.
Singh
|
VQ-Flows: Vector Quantized Local Normalizing Flows
|
Accepted to The 38th Conference on Uncertainty in Artificial
Intelligence (UAI) 2022
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Normalizing flows provide an elegant approach to generative modeling that
allows for efficient sampling and exact density evaluation of unknown data
distributions. However, current techniques have significant limitations in
their expressivity when the data distribution is supported on a low-dimensional
manifold or has a non-trivial topology. We introduce a novel statistical
framework for learning a mixture of local normalizing flows as "chart maps"
over the data manifold. Our framework augments the expressivity of recent
approaches while preserving the signature property of normalizing flows, that
they admit exact density evaluation. We learn a suitable atlas of charts for
the data manifold via a vector quantized auto-encoder (VQ-AE) and the
distributions over them using a conditional flow. We validate experimentally
that our probabilistic framework enables existing approaches to better model
data distributions over complex manifolds.
|
[
{
"created": "Tue, 22 Mar 2022 09:22:18 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Jun 2022 10:13:28 GMT",
"version": "v2"
}
] |
2022-06-22
|
[
[
"Sidheekh",
"Sahil",
""
],
[
"Dock",
"Chris B.",
""
],
[
"Jain",
"Tushar",
""
],
[
"Balan",
"Radu",
""
],
[
"Singh",
"Maneesh K.",
""
]
] |
Normalizing flows provide an elegant approach to generative modeling that allows for efficient sampling and exact density evaluation of unknown data distributions. However, current techniques have significant limitations in their expressivity when the data distribution is supported on a low-dimensional manifold or has a non-trivial topology. We introduce a novel statistical framework for learning a mixture of local normalizing flows as "chart maps" over the data manifold. Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows, that they admit exact density evaluation. We learn a suitable atlas of charts for the data manifold via a vector quantized auto-encoder (VQ-AE) and the distributions over them using a conditional flow. We validate experimentally that our probabilistic framework enables existing approaches to better model data distributions over complex manifolds.
|
1405.0637
|
Bryan Ford
|
Cristina B\u{a}sescu, Georgia Fragkouli, Enis Ceyhun Alp, Michael F.
Nowlan, Jose M. Faleiro, Gaylor Bosson, Kelong Cong, Pierluca Bors\`o-Tan,
Vero Estrada-Gali\~nanes, and Bryan Ford
|
Limiting Lamport Exposure to Distant Failures in Globally-Managed
Distributed Systems
|
14 pages, 9 figures, 5 algorithms, 1 table
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Globalized computing infrastructures offer the convenience and elasticity of
globally managed objects and services, but lack the resilience to distant
failures that localized infrastructures such as private clouds provide.
Providing both global management and resilience to distant failures, however,
poses a fundamental problem for configuration services: How to discover a
possibly migratory, strongly-consistent service/object in a globalized
infrastructure without dependencies on globalized state? Limix is the first
metadata configuration service that addresses this problem. With Limix, global
strongly-consistent data-plane services and objects are insulated from remote
gray failures by ensuring that the definitive, strongly-consistent metadata for
any object is always confined to the same region as the object itself. Limix
guarantees availability bounds: any user can continue accessing any strongly
consistent object that matters to the user located at distance $\Delta$ away,
insulated from failures outside a small multiple of $\Delta$. We built a Limix
metadata service based on CockroachDB. Our experiments on Internet-like
networks and on AWS, using realistic trace-driven workloads, show that Limix
enables global management and significantly improves availability over the
state-of-the-art.
|
[
{
"created": "Sun, 4 May 2014 00:35:25 GMT",
"version": "v1"
},
{
"created": "Sat, 12 May 2018 09:21:09 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Jul 2022 16:06:21 GMT",
"version": "v3"
}
] |
2022-07-18
|
[
[
"Băsescu",
"Cristina",
""
],
[
"Fragkouli",
"Georgia",
""
],
[
"Alp",
"Enis Ceyhun",
""
],
[
"Nowlan",
"Michael F.",
""
],
[
"Faleiro",
"Jose M.",
""
],
[
"Bosson",
"Gaylor",
""
],
[
"Cong",
"Kelong",
""
],
[
"Borsò-Tan",
"Pierluca",
""
],
[
"Estrada-Galiñanes",
"Vero",
""
],
[
"Ford",
"Bryan",
""
]
] |
Globalized computing infrastructures offer the convenience and elasticity of globally managed objects and services, but lack the resilience to distant failures that localized infrastructures such as private clouds provide. Providing both global management and resilience to distant failures, however, poses a fundamental problem for configuration services: How to discover a possibly migratory, strongly-consistent service/object in a globalized infrastructure without dependencies on globalized state? Limix is the first metadata configuration service that addresses this problem. With Limix, global strongly-consistent data-plane services and objects are insulated from remote gray failures by ensuring that the definitive, strongly-consistent metadata for any object is always confined to the same region as the object itself. Limix guarantees availability bounds: any user can continue accessing any strongly consistent object that matters to the user located at distance $\Delta$ away, insulated from failures outside a small multiple of $\Delta$. We built a Limix metadata service based on CockroachDB. Our experiments on Internet-like networks and on AWS, using realistic trace-driven workloads, show that Limix enables global management and significantly improves availability over the state-of-the-art.
|
2106.03937
|
Zabir Al Nazi
|
Zabir Al Nazi, Sayed Mohammed Tasmimul Huda
|
Byakto Speech: Real-time long speech synthesis with convolutional neural
network: Transfer learning from English to Bangla
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Speech synthesis is one of the challenging tasks to automate with deep
learning, and, Bangla being a low-resource language, there have been very few
attempts at Bangla speech synthesis. Most of the existing works cannot handle
anything other than simple Bangla character script, very short sentences, etc.
This work attempts to solve these problems by introducing Byakta, the
first-ever open-source deep learning-based bilingual (Bangla and English)
text-to-speech synthesis system. A speech recognition model-based automated
scoring metric was
also proposed to evaluate the performance of a TTS model. We also introduce a
test benchmark dataset for Bangla speech synthesis models for evaluating speech
quality. The TTS is available at https://github.com/zabir-nabil/bangla-tts
|
[
{
"created": "Mon, 31 May 2021 20:39:35 GMT",
"version": "v1"
}
] |
2021-06-09
|
[
[
"Nazi",
"Zabir Al",
""
],
[
"Huda",
"Sayed Mohammed Tasmimul",
""
]
] |
Speech synthesis is one of the challenging tasks to automate with deep learning, and, Bangla being a low-resource language, there have been very few attempts at Bangla speech synthesis. Most of the existing works cannot handle anything other than simple Bangla character script, very short sentences, etc. This work attempts to solve these problems by introducing Byakta, the first-ever open-source deep learning-based bilingual (Bangla and English) text-to-speech synthesis system. A speech recognition model-based automated scoring metric was also proposed to evaluate the performance of a TTS model. We also introduce a test benchmark dataset for Bangla speech synthesis models for evaluating speech quality. The TTS is available at https://github.com/zabir-nabil/bangla-tts
|
2302.04515
|
Clement Pernet
|
Cl\'ement Pernet (CASC), Hippolyte Signargout (CASC, ARIC), Gilles
Villard (ARIC)
|
Exact computations with quasiseparable matrices
| null | null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quasi-separable matrices are a class of rank-structured matrices widely used
in numerical linear algebra and of growing interest in computer algebra, with
applications in e.g. the linearization of polynomial matrices. Various
representation formats exist for these matrices that have rarely been
compared. We show how the most central formats SSS and HSS can be adapted to
symbolic computation, where the exact rank replaces threshold-based numerical
ranks. We clarify their links and compare them with the Bruhat format. To this
end, we state their space and time cost estimates based on fast matrix
multiplication, and compare them, with their leading constants. The comparison
is supported by software experiments. We make further progress for the Bruhat
format, for which we give a generation algorithm, following a Crout elimination
scheme, which specializes into fast algorithms for the construction from a
sparse matrix or from the sum of Bruhat representations.
|
[
{
"created": "Thu, 9 Feb 2023 09:17:18 GMT",
"version": "v1"
}
] |
2023-02-10
|
[
[
"Pernet",
"Clément",
"",
"CASC"
],
[
"Signargout",
"Hippolyte",
"",
"CASC, ARIC"
],
[
"Villard",
"Gilles",
"",
"ARIC"
]
] |
Quasi-separable matrices are a class of rank-structured matrices widely used in numerical linear algebra and of growing interest in computer algebra, with applications in e.g. the linearization of polynomial matrices. Various representation formats exist for these matrices that have rarely been compared. We show how the most central formats SSS and HSS can be adapted to symbolic computation, where the exact rank replaces threshold-based numerical ranks. We clarify their links and compare them with the Bruhat format. To this end, we state their space and time cost estimates based on fast matrix multiplication, and compare them, with their leading constants. The comparison is supported by software experiments. We make further progress for the Bruhat format, for which we give a generation algorithm, following a Crout elimination scheme, which specializes into fast algorithms for the construction from a sparse matrix or from the sum of Bruhat representations.
|
1205.3663
|
Johannes Klaus Fichte
|
Johannes Klaus Fichte
|
The Good, the Bad, and the Odd: Cycles in Answer-Set Programs
| null | null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backdoors of answer-set programs are sets of atoms that represent clever
reasoning shortcuts through the search space. Assignments to backdoor atoms
reduce the given program to several programs that belong to a tractable target
class. Previous research has considered target classes based on notions of
acyclicity where various types of cycles (good and bad cycles) are excluded
from graph representations of programs. We generalize the target classes by
taking the parity of the number of negative edges on bad cycles into account
and consider backdoors for such classes. We establish new hardness results and
non-uniform polynomial-time tractability relative to directed or undirected
cycles.
|
[
{
"created": "Wed, 15 Feb 2012 20:19:57 GMT",
"version": "v1"
}
] |
2012-05-17
|
[
[
"Fichte",
"Johannes Klaus",
""
]
] |
Backdoors of answer-set programs are sets of atoms that represent clever reasoning shortcuts through the search space. Assignments to backdoor atoms reduce the given program to several programs that belong to a tractable target class. Previous research has considered target classes based on notions of acyclicity where various types of cycles (good and bad cycles) are excluded from graph representations of programs. We generalize the target classes by taking the parity of the number of negative edges on bad cycles into account and consider backdoors for such classes. We establish new hardness results and non-uniform polynomial-time tractability relative to directed or undirected cycles.
|
2004.13513
|
Arthur Douillard
|
Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert,
Eduardo Valle
|
PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning
|
Accepted at ECCV 2020
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Lifelong learning has attracted much attention, but existing works still
struggle to fight catastrophic forgetting and accumulate knowledge over long
stretches of incremental learning. In this work, we propose PODNet, a model
inspired by representation learning. By carefully balancing the compromise
between remembering the old classes and learning new ones, PODNet fights
catastrophic forgetting, even over very long runs of small incremental tasks
--a setting so far unexplored by current works. PODNet innovates on existing
art with an efficient spatial-based distillation-loss applied throughout the
model and a representation comprising multiple proxy vectors for each class. We
validate those innovations thoroughly, comparing PODNet with three
state-of-the-art models on three datasets: CIFAR100, ImageNet100, and
ImageNet1000. Our results showcase a significant advantage of PODNet over
existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points,
respectively. Code is available at
https://github.com/arthurdouillard/incremental_learning.pytorch
|
[
{
"created": "Tue, 28 Apr 2020 13:45:23 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jul 2020 13:20:18 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Oct 2020 16:10:33 GMT",
"version": "v3"
}
] |
2020-10-07
|
[
[
"Douillard",
"Arthur",
""
],
[
"Cord",
"Matthieu",
""
],
[
"Ollion",
"Charles",
""
],
[
"Robert",
"Thomas",
""
],
[
"Valle",
"Eduardo",
""
]
] |
Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks --a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation-loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate those innovations thoroughly, comparing PODNet with three state-of-the-art models on three datasets: CIFAR100, ImageNet100, and ImageNet1000. Our results showcase a significant advantage of PODNet over existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points, respectively. Code is available at https://github.com/arthurdouillard/incremental_learning.pytorch
|
2208.04066
|
Yash Deshpande
|
Yash Deshpande, Cedomir Stefanovic, H. Murat G\"ursu, Wolfgang
Kellerer
|
On d-ary tree algorithms with successive interference cancellation
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we outline the approach for the derivation of the length of
the collision resolution interval for d-ary tree algorithms (TA) with gated
access and successive interference cancellation (SIC), conditioned on the
number of the contending users. This is the basic performance parameter for TA
with gated access. We identify the deficiencies of the analysis performed in
the seminal paper on TA with SIC by Yu and Giannakis, showing that their
analysis is correct only for binary splitting, i.e. for d=2. We also provide
some insightful results on the stable throughput that can be achieved for
different values of d.
|
[
{
"created": "Mon, 8 Aug 2022 11:32:37 GMT",
"version": "v1"
}
] |
2022-08-09
|
[
[
"Deshpande",
"Yash",
""
],
[
"Stefanovic",
"Cedomir",
""
],
[
"Gürsu",
"H. Murat",
""
],
[
"Kellerer",
"Wolfgang",
""
]
] |
In this paper, we outline the approach for the derivation of the length of the collision resolution interval for d-ary tree algorithms (TA) with gated access and successive interference cancellation (SIC), conditioned on the number of the contending users. This is the basic performance parameter for TA with gated access. We identify the deficiencies of the analysis performed in the seminal paper on TA with SIC by Yu and Giannakis, showing that their analysis is correct only for binary splitting, i.e. for d=2. We also provide some insightful results on the stable throughput that can be achieved for different values of d.
|
0901.3906
|
Manuel Carro
|
Pablo Chico de Guzman, Manuel Carro, Manuel V. Hermenegildo
|
A Program Transformation for Continuation Call-Based Tabled Execution
|
Part of the proceedings of CICLOPS 2008
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The advantages of tabled evaluation regarding program termination and
reduction of complexity are well known --as are the significant implementation,
portability, and maintenance efforts that some proposals (especially those
based on suspension) require. This implementation effort is reduced by program
transformation-based continuation call techniques, at some efficiency cost.
However, the traditional formulation of this proposal by Ramesh and Cheng
limits the interleaving of tabled and non-tabled predicates and thus cannot be
used as-is for arbitrary programs. In this paper we present a complete
translation for the continuation call technique which, using the runtime
support needed for the traditional proposal, solves these problems and makes it
possible to execute arbitrary tabled programs. We present performance results
which show that CCall offers a useful tradeoff that can be competitive with
state-of-the-art implementations.
|
[
{
"created": "Sun, 25 Jan 2009 15:40:48 GMT",
"version": "v1"
}
] |
2009-01-27
|
[
[
"de Guzman",
"Pablo Chico",
""
],
[
"Carro",
"Manuel",
""
],
[
"Hermenegildo",
"Manuel V.",
""
]
] |
The advantages of tabled evaluation regarding program termination and reduction of complexity are well known --as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Cheng limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful tradeoff that can be competitive with state-of-the-art implementations.
|
1907.03777
|
Matthias Frey
|
Igor Bjelakovic, Matthias Frey, Slawomir Stanczak
|
Distributed Approximation of Functions over Fast Fading Channels with
Applications to Distributed Learning and the Max-Consensus Problem
| null |
2019 57th Annual Allerton Conference on Communication, Control,
and Computing (Allerton), Monticello, IL, USA, September 24-27, 2019, pp.
1146-1153
|
10.1109/ALLERTON.2019.8919875
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we consider the problem of distributed approximation of
functions over multiple-access channels with additive noise. In contrast to
previous works, we take fast fading into account and give explicit probability
bounds for the approximation error allowing us to derive bounds on the number
of channel uses that are needed to approximate a function up to a given
approximation accuracy. Neither the fading nor the noise process is limited to
Gaussian distributions. Instead, we consider sub-gaussian random variables
which include Gaussian as well as many other distributions of practical
relevance. The results are motivated by and have immediate applications to a)
computing predictors in models for distributed machine learning and b) the
max-consensus problem in ultra-dense networks.
|
[
{
"created": "Mon, 8 Jul 2019 18:00:09 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Sep 2019 12:15:08 GMT",
"version": "v2"
}
] |
2021-03-01
|
[
[
"Bjelakovic",
"Igor",
""
],
[
"Frey",
"Matthias",
""
],
[
"Stanczak",
"Slawomir",
""
]
] |
In this work, we consider the problem of distributed approximation of functions over multiple-access channels with additive noise. In contrast to previous works, we take fast fading into account and give explicit probability bounds for the approximation error allowing us to derive bounds on the number of channel uses that are needed to approximate a function up to a given approximation accuracy. Neither the fading nor the noise process is limited to Gaussian distributions. Instead, we consider sub-gaussian random variables which include Gaussian as well as many other distributions of practical relevance. The results are motivated by and have immediate applications to a) computing predictors in models for distributed machine learning and b) the max-consensus problem in ultra-dense networks.
|
2208.08767
|
Tommie Kerssies
|
Tommie Kerssies, Mert K{\i}l{\i}\c{c}kaya and Joaquin Vanschoren
|
Evaluating Continual Test-Time Adaptation for Contextual and Semantic
Domain Shifts
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, our goal is to adapt a pre-trained convolutional neural
network to domain shifts at test time. We do so continually with the incoming
stream of test batches, without labels. The existing literature mostly operates
on artificial shifts obtained via adversarial perturbations of a test image.
Motivated by this, we evaluate the state of the art on two realistic and
challenging sources of domain shifts, namely contextual and semantic shifts.
Contextual shifts correspond to the environment types, for example, a model
pre-trained on indoor context has to adapt to the outdoor context on CORe-50.
Semantic shifts correspond to the capture types, for example a model
pre-trained on natural images has to adapt to cliparts, sketches, and paintings
on DomainNet. We include in our analysis recent techniques such as
Prediction-Time Batch Normalization (BN), Test Entropy Minimization (TENT) and
Continual Test-Time Adaptation (CoTTA). Our findings are three-fold: i)
Test-time adaptation methods perform better and forget less on contextual
shifts compared to semantic shifts, ii) TENT outperforms other methods on
short-term adaptation, whereas CoTTA outperforms other methods on long-term
adaptation, iii) BN is most reliable and robust. Our code is available at
https://github.com/tommiekerssies/Evaluating-Continual-Test-Time-Adaptation-for-Contextual-and-Semantic-Domain-Shifts.
|
[
{
"created": "Thu, 18 Aug 2022 11:05:55 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Mar 2023 21:34:05 GMT",
"version": "v2"
}
] |
2023-03-14
|
[
[
"Kerssies",
"Tommie",
""
],
[
"Kılıçkaya",
"Mert",
""
],
[
"Vanschoren",
"Joaquin",
""
]
] |
In this paper, our goal is to adapt a pre-trained convolutional neural network to domain shifts at test time. We do so continually with the incoming stream of test batches, without labels. The existing literature mostly operates on artificial shifts obtained via adversarial perturbations of a test image. Motivated by this, we evaluate the state of the art on two realistic and challenging sources of domain shifts, namely contextual and semantic shifts. Contextual shifts correspond to the environment types, for example, a model pre-trained on indoor context has to adapt to the outdoor context on CORe-50. Semantic shifts correspond to the capture types, for example a model pre-trained on natural images has to adapt to cliparts, sketches, and paintings on DomainNet. We include in our analysis recent techniques such as Prediction-Time Batch Normalization (BN), Test Entropy Minimization (TENT) and Continual Test-Time Adaptation (CoTTA). Our findings are three-fold: i) Test-time adaptation methods perform better and forget less on contextual shifts compared to semantic shifts, ii) TENT outperforms other methods on short-term adaptation, whereas CoTTA outperforms other methods on long-term adaptation, iii) BN is most reliable and robust. Our code is available at https://github.com/tommiekerssies/Evaluating-Continual-Test-Time-Adaptation-for-Contextual-and-Semantic-Domain-Shifts.
|
2109.02485
|
Samarth Bhatia
|
Samarth Bhatia (1), Yukti Makhija (1), Sneha Jayaswal (3), Shalendra
Singh (2), Ishaan Gupta (1) ((1) Indian Institute of Technology, Delhi, (2)
Armed Forces Medical College, Pune, (3) Christian Medical College Ludhiana)
|
Severity and Mortality Prediction Models to Triage Indian COVID-19
Patients
|
31 pages, 6 figures, 8 tables. The first two authors (SB and YM) have
equal contribution. IG is the corresponding author (ishaan@iitd.ac.in)
Changes: Author List updated
| null | null | null |
cs.LG q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
As the second wave in India mitigates, COVID-19 has now infected about 29
million patients countrywide, leading to more than 350 thousand people dead. As
the infections surged, the strain on the medical infrastructure in the country
became apparent. While the country vaccinates its population, opening up the
economy may lead to an increase in infection rates. In this scenario, it is
essential to effectively utilize the limited hospital resources by an informed
patient triaging system based on clinical parameters. Here, we present two
interpretable machine learning models predicting the clinical outcomes,
severity, and mortality, of the patients based on routine non-invasive
surveillance of blood parameters from one of the largest cohorts of Indian
patients at the day of admission. Patient severity and mortality prediction
models achieved 86.3% and 88.06% accuracy, respectively, with an AUC-ROC of
0.91 and 0.92. We have integrated both the models in a user-friendly web app
calculator, https://triage-COVID-19.herokuapp.com/, to showcase the potential
deployment of such efforts at scale.
|
[
{
"created": "Thu, 2 Sep 2021 23:15:04 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Oct 2021 18:53:34 GMT",
"version": "v2"
}
] |
2021-10-26
|
[
[
"Bhatia",
"Samarth",
""
],
[
"Makhija",
"Yukti",
""
],
[
"Jayaswal",
"Sneha",
""
],
[
"Singh",
"Shalendra",
""
],
[
"Gupta",
"Ishaan",
""
]
] |
As the second wave in India mitigates, COVID-19 has now infected about 29 million patients countrywide, leading to more than 350 thousand people dead. As the infections surged, the strain on the medical infrastructure in the country became apparent. While the country vaccinates its population, opening up the economy may lead to an increase in infection rates. In this scenario, it is essential to effectively utilize the limited hospital resources by an informed patient triaging system based on clinical parameters. Here, we present two interpretable machine learning models predicting the clinical outcomes, severity, and mortality, of the patients based on routine non-invasive surveillance of blood parameters from one of the largest cohorts of Indian patients at the day of admission. Patient severity and mortality prediction models achieved 86.3% and 88.06% accuracy, respectively, with an AUC-ROC of 0.91 and 0.92. We have integrated both the models in a user-friendly web app calculator, https://triage-COVID-19.herokuapp.com/, to showcase the potential deployment of such efforts at scale.
|
2304.14471
|
Kangning Liu
|
Kangning Liu, Yu-Chuan Su, Wei (Alex) Hong, Ruijin Cang, Xuhui Jia
|
Controllable One-Shot Face Video Synthesis With Semantic Aware Prior
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The one-shot talking-head synthesis task aims to animate a source image to
another pose and expression, which is dictated by a driving frame. Recent
methods rely on warping the appearance feature extracted from the source, by
using motion fields estimated from the sparse keypoints, that are learned in an
unsupervised manner. Due to their lightweight formulation, they are suitable
for video conferencing with reduced bandwidth. However, based on our study,
current methods suffer from two major limitations: 1) unsatisfactory generation
quality in the case of large head poses and the existence of observable pose
misalignment between the source and the first frame in driving videos, and 2) failure
to capture fine yet critical face motion details due to the lack of semantic
understanding and appropriate face geometry regularization. To address these
shortcomings, we propose a novel method that leverages the rich face prior
information. The proposed model can generate face videos with improved semantic
consistency (improve baseline by $7\%$ in average keypoint distance) and
expression-preserving (outperform baseline by $15 \%$ in average emotion
embedding distance) under equivalent bandwidth. Additionally, incorporating
such prior information provides us with a convenient interface to achieve
highly controllable generation in terms of both pose and expression.
|
[
{
"created": "Thu, 27 Apr 2023 19:17:13 GMT",
"version": "v1"
}
] |
2023-05-01
|
[
[
"Liu",
"Kangning",
"",
"Alex"
],
[
"Su",
"Yu-Chuan",
"",
"Alex"
],
[
"Wei",
"",
"",
"Alex"
],
[
"Hong",
"",
""
],
[
"Cang",
"Ruijin",
""
],
[
"Jia",
"Xuhui",
""
]
] |
The one-shot talking-head synthesis task aims to animate a source image to another pose and expression, which is dictated by a driving frame. Recent methods rely on warping the appearance feature extracted from the source, by using motion fields estimated from the sparse keypoints, that are learned in an unsupervised manner. Due to their lightweight formulation, they are suitable for video conferencing with reduced bandwidth. However, based on our study, current methods suffer from two major limitations: 1) unsatisfactory generation quality in the case of large head poses and the existence of observable pose misalignment between the source and the first frame in driving videos, and 2) failure to capture fine yet critical face motion details due to the lack of semantic understanding and appropriate face geometry regularization. To address these shortcomings, we propose a novel method that leverages the rich face prior information. The proposed model can generate face videos with improved semantic consistency (improve baseline by $7\%$ in average keypoint distance) and expression-preserving (outperform baseline by $15 \%$ in average emotion embedding distance) under equivalent bandwidth. Additionally, incorporating such prior information provides us with a convenient interface to achieve highly controllable generation in terms of both pose and expression.
|
2401.12051
|
Dimitrije Anti\'c
|
Dimitrije Anti\'c, Garvita Tiwari, Batuhan Ozcomlekci, Riccardo Marin,
Gerard Pons-Moll
|
CloSe: A 3D Clothing Segmentation Dataset and Model
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
3D Clothing modeling and datasets play a crucial role in the entertainment,
animation, and digital fashion industries. Existing work often lacks detailed
semantic understanding or uses synthetic datasets, lacking realism and
personalization. To address this, we first introduce CloSe-D: a novel
large-scale dataset containing 3D clothing segmentation of 3167 scans, covering
a range of 18 distinct clothing classes. Additionally, we propose CloSe-Net,
the first learning-based 3D clothing segmentation model for fine-grained
segmentation from colored point clouds. CloSe-Net uses local point features,
body-clothing correlation, and a garment-class and point features-based
attention module, improving performance over baselines and prior work. The
proposed attention module enables our model to learn appearance and
geometry-dependent clothing prior from data. We further validate the efficacy
of our approach by successfully segmenting publicly available datasets of
people in clothing. We also introduce CloSe-T, a 3D interactive tool for
refining segmentation labels. Combining the tool with CloSe-T in a continual
learning setup demonstrates improved generalization on real-world data.
Dataset, model, and tool can be found at
https://virtualhumans.mpi-inf.mpg.de/close3dv24/.
|
[
{
"created": "Mon, 22 Jan 2024 15:42:21 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Antić",
"Dimitrije",
""
],
[
"Tiwari",
"Garvita",
""
],
[
"Ozcomlekci",
"Batuhan",
""
],
[
"Marin",
"Riccardo",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] |
3D Clothing modeling and datasets play a crucial role in the entertainment, animation, and digital fashion industries. Existing work often lacks detailed semantic understanding or uses synthetic datasets, lacking realism and personalization. To address this, we first introduce CloSe-D: a novel large-scale dataset containing 3D clothing segmentation of 3167 scans, covering a range of 18 distinct clothing classes. Additionally, we propose CloSe-Net, the first learning-based 3D clothing segmentation model for fine-grained segmentation from colored point clouds. CloSe-Net uses local point features, body-clothing correlation, and a garment-class and point features-based attention module, improving performance over baselines and prior work. The proposed attention module enables our model to learn appearance and geometry-dependent clothing prior from data. We further validate the efficacy of our approach by successfully segmenting publicly available datasets of people in clothing. We also introduce CloSe-T, a 3D interactive tool for refining segmentation labels. Combining the tool with CloSe-T in a continual learning setup demonstrates improved generalization on real-world data. Dataset, model, and tool can be found at https://virtualhumans.mpi-inf.mpg.de/close3dv24/.
|
1805.09549
|
Iran Ramezanipour
|
Iran Ramezanipour, Parisa Nouri, Hirley Alves, Pedro J. H. Nardelli,
Richard Demo Souza and Ari Pouttu
|
Finite Blocklength Communications in Smart Grids for Dynamic Spectrum
Access and Locally Licensed Scenarios
|
The manuscript has been accepted for publication in IEEE sensors
journal, 2018
| null |
10.1109/JSEN.2018.2835571
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work focuses on the performance analysis of short blocklength
communication with application in smart grids. We use stochastic geometry to
compute in closed form the success probability of a typical message
transmission as a function of its size (i.e. blocklength), the number of
information bits and the density of interferers. Two different scenarios are
investigated: (i) dynamic spectrum access where the licensed and unlicensed
users, share the uplink channel frequency band and (ii) local licensing
approach using the so called micro operator, which holds an exclusive license
of its own. Approximated outage probability expression is derived for the
dynamic spectrum access scenario, while a closed-form solution is attained for
the micro-operator. The analysis also incorporates the use of retransmissions
when messages are detected in error. Our numerical results show how reliability
and delay are related in either scenarios.
|
[
{
"created": "Thu, 24 May 2018 08:34:39 GMT",
"version": "v1"
}
] |
2018-05-25
|
[
[
"Ramezanipour",
"Iran",
""
],
[
"Nouri",
"Parisa",
""
],
[
"Alves",
"Hirley",
""
],
[
"Nardelli",
"Pedro J. H.",
""
],
[
"Souza",
"Richard Demo",
""
],
[
"Pouttu",
"Ari",
""
]
] |
This work focuses on the performance analysis of short blocklength communication with application in smart grids. We use stochastic geometry to compute in closed form the success probability of a typical message transmission as a function of its size (i.e. blocklength), the number of information bits and the density of interferers. Two different scenarios are investigated: (i) dynamic spectrum access where the licensed and unlicensed users share the uplink channel frequency band and (ii) a local licensing approach using the so-called micro operator, which holds an exclusive license of its own. An approximate outage probability expression is derived for the dynamic spectrum access scenario, while a closed-form solution is attained for the micro-operator. The analysis also incorporates the use of retransmissions when messages are detected in error. Our numerical results show how reliability and delay are related in both scenarios.
|
1712.09652
|
Huizhen Yu
|
Huizhen Yu
|
On Convergence of some Gradient-based Temporal-Differences Algorithms
for Off-Policy Learning
|
Revised technical report; added Section 4.2.4 and Section 4.3; 86
pages
| null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider off-policy temporal-difference (TD) learning methods for policy
evaluation in Markov decision processes with finite spaces and discounted
reward criteria, and we present a collection of convergence results for several
gradient-based TD algorithms with linear function approximation. The algorithms
we analyze include: (i) two basic forms of two-time-scale gradient-based TD
algorithms, which we call GTD and which minimize the mean squared projected
Bellman error using stochastic gradient-descent; (ii) their "robustified"
biased variants; (iii) their mirror-descent versions which combine the
mirror-descent idea with TD learning; and (iv) a single-time-scale version of
GTD that solves minimax problems formulated for approximate policy evaluation.
We derive convergence results for three types of stepsizes: constant
stepsize, slowly diminishing stepsize, as well as the standard type of
diminishing stepsize with a square-summable condition. For the first two types
of stepsizes, we apply the weak convergence method from stochastic
approximation theory to characterize the asymptotic behavior of the algorithms,
and for the standard type of stepsize, we analyze the algorithmic behavior with
respect to a stronger mode of convergence, almost sure convergence. Our
convergence results are for the aforementioned TD algorithms with three general
ways of setting their $\lambda$-parameters: (i) state-dependent $\lambda$; (ii)
a recently proposed scheme of using history-dependent $\lambda$ to keep the
eligibility traces of the algorithms bounded while allowing for relatively
large values of $\lambda$; and (iii) a composite scheme of setting the
$\lambda$-parameters that combines the preceding two schemes and allows a
broader class of generalized Bellman operators to be used for approximate
policy evaluation with TD methods.
|
[
{
"created": "Wed, 27 Dec 2017 18:43:21 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Mar 2018 22:41:33 GMT",
"version": "v2"
}
] |
2018-03-30
|
[
[
"Yu",
"Huizhen",
""
]
] |
We consider off-policy temporal-difference (TD) learning methods for policy evaluation in Markov decision processes with finite spaces and discounted reward criteria, and we present a collection of convergence results for several gradient-based TD algorithms with linear function approximation. The algorithms we analyze include: (i) two basic forms of two-time-scale gradient-based TD algorithms, which we call GTD and which minimize the mean squared projected Bellman error using stochastic gradient-descent; (ii) their "robustified" biased variants; (iii) their mirror-descent versions which combine the mirror-descent idea with TD learning; and (iv) a single-time-scale version of GTD that solves minimax problems formulated for approximate policy evaluation. We derive convergence results for three types of stepsizes: constant stepsize, slowly diminishing stepsize, as well as the standard type of diminishing stepsize with a square-summable condition. For the first two types of stepsizes, we apply the weak convergence method from stochastic approximation theory to characterize the asymptotic behavior of the algorithms, and for the standard type of stepsize, we analyze the algorithmic behavior with respect to a stronger mode of convergence, almost sure convergence. Our convergence results are for the aforementioned TD algorithms with three general ways of setting their $\lambda$-parameters: (i) state-dependent $\lambda$; (ii) a recently proposed scheme of using history-dependent $\lambda$ to keep the eligibility traces of the algorithms bounded while allowing for relatively large values of $\lambda$; and (iii) a composite scheme of setting the $\lambda$-parameters that combines the preceding two schemes and allows a broader class of generalized Bellman operators to be used for approximate policy evaluation with TD methods.
|
cs/9809006
|
Jim Gray
|
Werner Vogels, Dan Dumitriu, Ken Birman, Rod Gamache, Mike Massa, Rob
Short, John Vert, Joe Barrera
|
The Design and Architecture of the Microsoft Cluster Service -- A
Practical Approach to High-Availability and Scalability
|
Original document at:
http://research.microsoft.com/~gray/MSCS_FTCS98.doc
|
Proceedings of FTCS'98, June 23-25, 1998 in Munich, Germany
| null |
Microsoft Research MSR-TR-98-16
|
cs.OS cs.DC
| null |
Microsoft Cluster Service (MSCS) extends the Windows NT operating system to
support high-availability services. The goal is to offer an execution
environment where off-the-shelf server applications can continue to operate,
even in the presence of node failures. Later versions of MSCS will provide
scalability via a node and application management system that allows
applications to scale to hundreds of nodes. This paper provides a detailed
description of the MSCS architecture and the design decisions that have driven
the implementation of the service. The paper also describes how some major
applications use the MSCS features, and describes features added to make it
easier to implement and manage fault-tolerant applications on MSCS.
|
[
{
"created": "Wed, 2 Sep 1998 17:11:54 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Vogels",
"Werner",
""
],
[
"Dumitriu",
"Dan",
""
],
[
"Birman",
"Ken",
""
],
[
"Gamache",
"Rod",
""
],
[
"Massa",
"Mike",
""
],
[
"Short",
"Rob",
""
],
[
"Vert",
"John",
""
],
[
"Barrera",
"Joe",
""
]
] |
Microsoft Cluster Service (MSCS) extends the Windows NT operating system to support high-availability services. The goal is to offer an execution environment where off-the-shelf server applications can continue to operate, even in the presence of node failures. Later versions of MSCS will provide scalability via a node and application management system that allows applications to scale to hundreds of nodes. This paper provides a detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service. The paper also describes how some major applications use the MSCS features, and describes features added to make it easier to implement and manage fault-tolerant applications on MSCS.
|
2211.06196
|
Alexander Fabbri
|
Alexander R. Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng
Wu, Caiming Xiong
|
Improving Factual Consistency in Summarization with Compression-Based
Post-Editing
|
EMNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art summarization models still struggle to be factually
consistent with the input text. A model-agnostic way to address this problem is
post-editing the generated summaries. However, existing approaches typically
fail to remove entity errors if a suitable input entity replacement is not
available or may insert erroneous content. In our work, we focus on removing
extrinsic entity errors, or entities not in the source, to improve consistency
while retaining the summary's essential information and form. We propose to use
sentence-compression data to train the post-editing model to take a summary
with extrinsic entity errors marked with special tokens and output a
compressed, well-formed summary with those errors removed. We show that this
model improves factual consistency while maintaining ROUGE, improving entity
precision by up to 30% on XSum, and that this model can be applied on top of
another post-editor, improving entity precision by up to a total of 38%. We
perform an extensive comparison of post-editing approaches that demonstrate
trade-offs between factual consistency, informativeness, and grammaticality,
and we analyze settings where post-editors show the largest improvements.
|
[
{
"created": "Fri, 11 Nov 2022 13:35:38 GMT",
"version": "v1"
}
] |
2022-11-14
|
[
[
"Fabbri",
"Alexander R.",
""
],
[
"Choubey",
"Prafulla Kumar",
""
],
[
"Vig",
"Jesse",
""
],
[
"Wu",
"Chien-Sheng",
""
],
[
"Xiong",
"Caiming",
""
]
] |
State-of-the-art summarization models still struggle to be factually consistent with the input text. A model-agnostic way to address this problem is post-editing the generated summaries. However, existing approaches typically fail to remove entity errors if a suitable input entity replacement is not available or may insert erroneous content. In our work, we focus on removing extrinsic entity errors, or entities not in the source, to improve consistency while retaining the summary's essential information and form. We propose to use sentence-compression data to train the post-editing model to take a summary with extrinsic entity errors marked with special tokens and output a compressed, well-formed summary with those errors removed. We show that this model improves factual consistency while maintaining ROUGE, improving entity precision by up to 30% on XSum, and that this model can be applied on top of another post-editor, improving entity precision by up to a total of 38%. We perform an extensive comparison of post-editing approaches that demonstrate trade-offs between factual consistency, informativeness, and grammaticality, and we analyze settings where post-editors show the largest improvements.
|
2405.04714
|
Kyle Stachowicz
|
Kyle Stachowicz and Sergey Levine
|
RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer
Crashes
|
In review, RSS 2024
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Reinforcement learning provides an appealing framework for robotic control
due to its ability to learn expressive policies purely through real-world
interaction. However, this requires addressing real-world constraints and
avoiding catastrophic failures during training, which might severely impede
both learning progress and the performance of the final policy. In many
robotics settings, this amounts to avoiding certain "unsafe" states. The
high-speed off-road driving task represents a particularly challenging
instantiation of this problem: a high-return policy should drive as
aggressively and as quickly as possible, which often requires getting close to
the edge of the set of "safe" states, and therefore places a particular burden
on the method to avoid frequent failures.
To both learn highly performant policies and avoid excessive failures, we
propose a reinforcement learning framework that combines risk-sensitive control
with an adaptive action space curriculum.
Furthermore, we show that our risk-sensitive objective automatically avoids
out-of-distribution states when equipped with an estimator for epistemic
uncertainty.
We implement our algorithm on a small-scale rally car and show that it is
capable of learning high-speed policies for a real-world off-road driving task.
We show that our method greatly reduces the number of safety violations during
the training process, and actually leads to higher-performance policies in both
driving and non-driving simulation environments with similar challenges.
|
[
{
"created": "Tue, 7 May 2024 23:32:36 GMT",
"version": "v1"
}
] |
2024-05-09
|
[
[
"Stachowicz",
"Kyle",
""
],
[
"Levine",
"Sergey",
""
]
] |
Reinforcement learning provides an appealing framework for robotic control due to its ability to learn expressive policies purely through real-world interaction. However, this requires addressing real-world constraints and avoiding catastrophic failures during training, which might severely impede both learning progress and the performance of the final policy. In many robotics settings, this amounts to avoiding certain "unsafe" states. The high-speed off-road driving task represents a particularly challenging instantiation of this problem: a high-return policy should drive as aggressively and as quickly as possible, which often requires getting close to the edge of the set of "safe" states, and therefore places a particular burden on the method to avoid frequent failures. To both learn highly performant policies and avoid excessive failures, we propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum. Furthermore, we show that our risk-sensitive objective automatically avoids out-of-distribution states when equipped with an estimator for epistemic uncertainty. We implement our algorithm on a small-scale rally car and show that it is capable of learning high-speed policies for a real-world off-road driving task. We show that our method greatly reduces the number of safety violations during the training process, and actually leads to higher-performance policies in both driving and non-driving simulation environments with similar challenges.
|
1905.03585
|
Tamara Radivilova A
|
Igor Ivanisenko and Lyudmyla Kirichenko and Tamara Radivilova
|
Investigation of Multifractal Properties of Additive Data Stream
|
4 pages, 5 figures, 7 equations. arXiv admin note: text overlap with
arXiv:1904.05925
|
2016 IEEE First International Conference on Data Stream Mining &
Processing (DSMP), Lviv, 2016, pp. 305-308
|
10.1109/DSMP.2016.7583564
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The work presents results of a numerical study of the fractal characteristics
of a multifractal stream when a stream without multifractal properties is
added. The results show that the generalized Hurst exponent of the total
stream tends to that of the original multifractal stream as the
signal-to-noise ratio increases.
|
[
{
"created": "Tue, 16 Apr 2019 11:32:42 GMT",
"version": "v1"
}
] |
2019-05-10
|
[
[
"Ivanisenko",
"Igor",
""
],
[
"Kirichenko",
"Lyudmyla",
""
],
[
"Radivilova",
"Tamara",
""
]
] |
The work presents results of a numerical study of the fractal characteristics of a multifractal stream when a stream without multifractal properties is added. The results show that the generalized Hurst exponent of the total stream tends to that of the original multifractal stream as the signal-to-noise ratio increases.
|
2311.05608
|
Yichen Gong
|
Yichen Gong and Delong Ran and Jinyuan Liu and Conglei Wang and
Tianshuo Cong and Anyu Wang and Sisi Duan and Xiaoyun Wang
|
FigStep: Jailbreaking Large Vision-language Models via Typographic
Visual Prompts
|
Technical Report
| null | null | null |
cs.CR cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Ensuring the safety of artificial intelligence-generated content (AIGC) is a
longstanding topic in the artificial intelligence (AI) community, and the
safety concerns associated with Large Language Models (LLMs) have been widely
investigated. Recently, large vision-language models (VLMs) represent an
unprecedented revolution, as they are built upon LLMs but can incorporate
additional modalities (e.g., images). However, the safety of VLMs lacks
systematic evaluation, and there may be an overconfidence in the safety
guarantees provided by their underlying LLMs. In this paper, to demonstrate
that introducing additional modality modules leads to unforeseen AI safety
issues, we propose FigStep, a straightforward yet effective jailbreaking
algorithm against VLMs. Instead of feeding textual harmful instructions
directly, FigStep converts the harmful content into images through typography
to bypass the safety alignment within the textual module of the VLMs, inducing
VLMs to output unsafe responses that violate common AI safety policies. In our
evaluation, we manually review 46,500 model responses generated by 3 families
of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total
of 6 VLMs). The experimental results show that FigStep can achieve an average
attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we
demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which
already leverages an OCR detector to filter harmful queries. Above all, our
work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights
the necessity of novel safety alignments between visual and textual modalities.
|
[
{
"created": "Thu, 9 Nov 2023 18:59:11 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Dec 2023 17:54:16 GMT",
"version": "v2"
}
] |
2023-12-14
|
[
[
"Gong",
"Yichen",
""
],
[
"Ran",
"Delong",
""
],
[
"Liu",
"Jinyuan",
""
],
[
"Wang",
"Conglei",
""
],
[
"Cong",
"Tianshuo",
""
],
[
"Wang",
"Anyu",
""
],
[
"Duan",
"Sisi",
""
],
[
"Wang",
"Xiaoyun",
""
]
] |
Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the artificial intelligence (AI) community, and the safety concerns associated with Large Language Models (LLMs) have been widely investigated. Recently, large vision-language models (VLMs) represent an unprecedented revolution, as they are built upon LLMs but can incorporate additional modalities (e.g., images). However, the safety of VLMs lacks systematic evaluation, and there may be an overconfidence in the safety guarantees provided by their underlying LLMs. In this paper, to demonstrate that introducing additional modality modules leads to unforeseen AI safety issues, we propose FigStep, a straightforward yet effective jailbreaking algorithm against VLMs. Instead of feeding textual harmful instructions directly, FigStep converts the harmful content into images through typography to bypass the safety alignment within the textual module of the VLMs, inducing VLMs to output unsafe responses that violate common AI safety policies. In our evaluation, we manually review 46,500 model responses generated by 3 families of the promising open-source VLMs, i.e., LLaVA, MiniGPT4, and CogVLM (a total of 6 VLMs). The experimental results show that FigStep can achieve an average attack success rate of 82.50% on 500 harmful queries in 10 topics. Moreover, we demonstrate that the methodology of FigStep can even jailbreak GPT-4V, which already leverages an OCR detector to filter harmful queries. Above all, our work reveals that VLMs are vulnerable to jailbreaking attacks, which highlights the necessity of novel safety alignments between visual and textual modalities.
|
1105.6358
|
Ali Al Khouri Dr.
|
Ali M. Al-Khouri
|
An Innovative Approach for E-Government Transformation
|
22 Pages, 15 figures, 5 tables
|
International Journal of Managing Value and Supply Chains (IJMVSC)
Vol. 2, No. 1, March 2011
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the immeasurable investment in e-government initiatives throughout
the world, such initiatives have yet to succeed in fully meeting expectations
and desired outcomes. A key objective of this research article is to support
the government of the UAE in realizing its vision of e-government
transformation. It presents an innovative framework to support e-government
implementation, which was developed from a practitioner's perspective and based
on learnings from numerous e-government practices around the globe. The
framework presents an approach to guide governments worldwide, and the UAE in
particular, to develop a top-down strategy and leverage technology in order to
realize its long-term goal of e-government transformation. The study also
outlines the potential role of modern national identity schemes in enabling the
transformation of traditional identities into digital identities. The work
presented in this study is envisaged to help bridge the gap between policy
makers and implementers, by providing greater clarity and reducing misalignment
on key elements of e-government transformation. In the hands of leaders that
have a strong will to invest in e-government transformation, the work presented
in this study is envisaged to become a powerful tool to communicate and
coordinate initiatives, and provide a clear visualization of an integrated
approach to e-government transformation.
|
[
{
"created": "Tue, 31 May 2011 18:41:10 GMT",
"version": "v1"
}
] |
2011-06-01
|
[
[
"Al-Khouri",
"Ali M.",
""
]
] |
Despite the immeasurable investment in e-government initiatives throughout the world, such initiatives have yet to succeed in fully meeting expectations and desired outcomes. A key objective of this research article is to support the government of the UAE in realizing its vision of e-government transformation. It presents an innovative framework to support e-government implementation, which was developed from a practitioner's perspective and based on learnings from numerous e-government practices around the globe. The framework presents an approach to guide governments worldwide, and the UAE in particular, to develop a top-down strategy and leverage technology in order to realize its long-term goal of e-government transformation. The study also outlines the potential role of modern national identity schemes in enabling the transformation of traditional identities into digital identities. The work presented in this study is envisaged to help bridge the gap between policy makers and implementers, by providing greater clarity and reducing misalignment on key elements of e-government transformation. In the hands of leaders that have a strong will to invest in e-government transformation, the work presented in this study is envisaged to become a powerful tool to communicate and coordinate initiatives, and provide a clear visualization of an integrated approach to e-government transformation.
|
1802.09355
|
Shaoshan Liu
|
Jie Tang, Shaoshan Liu, Songwen Pei, Stephane Zuckerman, Chen Liu,
Weisong Shi, Jean-Luc Gaudiot
|
Teaching Autonomous Driving Using a Modular and Integrated Approach
| null | null | null | null |
cs.CY cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving is not one single technology but rather a complex system
integrating many technologies, which means that teaching autonomous driving is
a challenging task. Indeed, most existing autonomous driving classes focus on
one of the technologies involved. This not only fails to provide comprehensive
coverage, but also sets a high entry barrier for students with
different technology backgrounds. In this paper, we present a modular,
integrated approach to teaching autonomous driving. Specifically, we organize
the technologies used in autonomous driving into modules. This is described in
the textbook we have developed as well as a series of multimedia online
lectures designed to provide a technical overview of each module. Then, once the
students have understood these modules, the experimental platforms for
integration we have developed allow the students to fully understand how the
modules interact with each other. To verify this teaching approach, we present
three case studies: an introductory class on autonomous driving for students
with only a basic technology background; a new session in an existing embedded
systems class to demonstrate how embedded system technologies can be applied to
autonomous driving; and an industry professional training session to quickly
bring up experienced engineers to work in autonomous driving. The results show
that students can maintain a high interest level and make great progress by
starting with familiar concepts before moving onto other modules.
|
[
{
"created": "Thu, 22 Feb 2018 04:01:51 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2018 01:50:31 GMT",
"version": "v2"
}
] |
2018-02-28
|
[
[
"Tang",
"Jie",
""
],
[
"Liu",
"Shaoshan",
""
],
[
"Pei",
"Songwen",
""
],
[
"Zuckerman",
"Stephane",
""
],
[
"Liu",
"Chen",
""
],
[
"Shi",
"Weisong",
""
],
[
"Gaudiot",
"Jean-Luc",
""
]
] |
Autonomous driving is not one single technology but rather a complex system integrating many technologies, which means that teaching autonomous driving is a challenging task. Indeed, most existing autonomous driving classes focus on one of the technologies involved. This not only fails to provide comprehensive coverage, but also sets a high entry barrier for students with different technology backgrounds. In this paper, we present a modular, integrated approach to teaching autonomous driving. Specifically, we organize the technologies used in autonomous driving into modules. This is described in the textbook we have developed as well as a series of multimedia online lectures designed to provide a technical overview of each module. Then, once the students have understood these modules, the experimental platforms for integration we have developed allow the students to fully understand how the modules interact with each other. To verify this teaching approach, we present three case studies: an introductory class on autonomous driving for students with only a basic technology background; a new session in an existing embedded systems class to demonstrate how embedded system technologies can be applied to autonomous driving; and an industry professional training session to quickly bring up experienced engineers to work in autonomous driving. The results show that students can maintain a high interest level and make great progress by starting with familiar concepts before moving onto other modules.
|
1502.04861
|
Marius Pesavento
|
Adrian Schad, Ka L. Law, Marius Pesavento
|
Rank-Two Beamforming and Power Allocation in Multicasting Relay Networks
| null | null |
10.1109/TSP.2015.2423255
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel single-group multicasting relay beamforming
scheme. We assume a source that transmits common messages via multiple
amplify-and-forward relays to multiple destinations. To increase the number of
degrees of freedom in the beamforming design, the relays process two received
signals jointly and transmit the Alamouti space-time block code over two
different beams. Furthermore, in contrast to the existing relay multicasting
scheme of the literature, we take into account the direct links from the source
to the destinations. We aim to maximize the lowest received quality-of-service
by choosing the proper relay weights and the ideal distribution of the power
resources in the network. To solve the corresponding optimization problem, we
propose an iterative algorithm which solves sequences of convex approximations
of the original non-convex optimization problem. Simulation results demonstrate
significant performance improvements of the proposed methods as compared with
the existing relay multicasting scheme of the literature and an algorithm based
on the popular semidefinite relaxation technique.
|
[
{
"created": "Tue, 17 Feb 2015 11:12:29 GMT",
"version": "v1"
}
] |
2023-07-19
|
[
[
"Schad",
"Adrian",
""
],
[
"Law",
"Ka L.",
""
],
[
"Pesavento",
"Marius",
""
]
] |
In this paper, we propose a novel single-group multicasting relay beamforming scheme. We assume a source that transmits common messages via multiple amplify-and-forward relays to multiple destinations. To increase the number of degrees of freedom in the beamforming design, the relays process two received signals jointly and transmit the Alamouti space-time block code over two different beams. Furthermore, in contrast to the existing relay multicasting scheme of the literature, we take into account the direct links from the source to the destinations. We aim to maximize the lowest received quality-of-service by choosing the proper relay weights and the ideal distribution of the power resources in the network. To solve the corresponding optimization problem, we propose an iterative algorithm which solves sequences of convex approximations of the original non-convex optimization problem. Simulation results demonstrate significant performance improvements of the proposed methods as compared with the existing relay multicasting scheme of the literature and an algorithm based on the popular semidefinite relaxation technique.
|
2404.06433
|
B.Sundar Rajan
|
Mallikharjuna Chinnapadamala, Charul Rajput and B. Sundar Rajan
|
A New Hotplug Coded Caching Scheme Using PDAs
|
8 pages, 4 figures and 1 table. arXiv admin note: text overlap with
arXiv:2311.02856
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the original coded caching model introduced by Maddah-Ali and Niesen in
2014, the server starts broadcasting only after it receives demands from all
the users. So, all the users must be active during the delivery phase. In this
work, we consider a coded caching model called hotplug coded caching in which
some of the users are offline during the delivery phase. This model was first
introduced by Ma and Tuninetti (``On Coded Caching Systems with Offline Users,"
2022 IEEE International Symposium on Information Theory). The concept of
Hotplug Placement Delivery Arrays (HpPDAs) for the hotplug coded caching
systems was introduced in (``Improved Hotplug Caching Schemes Using PDAs and
$t$-Designs," \emph{arXiv:2311.02856}, 2024), in which the authors have
constructed HpPDAs from $t$-designs. This work provides a new hotplug coded
caching scheme from the existing HpPDAs. The performance comparison of the
proposed scheme with the existing schemes is presented. When applied for HpPDAs
from $t$-designs, our scheme outperforms the baseline scheme by Ma and
Tuninetti, and the Improved $t$-scheme by Rajput and Rajan in some memory
regimes.
|
[
{
"created": "Tue, 9 Apr 2024 16:25:06 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Chinnapadamala",
"Mallikharjuna",
""
],
[
"Rajput",
"Charul",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
In the original coded caching model introduced by Maddah-Ali and Niesen in 2014, the server starts broadcasting only after it receives demands from all the users. So, all the users must be active during the delivery phase. In this work, we consider a coded caching model called hotplug coded caching in which some of the users are offline during the delivery phase. This model was first introduced by Ma and Tuninetti (``On Coded Caching Systems with Offline Users," 2022 IEEE International Symposium on Information Theory). The concept of Hotplug Placement Delivery Arrays (HpPDAs) for the hotplug coded caching systems was introduced in (``Improved Hotplug Caching Schemes Using PDAs and $t$-Designs," \emph{arXiv:2311.02856}, 2024), in which the authors have constructed HpPDAs from $t$-designs. This work provides a new hotplug coded caching scheme from the existing HpPDAs. The performance comparison of the proposed scheme with the existing schemes is presented. When applied for HpPDAs from $t$-designs, our scheme outperforms the baseline scheme by Ma and Tuninetti, and the Improved $t$-scheme by Rajput and Rajan in some memory regimes.
|
2309.04703
|
Jinbo Wen
|
Jinbo Wen, Jiawen Kang, Zehui Xiong, Yang Zhang, Hongyang Du, Yutao
Jiao, Dusit Niyato
|
Task Freshness-aware Incentive Mechanism for Vehicle Twin Migration in
Vehicular Metaverses
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicular metaverse, which is treated as the future continuum between
automotive industry and metaverse, is envisioned as a blended immersive domain
as the digital twins of intelligent transportation systems. Vehicles access the
vehicular metaverses through their own Vehicle Twins (VTs) (e.g., avatars);
resource-limited vehicles offload the tasks of building VTs to their nearby
RoadSide Units (RSUs). However, due to the limited coverage of RSUs and the
mobility of vehicles, VTs have to be migrated from one RSU to other RSUs to
ensure uninterrupted metaverse services for users within vehicles. This process
requires the next RSUs to contribute sufficient bandwidth resources for VT
migrations under asymmetric information. To this end, in this paper, we design
an efficient incentive mechanism framework for VT migrations. We first propose
a novel metric named Age of Migration Task (AoMT) to quantify the task
freshness of the VT migration. AoMT measures the time elapsed from the first
collected sensing data of the freshest avatar migration task to the last
successfully processed data at the next RSU. To incentivize the contribution of
bandwidth resources among the next RSUs, we propose an AoMT-based contract
model, where the optimal contract is derived to maximize the expected utility
of the RSU that provides metaverse services. Numerical results demonstrate the
efficiency of the proposed incentive mechanism for VT migrations.
|
[
{
"created": "Sat, 9 Sep 2023 07:08:17 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2023 15:41:14 GMT",
"version": "v2"
}
] |
2023-09-13
|
[
[
"Wen",
"Jinbo",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Zhang",
"Yang",
""
],
[
"Du",
"Hongyang",
""
],
[
"Jiao",
"Yutao",
""
],
[
"Niyato",
"Dusit",
""
]
] |
Vehicular metaverse, which is treated as the future continuum between automotive industry and metaverse, is envisioned as a blended immersive domain as the digital twins of intelligent transportation systems. Vehicles access the vehicular metaverses through their own Vehicle Twins (VTs) (e.g., avatars); resource-limited vehicles offload the tasks of building VTs to their nearby RoadSide Units (RSUs). However, due to the limited coverage of RSUs and the mobility of vehicles, VTs have to be migrated from one RSU to other RSUs to ensure uninterrupted metaverse services for users within vehicles. This process requires the next RSUs to contribute sufficient bandwidth resources for VT migrations under asymmetric information. To this end, in this paper, we design an efficient incentive mechanism framework for VT migrations. We first propose a novel metric named Age of Migration Task (AoMT) to quantify the task freshness of the VT migration. AoMT measures the time elapsed from the first collected sensing data of the freshest avatar migration task to the last successfully processed data at the next RSU. To incentivize the contribution of bandwidth resources among the next RSUs, we propose an AoMT-based contract model, where the optimal contract is derived to maximize the expected utility of the RSU that provides metaverse services. Numerical results demonstrate the efficiency of the proposed incentive mechanism for VT migrations.
|
1811.03754
|
Duong Nguyen
|
Duong Nguyen Anh, Hieu Nguyen Kiem, Vi Ngo Van
|
Neural sequence labeling for Vietnamese POS Tagging and NER
|
5 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a neural architecture for Vietnamese sequence labeling
tasks including part-of-speech (POS) tagging and named entity recognition
(NER). We applied the model described in \cite{lample-EtAl:2016:N16-1} that is
a combination of bidirectional Long-Short Term Memory and Conditional Random
Fields, which rely on two sources of information about words: character-based
word representations learned from the supervised corpus and pre-trained word
embeddings learned from other unannotated corpora. Experiments on benchmark
datasets show that this work achieves state-of-the-art performance on both
tasks - 93.52\% accuracy for POS tagging and 94.88\% F1 for NER. Our source
code is available here.
|
[
{
"created": "Fri, 9 Nov 2018 03:15:23 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Nov 2018 13:23:15 GMT",
"version": "v2"
}
] |
2018-11-13
|
[
[
"Anh",
"Duong Nguyen",
""
],
[
"Kiem",
"Hieu Nguyen",
""
],
[
"Van",
"Vi Ngo",
""
]
] |
This paper presents a neural architecture for Vietnamese sequence labeling tasks including part-of-speech (POS) tagging and named entity recognition (NER). We applied the model described in \cite{lample-EtAl:2016:N16-1} that is a combination of bidirectional Long-Short Term Memory and Conditional Random Fields, which rely on two sources of information about words: character-based word representations learned from the supervised corpus and pre-trained word embeddings learned from other unannotated corpora. Experiments on benchmark datasets show that this work achieves state-of-the-art performance on both tasks - 93.52\% accuracy for POS tagging and 94.88\% F1 for NER. Our source code is available here.
|
1602.05257
|
Deovrat Kakde
|
Deovrat Kakde, Arin Chaudhuri, Seunghyun Kong, Maria Jahja, Hansi
Jiang, Jorge Silva
|
Peak Criterion for Choosing Gaussian Kernel Bandwidth in Support Vector
Data Description
| null | null |
10.1109/ICPHM.2017.7998302
| null |
cs.LG stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Support Vector Data Description (SVDD) is a machine-learning technique used
for single class classification and outlier detection. SVDD formulation with
kernel function provides a flexible boundary around data. The value of kernel
function parameters affects the nature of the data boundary. For example, it is
observed that with a Gaussian kernel, as the value of kernel bandwidth is
lowered, the data boundary changes from spherical to wiggly. The spherical data
boundary leads to underfitting, and an extremely wiggly data boundary leads to
overfitting. In this paper, we propose an empirical criterion to obtain good
values of the Gaussian kernel bandwidth parameter. This criterion provides a
smooth boundary that captures the essential geometric features of the data.
|
[
{
"created": "Wed, 17 Feb 2016 00:51:18 GMT",
"version": "v1"
},
{
"created": "Wed, 11 May 2016 21:39:53 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Aug 2017 18:00:45 GMT",
"version": "v3"
}
] |
2017-09-06
|
[
[
"Kakde",
"Deovrat",
""
],
[
"Chaudhuri",
"Arin",
""
],
[
"Kong",
"Seunghyun",
""
],
[
"Jahja",
"Maria",
""
],
[
"Jiang",
"Hansi",
""
],
[
"Silva",
"Jorge",
""
]
] |
Support Vector Data Description (SVDD) is a machine-learning technique used for single class classification and outlier detection. SVDD formulation with kernel function provides a flexible boundary around data. The value of kernel function parameters affects the nature of the data boundary. For example, it is observed that with a Gaussian kernel, as the value of kernel bandwidth is lowered, the data boundary changes from spherical to wiggly. The spherical data boundary leads to underfitting, and an extremely wiggly data boundary leads to overfitting. In this paper, we propose an empirical criterion to obtain good values of the Gaussian kernel bandwidth parameter. This criterion provides a smooth boundary that captures the essential geometric features of the data.
|
2005.02940
|
Remi Geraud-Stewart
|
Marc Beunardeau, \'Eric Brier, No\'emie Cartier, Aisling Connolly,
Nathana\"el Courant, R\'emi G\'eraud-Stewart, David Naccache, Ofer
Yifrach-Stav
|
Optimal Covid-19 Pool Testing with a priori Information
| null | null | null | null |
cs.AI cs.DM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As humanity struggles to contain the global Covid-19 infection, prophylactic
actions are grandly slowed down by the shortage of testing kits. Governments
have taken several measures to work around this shortage: the FDA has become
more liberal on the approval of Covid-19 tests in the US. In the UK emergency
measures allowed to increase the daily number of locally produced test kits to
100,000. China has recently launched a massive test manufacturing program.
However, all those efforts are very insufficient and many poor countries are
still under threat. A popular method for reducing the number of tests consists
in pooling samples, i.e. mixing patient samples and testing the mixed samples
once. If all the samples are negative, pooling succeeds at a unitary cost.
However, if a single sample is positive, failure does not indicate which
patient is infected. This paper describes how to optimally detect infected
patients in pools, i.e. using a minimal number of tests to precisely identify
them, given the a priori probabilities that each of the patients is healthy.
Those probabilities can be estimated using questionnaires, supervised machine
learning or clinical examinations. The resulting algorithms, which can be
interpreted as informed divide-and-conquer strategies, are non-intuitive and
quite surprising. They are patent-free. Co-authors are listed in alphabetical
order.
|
[
{
"created": "Wed, 6 May 2020 17:08:56 GMT",
"version": "v1"
},
{
"created": "Mon, 11 May 2020 09:25:25 GMT",
"version": "v2"
}
] |
2020-05-12
|
[
[
"Beunardeau",
"Marc",
""
],
[
"Brier",
"Éric",
""
],
[
"Cartier",
"Noémie",
""
],
[
"Connolly",
"Aisling",
""
],
[
"Courant",
"Nathanaël",
""
],
[
"Géraud-Stewart",
"Rémi",
""
],
[
"Naccache",
"David",
""
],
[
"Yifrach-Stav",
"Ofer",
""
]
] |
As humanity struggles to contain the global Covid-19 infection, prophylactic actions are grandly slowed down by the shortage of testing kits. Governments have taken several measures to work around this shortage: the FDA has become more liberal on the approval of Covid-19 tests in the US. In the UK emergency measures allowed to increase the daily number of locally produced test kits to 100,000. China has recently launched a massive test manufacturing program. However, all those efforts are very insufficient and many poor countries are still under threat. A popular method for reducing the number of tests consists in pooling samples, i.e. mixing patient samples and testing the mixed samples once. If all the samples are negative, pooling succeeds at a unitary cost. However, if a single sample is positive, failure does not indicate which patient is infected. This paper describes how to optimally detect infected patients in pools, i.e. using a minimal number of tests to precisely identify them, given the a priori probabilities that each of the patients is healthy. Those probabilities can be estimated using questionnaires, supervised machine learning or clinical examinations. The resulting algorithms, which can be interpreted as informed divide-and-conquer strategies, are non-intuitive and quite surprising. They are patent-free. Co-authors are listed in alphabetical order.
|
2303.09564
|
Jiayi Wei
|
Jiayi Wei, Greg Durrett, Isil Dillig
|
TypeT5: Seq2seq Type Inference using Static Analysis
|
Published as a conference paper at ICLR 2023
| null | null | null |
cs.SE cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been growing interest in automatically predicting missing type
annotations in programs written in Python and JavaScript. While prior methods
have achieved impressive accuracy when predicting the most common types, they
often perform poorly on rare or complex types. In this paper, we present a new
type inference method that treats type prediction as a code infilling task by
leveraging CodeT5, a state-of-the-art seq2seq pre-trained language model for
code. Our method uses static analysis to construct dynamic contexts for each
code element whose type signature is to be predicted by the model. We also
propose an iterative decoding scheme that incorporates previous type
predictions in the model's input context, allowing information exchange between
related code elements. Our evaluation shows that the proposed approach, TypeT5,
not only achieves a higher overall accuracy (particularly on rare and complex
types) but also produces more coherent results with fewer type errors -- while
enabling easy user intervention.
|
[
{
"created": "Thu, 16 Mar 2023 23:48:00 GMT",
"version": "v1"
}
] |
2023-03-20
|
[
[
"Wei",
"Jiayi",
""
],
[
"Durrett",
"Greg",
""
],
[
"Dillig",
"Isil",
""
]
] |
There has been growing interest in automatically predicting missing type annotations in programs written in Python and JavaScript. While prior methods have achieved impressive accuracy when predicting the most common types, they often perform poorly on rare or complex types. In this paper, we present a new type inference method that treats type prediction as a code infilling task by leveraging CodeT5, a state-of-the-art seq2seq pre-trained language model for code. Our method uses static analysis to construct dynamic contexts for each code element whose type signature is to be predicted by the model. We also propose an iterative decoding scheme that incorporates previous type predictions in the model's input context, allowing information exchange between related code elements. Our evaluation shows that the proposed approach, TypeT5, not only achieves a higher overall accuracy (particularly on rare and complex types) but also produces more coherent results with fewer type errors -- while enabling easy user intervention.
|
2403.09326
|
Duotun Wang
|
Duotun Wang, Hengyu Meng, Zeyu Cai, Zhijing Shao, Qianxi Liu, Lin
Wang, Mingming Fan, Xiaohang Zhan, Zeyu Wang
|
HeadEvolver: Text to Head Avatars via Expressive and
Attribute-Preserving Mesh Deformation
|
12 pages, 17 figures
| null | null | null |
cs.GR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present HeadEvolver, a novel framework to generate stylized head avatars
from text guidance. HeadEvolver uses locally learnable mesh deformation from a
template head mesh, producing high-quality digital assets for detail-preserving
editing and animation. To tackle the challenges of lacking fine-grained and
semantic-aware local shape control in global deformation through Jacobians, we
introduce a trainable parameter as a weighting factor for the Jacobian at each
triangle to adaptively change local shapes while maintaining global
correspondences and facial features. Moreover, to ensure the coherence of the
resulting shape and appearance from different viewpoints, we use pretrained
image diffusion models for differentiable rendering with regularization terms
to refine the deformation under text guidance. Extensive experiments
demonstrate that our method can generate diverse head avatars with an
articulated mesh that can be edited seamlessly in 3D graphics software,
facilitating downstream applications such as more efficient animation with
inherited blend shapes and semantic consistency.
|
[
{
"created": "Thu, 14 Mar 2024 12:15:23 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 04:50:36 GMT",
"version": "v2"
}
] |
2024-06-11
|
[
[
"Wang",
"Duotun",
""
],
[
"Meng",
"Hengyu",
""
],
[
"Cai",
"Zeyu",
""
],
[
"Shao",
"Zhijing",
""
],
[
"Liu",
"Qianxi",
""
],
[
"Wang",
"Lin",
""
],
[
"Fan",
"Mingming",
""
],
[
"Zhan",
"Xiaohang",
""
],
[
"Wang",
"Zeyu",
""
]
] |
We present HeadEvolver, a novel framework to generate stylized head avatars from text guidance. HeadEvolver uses locally learnable mesh deformation from a template head mesh, producing high-quality digital assets for detail-preserving editing and animation. To tackle the challenges of lacking fine-grained and semantic-aware local shape control in global deformation through Jacobians, we introduce a trainable parameter as a weighting factor for the Jacobian at each triangle to adaptively change local shapes while maintaining global correspondences and facial features. Moreover, to ensure the coherence of the resulting shape and appearance from different viewpoints, we use pretrained image diffusion models for differentiable rendering with regularization terms to refine the deformation under text guidance. Extensive experiments demonstrate that our method can generate diverse head avatars with an articulated mesh that can be edited seamlessly in 3D graphics software, facilitating downstream applications such as more efficient animation with inherited blend shapes and semantic consistency.
|
2004.05712
|
Chiranjeeb Buragohain
|
Chiranjeeb Buragohain, Knut Magne Risvik, Paul Brett, Miguel Castro,
Wonhee Cho, Joshua Cowhig, Nikolas Gloy, Karthik Kalyanaraman, Richendra
Khanna, John Pao, Matthew Renzelmann, Alex Shamis, Timothy Tan and Shuheng
Zheng
|
A1: A Distributed In-Memory Graph Database
| null | null |
10.1145/3318464.3386135
| null |
cs.DB cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A1 is an in-memory distributed database used by the Bing search engine to
support complex queries over structured data. The key enablers for A1 are
availability of cheap DRAM and high speed RDMA (Remote Direct Memory Access)
networking in commodity hardware. A1 uses FaRM as its underlying storage layer
and builds the graph abstraction and query engine on top. The combination of
in-memory storage and RDMA access requires rethinking how data is allocated,
organized and queried in a large distributed system. A single A1 cluster can
store tens of billions of vertices and edges and support a throughput of 350+
million of vertex reads per second with end to end query latency in single
digit milliseconds. In this paper we describe the A1 data model, RDMA optimized
data structures and query execution.
|
[
{
"created": "Sun, 12 Apr 2020 22:58:46 GMT",
"version": "v1"
}
] |
2020-04-14
|
[
[
"Buragohain",
"Chiranjeeb",
""
],
[
"Risvik",
"Knut Magne",
""
],
[
"Brett",
"Paul",
""
],
[
"Castro",
"Miguel",
""
],
[
"Cho",
"Wonhee",
""
],
[
"Cowhig",
"Joshua",
""
],
[
"Gloy",
"Nikolas",
""
],
[
"Kalyanaraman",
"Karthik",
""
],
[
"Khanna",
"Richendra",
""
],
[
"Pao",
"John",
""
],
[
"Renzelmann",
"Matthew",
""
],
[
"Shamis",
"Alex",
""
],
[
"Tan",
"Timothy",
""
],
[
"Zheng",
"Shuheng",
""
]
] |
A1 is an in-memory distributed database used by the Bing search engine to support complex queries over structured data. The key enablers for A1 are availability of cheap DRAM and high speed RDMA (Remote Direct Memory Access) networking in commodity hardware. A1 uses FaRM as its underlying storage layer and builds the graph abstraction and query engine on top. The combination of in-memory storage and RDMA access requires rethinking how data is allocated, organized and queried in a large distributed system. A single A1 cluster can store tens of billions of vertices and edges and support a throughput of 350+ million of vertex reads per second with end to end query latency in single digit milliseconds. In this paper we describe the A1 data model, RDMA optimized data structures and query execution.
|
1906.02353
|
Yi Ren
|
Yi Ren, Donald Goldfarb
|
Efficient Subsampled Gauss-Newton and Natural Gradient Methods for
Training Neural Networks
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present practical Levenberg-Marquardt variants of Gauss-Newton and natural
gradient methods for solving non-convex optimization problems that arise in
training deep neural networks involving enormous numbers of variables and huge
data sets. Our methods use subsampled Gauss-Newton or Fisher information
matrices and either subsampled gradient estimates (fully stochastic) or full
gradients (semi-stochastic), which, in the latter case, we prove convergent to
a stationary point. By using the Sherman-Morrison-Woodbury formula with
automatic differentiation (backpropagation) we show how our methods can be
implemented to perform efficiently. Finally, numerical results are presented to
demonstrate the effectiveness of our proposed methods.
|
[
{
"created": "Wed, 5 Jun 2019 23:13:42 GMT",
"version": "v1"
}
] |
2019-06-07
|
[
[
"Ren",
"Yi",
""
],
[
"Goldfarb",
"Donald",
""
]
] |
We present practical Levenberg-Marquardt variants of Gauss-Newton and natural gradient methods for solving non-convex optimization problems that arise in training deep neural networks involving enormous numbers of variables and huge data sets. Our methods use subsampled Gauss-Newton or Fisher information matrices and either subsampled gradient estimates (fully stochastic) or full gradients (semi-stochastic), which, in the latter case, we prove convergent to a stationary point. By using the Sherman-Morrison-Woodbury formula with automatic differentiation (backpropagation) we show how our methods can be implemented to perform efficiently. Finally, numerical results are presented to demonstrate the effectiveness of our proposed methods.
|
1512.01375
|
Veerle Timmermans
|
Tobias Harks, Veerle Timmermans
|
Uniqueness of Equilibria in Atomic Splittable Polymatroid Congestion
Games
|
17 pages
| null | null | null |
cs.GT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study uniqueness of Nash equilibria in atomic splittable congestion games
and derive a uniqueness result based on polymatroid theory: when the strategy
space of every player is a bidirectional flow polymatroid, then equilibria are
unique. Bidirectional flow polymatroids are introduced as a subclass of
polymatroids possessing certain exchange properties. We show that important
cases such as base orderable matroids can be recovered as a special case of
bidirectional flow polymatroids. On the other hand we show that matroidal set
systems are in some sense necessary to guarantee uniqueness of equilibria: for
every atomic splittable congestion game with at least three players and
nonmatroidal set systems per player, there is an isomorphic game having
multiple equilibria. Our results leave a gap between base orderable matroids
and general matroids for which we do not know whether equilibria are unique.
|
[
{
"created": "Fri, 4 Dec 2015 11:25:25 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Mar 2016 09:19:22 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Aug 2018 07:01:46 GMT",
"version": "v3"
}
] |
2018-08-09
|
[
[
"Harks",
"Tobias",
""
],
[
"Timmermans",
"Veerle",
""
]
] |
We study uniqueness of Nash equilibria in atomic splittable congestion games and derive a uniqueness result based on polymatroid theory: when the strategy space of every player is a bidirectional flow polymatroid, then equilibria are unique. Bidirectional flow polymatroids are introduced as a subclass of polymatroids possessing certain exchange properties. We show that important cases such as base orderable matroids can be recovered as a special case of bidirectional flow polymatroids. On the other hand we show that matroidal set systems are in some sense necessary to guarantee uniqueness of equilibria: for every atomic splittable congestion game with at least three players and nonmatroidal set systems per player, there is an isomorphic game having multiple equilibria. Our results leave a gap between base orderable matroids and general matroids for which we do not know whether equilibria are unique.
|
1202.4050
|
Nishant Mehta
|
Nishant A. Mehta and Alexander G. Gray
|
On the Sample Complexity of Predictive Sparse Coding
|
Sparse Coding Stability Theorem from version 1 has been relaxed
considerably using a new notion of coding margin. Old Sparse Coding Stability
Theorem still in new version, now as Theorem 2. Presentation of all proofs
simplified/improved considerably. Paper reorganized. Empirical analysis
showing new coding margin is non-trivial on real datasets
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of predictive sparse coding is to learn a representation of examples
as sparse linear combinations of elements from a dictionary, such that a
learned hypothesis linear in the new representation performs well on a
predictive task. Predictive sparse coding algorithms recently have demonstrated
impressive performance on a variety of supervised tasks, but their
generalization properties have not been studied. We establish the first
generalization error bounds for predictive sparse coding, covering two
settings: 1) the overcomplete setting, where the number of features k exceeds
the original dimensionality d; and 2) the high or infinite-dimensional setting,
where only dimension-free bounds are useful. Both learning bounds intimately
depend on stability properties of the learned sparse encoder, as measured on
the training sample. Consequently, we first present a fundamental stability
result for the LASSO, a result characterizing the stability of the sparse codes
with respect to perturbations to the dictionary. In the overcomplete setting,
we present an estimation error bound that decays as \tilde{O}(sqrt(d k/m)) with
respect to d and k. In the high or infinite-dimensional setting, we show a
dimension-free bound that is \tilde{O}(sqrt(k^2 s / m)) with respect to k and
s, where s is an upper bound on the number of non-zeros in the sparse code for
any training data point.
|
[
{
"created": "Sat, 18 Feb 2012 02:28:49 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Oct 2012 00:07:13 GMT",
"version": "v2"
}
] |
2012-10-09
|
[
[
"Mehta",
"Nishant A.",
""
],
[
"Gray",
"Alexander G.",
""
]
] |
The goal of predictive sparse coding is to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding algorithms recently have demonstrated impressive performance on a variety of supervised tasks, but their generalization properties have not been studied. We establish the first generalization error bounds for predictive sparse coding, covering two settings: 1) the overcomplete setting, where the number of features k exceeds the original dimensionality d; and 2) the high or infinite-dimensional setting, where only dimension-free bounds are useful. Both learning bounds intimately depend on stability properties of the learned sparse encoder, as measured on the training sample. Consequently, we first present a fundamental stability result for the LASSO, a result characterizing the stability of the sparse codes with respect to perturbations to the dictionary. In the overcomplete setting, we present an estimation error bound that decays as \tilde{O}(sqrt(d k/m)) with respect to d and k. In the high or infinite-dimensional setting, we show a dimension-free bound that is \tilde{O}(sqrt(k^2 s / m)) with respect to k and s, where s is an upper bound on the number of non-zeros in the sparse code for any training data point.
|
2309.12510
|
Yunye Gong
|
Yunye Gong, Yi Yao, Xiao Lin, Ajay Divakaran, Melinda Gervasio
|
Confidence Calibration for Systems with Cascaded Predictive Modules
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing conformal prediction algorithms estimate prediction intervals at
target confidence levels to characterize the performance of a regression model
on new test samples. However, considering an autonomous system consisting of
multiple modules, prediction intervals constructed for individual modules fall
short of accommodating uncertainty propagation over different modules and thus
cannot provide reliable predictions on system behavior. We address this
limitation and present novel solutions based on conformal prediction to provide
prediction intervals calibrated for a predictive system consisting of cascaded
modules (e.g., an upstream feature extraction module and a downstream
regression module). Our key idea is to leverage module-level validation data to
characterize the system-level error distribution without direct access to
end-to-end validation data. We provide theoretical justification and empirical
experimental results to demonstrate the effectiveness of proposed solutions. In
comparison to prediction intervals calibrated for individual modules, our
solutions generate improved intervals with more accurate performance guarantees
for system predictions, which are demonstrated on both synthetic systems and
real-world systems performing overlap prediction for indoor navigation using
the Matterport3D dataset.
|
[
{
"created": "Thu, 21 Sep 2023 22:12:24 GMT",
"version": "v1"
}
] |
2023-09-25
|
[
[
"Gong",
"Yunye",
""
],
[
"Yao",
"Yi",
""
],
[
"Lin",
"Xiao",
""
],
[
"Divakaran",
"Ajay",
""
],
[
"Gervasio",
"Melinda",
""
]
] |
Existing conformal prediction algorithms estimate prediction intervals at target confidence levels to characterize the performance of a regression model on new test samples. However, considering an autonomous system consisting of multiple modules, prediction intervals constructed for individual modules fall short of accommodating uncertainty propagation over different modules and thus cannot provide reliable predictions on system behavior. We address this limitation and present novel solutions based on conformal prediction to provide prediction intervals calibrated for a predictive system consisting of cascaded modules (e.g., an upstream feature extraction module and a downstream regression module). Our key idea is to leverage module-level validation data to characterize the system-level error distribution without direct access to end-to-end validation data. We provide theoretical justification and empirical experimental results to demonstrate the effectiveness of proposed solutions. In comparison to prediction intervals calibrated for individual modules, our solutions generate improved intervals with more accurate performance guarantees for system predictions, which are demonstrated on both synthetic systems and real-world systems performing overlap prediction for indoor navigation using the Matterport3D dataset.
|
1502.04044
|
Mohamed Hamid
|
Mohamed Hamid, Slimane Ben Slimane, and Niclas Bj\"orsell
|
Downlink Throughput Driven Channel Access Framework for Cognitive LTE
Femto-Cells
|
30 pages, 11 figures. Submitted to IEEE Transactions on Wireless
Communications for review
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an optimized sensing based channel access framework for
the LTE cognitive femto-cells, with an objective of maximizing the femto-cells
downlink throughput. Cognitive femto-cells opportunistically transmit on the
macro-cell channels when they are free of use. Those free channels are located
by means of spectrum sensing using energy detection. Moreover, periodic sensing
is adopted to detect any changes of the sensing outcomes. The maximum
attainable femto-cell downlink throughput varies with the macro-cell channel
occupancy statistics. Therefore, the LTE macro-cell occupancy is empirically
modeled using exponential distributions mixture. The LTE cognitive femto-cell
downlink throughput is maximized by compromising the transmission efficiency,
the explored spectrum opportunities and the interference from the macro-cell.
An analytical solution for the optimal periodic sensing interval that maximizes
the throughput is found and verified by simulations. The obtained results show
that there is indeed a single periodic sensing interval value that maximizes
the LTE cognitive femto-cell downlink throughput. At the peak of the macro-cell
traffic, our framework increases the femto-cell throughput by around 15%
compared to the senseless case. The impact of the available number of channels
for opportunistic access is studied and no significant impact is found for more
than three channels.
|
[
{
"created": "Fri, 13 Feb 2015 16:18:25 GMT",
"version": "v1"
}
] |
2015-02-16
|
[
[
"Hamid",
"Mohamed",
""
],
[
"Slimane",
"Slimane Ben",
""
],
[
"Björsell",
"Niclas",
""
]
] |
This paper proposes an optimized sensing based channel access framework for the LTE cognitive femto-cells, with an objective of maximizing the femto-cells downlink throughput. Cognitive femto-cells opportunistically transmit on the macro-cell channels when they are free of use. Those free channels are located by means of spectrum sensing using energy detection. Moreover, periodic sensing is adopted to detect any changes of the sensing outcomes. The maximum attainable femto-cell downlink throughput varies with the macro-cell channel occupancy statistics. Therefore, the LTE macro-cell occupancy is empirically modeled using exponential distributions mixture. The LTE cognitive femto-cell downlink throughput is maximized by compromising the transmission efficiency, the explored spectrum opportunities and the interference from the macro-cell. An analytical solution for the optimal periodic sensing interval that maximizes the throughput is found and verified by simulations. The obtained results show that there is indeed a single periodic sensing interval value that maximizes the LTE cognitive femto-cell downlink throughput. At the peak of the macro-cell traffic, our framework increases the femto-cell throughput by around 15% compared to the senseless case. The impact of the available number of channels for opportunistic access is studied and no significant impact is found for more than three channels.
|
1806.00429
|
Claudia Flores-Saviaga
|
Claudia Flores-Saviaga (1), Brian C. Keegan (2), Saiph Savage (1 and
3) ((1) West Virginia University, (2) University of Colorado Boulder, (3)
Universidad Nacional Autonoma de Mexico (UNAM))
|
Mobilizing the Trump Train: Understanding Collective Action in a
Political Trolling Community
| null |
International 12th AAAI Conference on Web and Social Media (ICWSM
2018)
| null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Political trolls initiate online discord not only for the lulz (laughs) but
also for ideological reasons, such as promoting their desired political
candidates. Political troll groups recently gained spotlight because they were
considered central in helping Donald Trump win the 2016 US presidential
election, which involved difficult mass mobilizations. Political trolls face
unique challenges as they must build their own communities while simultaneously
disrupting others. However, little is known about how political trolls mobilize
sufficient participation to suddenly become problems for others. We performed a
quantitative longitudinal analysis of more than 16 million comments from one of
the most popular and disruptive political trolling communities, the subreddit
/r/The_Donald (T_D). We use T_D as a lens to understand participation and
collective action within these deviant spaces. In specific, we first study the
characteristics of the most active participants to uncover what might drive
their sustained participation. Next, we investigate how these active
individuals mobilize their community to action. Through our analysis, we
uncover that the most active employed distinct discursive strategies to
mobilize participation, and deployed technical tools like bots to create a
shared identity and sustain engagement. We conclude by providing data-backed
design implications for designers of civic media.
|
[
{
"created": "Fri, 1 Jun 2018 16:35:42 GMT",
"version": "v1"
}
] |
2018-06-04
|
[
[
"Flores-Saviaga",
"Claudia",
"",
"1 and\n 3"
],
[
"Keegan",
"Brian C.",
"",
"1 and\n 3"
],
[
"Savage",
"Saiph",
"",
"1 and\n 3"
]
] |
Political trolls initiate online discord not only for the lulz (laughs) but also for ideological reasons, such as promoting their desired political candidates. Political troll groups recently gained spotlight because they were considered central in helping Donald Trump win the 2016 US presidential election, which involved difficult mass mobilizations. Political trolls face unique challenges as they must build their own communities while simultaneously disrupting others. However, little is known about how political trolls mobilize sufficient participation to suddenly become problems for others. We performed a quantitative longitudinal analysis of more than 16 million comments from one of the most popular and disruptive political trolling communities, the subreddit /r/The_Donald (T_D). We use T_D as a lens to understand participation and collective action within these deviant spaces. In specific, we first study the characteristics of the most active participants to uncover what might drive their sustained participation. Next, we investigate how these active individuals mobilize their community to action. Through our analysis, we uncover that the most active employed distinct discursive strategies to mobilize participation, and deployed technical tools like bots to create a shared identity and sustain engagement. We conclude by providing data-backed design implications for designers of civic media.
|
2211.07923
|
Mitsunori Ogihara
|
Mitsunori Ogihara and Kei Uchizawa
|
A Theory for Discrete-time Boolean Finite Dynamical Systems with
Uncertainty
|
37 pages
| null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Dynamical Systems is a field that studies the collective behavior of objects
that update their states according to some rules. Discrete-time Boolean Finite
Dynamical System (DT-BFDS) is a subfield where the systems have some finite
number of objects whose states are Boolean values, and the state updates occur
in discrete time. In the subfield of DT-BFDS, researchers aim to (i) design
models for capturing real-world phenomena and using the models to make
predictions and (ii) develop simulation techniques for acquiring insights about
the systems' behavior. Useful for both aims is understanding the system
dynamics mathematically before executing the systems. Obtaining a mathematical
understanding of BFDS is quite challenging, even for simple systems, because
the state space of a system grows exponentially in the number of objects.
Researchers have used computational complexity to circumvent the challenge. The
complexity theoretic research in DT-BFDS has successfully produced complete
characterizations for many dynamical problems.
The DT-BFDS studies have mainly dealt with deterministic models, where the
update at each time step is deterministic, so the system dynamics are
completely determinable from the initial setting. However, natural systems have
uncertainty. Models having uncertainty may lead to far-better understandings of
nature. Although a few attempts have explored DT-BFDS with uncertainty,
including stochastic initialization and tie-breaking, they have scratched only
a tiny surface of models with uncertainty. The introduction of uncertainty can
be through two schemes. One is the introduction of alternate update functions.
The other is the introduction of alternate update schedules. 37This paper
establishes a theory of models with uncertainty and proves some fundamental
results.
|
[
{
"created": "Tue, 15 Nov 2022 06:12:34 GMT",
"version": "v1"
}
] |
2022-11-16
|
[
[
"Ogihara",
"Mitsunori",
""
],
[
"Uchizawa",
"Kei",
""
]
] |
Dynamical Systems is a field that studies the collective behavior of objects that update their states according to some rules. Discrete-time Boolean Finite Dynamical System (DT-BFDS) is a subfield where the systems have some finite number of objects whose states are Boolean values, and the state updates occur in discrete time. In the subfield of DT-BFDS, researchers aim to (i) design models for capturing real-world phenomena and using the models to make predictions and (ii) develop simulation techniques for acquiring insights about the systems' behavior. Useful for both aims is understanding the system dynamics mathematically before executing the systems. Obtaining a mathematical understanding of BFDS is quite challenging, even for simple systems, because the state space of a system grows exponentially in the number of objects. Researchers have used computational complexity to circumvent the challenge. The complexity theoretic research in DT-BFDS has successfully produced complete characterizations for many dynamical problems. The DT-BFDS studies have mainly dealt with deterministic models, where the update at each time step is deterministic, so the system dynamics are completely determinable from the initial setting. However, natural systems have uncertainty. Models having uncertainty may lead to far-better understandings of nature. Although a few attempts have explored DT-BFDS with uncertainty, including stochastic initialization and tie-breaking, they have scratched only a tiny surface of models with uncertainty. The introduction of uncertainty can be through two schemes. One is the introduction of alternate update functions. The other is the introduction of alternate update schedules. This paper establishes a theory of models with uncertainty and proves some fundamental results.
|
1409.6584
|
Bernhard Rumpe
|
Fred W. Rauskolb, Kai Berger, Christian Lipski, Marcus Magnor, Karsten
Cornelsen, Jan Effertz, Thomas Form, Fabian Graefe, Sebastian Ohl, Walter
Schumacher, J\"orn Marten Wille, Peter Hecker, Tobias Nothdurft, Michael
Doering, Kai Homeier, Johannes Morgenroth, Lars Wolf, Christian Basarke,
Christian Berger, Tim G\"ulke, Felix Klose, Bernhard Rumpe
|
Caroline: An Autonomously Driving Vehicle for Urban Environments
|
68 pages, 7 figures
|
M. Buehler, K. Iagnemma, S. Singh (Eds.). The DARPA Urban
Challenge - Autonomous Vehicles in City Traffic. Springer Tracts in Advanced
Robotics, Volume 56, pp. 441-508, 2010
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The 2007 DARPA Urban Challenge afforded the golden opportunity for the
Technische Universit\"at Braunschweig to demonstrate its abilities to develop
an autonomously driving vehicle to compete with the world's best competitors.
After several stages of qualification, our team CarOLO qualified early for the
DARPA Urban Challenge Final Event and was among only eleven teams from
initially 89 competitors to compete in the final. We had the ability to work
together in a large group of experts, each contributing his expertise in his
discipline, and significant organisational, financial and technical support by
local sponsors who helped us to become the best non-US team. In this report, we
describe the 2007 DARPA Urban Challenge, our contribution "Caroline", the
technology and algorithms along with her performance in the DARPA Urban
Challenge Final Event on November 3, 2007.
|
[
{
"created": "Mon, 22 Sep 2014 11:57:55 GMT",
"version": "v1"
}
] |
2014-09-24
|
[
[
"Rauskolb",
"Fred W.",
""
],
[
"Berger",
"Kai",
""
],
[
"Lipski",
"Christian",
""
],
[
"Magnor",
"Marcus",
""
],
[
"Cornelsen",
"Karsten",
""
],
[
"Effertz",
"Jan",
""
],
[
"Form",
"Thomas",
""
],
[
"Graefe",
"Fabian",
""
],
[
"Ohl",
"Sebastian",
""
],
[
"Schumacher",
"Walter",
""
],
[
"Wille",
"Jörn Marten",
""
],
[
"Hecker",
"Peter",
""
],
[
"Nothdurft",
"Tobias",
""
],
[
"Doering",
"Michael",
""
],
[
"Homeier",
"Kai",
""
],
[
"Morgenroth",
"Johannes",
""
],
[
"Wolf",
"Lars",
""
],
[
"Basarke",
"Christian",
""
],
[
"Berger",
"Christian",
""
],
[
"Gülke",
"Tim",
""
],
[
"Klose",
"Felix",
""
],
[
"Rumpe",
"Bernhard",
""
]
] |
The 2007 DARPA Urban Challenge afforded the golden opportunity for the Technische Universit\"at Braunschweig to demonstrate its abilities to develop an autonomously driving vehicle to compete with the world's best competitors. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams from initially 89 competitors to compete in the final. We had the ability to work together in a large group of experts, each contributing his expertise in his discipline, and significant organisational, financial and technical support by local sponsors who helped us to become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution "Caroline", the technology and algorithms along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007.
|
2104.03651
|
Daniel Reti
|
Daniel Reti, Norman Becker
|
Escape the Fake: Introducing Simulated Container-Escapes for Honeypots
| null |
2020 Workshop on Next Generation Networks and Applications (NGNA)
| null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of network security, the concept of honeypots is well
established in research as well as in production. Honeypots are used to imitate
a legitimate target on the network and to raise an alert on any interaction.
This not only helps in learning about a breach, but also allows researchers to
study the techniques of an attacker. With the rise of cloud computing,
container-based virtualization gained popularity for application deployment.
This paper investigates the possibilities of container-based honeypots and
introduces the concept of simulating container escapes as a deception
technique.
|
[
{
"created": "Thu, 8 Apr 2021 10:16:03 GMT",
"version": "v1"
}
] |
2021-04-09
|
[
[
"Reti",
"Daniel",
""
],
[
"Becker",
"Norman",
""
]
] |
In the field of network security, the concept of honeypots is well established in research as well as in production. Honeypots are used to imitate a legitimate target on the network and to raise an alert on any interaction. This not only helps in learning about a breach, but also allows researchers to study the techniques of an attacker. With the rise of cloud computing, container-based virtualization gained popularity for application deployment. This paper investigates the possibilities of container-based honeypots and introduces the concept of simulating container escapes as a deception technique.
|
1506.00412
|
Demia Della Penda
|
Demia Della Penda, Liqun Fu, and Mikael Johansson
|
Energy efficient D2D communications in dynamic TDD systems
|
Submitted to IEEE Journal of Selected Areas in Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network-assisted device-to-device communication is a promising technology for
improving the performance of proximity-based services. This paper demonstrates
how the integration of device-to-device communications and dynamic
time-division duplex can improve the energy efficiency of future cellular
networks, leading to a greener system operation and a prolonged battery
lifetime of mobile devices. We jointly optimize the mode selection,
transmission period and power allocation to minimize the energy consumption
(from both a system and a device perspective) while satisfying a certain rate
requirement. The radio resource management problems are formulated as
mixed-integer nonlinear programming problems. Although they are known to be
NP-hard in general, we exploit the problem structure to design efficient
algorithms that optimally solve several problem cases. For the remaining cases,
a heuristic algorithm that computes near-optimal solutions while respecting
practical constraints on execution times and signaling overhead is also
proposed. Simulation results confirm that the combination of device-to-device
and flexible time-division-duplex technologies can significantly enhance
spectrum and energy-efficiency of next generation cellular systems.
|
[
{
"created": "Mon, 1 Jun 2015 09:59:19 GMT",
"version": "v1"
}
] |
2015-06-02
|
[
[
"Della Penda",
"Demia",
""
],
[
"Fu",
"Liqun",
""
],
[
"Johansson",
"Mikael",
""
]
] |
Network-assisted device-to-device communication is a promising technology for improving the performance of proximity-based services. This paper demonstrates how the integration of device-to-device communications and dynamic time-division duplex can improve the energy efficiency of future cellular networks, leading to a greener system operation and a prolonged battery lifetime of mobile devices. We jointly optimize the mode selection, transmission period and power allocation to minimize the energy consumption (from both a system and a device perspective) while satisfying a certain rate requirement. The radio resource management problems are formulated as mixed-integer nonlinear programming problems. Although they are known to be NP-hard in general, we exploit the problem structure to design efficient algorithms that optimally solve several problem cases. For the remaining cases, a heuristic algorithm that computes near-optimal solutions while respecting practical constraints on execution times and signaling overhead is also proposed. Simulation results confirm that the combination of device-to-device and flexible time-division-duplex technologies can significantly enhance spectrum and energy-efficiency of next generation cellular systems.
|
1711.00536
|
Rossano Schifanella
|
Luca M. Aiello, Rossano Schifanella, Miriam Redi, Stacey Svetlichnaya,
Frank Liu, Simon Osindero
|
Beautiful and damned. Combined effect of content quality and social ties
on user engagement
|
13 pages, 12 figures, final version published in IEEE Transactions on
Knowledge and Data Engineering (Volume: PP, Issue: 99)
| null |
10.1109/TKDE.2017.2747552
| null |
cs.SI cs.AI cs.CV cs.MM physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User participation in online communities is driven by the intertwinement of
the social network structure with the crowd-generated content that flows along
its links. These aspects are rarely explored jointly and at scale. By looking
at how users generate and access pictures of varying beauty on Flickr, we
investigate how the production of quality impacts the dynamics of online social
systems. We develop a deep learning computer vision model to score images
according to their aesthetic value and we validate its output through
crowdsourcing. By applying it to over 15B Flickr photos, we study for the first
time how image beauty is distributed over a large-scale social system.
Beautiful images are evenly distributed in the network, although only a small
core of people get social recognition for them. To study the impact of exposure
to quality on user engagement, we set up matching experiments aimed at
detecting causality from observational data. Exposure to beauty is
double-edged: following people who produce high-quality content increases one's
probability of uploading better photos; however, an excessive imbalance between
the quality generated by a user and the user's neighbors leads to a decline in
engagement. Our analysis has practical implications for improving link
recommender systems.
|
[
{
"created": "Wed, 1 Nov 2017 20:48:30 GMT",
"version": "v1"
}
] |
2017-11-03
|
[
[
"Aiello",
"Luca M.",
""
],
[
"Schifanella",
"Rossano",
""
],
[
"Redi",
"Miriam",
""
],
[
"Svetlichnaya",
"Stacey",
""
],
[
"Liu",
"Frank",
""
],
[
"Osindero",
"Simon",
""
]
] |
User participation in online communities is driven by the intertwinement of the social network structure with the crowd-generated content that flows along its links. These aspects are rarely explored jointly and at scale. By looking at how users generate and access pictures of varying beauty on Flickr, we investigate how the production of quality impacts the dynamics of online social systems. We develop a deep learning computer vision model to score images according to their aesthetic value and we validate its output through crowdsourcing. By applying it to over 15B Flickr photos, we study for the first time how image beauty is distributed over a large-scale social system. Beautiful images are evenly distributed in the network, although only a small core of people get social recognition for them. To study the impact of exposure to quality on user engagement, we set up matching experiments aimed at detecting causality from observational data. Exposure to beauty is double-edged: following people who produce high-quality content increases one's probability of uploading better photos; however, an excessive imbalance between the quality generated by a user and the user's neighbors leads to a decline in engagement. Our analysis has practical implications for improving link recommender systems.
|
0912.3004
|
Panagiotis Cheilaris
|
Panagiotis Cheilaris and Geza Toth
|
Graph unique-maximum and conflict-free colorings
| null | null | null | null |
cs.DM cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the relationship between two kinds of vertex colorings of
graphs: unique-maximum colorings and conflict-free colorings. In a
unique-maximum coloring, the colors are ordered, and in every path of the graph
the maximum color appears only once. In a conflict-free coloring, in every path
of the graph there is a color that appears only once. We also study
computational complexity aspects of conflict-free colorings and prove a
completeness result. Finally, we improve lower bounds for these chromatic
numbers of the grid graph.
|
[
{
"created": "Tue, 15 Dec 2009 21:01:42 GMT",
"version": "v1"
}
] |
2009-12-17
|
[
[
"Cheilaris",
"Panagiotis",
""
],
[
"Toth",
"Geza",
""
]
] |
We investigate the relationship between two kinds of vertex colorings of graphs: unique-maximum colorings and conflict-free colorings. In a unique-maximum coloring, the colors are ordered, and in every path of the graph the maximum color appears only once. In a conflict-free coloring, in every path of the graph there is a color that appears only once. We also study computational complexity aspects of conflict-free colorings and prove a completeness result. Finally, we improve lower bounds for these chromatic numbers of the grid graph.
|
1609.09395
|
Constantinos Heracleous
|
Constantinos Heracleous
|
Micropolis Interdependency Modeling using Open Hybrid Automata
|
7 pages
| null | null | null |
cs.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Micropolis is a virtual city that is used for various studies, such as
modeling and analyzing water networks. In this paper we model the various
interdependencies between three major infrastructures in Micropolis, the power
system, the communication network and the water network, in order to study
cascade effects between them. Specifically, we develop open hybrid automata
models for the main components of the three infrastructures and then compose
them together based on their interdependencies.
|
[
{
"created": "Tue, 27 Sep 2016 20:47:43 GMT",
"version": "v1"
}
] |
2016-09-30
|
[
[
"Heracleous",
"Constantinos",
""
]
] |
Micropolis is a virtual city that is used for various studies, such as modeling and analyzing water networks. In this paper we model the various interdependencies between three major infrastructures in Micropolis, the power system, the communication network and the water network, in order to study cascade effects between them. Specifically, we develop open hybrid automata models for the main components of the three infrastructures and then compose them together based on their interdependencies.
|
2211.11380
|
Yuhao Wang
|
Yuhao Wang, Kai Wang, Xiaohong Liu, Tianrun Gao, Jingyue Zhang,
Guangyu Wang
|
Self adaptive global-local feature enhancement for radiology report
generation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Automated radiology report generation aims at automatically generating a
detailed description of medical images, which can greatly alleviate the
workload of radiologists and provide better medical services to remote areas.
Most existing works pay attention to the holistic impression of medical images,
failing to utilize important anatomy information. However, in actual clinical
practice, radiologists usually locate important anatomical structures, and then
look for signs of abnormalities in certain structures and reason about the
underlying disease. In this paper, we propose a novel framework, AGFNet, to
dynamically fuse the global and anatomy region features to generate
multi-grained radiology reports. Firstly, we extract important anatomy region
features and global
features of input Chest X-ray (CXR). Then, with the region features and the
global features as input, our proposed self-adaptive fusion gate module could
dynamically fuse multi-granularity information. Finally, the captioning
generator generates the radiology reports through multi-granularity features.
Experimental results illustrate that our model achieved state-of-the-art
performance on two benchmark datasets including the IU X-Ray and MIMIC-CXR.
Further analyses also prove that our model is able to leverage the
multi-grained information from radiology images and texts so as to help
generate more accurate reports.
|
[
{
"created": "Mon, 21 Nov 2022 11:50:42 GMT",
"version": "v1"
}
] |
2022-11-22
|
[
[
"Wang",
"Yuhao",
""
],
[
"Wang",
"Kai",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Gao",
"Tianrun",
""
],
[
"Zhang",
"Jingyue",
""
],
[
"Wang",
"Guangyu",
""
]
] |
Automated radiology report generation aims at automatically generating a detailed description of medical images, which can greatly alleviate the workload of radiologists and provide better medical services to remote areas. Most existing works pay attention to the holistic impression of medical images, failing to utilize important anatomy information. However, in actual clinical practice, radiologists usually locate important anatomical structures, and then look for signs of abnormalities in certain structures and reason about the underlying disease. In this paper, we propose a novel framework, AGFNet, to dynamically fuse the global and anatomy region features to generate multi-grained radiology reports. Firstly, we extract important anatomy region features and global features of input Chest X-ray (CXR). Then, with the region features and the global features as input, our proposed self-adaptive fusion gate module could dynamically fuse multi-granularity information. Finally, the captioning generator generates the radiology reports through multi-granularity features. Experimental results illustrate that our model achieved state-of-the-art performance on two benchmark datasets including the IU X-Ray and MIMIC-CXR. Further analyses also prove that our model is able to leverage the multi-grained information from radiology images and texts so as to help generate more accurate reports.
|
2002.11495
|
Joni Pajarinen
|
Joni Pajarinen, Oleg Arenz, Jan Peters, Gerhard Neumann
|
Probabilistic approach to physical object disentangling
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physically disentangling entangled objects from each other is a problem
encountered in waste segregation or in any task that requires disassembly of
structures. Often there are no object models, and, especially with cluttered
irregularly shaped objects, the robot cannot create a model of the scene due
to occlusion. One of our key insights is that based on previous sensory input
we are only interested in moving an object out of the disentanglement around
obstacles. That is, we only need to know where the robot can successfully move
in order to plan the disentangling. Due to the uncertainty we integrate
information about blocked movements into a probability map. The map defines the
probability of the robot successfully moving to a specific configuration. Using
as cost the failure probability of a sequence of movements we can then plan and
execute disentangling iteratively. Since our approach circumvents only
previously encountered obstacles, new movements will yield information about
unknown obstacles that block movement until the robot has learned to circumvent
all obstacles and disentangling succeeds. In the experiments, we use a special
probabilistic version of the Rapidly exploring Random Tree (RRT) algorithm for
planning and demonstrate successful disentanglement of objects both in 2-D and
3-D simulation, and, on a KUKA LBR 7-DOF robot. Moreover, our approach
outperforms baseline methods.
|
[
{
"created": "Wed, 26 Feb 2020 14:01:24 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 13:06:49 GMT",
"version": "v2"
}
] |
2021-04-13
|
[
[
"Pajarinen",
"Joni",
""
],
[
"Arenz",
"Oleg",
""
],
[
"Peters",
"Jan",
""
],
[
"Neumann",
"Gerhard",
""
]
] |
Physically disentangling entangled objects from each other is a problem encountered in waste segregation or in any task that requires disassembly of structures. Often there are no object models, and, especially with cluttered irregularly shaped objects, the robot cannot create a model of the scene due to occlusion. One of our key insights is that based on previous sensory input we are only interested in moving an object out of the disentanglement around obstacles. That is, we only need to know where the robot can successfully move in order to plan the disentangling. Due to the uncertainty we integrate information about blocked movements into a probability map. The map defines the probability of the robot successfully moving to a specific configuration. Using as cost the failure probability of a sequence of movements we can then plan and execute disentangling iteratively. Since our approach circumvents only previously encountered obstacles, new movements will yield information about unknown obstacles that block movement until the robot has learned to circumvent all obstacles and disentangling succeeds. In the experiments, we use a special probabilistic version of the Rapidly exploring Random Tree (RRT) algorithm for planning and demonstrate successful disentanglement of objects both in 2-D and 3-D simulation, and, on a KUKA LBR 7-DOF robot. Moreover, our approach outperforms baseline methods.
|
2308.10410
|
Irene Li
|
Fan Gao, Hang Jiang, Rui Yang, Qingcheng Zeng, Jinghui Lu, Moritz
Blum, Dairui Liu, Tianwei She, Yuang Jiang, Irene Li
|
Large Language Models on Wikipedia-Style Survey Generation: an
Evaluation in NLP Concepts
| null |
ACL 2024 Findings
| null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Educational materials such as survey articles in specialized fields like
computer science traditionally require tremendous expert inputs and are
therefore expensive to create and update. Recently, Large Language Models
(LLMs) have achieved significant success across various general tasks. However,
their effectiveness and limitations in the education domain are yet to be fully
explored. In this work, we examine the proficiency of LLMs in generating
succinct survey articles specific to the niche field of NLP in computer
science, focusing on a curated list of 99 topics. Automated benchmarks reveal
that GPT-4 surpasses its predecessors, including GPT-3.5, PaLM2, and LLaMa2, by
margins ranging from 2% to 20% in comparison to the established ground truth.
We compare both human and GPT-based evaluation scores and provide in-depth
analysis. While our findings suggest that GPT-created surveys are more
contemporary and accessible than human-authored ones, certain limitations were
observed. Notably, GPT-4, despite often delivering outstanding content,
occasionally exhibited lapses like missing details or factual errors. Finally,
we compared the rating behavior between humans and GPT-4 and found systematic
bias in using GPT evaluation.
|
[
{
"created": "Mon, 21 Aug 2023 01:32:45 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2023 00:03:11 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Feb 2024 02:54:19 GMT",
"version": "v3"
},
{
"created": "Thu, 23 May 2024 12:42:06 GMT",
"version": "v4"
}
] |
2024-05-24
|
[
[
"Gao",
"Fan",
""
],
[
"Jiang",
"Hang",
""
],
[
"Yang",
"Rui",
""
],
[
"Zeng",
"Qingcheng",
""
],
[
"Lu",
"Jinghui",
""
],
[
"Blum",
"Moritz",
""
],
[
"Liu",
"Dairui",
""
],
[
"She",
"Tianwei",
""
],
[
"Jiang",
"Yuang",
""
],
[
"Li",
"Irene",
""
]
] |
Educational materials such as survey articles in specialized fields like computer science traditionally require tremendous expert inputs and are therefore expensive to create and update. Recently, Large Language Models (LLMs) have achieved significant success across various general tasks. However, their effectiveness and limitations in the education domain are yet to be fully explored. In this work, we examine the proficiency of LLMs in generating succinct survey articles specific to the niche field of NLP in computer science, focusing on a curated list of 99 topics. Automated benchmarks reveal that GPT-4 surpasses its predecessors, including GPT-3.5, PaLM2, and LLaMa2, by margins ranging from 2% to 20% in comparison to the established ground truth. We compare both human and GPT-based evaluation scores and provide in-depth analysis. While our findings suggest that GPT-created surveys are more contemporary and accessible than human-authored ones, certain limitations were observed. Notably, GPT-4, despite often delivering outstanding content, occasionally exhibited lapses like missing details or factual errors. Finally, we compared the rating behavior between humans and GPT-4 and found systematic bias in using GPT evaluation.
|
2010.16095
|
Mrigank Raman
|
Mrigank Raman, Ojal Kumar, Arpan Chattopadhyay
|
Centralized active tracking of a Markov chain with unknown dynamics
|
Accepted at the IEEE MASS 2020 Conference
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, selection of an active sensor subset for tracking a discrete
time, finite state Markov chain having an unknown transition probability matrix
(TPM) is considered. A total of N sensors are available for making observations
of the Markov chain, out of which a subset of sensors are activated each time
in order to perform reliable estimation of the process. The trade-off is
between activating more sensors to gather more observations for the remote
estimation, and restricting sensor usage in order to save energy and bandwidth
consumption. The problem is formulated as a constrained minimization problem,
where the objective is the long-run averaged mean-squared error (MSE) in
estimation, and the constraint is on sensor activation rate. A Lagrangian
relaxation of the problem is solved by an artful blending of two tools: Gibbs
sampling for MSE minimization and an on-line version of expectation
maximization (EM) to estimate the unknown TPM. Finally, the Lagrange multiplier
is updated using slower timescale stochastic approximation in order to satisfy
the sensor activation rate constraint. The on-line EM algorithm, though adapted
from literature, can estimate vector-valued parameters even under time-varying
dimension of the sensor observations. Numerical results demonstrate
approximately 1 dB better error performance than uniform sensor sampling and
comparable error performance (within 2 dB bound) against complete sensor
observation. This makes the proposed algorithm amenable to practical
implementation.
|
[
{
"created": "Fri, 30 Oct 2020 06:32:11 GMT",
"version": "v1"
}
] |
2020-11-02
|
[
[
"Raman",
"Mrigank",
""
],
[
"Kumar",
"Ojal",
""
],
[
"Chattopadhyay",
"Arpan",
""
]
] |
In this paper, selection of an active sensor subset for tracking a discrete time, finite state Markov chain having an unknown transition probability matrix (TPM) is considered. A total of N sensors are available for making observations of the Markov chain, out of which a subset of sensors are activated each time in order to perform reliable estimation of the process. The trade-off is between activating more sensors to gather more observations for the remote estimation, and restricting sensor usage in order to save energy and bandwidth consumption. The problem is formulated as a constrained minimization problem, where the objective is the long-run averaged mean-squared error (MSE) in estimation, and the constraint is on sensor activation rate. A Lagrangian relaxation of the problem is solved by an artful blending of two tools: Gibbs sampling for MSE minimization and an on-line version of expectation maximization (EM) to estimate the unknown TPM. Finally, the Lagrange multiplier is updated using slower timescale stochastic approximation in order to satisfy the sensor activation rate constraint. The on-line EM algorithm, though adapted from literature, can estimate vector-valued parameters even under time-varying dimension of the sensor observations. Numerical results demonstrate approximately 1 dB better error performance than uniform sensor sampling and comparable error performance (within 2 dB bound) against complete sensor observation. This makes the proposed algorithm amenable to practical implementation.
|
1910.08914
|
Yuhang Li
|
Yuhang Li, Xuejin Chen, Feng Wu, and Zheng-Jun Zha
|
LinesToFacePhoto: Face Photo Generation from Lines with Conditional
Self-Attention Generative Adversarial Network
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we explore the task of generating photo-realistic face images
from lines. Previous methods based on conditional generative adversarial
networks (cGANs) have shown their power to generate visually plausible images
when a conditional image and an output image share well-aligned structures.
However, these models fail to synthesize face images with a whole set of
well-defined structures, e.g. eyes, noses, mouths, etc., especially when the
conditional line map lacks one or several parts. To address this problem, we
propose a conditional self-attention generative adversarial network (CSAGAN).
We introduce a conditional self-attention mechanism to cGANs to capture
long-range dependencies between different regions in faces. We also build a
multi-scale discriminator. The large-scale discriminator enforces the
completeness of global structures and the small-scale discriminator encourages
fine details, thereby enhancing the realism of generated face images. We
evaluate the proposed model on the CelebA-HD dataset by two perceptual user
studies and three quantitative metrics. The experiment results demonstrate that
our method generates high-quality facial images while preserving facial
structures. Our results outperform state-of-the-art methods both quantitatively
and qualitatively.
|
[
{
"created": "Sun, 20 Oct 2019 07:05:24 GMT",
"version": "v1"
}
] |
2019-10-22
|
[
[
"Li",
"Yuhang",
""
],
[
"Chen",
"Xuejin",
""
],
[
"Wu",
"Feng",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] |
In this paper, we explore the task of generating photo-realistic face images from lines. Previous methods based on conditional generative adversarial networks (cGANs) have shown their power to generate visually plausible images when a conditional image and an output image share well-aligned structures. However, these models fail to synthesize face images with a whole set of well-defined structures, e.g. eyes, noses, mouths, etc., especially when the conditional line map lacks one or several parts. To address this problem, we propose a conditional self-attention generative adversarial network (CSAGAN). We introduce a conditional self-attention mechanism to cGANs to capture long-range dependencies between different regions in faces. We also build a multi-scale discriminator. The large-scale discriminator enforces the completeness of global structures and the small-scale discriminator encourages fine details, thereby enhancing the realism of generated face images. We evaluate the proposed model on the CelebA-HD dataset by two perceptual user studies and three quantitative metrics. The experiment results demonstrate that our method generates high-quality facial images while preserving facial structures. Our results outperform state-of-the-art methods both quantitatively and qualitatively.
|
1809.10846
|
Najma Gill
|
Najma Gill
|
Comparison of Self-Aware and Organic Computing Systems
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing complexity and heterogeneity of computing devices, it has
become crucial for systems to be autonomous, adaptive to dynamic environments,
robust, flexible, and to have so-called self-* properties. These autonomous
systems are called organic computing (OC) systems. OC systems were proposed as
a solution to tackle complex systems. Design-time decisions have been shifted
to run time in highly complex and interconnected systems, as it is very hard
to consider all scenarios and their appropriate actions in advance.
Consequently, self-awareness becomes crucial for these adaptive autonomous
systems. To cope with evolving environments and changing user needs, a system
needs to have knowledge about itself and its surroundings. A literature review
shows that for autonomous and intelligent systems, researchers are concerned
with knowledge acquisition, representation, and learning, which are necessary
for a system to adapt. This paper compares self-awareness and organic
computing by discussing their definitions, properties, and architectures.
|
[
{
"created": "Tue, 28 Aug 2018 13:52:44 GMT",
"version": "v1"
}
] |
2018-10-01
|
[
[
"Gill",
"Najma",
""
]
] |
With increasing complexity and heterogeneity of computing devices, it has become crucial for systems to be autonomous, adaptive to dynamic environments, robust, flexible, and to have so-called self-* properties. These autonomous systems are called organic computing (OC) systems. OC systems were proposed as a solution to tackle complex systems. Design-time decisions have been shifted to run time in highly complex and interconnected systems, as it is very hard to consider all scenarios and their appropriate actions in advance. Consequently, self-awareness becomes crucial for these adaptive autonomous systems. To cope with an evolving environment and changing user needs, a system needs to have knowledge about itself and its surroundings. A literature review shows that for autonomous and intelligent systems, researchers are concerned with knowledge acquisition, representation, and learning, which are necessary for a system to adapt. This paper compares self-awareness and organic computing by discussing their definitions, properties, and architectures.
|
2304.14790
|
Jose Manuel S\'anchez Ruiz
|
Jos\'e Manuel S\'anchez Ruiz, Francisco Jos\'e Dom\'inguez Mayo,
Xavier Oriol, Jos\'e Francisco Crespo, David Benavides, Ernest Teniente
|
A Benchmarking Proposal for DevOps Practices on Open Source Software
Projects
|
18 pages, 10 figures
| null | null | null |
cs.SE cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The popularity of open-source software (OSS) projects has grown significantly
over the last few years with more organizations relying on them. As these
projects become larger, the need for higher quality also increases. DevOps
practices have been shown to improve quality and performance. The DORA
benchmarking reports provide useful information to compare DevOps practices
performance between organizations, but they focus on continuous deployment and
delivery to production, while OSS projects focus on the continuous release of
code and its impact on third parties. The DORA reports mention the increasing
presence of OSS projects as they are widely used in the industry, but they have
never been used to measure OSS projects performance levels. This study reveals
that the DORA benchmark cannot be applied to OSS projects and proposes
benchmarking metrics for OSS projects, being the first one that adapts the DORA
metrics and applies them in OSS projects. The metrics proposed in this study
for benchmarking OSS projects include Release Frequency and Lead Time For
Released Changes to measure throughput, and Time To Repair Code and Bug Issues
Rate to assess stability. In contrast to the DORA reports, where data is
collected through manual surveys, in our proposal, data is collected
automatically by a tool we developed that retrieves information from public
GitHub repositories. This reduces the risk of survey-based data collection. Our
study also shows the benchmark feasibility by applying it to four popular OSS
projects: Angular, Kubernetes, Tensorflow, and VS Code. In addition, we
proposed challenges that address the topics and future works to expand the
knowledge and findings of this study. Overall, the findings of the study can
help to improve future research on OSS projects and provide a better
understanding and challenges of the role of DevOps practices in OSS projects.
|
[
{
"created": "Fri, 28 Apr 2023 12:00:16 GMT",
"version": "v1"
}
] |
2023-05-01
|
[
[
"Ruiz",
"José Manuel Sánchez",
""
],
[
"Mayo",
"Francisco José Domínguez",
""
],
[
"Oriol",
"Xavier",
""
],
[
"Crespo",
"José Francisco",
""
],
[
"Benavides",
"David",
""
],
[
"Teniente",
"Ernest",
""
]
] |
The popularity of open-source software (OSS) projects has grown significantly over the last few years with more organizations relying on them. As these projects become larger, the need for higher quality also increases. DevOps practices have been shown to improve quality and performance. The DORA benchmarking reports provide useful information to compare DevOps practices performance between organizations, but they focus on continuous deployment and delivery to production, while OSS projects focus on the continuous release of code and its impact on third parties. The DORA reports mention the increasing presence of OSS projects as they are widely used in the industry, but they have never been used to measure OSS projects performance levels. This study reveals that the DORA benchmark cannot be applied to OSS projects and proposes benchmarking metrics for OSS projects, being the first one that adapts the DORA metrics and applies them in OSS projects. The metrics proposed in this study for benchmarking OSS projects include Release Frequency and Lead Time For Released Changes to measure throughput, and Time To Repair Code and Bug Issues Rate to assess stability. In contrast to the DORA reports, where data is collected through manual surveys, in our proposal, data is collected automatically by a tool we developed that retrieves information from public GitHub repositories. This reduces the risk of survey-based data collection. Our study also shows the benchmark feasibility by applying it to four popular OSS projects: Angular, Kubernetes, Tensorflow, and VS Code. In addition, we proposed challenges that address the topics and future works to expand the knowledge and findings of this study. Overall, the findings of the study can help to improve future research on OSS projects and provide a better understanding and challenges of the role of DevOps practices in OSS projects.
|
2107.13429
|
Weixia Zhang
|
Weixia Zhang and Kede Ma and Guangtao Zhai and Xiaokang Yang
|
Task-Specific Normalization for Continual Learning of Blind Image
Quality Models
|
Accepted by IEEE T-IP
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a simple yet effective continual learning method
for blind image quality assessment (BIQA) with improved quality prediction
accuracy, plasticity-stability trade-off, and task-order/-length robustness.
The key step in our approach is to freeze all convolution filters of a
pre-trained deep neural network (DNN) for an explicit promise of stability, and
learn task-specific normalization parameters for plasticity. We assign each new
IQA dataset (i.e., task) a prediction head, and load the corresponding
normalization parameters to produce a quality score. The final quality estimate
is computed by a weighted summation of predictions from all heads with a
lightweight $K$-means gating mechanism. Extensive experiments on six IQA
datasets demonstrate the advantages of the proposed method in comparison to
previous training techniques for BIQA.
|
[
{
"created": "Wed, 28 Jul 2021 15:21:01 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Mar 2023 06:39:53 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Feb 2024 15:36:23 GMT",
"version": "v3"
}
] |
2024-02-20
|
[
[
"Zhang",
"Weixia",
""
],
[
"Ma",
"Kede",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Yang",
"Xiaokang",
""
]
] |
In this paper, we present a simple yet effective continual learning method for blind image quality assessment (BIQA) with improved quality prediction accuracy, plasticity-stability trade-off, and task-order/-length robustness. The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability, and learn task-specific normalization parameters for plasticity. We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score. The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism. Extensive experiments on six IQA datasets demonstrate the advantages of the proposed method in comparison to previous training techniques for BIQA.
|
cs/0012002
|
Abdun
|
Md. Enamul Karim (1), Abdun Naser Mahmood (1) ((1) University of
Dhaka)
|
Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme
|
7 pages, 2 tables
| null | null | null |
cs.DS
| null |
In this paper we present a random shuffling scheme to apply with adaptive
sorting algorithms. Adaptive sorting algorithms utilize the presortedness
present in a given sequence. We have probabilistically increased the amount of
presortedness present in a sequence by using a random shuffling technique that
requires little computation. Theoretical analysis suggests that the proposed
scheme can improve the performance of adaptive sorting. Experimental results
show that it significantly reduces the amount of disorder present in a given
sequence and improves the execution time of adaptive sorting algorithm as well.
|
[
{
"created": "Sat, 2 Dec 2000 17:47:26 GMT",
"version": "v1"
}
] |
2016-08-31
|
[
[
"Karim",
"Md. Enamul",
""
],
[
"Mahmood",
"Abdun Naser",
""
]
] |
In this paper we present a random shuffling scheme to apply with adaptive sorting algorithms. Adaptive sorting algorithms utilize the presortedness present in a given sequence. We have probabilistically increased the amount of presortedness present in a sequence by using a random shuffling technique that requires little computation. Theoretical analysis suggests that the proposed scheme can improve the performance of adaptive sorting. Experimental results show that it significantly reduces the amount of disorder present in a given sequence and improves the execution time of adaptive sorting algorithm as well.
|
2311.03797
|
Daogao Liu
|
Hilal Asi, Daogao Liu
|
User-level Differentially Private Stochastic Convex Optimization:
Efficient Algorithms with Optimal Rates
| null | null | null | null |
cs.LG cs.CR cs.DS math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study differentially private stochastic convex optimization (DP-SCO) under
user-level privacy, where each user may hold multiple data items. Existing work
for user-level DP-SCO either requires super-polynomial runtime [Ghazi et al.
(2023)] or requires the number of users to grow polynomially with the
dimensionality of the problem with additional strict assumptions [Bassily et
al. (2023)]. We develop new algorithms for user-level DP-SCO that obtain
optimal rates for both convex and strongly convex functions in polynomial time
and require the number of users to grow only logarithmically in the dimension.
Moreover, our algorithms are the first to obtain optimal rates for non-smooth
functions in polynomial time. These algorithms are based on multiple-pass
DP-SGD, combined with a novel private mean estimation procedure for
concentrated data, which applies an outlier removal step before estimating the
mean of the gradients.
|
[
{
"created": "Tue, 7 Nov 2023 08:26:51 GMT",
"version": "v1"
}
] |
2023-11-08
|
[
[
"Asi",
"Hilal",
""
],
[
"Liu",
"Daogao",
""
]
] |
We study differentially private stochastic convex optimization (DP-SCO) under user-level privacy, where each user may hold multiple data items. Existing work for user-level DP-SCO either requires super-polynomial runtime [Ghazi et al. (2023)] or requires the number of users to grow polynomially with the dimensionality of the problem with additional strict assumptions [Bassily et al. (2023)]. We develop new algorithms for user-level DP-SCO that obtain optimal rates for both convex and strongly convex functions in polynomial time and require the number of users to grow only logarithmically in the dimension. Moreover, our algorithms are the first to obtain optimal rates for non-smooth functions in polynomial time. These algorithms are based on multiple-pass DP-SGD, combined with a novel private mean estimation procedure for concentrated data, which applies an outlier removal step before estimating the mean of the gradients.
|
2405.15861
|
Zhe Li
|
Zhe Li, Bicheng Ying, Zidong Liu, Haibo Yang
|
Achieving Dimension-Free Communication in Federated Learning via
Zeroth-Order Optimization
| null | null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Federated Learning (FL) offers a promising framework for collaborative and
privacy-preserving machine learning across distributed data sources. However,
the substantial communication costs associated with FL pose a significant
challenge to its efficiency. Specifically, in each communication round, the
communication costs scale linearly with the model's dimension, which presents a
formidable obstacle, especially in large model scenarios. Despite various
communication efficient strategies, the intrinsic dimension-dependent
communication cost remains a major bottleneck for current FL implementations.
In this paper, we introduce a novel dimension-free communication strategy for
FL, leveraging zero-order optimization techniques. We propose a new algorithm,
FedDisco, which facilitates the transmission of only a constant number of
scalar values between clients and the server in each communication round,
thereby reducing the communication cost from $\mathscr{O}(d)$ to
$\mathscr{O}(1)$, where $d$ is the dimension of the model parameters.
Theoretically, in non-convex functions, we prove that our algorithm achieves
state-of-the-art rates, which show a linear speedup of the number of clients
and local steps under standard assumptions and dimension-free rate for low
effective rank scenarios. Empirical evaluations through classic deep learning
training and large language model fine-tuning substantiate significant
reductions in communication overhead compared to traditional FL approaches. Our
code is available at https://github.com/ZidongLiu/FedDisco.
|
[
{
"created": "Fri, 24 May 2024 18:07:05 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jun 2024 04:52:25 GMT",
"version": "v2"
}
] |
2024-06-25
|
[
[
"Li",
"Zhe",
""
],
[
"Ying",
"Bicheng",
""
],
[
"Liu",
"Zidong",
""
],
[
"Yang",
"Haibo",
""
]
] |
Federated Learning (FL) offers a promising framework for collaborative and privacy-preserving machine learning across distributed data sources. However, the substantial communication costs associated with FL pose a significant challenge to its efficiency. Specifically, in each communication round, the communication costs scale linearly with the model's dimension, which presents a formidable obstacle, especially in large model scenarios. Despite various communication efficient strategies, the intrinsic dimension-dependent communication cost remains a major bottleneck for current FL implementations. In this paper, we introduce a novel dimension-free communication strategy for FL, leveraging zero-order optimization techniques. We propose a new algorithm, FedDisco, which facilitates the transmission of only a constant number of scalar values between clients and the server in each communication round, thereby reducing the communication cost from $\mathscr{O}(d)$ to $\mathscr{O}(1)$, where $d$ is the dimension of the model parameters. Theoretically, in non-convex functions, we prove that our algorithm achieves state-of-the-art rates, which show a linear speedup of the number of clients and local steps under standard assumptions and dimension-free rate for low effective rank scenarios. Empirical evaluations through classic deep learning training and large language model fine-tuning substantiate significant reductions in communication overhead compared to traditional FL approaches. Our code is available at https://github.com/ZidongLiu/FedDisco.
|
2209.10017
|
Jonathan Ponniah
|
Jonathan Ponniah
|
Compress-Forward Schemes for General Networks
|
arXiv admin note: text overlap with arXiv:1801.04310
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Compress-forward (CF) schemes are studied in general networks. The CF rate
for the one-relay channel defines outerbounds on both the CF rate for general
networks and the compression rate-vector region supporting this rate. We show
the outerbound is achievable using regular decoding with constant encoding
delays, avoiding the exponential delays and restrictions on bidirectional
communication in noisy network coding and backward decoding. The concept of
layering is introduced to harmonize regular CF schemes with the framework of
flow decomposition in the decode-forward setting. Layerings correspond to
regular decoding schemes. Any desired compression rate-vector in the outerbound
is achievable by some layering, which is found using the same "shift" operation
in flow decomposition. In separate work, we show that "shifting" minimizes the
operations needed to find layerings and thus minimizes the complexity of the
compression rate-vector region.
|
[
{
"created": "Tue, 20 Sep 2022 21:57:36 GMT",
"version": "v1"
}
] |
2022-09-22
|
[
[
"Ponniah",
"Jonathan",
""
]
] |
Compress-forward (CF) schemes are studied in general networks. The CF rate for the one-relay channel defines outerbounds on both the CF rate for general networks and the compression rate-vector region supporting this rate. We show the outerbound is achievable using regular decoding with constant encoding delays, avoiding the exponential delays and restrictions on bidirectional communication in noisy network coding and backward decoding. The concept of layering is introduced to harmonize regular CF schemes with the framework of flow decomposition in the decode-forward setting. Layerings correspond to regular decoding schemes. Any desired compression rate-vector in the outerbound is achievable by some layering, which is found using the same "shift" operation in flow decomposition. In separate work, we show that "shifting" minimizes the operations needed to find layerings and thus minimizes the complexity of the compression rate-vector region.
|
1503.03378
|
Marlon Dumas
|
T\~onis Saar, Marlon Dumas, Marti Kaljuve and Nataliia Semenenko
|
Browserbite: Cross-Browser Testing via Image Processing
|
28 pages, 16 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-browser compatibility testing is concerned with identifying perceptible
differences in the way a Web page is rendered across different browsers or
configurations thereof. Existing automated cross-browser compatibility testing
methods are generally based on Document Object Model (DOM) analysis, or in some
cases, a combination of DOM analysis with screenshot capture and image
processing. DOM analysis however may miss incompatibilities that arise not
during DOM construction, but rather during rendering. Conversely, DOM analysis
produces false alarms because different DOMs may lead to identical or
sufficiently similar renderings. This paper presents a novel method for
cross-browser testing based purely on image processing. The method relies on
image segmentation to extract regions from a Web page and computer vision
techniques to extract a set of characteristic features from each region.
Regions extracted from a screenshot taken on a baseline browser are compared
against regions extracted from the browser under test based on characteristic
features. A machine learning classifier is used to determine if differences
between two matched regions should be classified as an incompatibility. An
evaluation involving 140 pages shows that the proposed method achieves an
F-score exceeding 0.9, outperforming a state-of-the-art cross-browser testing
tool based on DOM analysis.
|
[
{
"created": "Wed, 11 Mar 2015 15:37:15 GMT",
"version": "v1"
}
] |
2015-03-12
|
[
[
"Saar",
"Tõnis",
""
],
[
"Dumas",
"Marlon",
""
],
[
"Kaljuve",
"Marti",
""
],
[
"Semenenko",
"Nataliia",
""
]
] |
Cross-browser compatibility testing is concerned with identifying perceptible differences in the way a Web page is rendered across different browsers or configurations thereof. Existing automated cross-browser compatibility testing methods are generally based on Document Object Model (DOM) analysis, or in some cases, a combination of DOM analysis with screenshot capture and image processing. DOM analysis however may miss incompatibilities that arise not during DOM construction, but rather during rendering. Conversely, DOM analysis produces false alarms because different DOMs may lead to identical or sufficiently similar renderings. This paper presents a novel method for cross-browser testing based purely on image processing. The method relies on image segmentation to extract regions from a Web page and computer vision techniques to extract a set of characteristic features from each region. Regions extracted from a screenshot taken on a baseline browser are compared against regions extracted from the browser under test based on characteristic features. A machine learning classifier is used to determine if differences between two matched regions should be classified as an incompatibility. An evaluation involving 140 pages shows that the proposed method achieves an F-score exceeding 0.9, outperforming a state-of-the-art cross-browser testing tool based on DOM analysis.
|
2311.06861
|
Xinquan Wang
|
Xinquan Wang, Fenghao Zhu, Qianyun Zhou, Qihao Yu, Chongwen Huang,
Ahmed Alhammadi, Zhaoyang Zhang, Chau Yuen, and M\'erouane Debbah
|
Energy-efficient Beamforming for RISs-aided Communications: Gradient
Based Meta Learning
|
5 pages, 8 figures. Accepted in IEEE ICC 2024 (GCSN symposium)
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surfaces (RISs) have become a promising technology
to meet the requirements of energy efficiency and scalability in future
six-generation (6G) communications. However, a significant challenge in
RISs-aided communications is the joint optimization of active and passive
beamforming at base stations (BSs) and RISs respectively. Specifically, the
main difficulty is attributed to the highly non-convex optimization space of
beamforming matrices at both BSs and RISs, as well as the diversity and
mobility of communication scenarios. To address this, we present a greenly
gradient based meta learning beamforming (GMLB) approach. Unlike traditional
deep learning based methods which take channel information directly as input,
GMLB feeds the gradient of sum rate into neural networks. Coherently, we design
a differential regulator to address the phase shift optimization of RISs.
Moreover, we use the meta learning to iteratively optimize the beamforming
matrices of BSs and RISs. These techniques enable the proposed method to work
well without requiring energy-consuming pre-training. Simulations show that
GMLB could achieve higher sum rate than that of typical alternating
optimization algorithms with the energy consumption by two orders of magnitude
less.
|
[
{
"created": "Sun, 12 Nov 2023 14:34:08 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Feb 2024 15:04:02 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Feb 2024 13:17:27 GMT",
"version": "v3"
}
] |
2024-02-19
|
[
[
"Wang",
"Xinquan",
""
],
[
"Zhu",
"Fenghao",
""
],
[
"Zhou",
"Qianyun",
""
],
[
"Yu",
"Qihao",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Alhammadi",
"Ahmed",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Yuen",
"Chau",
""
],
[
"Debbah",
"Mérouane",
""
]
] |
Reconfigurable intelligent surfaces (RISs) have become a promising technology to meet the requirements of energy efficiency and scalability in future six-generation (6G) communications. However, a significant challenge in RISs-aided communications is the joint optimization of active and passive beamforming at base stations (BSs) and RISs respectively. Specifically, the main difficulty is attributed to the highly non-convex optimization space of beamforming matrices at both BSs and RISs, as well as the diversity and mobility of communication scenarios. To address this, we present a greenly gradient based meta learning beamforming (GMLB) approach. Unlike traditional deep learning based methods which take channel information directly as input, GMLB feeds the gradient of sum rate into neural networks. Coherently, we design a differential regulator to address the phase shift optimization of RISs. Moreover, we use the meta learning to iteratively optimize the beamforming matrices of BSs and RISs. These techniques enable the proposed method to work well without requiring energy-consuming pre-training. Simulations show that GMLB could achieve higher sum rate than that of typical alternating optimization algorithms with the energy consumption by two orders of magnitude less.
|
2105.09742
|
Aashish Agarwal
|
Aashish Agarwal and Torsten Zesch
|
Robustness of end-to-end Automatic Speech Recognition Models -- A Case
Study using Mozilla DeepSpeech
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
When evaluating the performance of automatic speech recognition models,
usually word error rate within a certain dataset is used. Special care must be
taken in understanding the dataset in order to report realistic performance
numbers. We argue that many performance numbers reported probably underestimate
the expected error rate. We conduct experiments controlling for selection bias,
gender as well as overlap (between training and test data) in content, voices,
and recording conditions. We find that content overlap has the biggest impact,
but other factors like gender also play a role.
|
[
{
"created": "Sat, 8 May 2021 16:46:44 GMT",
"version": "v1"
}
] |
2021-05-21
|
[
[
"Agarwal",
"Aashish",
""
],
[
"Zesch",
"Torsten",
""
]
] |
When evaluating the performance of automatic speech recognition models, usually word error rate within a certain dataset is used. Special care must be taken in understanding the dataset in order to report realistic performance numbers. We argue that many performance numbers reported probably underestimate the expected error rate. We conduct experiments controlling for selection bias, gender as well as overlap (between training and test data) in content, voices, and recording conditions. We find that content overlap has the biggest impact, but other factors like gender also play a role.
|
1606.09163
|
Akash Kumar Dhaka
|
Akash Kumar Dhaka and Giampiero Salvi
|
Optimising The Input Window Alignment in CD-DNN Based Phoneme
Recognition for Low Latency Processing
|
4 pages, 3 figures
| null | null | null |
cs.CL cs.CV cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a systematic analysis on the performance of a phonetic recogniser
when the window of input features is not symmetric with respect to the current
frame. The recogniser is based on Context Dependent Deep Neural Networks
(CD-DNNs) and Hidden Markov Models (HMMs). The objective is to reduce the
latency of the system by reducing the number of future feature frames required
to estimate the current output. Our tests performed on the TIMIT database show
that the performance does not degrade when the input window is shifted up to 5
frames in the past compared to common practice (no future frame). This
corresponds to improving the latency by 50 ms in our settings. Our tests also
show that the best results are not obtained with the symmetric window commonly
employed, but with an asymmetric window with eight past and two future context
frames, although this observation should be confirmed on other data sets. The
reduction in latency suggested by our results is critical for specific
applications such as real-time lip synchronisation for tele-presence, but may
also be beneficial in general applications to improve the lag in human-machine
spoken interaction.
|
[
{
"created": "Wed, 29 Jun 2016 15:51:44 GMT",
"version": "v1"
}
] |
2016-06-30
|
[
[
"Dhaka",
"Akash Kumar",
""
],
[
"Salvi",
"Giampiero",
""
]
] |
We present a systematic analysis on the performance of a phonetic recogniser when the window of input features is not symmetric with respect to the current frame. The recogniser is based on Context Dependent Deep Neural Networks (CD-DNNs) and Hidden Markov Models (HMMs). The objective is to reduce the latency of the system by reducing the number of future feature frames required to estimate the current output. Our tests performed on the TIMIT database show that the performance does not degrade when the input window is shifted up to 5 frames in the past compared to common practice (no future frame). This corresponds to improving the latency by 50 ms in our settings. Our tests also show that the best results are not obtained with the symmetric window commonly employed, but with an asymmetric window with eight past and two future context frames, although this observation should be confirmed on other data sets. The reduction in latency suggested by our results is critical for specific applications such as real-time lip synchronisation for tele-presence, but may also be beneficial in general applications to improve the lag in human-machine spoken interaction.
|
2307.06479
|
Emek Baris Kucuktabak
|
Emek Bar{\i}\c{s} K\"u\c{c}\"uktabak, Yue Wen, Matthew Short, Efe
Demirba\c{s}, Kevin Lynch, Jose Pons
|
Virtual Physical Coupling of Two Lower-Limb Exoskeletons
|
6 pages, 9 figures, accepted at 2023 IEEE International Conference on
Rehabilitation Robotics (ICORR)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Physical interaction between individuals plays an important role in human
motor learning and performance during shared tasks. Using robotic devices,
researchers have studied the effects of dyadic haptic interaction mostly
focusing on the upper-limb. Developing infrastructure that enables physical
interactions between multiple individuals' lower limbs can extend the previous
work and facilitate investigation of new dyadic lower-limb rehabilitation
schemes.
We designed a system to render haptic interactions between two users while
they walk in multi-joint lower-limb exoskeletons. Specifically, we developed an
infrastructure where desired interaction torques are commanded to the
individual lower-limb exoskeletons based on the users' kinematics and the
properties of the virtual coupling. In this pilot study, we demonstrated the
capacity of the platform to render different haptic properties (e.g., soft and
hard), different haptic connection types (e.g., bidirectional and
unidirectional), and connections expressed in joint space and in task space.
With haptic connection, dyads generated synchronized movement, and the
difference between joint angles decreased as the virtual stiffness increased.
This is the first study where multi-joint dyadic haptic interactions are
created between lower-limb exoskeletons. This platform will be used to
investigate effects of haptic interaction on motor learning and task
performance during walking, a complex and meaningful task for gait
rehabilitation.
|
[
{
"created": "Wed, 12 Jul 2023 22:43:41 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jul 2023 22:08:51 GMT",
"version": "v2"
}
] |
2023-07-24
|
[
[
"Küçüktabak",
"Emek Barış",
""
],
[
"Wen",
"Yue",
""
],
[
"Short",
"Matthew",
""
],
[
"Demirbaş",
"Efe",
""
],
[
"Lynch",
"Kevin",
""
],
[
"Pons",
"Jose",
""
]
] |
Physical interaction between individuals plays an important role in human motor learning and performance during shared tasks. Using robotic devices, researchers have studied the effects of dyadic haptic interaction mostly focusing on the upper-limb. Developing infrastructure that enables physical interactions between multiple individuals' lower limbs can extend the previous work and facilitate investigation of new dyadic lower-limb rehabilitation schemes. We designed a system to render haptic interactions between two users while they walk in multi-joint lower-limb exoskeletons. Specifically, we developed an infrastructure where desired interaction torques are commanded to the individual lower-limb exoskeletons based on the users' kinematics and the properties of the virtual coupling. In this pilot study, we demonstrated the capacity of the platform to render different haptic properties (e.g., soft and hard), different haptic connection types (e.g., bidirectional and unidirectional), and connections expressed in joint space and in task space. With haptic connection, dyads generated synchronized movement, and the difference between joint angles decreased as the virtual stiffness increased. This is the first study where multi-joint dyadic haptic interactions are created between lower-limb exoskeletons. This platform will be used to investigate effects of haptic interaction on motor learning and task performance during walking, a complex and meaningful task for gait rehabilitation.
|
2208.12653
|
Jianing Li
|
Jianing Li, Jiaming Liu, Xiaobao Wei, Jiyuan Zhang, Ming Lu, Lei Ma,
Li Du, Tiejun Huang, Shanghang Zhang
|
Uncertainty Guided Depth Fusion for Spike Camera
|
18 pages, 11 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth estimation is essential for various important real-world applications
such as autonomous driving. However, it suffers from severe performance
degradation in high-velocity scenarios since traditional cameras can only
capture blurred images. To deal with this problem, the spike camera is designed
to capture the pixel-wise luminance intensity at high frame rate. However,
depth estimation with spike camera remains very challenging using traditional
monocular or stereo depth estimation algorithms, which are based on the
photometric consistency. In this paper, we propose a novel Uncertainty-Guided
Depth Fusion (UGDF) framework to fuse the predictions of monocular and stereo
depth estimation networks for spike camera. Our framework is motivated by the
fact that stereo spike depth estimation achieves better results at close range
while monocular spike depth estimation obtains better results at long range.
Therefore, we introduce a dual-task depth estimation architecture with a joint
training strategy and estimate the distributed uncertainty to fuse the
monocular and stereo results. In order to demonstrate the advantage of spike
depth estimation over traditional camera depth estimation, we contribute a
spike-depth dataset named CitySpike20K, which contains 20K paired samples, for
spike depth estimation. UGDF achieves state-of-the-art results on CitySpike20K,
surpassing all monocular or stereo spike depth estimation baselines. We conduct
extensive experiments to evaluate the effectiveness and generalization of our
method on CitySpike20K. To the best of our knowledge, our framework is the
first dual-task fusion framework for spike camera depth estimation. Code and
dataset will be released.
|
[
{
"created": "Fri, 26 Aug 2022 13:04:01 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Aug 2022 06:48:58 GMT",
"version": "v2"
}
] |
2022-08-30
|
[
[
"Li",
"Jianing",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Wei",
"Xiaobao",
""
],
[
"Zhang",
"Jiyuan",
""
],
[
"Lu",
"Ming",
""
],
[
"Ma",
"Lei",
""
],
[
"Du",
"Li",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
Depth estimation is essential for various important real-world applications such as autonomous driving. However, it suffers from severe performance degradation in high-velocity scenarios since traditional cameras can only capture blurred images. To deal with this problem, the spike camera is designed to capture the pixel-wise luminance intensity at high frame rate. However, depth estimation with spike camera remains very challenging using traditional monocular or stereo depth estimation algorithms, which are based on the photometric consistency. In this paper, we propose a novel Uncertainty-Guided Depth Fusion (UGDF) framework to fuse the predictions of monocular and stereo depth estimation networks for spike camera. Our framework is motivated by the fact that stereo spike depth estimation achieves better results at close range while monocular spike depth estimation obtains better results at long range. Therefore, we introduce a dual-task depth estimation architecture with a joint training strategy and estimate the distributed uncertainty to fuse the monocular and stereo results. In order to demonstrate the advantage of spike depth estimation over traditional camera depth estimation, we contribute a spike-depth dataset named CitySpike20K, which contains 20K paired samples, for spike depth estimation. UGDF achieves state-of-the-art results on CitySpike20K, surpassing all monocular or stereo spike depth estimation baselines. We conduct extensive experiments to evaluate the effectiveness and generalization of our method on CitySpike20K. To the best of our knowledge, our framework is the first dual-task fusion framework for spike camera depth estimation. Code and dataset will be released.
|
0908.0932
|
Grenville Croll
|
M. Sriram Iyengar, John R. Svirbely
|
The Medical Algorithms Project
|
6 Pages, 2 Colour Figures
|
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2009 113-118
ISBN 978-1-905617-89-0
| null | null |
cs.HC cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Medical Algorithms Project, a web-based resource located at
www.medal.org, is the world's largest collection of medical-related
spreadsheets, consisting of over 13,500 Excel spreadsheets each encoding a
medical algorithm from 45 different areas of medical practice. This free
resource is in use worldwide with over 106,000 registered users as of March 1,
2009.
|
[
{
"created": "Thu, 6 Aug 2009 18:52:41 GMT",
"version": "v1"
}
] |
2009-08-07
|
[
[
"Iyengar",
"M. Sriram",
""
],
[
"Svirbely",
"John R.",
""
]
] |
The Medical Algorithms Project, a web-based resource located at www.medal.org, is the world's largest collection of medical-related spreadsheets, consisting of over 13,500 Excel spreadsheets each encoding a medical algorithm from 45 different areas of medical practice. This free resource is in use worldwide with over 106,000 registered users as of March 1, 2009.
|
2306.13937
|
Luiz Fernando Afra Brito
|
Luiz F. Afra Brito and Marcelo Keese Albertini and Bruno A. N.
Traven\c{c}olo
|
A Dynamic Data Structure for Representing Timed Transitive Closures on
Disk
|
22 pages, 4 figures
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Temporal graphs represent interactions between entities over time. These
interactions may be direct, a contact between two vertices at some time
instant, or indirect, through sequences of contacts called journeys. Deciding
whether an entity can reach another through a journey is useful for various
applications in complex networks. In this paper, we present a disk-based data
structure that maintains temporal reachability information under the addition
of new contacts in a non-chronological order. It represents the \emph{timed
transitive closure} (TTC) by a set of \emph{expanded} R-tuples of the form $(u,
v, t^-, t^+)$, which encodes the existence of journeys from vertex $u$ to
vertex $v$ with departure at time $t^-$ and arrival at time $t^+$. Let $n$ be
the number of vertices and $\tau$ be the number of timestamps in the lifetime
of the temporal graph. Our data structure explicitly maintains this information
in linear arrays using $O(n^2\tau)$ space so that sequential accesses on disk
are prioritized. Furthermore, it adds a new unsorted contact $(u, v, t)$
accessing $O\left(\frac{n^2\tau}{B}\right)$ sequential pages in the worst-case,
where $B$ is the size of a page on disk; it answers whether there is a journey
from a vertex $u$ to a vertex $v$ within a time interval $[t_1, t_2]$ accessing
a single page; it answers whether all vertices can reach each other in $[t_1,
t_2]$; and it reconstructs a valid journey that validates the reachability from
a vertex $u$ to a vertex $v$ within $[t_1, t_2]$ accessing
$O\left(\frac{n\tau}{B}\right)$ pages. Our experiments show that our novel data
structure outperforms the best known approach in the majority of cases on
synthetic and real-world datasets.
|
[
{
"created": "Sat, 24 Jun 2023 10:58:23 GMT",
"version": "v1"
}
] |
2023-06-27
|
[
[
"Brito",
"Luiz F. Afra",
""
],
[
"Albertini",
"Marcelo Keese",
""
],
[
"Travençolo",
"Bruno A. N.",
""
]
] |
Temporal graphs represent interactions between entities over time. These interactions may be direct, a contact between two vertices at some time instant, or indirect, through sequences of contacts called journeys. Deciding whether an entity can reach another through a journey is useful for various applications in complex networks. In this paper, we present a disk-based data structure that maintains temporal reachability information under the addition of new contacts in a non-chronological order. It represents the \emph{timed transitive closure} (TTC) by a set of \emph{expanded} R-tuples of the form $(u, v, t^-, t^+)$, which encodes the existence of journeys from vertex $u$ to vertex $v$ with departure at time $t^-$ and arrival at time $t^+$. Let $n$ be the number of vertices and $\tau$ be the number of timestamps in the lifetime of the temporal graph. Our data structure explicitly maintains this information in linear arrays using $O(n^2\tau)$ space so that sequential accesses on disk are prioritized. Furthermore, it adds a new unsorted contact $(u, v, t)$ accessing $O\left(\frac{n^2\tau}{B}\right)$ sequential pages in the worst-case, where $B$ is the size of a page on disk; it answers whether there is a journey from a vertex $u$ to a vertex $v$ within a time interval $[t_1, t_2]$ accessing a single page; it answers whether all vertices can reach each other in $[t_1, t_2]$; and it reconstructs a valid journey that validates the reachability from a vertex $u$ to a vertex $v$ within $[t_1, t_2]$ accessing $O\left(\frac{n\tau}{B}\right)$ pages. Our experiments show that our novel data structure outperforms the best known approach in the majority of cases on synthetic and real-world datasets.
|
1705.09888
|
Peng Xu
|
Peng Xu, Qiyue Yin, Yongye Huang, Yi-Zhe Song, Zhanyu Ma, Liang Wang,
Tao Xiang, W. Bastiaan Kleijn, Jun Guo
|
Cross-modal Subspace Learning for Fine-grained Sketch-based Image
Retrieval
|
Accepted by Neurocomputing
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sketch-based image retrieval (SBIR) is challenging due to the inherent
domain-gap between sketch and photo. Compared with pixel-perfect depictions of
photos, sketches are iconic, highly abstract renderings of the real world.
Therefore, matching sketch and photo directly using low-level visual cues is
insufficient, since a common low-level subspace that traverses semantically
across the two modalities is non-trivial to establish. Most existing SBIR
studies do not directly tackle this cross-modal problem. This naturally
motivates us to explore the effectiveness of cross-modal retrieval methods in
SBIR, which have been applied successfully to image-text matching. In this
paper, we introduce and compare a series of state-of-the-art cross-modal
subspace learning methods and benchmark them on two recently released
fine-grained SBIR datasets. Through thorough examination of the experimental
results, we have demonstrated that subspace learning can effectively model
the sketch-photo domain-gap. In addition, we draw a few key insights to drive
future research.
|
[
{
"created": "Sun, 28 May 2017 03:45:26 GMT",
"version": "v1"
}
] |
2017-05-30
|
[
[
"Xu",
"Peng",
""
],
[
"Yin",
"Qiyue",
""
],
[
"Huang",
"Yongye",
""
],
[
"Song",
"Yi-Zhe",
""
],
[
"Ma",
"Zhanyu",
""
],
[
"Wang",
"Liang",
""
],
[
"Xiang",
"Tao",
""
],
[
"Kleijn",
"W. Bastiaan",
""
],
[
"Guo",
"Jun",
""
]
] |
Sketch-based image retrieval (SBIR) is challenging due to the inherent domain-gap between sketch and photo. Compared with pixel-perfect depictions of photos, sketches are iconic, highly abstract renderings of the real world. Therefore, matching sketch and photo directly using low-level visual cues is insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we have demonstrated that subspace learning can effectively model the sketch-photo domain-gap. In addition, we draw a few key insights to drive future research.
|
1602.04234
|
Stefanos Baros Stefanos Baros
|
Stefanos Baros
|
Consensus-Based Torque Control of Deloaded Wind DFIGs for Distributed
and Fair Dynamic Dispatching
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we aim to address the problem of dynamically dispatching a
group of state-of-the-art deloaded wind generators (WGs) in a fair-sharing
manner. We use the term dynamically since the WGs aim to dispatch themselves
according to a varying committed WF power output. We first propose a
leader-follower protocol whose execution asymptotically guarantees two control
objectives. These are 1) reaching asymptotic consensus on the utilization level
of all WGs and 2) the total power output of the WGs asymptotically converges to
the reference value. Thereafter, we combine singular perturbation and Lyapunov
theory to prove that, under certain conditions, the proposed protocol will
asymptotically converge to its equilibrium. Finally, we derive a cooperative
Control Lyapunov Function-based (CLF) controller for the rotor side converter
(RSC) of each WG that realizes the protocol in practice. We demonstrate the
effectiveness of our proposed protocol and the corresponding RSC controller
design via simulations on the modified IEEE 24-bus RT system.
|
[
{
"created": "Fri, 12 Feb 2016 21:50:26 GMT",
"version": "v1"
}
] |
2016-02-16
|
[
[
"Baros",
"Stefanos",
""
]
] |
In this paper we aim to address the problem of dynamically dispatching a group of state-of-the-art deloaded wind generators (WGs) in a fair-sharing manner. We use the term dynamically since the WGs aim to dispatch themselves according to a varying committed WF power output. We first propose a leader-follower protocol whose execution asymptotically guarantees two control objectives. These are 1) reaching asymptotic consensus on the utilization level of all WGs and 2) the total power output of the WGs asymptotically converges to the reference value. Thereafter, we combine singular perturbation and Lyapunov theory to prove that, under certain conditions, the proposed protocol will asymptotically converge to its equilibrium. Finally, we derive a cooperative Control Lyapunov Function-based (CLF) controller for the rotor side converter (RSC) of each WG that realizes the protocol in practice. We demonstrate the effectiveness of our proposed protocol and the corresponding RSC controller design via simulations on the modified IEEE 24-bus RT system.
|
cs/0406013
|
Ester Zumpano
|
G. Greco, S. Greco, I. Trubtsyna, E. Zumpano
|
Optimization of Bound Disjunctive Queries with Constraints
|
35 pages
| null | null | null |
cs.LO
| null |
"To Appear in Theory and Practice of Logic Programming (TPLP)" This paper
presents a technique for the optimization of bound queries over disjunctive
deductive databases with constraints. The proposed approach is an extension of
the well-known Magic-Set technique and is well-suited for being integrated in
current bottom-up (stable) model inference engines. More specifically, it is
based on the exploitation of binding propagation techniques which reduce the
size of the data relevant to answer the query and, consequently, reduces both
the complexity of computing a single model and the number of models to be
considered. The motivation of this work stems from the observation that
traditional binding propagation optimization techniques for bottom-up model
generator systems, simulating the goal driven evaluation of top-down engines,
are only suitable for positive (disjunctive) queries, while hard problems are
expressed using unstratified negation. The main contribution of the paper
consists in the extension of a previous technique, defined for positive
disjunctive queries, to queries containing both disjunctive heads and
constraints (a simple and expressive form of unstratified negation). As the
usual way of expressing declaratively hard problems is based on the
guess-and-check technique, where the guess part is expressed by means of
disjunctive rules and the check part is expressed by means of constraints, the
technique proposed here is highly relevant for the optimization of queries
expressing hard problems. The value of the technique has been proved by several
experiments.
|
[
{
"created": "Mon, 7 Jun 2004 12:13:58 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Greco",
"G.",
""
],
[
"Greco",
"S.",
""
],
[
"Trubtsyna",
"I.",
""
],
[
"Zumpano",
"E.",
""
]
] |
"To Appear in Theory and Practice of Logic Programming (TPLP)" This paper presents a technique for the optimization of bound queries over disjunctive deductive databases with constraints. The proposed approach is an extension of the well-known Magic-Set technique and is well-suited for being integrated in current bottom-up (stable) model inference engines. More specifically, it is based on the exploitation of binding propagation techniques which reduce the size of the data relevant to answer the query and, consequently, reduces both the complexity of computing a single model and the number of models to be considered. The motivation of this work stems from the observation that traditional binding propagation optimization techniques for bottom-up model generator systems, simulating the goal driven evaluation of top-down engines, are only suitable for positive (disjunctive) queries, while hard problems are expressed using unstratified negation. The main contribution of the paper consists in the extension of a previous technique, defined for positive disjunctive queries, to queries containing both disjunctive heads and constraints (a simple and expressive form of unstratified negation). As the usual way of expressing declaratively hard problems is based on the guess-and-check technique, where the guess part is expressed by means of disjunctive rules and the check part is expressed by means of constraints, the technique proposed here is highly relevant for the optimization of queries expressing hard problems. The value of the technique has been proved by several experiments.
|
1606.05614
|
Cristiano Premebida
|
C. Premebida, L. Garrote, A. Asvadi, A. Pedro Ribeiro, and U. Nunes
|
High-resolution LIDAR-based Depth Mapping using Bilateral Filter
|
8 pages, 6 figures, submitted to IEEE-ITSC'16
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High resolution depth-maps, obtained by upsampling sparse range data from a
3D-LIDAR, find applications in many fields ranging from sensory perception to
semantic segmentation and object detection. Upsampling is often based on
combining data from a monocular camera to compensate for the low resolution of a
LIDAR. This paper, on the other hand, introduces a novel framework to obtain
dense depth-map solely from a single LIDAR point cloud, which is a research
direction that has been barely explored. The formulation behind the proposed
depth-mapping process relies on local spatial interpolation, using a
sliding-window (mask) technique, and on the Bilateral Filter (BF) where the
variable of interest, the distance from the sensor, is considered in the
interpolation problem. In particular, the BF is conveniently modified to
perform depth-map upsampling such that the edges (foreground-background
discontinuities) are better preserved by means of a proposed method which
influences the range-based weighting term. Other methods for spatial upsampling
are discussed, evaluated and compared in terms of different error measures.
This paper also researches the role of the mask's size in the performance of
the implemented methods. Quantitative and qualitative results from experiments
on the KITTI Database, using LIDAR point clouds only, show very satisfactory
performance of the approach introduced in this work.
|
[
{
"created": "Fri, 17 Jun 2016 18:14:59 GMT",
"version": "v1"
}
] |
2016-06-20
|
[
[
"Premebida",
"C.",
""
],
[
"Garrote",
"L.",
""
],
[
"Asvadi",
"A.",
""
],
[
"Ribeiro",
"A. Pedro",
""
],
[
"Nunes",
"U.",
""
]
] |
High resolution depth-maps, obtained by upsampling sparse range data from a 3D-LIDAR, find applications in many fields ranging from sensory perception to semantic segmentation and object detection. Upsampling is often based on combining data from a monocular camera to compensate for the low resolution of a LIDAR. This paper, on the other hand, introduces a novel framework to obtain dense depth-map solely from a single LIDAR point cloud, which is a research direction that has been barely explored. The formulation behind the proposed depth-mapping process relies on local spatial interpolation, using a sliding-window (mask) technique, and on the Bilateral Filter (BF) where the variable of interest, the distance from the sensor, is considered in the interpolation problem. In particular, the BF is conveniently modified to perform depth-map upsampling such that the edges (foreground-background discontinuities) are better preserved by means of a proposed method which influences the range-based weighting term. Other methods for spatial upsampling are discussed, evaluated and compared in terms of different error measures. This paper also researches the role of the mask's size in the performance of the implemented methods. Quantitative and qualitative results from experiments on the KITTI Database, using LIDAR point clouds only, show very satisfactory performance of the approach introduced in this work.
|
2407.14912
|
Weiqin Jiao
|
Weiqin Jiao, Claudio Persello, George Vosselman
|
PolyR-CNN: R-CNN for end-to-end polygonal building outline extraction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polygonal building outline extraction has been a research focus in recent
years. Most existing methods have addressed this challenging task by
decomposing it into several subtasks and employing carefully designed
architectures. Despite their accuracy, such pipelines often introduce
inefficiencies during training and inference. This paper presents an end-to-end
framework, denoted as PolyR-CNN, which offers an efficient and fully integrated
approach to predict vectorized building polygons and bounding boxes directly
from remotely sensed images. Notably, PolyR-CNN leverages solely the features
of the Region of Interest (RoI) for the prediction, thereby mitigating the
necessity for complex designs. Furthermore, we propose a novel scheme with
PolyR-CNN to extract detailed outline information from polygon vertex
coordinates, termed vertex proposal feature, to guide the RoI features to
predict more regular buildings. PolyR-CNN demonstrates the capacity to deal
with buildings with holes through a simple post-processing method on the Inria
dataset. Comprehensive experiments conducted on the CrowdAI dataset show that
PolyR-CNN achieves competitive accuracy compared to state-of-the-art methods
while significantly improving computational efficiency, i.e., achieving 79.2
Average Precision (AP), exhibiting a 15.9 AP gain and operating 2.5 times
faster and four times lighter than the well-established end-to-end method
PolyWorld. Replacing the backbone with a simple ResNet-50, PolyR-CNN maintains
a 71.1 AP while running four times faster than PolyWorld.
|
[
{
"created": "Sat, 20 Jul 2024 15:48:54 GMT",
"version": "v1"
}
] |
2024-07-23
|
[
[
"Jiao",
"Weiqin",
""
],
[
"Persello",
"Claudio",
""
],
[
"Vosselman",
"George",
""
]
] |
Polygonal building outline extraction has been a research focus in recent years. Most existing methods have addressed this challenging task by decomposing it into several subtasks and employing carefully designed architectures. Despite their accuracy, such pipelines often introduce inefficiencies during training and inference. This paper presents an end-to-end framework, denoted as PolyR-CNN, which offers an efficient and fully integrated approach to predict vectorized building polygons and bounding boxes directly from remotely sensed images. Notably, PolyR-CNN leverages solely the features of the Region of Interest (RoI) for the prediction, thereby mitigating the necessity for complex designs. Furthermore, we propose a novel scheme with PolyR-CNN to extract detailed outline information from polygon vertex coordinates, termed vertex proposal feature, to guide the RoI features to predict more regular buildings. PolyR-CNN demonstrates the capacity to deal with buildings with holes through a simple post-processing method on the Inria dataset. Comprehensive experiments conducted on the CrowdAI dataset show that PolyR-CNN achieves competitive accuracy compared to state-of-the-art methods while significantly improving computational efficiency, i.e., achieving 79.2 Average Precision (AP), exhibiting a 15.9 AP gain and operating 2.5 times faster and four times lighter than the well-established end-to-end method PolyWorld. Replacing the backbone with a simple ResNet-50, PolyR-CNN maintains a 71.1 AP while running four times faster than PolyWorld.
|
2205.12523
|
Rongjie Huang
|
Rongjie Huang, Jinglin Liu, Huadai Liu, Yi Ren, Lichao Zhang, Jinzheng
He, Zhou Zhao
|
TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
|
Accepted to ICLR 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct speech-to-speech translation (S2ST) with discrete units leverages
recent progress in speech representation learning. Specifically, a sequence of
discrete representations derived in a self-supervised manner are predicted from
the model and passed to a vocoder for speech reconstruction, while still facing
the following challenges: 1) Acoustic multimodality: the discrete units derived
from speech with same content could be indeterministic due to the acoustic
property (e.g., rhythm, pitch, and energy), which causes deterioration of
translation accuracy; 2) high latency: current S2ST systems utilize
autoregressive models which predict each unit conditioned on the sequence
previously generated, failing to take full advantage of parallelism. In this
work, we propose TranSpeech, a speech-to-speech translation model with
bilateral perturbation. To alleviate the acoustic multimodal problem, we
propose bilateral perturbation (BiP), which consists of the style normalization
and information enhancement stages, to learn only the linguistic information
from speech samples and generate more deterministic representations. With
reduced multimodality, we step forward and become the first to establish a
non-autoregressive S2ST technique, which repeatedly masks and predicts unit
choices and produces high-accuracy results in just a few cycles. Experimental
results on three language pairs demonstrate that BiP yields an improvement of
2.9 BLEU on average compared with a baseline textless S2ST model. Moreover, our
parallel decoding shows a significant reduction of inference latency, enabling
a speedup of up to 21.4x over the autoregressive technique. Audio samples are available
at \url{https://TranSpeech.github.io/}
|
[
{
"created": "Wed, 25 May 2022 06:34:14 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Mar 2023 09:17:01 GMT",
"version": "v2"
}
] |
2023-03-03
|
[
[
"Huang",
"Rongjie",
""
],
[
"Liu",
"Jinglin",
""
],
[
"Liu",
"Huadai",
""
],
[
"Ren",
"Yi",
""
],
[
"Zhang",
"Lichao",
""
],
[
"He",
"Jinzheng",
""
],
[
"Zhao",
"Zhou",
""
]
] |
Direct speech-to-speech translation (S2ST) with discrete units leverages recent progress in speech representation learning. Specifically, a sequence of discrete representations derived in a self-supervised manner are predicted from the model and passed to a vocoder for speech reconstruction, while still facing the following challenges: 1) Acoustic multimodality: the discrete units derived from speech with same content could be indeterministic due to the acoustic property (e.g., rhythm, pitch, and energy), which causes deterioration of translation accuracy; 2) high latency: current S2ST systems utilize autoregressive models which predict each unit conditioned on the sequence previously generated, failing to take full advantage of parallelism. In this work, we propose TranSpeech, a speech-to-speech translation model with bilateral perturbation. To alleviate the acoustic multimodal problem, we propose bilateral perturbation (BiP), which consists of the style normalization and information enhancement stages, to learn only the linguistic information from speech samples and generate more deterministic representations. With reduced multimodality, we step forward and become the first to establish a non-autoregressive S2ST technique, which repeatedly masks and predicts unit choices and produces high-accuracy results in just a few cycles. Experimental results on three language pairs demonstrate that BiP yields an improvement of 2.9 BLEU on average compared with a baseline textless S2ST model. Moreover, our parallel decoding shows a significant reduction of inference latency, enabling a speedup of up to 21.4x over the autoregressive technique. Audio samples are available at \url{https://TranSpeech.github.io/}
|
2102.11097
|
Joseph O'Rourke
|
Joseph O'Rourke and Costin V\^ilcu
|
Cut Locus Realizations on Convex Polyhedra
|
16 pages, 7 figures, 16 references
| null | null | null |
cs.CG math.MG
|
http://creativecommons.org/licenses/by/4.0/
|
We prove that every positively-weighted tree T can be realized as the cut
locus C(x) of a point x on a convex polyhedron P, with T weights matching C(x)
lengths. If T has n leaves, P has (in general) n+1 vertices. We show there are
in fact a continuum of polyhedra P each realizing T for some x on P. Three main
tools in the proof are properties of the star unfolding of P, Alexandrov's
gluing theorem, and a cut-locus partition lemma. The construction of P from T
is surprisingly simple.
|
[
{
"created": "Mon, 22 Feb 2021 15:11:44 GMT",
"version": "v1"
}
] |
2021-02-23
|
[
[
"O'Rourke",
"Joseph",
""
],
[
"Vîlcu",
"Costin",
""
]
] |
We prove that every positively-weighted tree T can be realized as the cut locus C(x) of a point x on a convex polyhedron P, with T weights matching C(x) lengths. If T has n leaves, P has (in general) n+1 vertices. We show there are in fact a continuum of polyhedra P each realizing T for some x on P. Three main tools in the proof are properties of the star unfolding of P, Alexandrov's gluing theorem, and a cut-locus partition lemma. The construction of P from T is surprisingly simple.
|
2402.04520
|
Jerry Yao-Chieh Hu
|
Jerry Yao-Chieh Hu, Thomas Lin, Zhao Song, Han Liu
|
On Computational Limits of Modern Hopfield Models: A Fine-Grained
Complexity Analysis
|
Accepted at ICML 2024; v2 corrected typos; v3 added clarifications
and references; v4,5 updated to camera-ready version
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We investigate the computational limits of the memory retrieval dynamics of
modern Hopfield models from the fine-grained complexity analysis. Our key
contribution is the characterization of a phase transition behavior in the
efficiency of all possible modern Hopfield models based on the norm of
patterns. Specifically, we establish an upper bound criterion for the norm of
input query patterns and memory patterns. Only below this criterion,
sub-quadratic (efficient) variants of the modern Hopfield model exist, assuming
the Strong Exponential Time Hypothesis (SETH). To showcase our theory, we
provide a formal example of efficient constructions of modern Hopfield models
using low-rank approximation when the efficient criterion holds. This includes
a derivation of a lower bound on the computational time, scaling linearly with
$\max\{$# of stored memory patterns, length of input query sequence$\}$. In
addition, we prove its memory retrieval error bound and exponential memory
capacity.
|
[
{
"created": "Wed, 7 Feb 2024 01:58:21 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Feb 2024 20:35:31 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Apr 2024 21:56:56 GMT",
"version": "v3"
},
{
"created": "Sun, 26 May 2024 17:18:34 GMT",
"version": "v4"
},
{
"created": "Sat, 1 Jun 2024 00:49:17 GMT",
"version": "v5"
}
] |
2024-06-04
|
[
[
"Hu",
"Jerry Yao-Chieh",
""
],
[
"Lin",
"Thomas",
""
],
[
"Song",
"Zhao",
""
],
[
"Liu",
"Han",
""
]
] |
We investigate the computational limits of the memory retrieval dynamics of modern Hopfield models from the fine-grained complexity analysis. Our key contribution is the characterization of a phase transition behavior in the efficiency of all possible modern Hopfield models based on the norm of patterns. Specifically, we establish an upper bound criterion for the norm of input query patterns and memory patterns. Only below this criterion, sub-quadratic (efficient) variants of the modern Hopfield model exist, assuming the Strong Exponential Time Hypothesis (SETH). To showcase our theory, we provide a formal example of efficient constructions of modern Hopfield models using low-rank approximation when the efficient criterion holds. This includes a derivation of a lower bound on the computational time, scaling linearly with $\max\{$# of stored memory patterns, length of input query sequence$\}$. In addition, we prove its memory retrieval error bound and exponential memory capacity.
|
2004.12881
|
Shay Golan
|
Shay Golan, Tomasz Kociumaka, Tsvi Kopelowitz, Ely Porat
|
The Streaming k-Mismatch Problem: Tradeoffs between Space and Total Time
|
Extended abstract to appear in CPM 2020
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the $k$-mismatch problem in the streaming model on a pattern of
length $m$ and a streaming text of length $n$, both over a size-$\sigma$
alphabet. The current state-of-the-art algorithm for the streaming $k$-mismatch
problem, by Clifford et al. [SODA 2019], uses $\tilde O(k)$ space and $\tilde
O\big(\sqrt k\big)$ worst-case time per character. The space complexity is
known to be (unconditionally) optimal, and the worst-case time per character
matches a conditional lower bound. However, there is a gap between the total
time cost of the algorithm, which is $\tilde O(n\sqrt k)$, and the fastest
known offline algorithm, which costs $\tilde O\big(n + \min\big(\frac{nk}{\sqrt
m},\sigma n\big)\big)$ time. Moreover, it is not known whether improvements
over the $\tilde O(n\sqrt k)$ total time are possible when using more than
$O(k)$ space.
We address these gaps by designing a randomized streaming algorithm for the
$k$-mismatch problem that, given an integer parameter $k\le s \le m$, uses
$\tilde O(s)$ space and costs $\tilde O\big(n+\min\big(\frac
{nk^2}m,\frac{nk}{\sqrt s},\frac{\sigma nm}s\big)\big)$ total time. For $s=m$,
the total runtime becomes $\tilde O\big(n + \min\big(\frac{nk}{\sqrt m},\sigma
n\big)\big)$, which matches the time cost of the fastest offline algorithm.
Moreover, the worst-case time cost per character is still $\tilde O\big(\sqrt
k\big)$.
|
[
{
"created": "Mon, 27 Apr 2020 15:41:49 GMT",
"version": "v1"
}
] |
2020-04-28
|
[
[
"Golan",
"Shay",
""
],
[
"Kociumaka",
"Tomasz",
""
],
[
"Kopelowitz",
"Tsvi",
""
],
[
"Porat",
"Ely",
""
]
] |
We revisit the $k$-mismatch problem in the streaming model on a pattern of length $m$ and a streaming text of length $n$, both over a size-$\sigma$ alphabet. The current state-of-the-art algorithm for the streaming $k$-mismatch problem, by Clifford et al. [SODA 2019], uses $\tilde O(k)$ space and $\tilde O\big(\sqrt k\big)$ worst-case time per character. The space complexity is known to be (unconditionally) optimal, and the worst-case time per character matches a conditional lower bound. However, there is a gap between the total time cost of the algorithm, which is $\tilde O(n\sqrt k)$, and the fastest known offline algorithm, which costs $\tilde O\big(n + \min\big(\frac{nk}{\sqrt m},\sigma n\big)\big)$ time. Moreover, it is not known whether improvements over the $\tilde O(n\sqrt k)$ total time are possible when using more than $O(k)$ space. We address these gaps by designing a randomized streaming algorithm for the $k$-mismatch problem that, given an integer parameter $k\le s \le m$, uses $\tilde O(s)$ space and costs $\tilde O\big(n+\min\big(\frac {nk^2}m,\frac{nk}{\sqrt s},\frac{\sigma nm}s\big)\big)$ total time. For $s=m$, the total runtime becomes $\tilde O\big(n + \min\big(\frac{nk}{\sqrt m},\sigma n\big)\big)$, which matches the time cost of the fastest offline algorithm. Moreover, the worst-case time cost per character is still $\tilde O\big(\sqrt k\big)$.
|
2202.12885
|
Sadra Seyedmasoumian
|
Sadra Seyedmasoumian, Tolga M. Duman
|
Approximate Weight Distribution of Polarization-Adjusted Convolutional
(PAC) Codes
|
6 pages, 5 figures
|
2022 IEEE International Symposium on Information Theory (ISIT)
|
10.1109/ISIT50566.2022.9834587
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polarization-adjusted convolutional (PAC) codes combine the polar and
convolutional transformations to enhance the distance properties of polar
codes. They offer a performance very close to the finite length
information-theoretic bounds for short block lengths. In this paper, we develop
a method of computing the weight distribution of PAC codes in an approximate
form by employing a probabilistic technique. We demonstrate that the results
well match the exact weight distributions for small codes that can be computed
using a brute-force algorithm. We also present a way employing the results
(along with the union bound on the code performance) to design specific PAC
codes, more precisely, to determine suitable rate profiles via simulated
annealing. Numerical examples illustrate that the PAC codes with the designed
rate profiles offer superior performance.
|
[
{
"created": "Fri, 25 Feb 2022 18:53:36 GMT",
"version": "v1"
}
] |
2022-11-17
|
[
[
"Seyedmasoumian",
"Sadra",
""
],
[
"Duman",
"Tolga M.",
""
]
] |
Polarization-adjusted convolutional (PAC) codes combine the polar and convolutional transformations to enhance the distance properties of polar codes. They offer a performance very close to the finite length information-theoretic bounds for short block lengths. In this paper, we develop a method of computing the weight distribution of PAC codes in an approximate form by employing a probabilistic technique. We demonstrate that the results well match the exact weight distributions for small codes that can be computed using a brute-force algorithm. We also present a way employing the results (along with the union bound on the code performance) to design specific PAC codes, more precisely, to determine suitable rate profiles via simulated annealing. Numerical examples illustrate that the PAC codes with the designed rate profiles offer superior performance.
|
1706.03824
|
Baskaran Sankaran
|
Baskaran Sankaran, Markus Freitag and Yaser Al-Onaizan
|
Attention-based Vocabulary Selection for NMT Decoding
|
Submitted to Second Conference on Machine Translation (WMT-17); 7
pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Machine Translation (NMT) models usually use large target vocabulary
sizes to capture most of the words in the target language. The vocabulary size
is a big factor when decoding new sentences as the final softmax layer
normalizes over all possible target words. To address this problem, it is
widely common to restrict the target vocabulary with candidate lists based on
the source sentence. Usually, the candidate lists are a combination of external
word-to-word aligner, phrase table entries or most frequent words. In this
work, we propose a simple and yet novel approach to learn candidate lists
directly from the attention layer during NMT training. The candidate lists are
highly optimized for the current NMT model and do not need any external
computation of the candidate pool. We show significant decoding speedup
compared with using the entire vocabulary, without losing any translation
quality for two language pairs.
|
[
{
"created": "Mon, 12 Jun 2017 19:51:00 GMT",
"version": "v1"
}
] |
2017-06-14
|
[
[
"Sankaran",
"Baskaran",
""
],
[
"Freitag",
"Markus",
""
],
[
"Al-Onaizan",
"Yaser",
""
]
] |
Neural Machine Translation (NMT) models usually use large target vocabulary sizes to capture most of the words in the target language. The vocabulary size is a big factor when decoding new sentences as the final softmax layer normalizes over all possible target words. To address this problem, it is widely common to restrict the target vocabulary with candidate lists based on the source sentence. Usually, the candidate lists are a combination of external word-to-word aligner, phrase table entries or most frequent words. In this work, we propose a simple and yet novel approach to learn candidate lists directly from the attention layer during NMT training. The candidate lists are highly optimized for the current NMT model and do not need any external computation of the candidate pool. We show significant decoding speedup compared with using the entire vocabulary, without losing any translation quality for two language pairs.
|
1409.6624
|
Bernhard Rumpe
|
Holger Krahn, Bernhard Rumpe, Steven V\"olkel
|
Integrated Definition of Abstract and Concrete Syntax for Textual
Languages
|
15 pages, 12 figures. arXiv admin note: text overlap with
arXiv:1409.2367
|
Proceedings of the ACM/IEEE 10th International Conference on Model
Driven Engineering Languages and Systems (MODELS 2007), Nashville, TN, USA,
LNCS 4735
|
10.1007/978-3-540-75209-7_20
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An understandable concrete syntax and a comprehensible abstract syntax are
two central aspects of defining a modeling language. Both representations of a
language significantly overlap in their structure and also information, but may
also differ in parts of the information. To avoid discrepancies and problems
while handling the language, concrete and abstract syntax need to be
consistently defined. This will become an even bigger problem, when domain
specific languages will become used to a larger extent. In this paper we
present an extended grammar format that avoids redundancy between concrete and
abstract syntax by allowing an integrated definition of both for textual
modeling languages. For an amendment of the usability of the abstract syntax it
furthermore integrates meta-modeling concepts like associations and inheritance
into a well-understood grammar-based approach. This forms a sound foundation
for an extensible grammar and therefore language definition.
|
[
{
"created": "Mon, 22 Sep 2014 12:39:44 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Krahn",
"Holger",
""
],
[
"Rumpe",
"Bernhard",
""
],
[
"Völkel",
"Steven",
""
]
] |
An understandable concrete syntax and a comprehensible abstract syntax are two central aspects of defining a modeling language. Both representations of a language significantly overlap in their structure and also information, but may also differ in parts of the information. To avoid discrepancies and problems while handling the language, concrete and abstract syntax need to be consistently defined. This will become an even bigger problem, when domain specific languages will become used to a larger extent. In this paper we present an extended grammar format that avoids redundancy between concrete and abstract syntax by allowing an integrated definition of both for textual modeling languages. For an amendment of the usability of the abstract syntax it furthermore integrates meta-modeling concepts like associations and inheritance into a well-understood grammar-based approach. This forms a sound foundation for an extensible grammar and therefore language definition.
|
2310.14858
|
Patrick Holzer
|
Patrick Holzer, Tania Jacob, Shubham Kavane
|
Dynamically Weighted Federated k-Means
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated clustering, an integral aspect of federated machine learning,
enables multiple data sources to collaboratively cluster their data,
maintaining decentralization and preserving privacy. In this paper, we
introduce a novel federated clustering algorithm named Dynamically Weighted
Federated k-means (DWF k-means) based on Lloyd's method for k-means clustering,
to address the challenges associated with distributed data sources and
heterogeneous data. Our proposed algorithm combines the benefits of traditional
clustering techniques with the privacy and scalability benefits offered by
federated learning. The algorithm facilitates collaborative clustering among
multiple data owners, allowing them to cluster their local data collectively
while exchanging minimal information with the central coordinator. The
algorithm optimizes the clustering process by adaptively aggregating cluster
assignments and centroids from each data source, thereby learning a global
clustering solution that reflects the collective knowledge of the entire
federated network. We address the issue of empty clusters, which commonly
arises in the context of federated clustering. We conduct experiments on
multiple datasets and data distribution settings to evaluate the performance of
our algorithm in terms of clustering score, accuracy, and v-measure. The
results demonstrate that our approach can match the performance of the
centralized classical k-means baseline, and outperform existing federated
clustering methods like k-FED in realistic scenarios.
|
[
{
"created": "Mon, 23 Oct 2023 12:28:21 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Nov 2023 10:35:48 GMT",
"version": "v2"
}
] |
2023-11-20
|
[
[
"Holzer",
"Patrick",
""
],
[
"Jacob",
"Tania",
""
],
[
"Kavane",
"Shubham",
""
]
] |
Federated clustering, an integral aspect of federated machine learning, enables multiple data sources to collaboratively cluster their data, maintaining decentralization and preserving privacy. In this paper, we introduce a novel federated clustering algorithm named Dynamically Weighted Federated k-means (DWF k-means) based on Lloyd's method for k-means clustering, to address the challenges associated with distributed data sources and heterogeneous data. Our proposed algorithm combines the benefits of traditional clustering techniques with the privacy and scalability benefits offered by federated learning. The algorithm facilitates collaborative clustering among multiple data owners, allowing them to cluster their local data collectively while exchanging minimal information with the central coordinator. The algorithm optimizes the clustering process by adaptively aggregating cluster assignments and centroids from each data source, thereby learning a global clustering solution that reflects the collective knowledge of the entire federated network. We address the issue of empty clusters, which commonly arises in the context of federated clustering. We conduct experiments on multiple datasets and data distribution settings to evaluate the performance of our algorithm in terms of clustering score, accuracy, and v-measure. The results demonstrate that our approach can match the performance of the centralized classical k-means baseline, and outperform existing federated clustering methods like k-FED in realistic scenarios.
|
2407.03652
|
Teo Susnjak
|
Teo Susnjak, Timothy R. McIntosh, Andre L. C. Barczak, Napoleon H.
Reyes, Tong Liu, Paul Watters, Malka N. Halgamuge
|
Over the Edge of Chaos? Excess Complexity as a Roadblock to Artificial
General Intelligence
| null | null | null | null |
cs.AI cs.CC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this study, we explored the progression trajectories of artificial
intelligence (AI) systems through the lens of complexity theory. We challenged
the conventional linear and exponential projections of AI advancement toward
Artificial General Intelligence (AGI) underpinned by transformer-based
architectures, and posited the existence of critical points, akin to phase
transitions in complex systems, where AI performance might plateau or regress
into instability upon exceeding a critical complexity threshold. We employed
agent-based modelling (ABM) to simulate hypothetical scenarios of AI systems'
evolution under specific assumptions, using benchmark performance as a proxy
for capability and complexity. Our simulations demonstrated how increasing the
complexity of the AI system could exceed an upper criticality threshold,
leading to unpredictable performance behaviours. Additionally, we developed a
practical methodology for detecting these critical thresholds using simulation
data and stochastic gradient descent to fine-tune detection thresholds. This
research offers a novel perspective on AI advancement that has a particular
relevance to Large Language Models (LLMs), emphasising the need for a tempered
approach to extrapolating AI's growth potential and underscoring the importance
of developing more robust and comprehensive AI performance benchmarks.
|
[
{
"created": "Thu, 4 Jul 2024 05:46:39 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Susnjak",
"Teo",
""
],
[
"McIntosh",
"Timothy R.",
""
],
[
"Barczak",
"Andre L. C.",
""
],
[
"Reyes",
"Napoleon H.",
""
],
[
"Liu",
"Tong",
""
],
[
"Watters",
"Paul",
""
],
[
"Halgamuge",
"Malka N.",
""
]
] |
In this study, we explored the progression trajectories of artificial intelligence (AI) systems through the lens of complexity theory. We challenged the conventional linear and exponential projections of AI advancement toward Artificial General Intelligence (AGI) underpinned by transformer-based architectures, and posited the existence of critical points, akin to phase transitions in complex systems, where AI performance might plateau or regress into instability upon exceeding a critical complexity threshold. We employed agent-based modelling (ABM) to simulate hypothetical scenarios of AI systems' evolution under specific assumptions, using benchmark performance as a proxy for capability and complexity. Our simulations demonstrated how increasing the complexity of the AI system could exceed an upper criticality threshold, leading to unpredictable performance behaviours. Additionally, we developed a practical methodology for detecting these critical thresholds using simulation data and stochastic gradient descent to fine-tune detection thresholds. This research offers a novel perspective on AI advancement that has a particular relevance to Large Language Models (LLMs), emphasising the need for a tempered approach to extrapolating AI's growth potential and underscoring the importance of developing more robust and comprehensive AI performance benchmarks.
|
0910.5947
|
Jennifer Kloke
|
Jennifer Kloke and Gunnar Carlsson
|
Topological De-Noising: Strengthening the Topological Signal
|
13 pages, 37 figures, content added
| null | null | null |
cs.CG cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Topological methods, including persistent homology, are powerful tools for
analysis of high-dimensional data sets but these methods rely almost
exclusively on thresholding techniques. In noisy data sets, thresholding does
not always allow for the recovery of topological information. We present an
easy to implement, computationally efficient pre-processing algorithm to
prepare noisy point cloud data sets for topological data analysis. The
topological de-noising algorithm allows for the recovery of topological
information that is inaccessible by thresholding methods. We apply the
algorithm to synthetically-generated noisy data sets and show the recovery of
topological information which is impossible to obtain by thresholding. We also
apply the algorithm to natural image data in R^8 and show a very clean recovery
of topological information previously only available with large amounts of
thresholding. Finally, we discuss future directions for improving this
algorithm using zig-zag persistence methods.
|
[
{
"created": "Fri, 30 Oct 2009 19:08:44 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Feb 2010 23:50:41 GMT",
"version": "v2"
}
] |
2016-09-08
|
[
[
"Kloke",
"Jennifer",
""
],
[
"Carlsson",
"Gunnar",
""
]
] |
Topological methods, including persistent homology, are powerful tools for analysis of high-dimensional data sets but these methods rely almost exclusively on thresholding techniques. In noisy data sets, thresholding does not always allow for the recovery of topological information. We present an easy to implement, computationally efficient pre-processing algorithm to prepare noisy point cloud data sets for topological data analysis. The topological de-noising algorithm allows for the recovery of topological information that is inaccessible by thresholding methods. We apply the algorithm to synthetically-generated noisy data sets and show the recovery of topological information which is impossible to obtain by thresholding. We also apply the algorithm to natural image data in R^8 and show a very clean recovery of topological information previously only available with large amounts of thresholding. Finally, we discuss future directions for improving this algorithm using zig-zag persistence methods.
|
2111.15651
|
Stuart Synakowski
|
Stuart Synakowski, Fabian Benitez-Quiroz, Aleix M. Martinez
|
Leveraging The Topological Consistencies of Learning in Deep Neural
Networks
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, methods have been developed to accurately predict the testing
performance of a Deep Neural Network (DNN) on a particular task, given
statistics of its underlying topological structure. However, further leveraging
this newly found insight for practical applications is intractable due to the
high computational cost in terms of time and memory. In this work, we define a
new class of topological features that accurately characterize the progress of
learning while being quick to compute during running time. Additionally, our
proposed topological features are readily equipped for backpropagation, meaning
that they can be incorporated in end-to-end training. Our newly developed
practical topological characterization of DNNs allows for an additional set of
applications. We first show we can predict the performance of a DNN without a
testing set and without the need for high-performance computing. We also
demonstrate our topological characterization of DNNs is effective in estimating
task similarity. Lastly, we show we can induce learning in DNNs by actively
constraining the DNN's topological structure. This opens up new avenues in
constricting the underlying structure of DNNs in a meta-learning framework.
|
[
{
"created": "Tue, 30 Nov 2021 18:34:48 GMT",
"version": "v1"
}
] |
2021-12-01
|
[
[
"Synakowski",
"Stuart",
""
],
[
"Benitez-Quiroz",
"Fabian",
""
],
[
"Martinez",
"Aleix M.",
""
]
] |
Recently, methods have been developed to accurately predict the testing performance of a Deep Neural Network (DNN) on a particular task, given statistics of its underlying topological structure. However, further leveraging this newly found insight for practical applications is intractable due to the high computational cost in terms of time and memory. In this work, we define a new class of topological features that accurately characterize the progress of learning while being quick to compute during running time. Additionally, our proposed topological features are readily equipped for backpropagation, meaning that they can be incorporated in end-to-end training. Our newly developed practical topological characterization of DNNs allows for an additional set of applications. We first show we can predict the performance of a DNN without a testing set and without the need for high-performance computing. We also demonstrate our topological characterization of DNNs is effective in estimating task similarity. Lastly, we show we can induce learning in DNNs by actively constraining the DNN's topological structure. This opens up new avenues in constricting the underlying structure of DNNs in a meta-learning framework.
|
2204.07196
|
Arsen Vasilyan
|
Ronitt Rubinfeld and Arsen Vasilyan
|
Testing distributional assumptions of learning algorithms
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are many high dimensional function classes that have fast agnostic
learning algorithms when assumptions on the distribution of examples can be
made, such as Gaussianity or uniformity over the domain. But how can one be
confident that data indeed satisfies such assumption, so that one can trust in
output quality of the agnostic learning algorithm? We propose a model by which
to systematically study the design of tester-learner pairs
$(\mathcal{A},\mathcal{T})$, such that if the distribution on examples in the
data passes the tester $\mathcal{T}$ then one can safely trust the output of
the agnostic learner $\mathcal{A}$ on the data.
To demonstrate the power of the model, we apply it to the classical problem
of agnostically learning halfspaces under the standard Gaussian distribution
and present a tester-learner pair with combined run-time of
$n^{\tilde{O}(1/\epsilon^4)}$. This qualitatively matches that of the best
known ordinary agnostic learning algorithms for this task. In contrast, finite
sample Gaussianity testers do not exist for the $L_1$ and EMD distance
measures. A key step is to show that half-spaces are well-approximated with
low-degree polynomials relative to distributions with low-degree moments close
to those of a Gaussian.
We also go beyond spherically-symmetric distributions, and give a
tester-learner pair for halfspaces under the uniform distribution on
$\{0,1\}^n$ with combined run-time of $n^{\tilde{O}(1/\epsilon^4)}$. This is
achieved using polynomial approximation theory and critical index machinery.
We also show there exist some well-studied settings where
$2^{\tilde{O}(\sqrt{n})}$ run-time agnostic learning algorithms are available,
yet the combined run-times of tester-learner pairs must be as high as
$2^{\Omega(n)}$. On that account, the design of tester-learner pairs is a
research direction in its own right independent of standard agnostic learning.
|
[
{
"created": "Thu, 14 Apr 2022 19:10:53 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Nov 2022 01:01:47 GMT",
"version": "v2"
}
] |
2022-11-22
|
[
[
"Rubinfeld",
"Ronitt",
""
],
[
"Vasilyan",
"Arsen",
""
]
] |
There are many high dimensional function classes that have fast agnostic learning algorithms when assumptions on the distribution of examples can be made, such as Gaussianity or uniformity over the domain. But how can one be confident that data indeed satisfies such assumption, so that one can trust in output quality of the agnostic learning algorithm? We propose a model by which to systematically study the design of tester-learner pairs $(\mathcal{A},\mathcal{T})$, such that if the distribution on examples in the data passes the tester $\mathcal{T}$ then one can safely trust the output of the agnostic learner $\mathcal{A}$ on the data. To demonstrate the power of the model, we apply it to the classical problem of agnostically learning halfspaces under the standard Gaussian distribution and present a tester-learner pair with combined run-time of $n^{\tilde{O}(1/\epsilon^4)}$. This qualitatively matches that of the best known ordinary agnostic learning algorithms for this task. In contrast, finite sample Gaussianity testers do not exist for the $L_1$ and EMD distance measures. A key step is to show that half-spaces are well-approximated with low-degree polynomials relative to distributions with low-degree moments close to those of a Gaussian. We also go beyond spherically-symmetric distributions, and give a tester-learner pair for halfspaces under the uniform distribution on $\{0,1\}^n$ with combined run-time of $n^{\tilde{O}(1/\epsilon^4)}$. This is achieved using polynomial approximation theory and critical index machinery. We also show there exist some well-studied settings where $2^{\tilde{O}(\sqrt{n})}$ run-time agnostic learning algorithms are available, yet the combined run-times of tester-learner pairs must be as high as $2^{\Omega(n)}$. On that account, the design of tester-learner pairs is a research direction in its own right independent of standard agnostic learning.
|
1911.03607
|
Ke Xu
|
Ke Xu, Kaiyu Guan, Jian Peng, Yunan Luo, Sibo Wang
|
DeepMask: an algorithm for cloud and cloud shadow detection in optical
satellite remote sensing images using deep residual network
|
17 pages, 4 figures, 6 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting and masking cloud and cloud shadow from satellite remote sensing
images is a pervasive problem in the remote sensing community. Accurate and
efficient detection of cloud and cloud shadow is an essential step to harness
the value of remotely sensed data for almost all downstream analysis. DeepMask,
a new algorithm for cloud and cloud shadow detection in optical satellite
remote sensing imagery, is proposed in this study. DeepMask utilizes ResNet, a
deep convolutional neural network, for pixel-level cloud mask generation. The
algorithm is trained and evaluated on the Landsat 8 Cloud Cover Assessment
Validation Dataset distributed across 8 different land types. Compared with
CFMask, the most widely used cloud detection algorithm, land-type-specific
DeepMask models achieve higher accuracy across all land types. The average
accuracy is 93.56%, compared with 85.36% from CFMask. DeepMask also achieves
91.02% accuracy on all-land-type dataset. Compared with other CNN-based cloud
mask algorithms, DeepMask benefits from the parsimonious architecture and the
residual connection of ResNet. It is compatible with input of any size and
shape. DeepMask still maintains high performance when using only red, green,
blue, and NIR bands, indicating its potential to be applied to other satellite
platforms that only have limited optical bands.
|
[
{
"created": "Sat, 9 Nov 2019 03:44:07 GMT",
"version": "v1"
}
] |
2019-11-12
|
[
[
"Xu",
"Ke",
""
],
[
"Guan",
"Kaiyu",
""
],
[
"Peng",
"Jian",
""
],
[
"Luo",
"Yunan",
""
],
[
"Wang",
"Sibo",
""
]
] |
Detecting and masking cloud and cloud shadow from satellite remote sensing images is a pervasive problem in the remote sensing community. Accurate and efficient detection of cloud and cloud shadow is an essential step to harness the value of remotely sensed data for almost all downstream analysis. DeepMask, a new algorithm for cloud and cloud shadow detection in optical satellite remote sensing imagery, is proposed in this study. DeepMask utilizes ResNet, a deep convolutional neural network, for pixel-level cloud mask generation. The algorithm is trained and evaluated on the Landsat 8 Cloud Cover Assessment Validation Dataset distributed across 8 different land types. Compared with CFMask, the most widely used cloud detection algorithm, land-type-specific DeepMask models achieve higher accuracy across all land types. The average accuracy is 93.56%, compared with 85.36% from CFMask. DeepMask also achieves 91.02% accuracy on all-land-type dataset. Compared with other CNN-based cloud mask algorithms, DeepMask benefits from the parsimonious architecture and the residual connection of ResNet. It is compatible with input of any size and shape. DeepMask still maintains high performance when using only red, green, blue, and NIR bands, indicating its potential to be applied to other satellite platforms that only have limited optical bands.
|