| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.15731
|
Domenico Tortorella
|
Domenico Tortorella, Alessio Micheli
|
Beyond Homophily with Graph Echo State Networks
|
Accepted for oral presentation at ESANN 2022
|
Proceedings of the 30th European Symposium on Artificial Neural
Networks, Computational Intelligence and Machine Learning (ESANN 2022), pp.
491-496
|
10.14428/esann/2022.ES2022-58
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Echo State Networks (GESN) have already demonstrated their efficacy and
efficiency in graph classification tasks. However, semi-supervised node
classification brought out the problem of over-smoothing in end-to-end trained
deep models, which causes a bias towards high homophily graphs. We evaluate for
the first time GESN on node classification tasks with different degrees of
homophily, also analyzing the impact of the reservoir radius. Our experiments
show that reservoir models are able to achieve better or comparable accuracy
with respect to fully trained deep models that implement ad hoc variations in
the architectural bias, with a gain in terms of efficiency.
|
[
{
"created": "Thu, 27 Oct 2022 19:25:56 GMT",
"version": "v1"
}
] |
2022-10-31
|
[
[
"Tortorella",
"Domenico",
""
],
[
"Micheli",
"Alessio",
""
]
] |
Graph Echo State Networks (GESN) have already demonstrated their efficacy and efficiency in graph classification tasks. However, semi-supervised node classification brought out the problem of over-smoothing in end-to-end trained deep models, which causes a bias towards high homophily graphs. We evaluate for the first time GESN on node classification tasks with different degrees of homophily, also analyzing the impact of the reservoir radius. Our experiments show that reservoir models are able to achieve better or comparable accuracy with respect to fully trained deep models that implement ad hoc variations in the architectural bias, with a gain in terms of efficiency.
|
2404.00494
|
Nathaniel Dennler
|
Nathaniel S. Dennler, Mina Kian, Stefanos Nikolaidis, and Maja Matarić
|
Designing Robot Identity: The Role of Voice, Clothing, and Task on Robot
Gender Perception
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Perceptions of gender are a significant aspect of human-human interaction,
and gender has wide-reaching social implications for robots deployed in
contexts where they are expected to interact with humans. This work explored
two flexible modalities for communicating gender in robots--voice and
appearance--and we studied their individual and combined influences on a
robot's perceived gender. We evaluated the perception of a robot's gender
through three video-based studies. First, we conducted a study (n=65) on the
gender perception of robot voices by varying speaker identity and pitch.
Second, we conducted a study (n=93) on the gender perception of robot clothing
designed for two different tasks. Finally, building on the results of the first
two studies, we completed a large integrative video-based study (n=273)
involving two human-robot interaction tasks. We found that voice and clothing
can be used to reliably establish a robot's perceived gender, and that
combining these two modalities can have different effects on the robot's
perceived gender. Taken together, these results inform the design of robot
voices and clothing as individual and interacting components in the perceptions
of robot gender.
|
[
{
"created": "Sat, 30 Mar 2024 23:27:39 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Dennler",
"Nathaniel S.",
""
],
[
"Kian",
"Mina",
""
],
[
"Nikolaidis",
"Stefanos",
""
],
[
"Matarić",
"Maja",
""
]
] |
Perceptions of gender are a significant aspect of human-human interaction, and gender has wide-reaching social implications for robots deployed in contexts where they are expected to interact with humans. This work explored two flexible modalities for communicating gender in robots--voice and appearance--and we studied their individual and combined influences on a robot's perceived gender. We evaluated the perception of a robot's gender through three video-based studies. First, we conducted a study (n=65) on the gender perception of robot voices by varying speaker identity and pitch. Second, we conducted a study (n=93) on the gender perception of robot clothing designed for two different tasks. Finally, building on the results of the first two studies, we completed a large integrative video-based study (n=273) involving two human-robot interaction tasks. We found that voice and clothing can be used to reliably establish a robot's perceived gender, and that combining these two modalities can have different effects on the robot's perceived gender. Taken together, these results inform the design of robot voices and clothing as individual and interacting components in the perceptions of robot gender.
|
2002.03736
|
Yaozu Ye
|
Yaozu Ye, Kailun Yang, Kaite Xiang, Juan Wang and Kaiwei Wang
|
Universal Semantic Segmentation for Fisheye Urban Driving Images
|
SMC2020 received
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic segmentation is a critical method in the field of autonomous
driving. When performing semantic image segmentation, a wider field of view
(FoV) helps to obtain more information about the surrounding environment,
making automated driving safer and more reliable; such a wide FoV can be
offered by fisheye cameras. However, large public fisheye datasets are not
available, and the fisheye images captured by fisheye cameras with large FoV
come with large distortion, so commonly used semantic segmentation models
cannot be directly utilized. In this paper, a seven-degrees-of-freedom (DoF)
augmentation method is proposed to transform rectilinear images into fisheye
images in a more comprehensive way. In the training process, rectilinear
images are transformed into fisheye images in seven DoF, which simulates
fisheye images taken by cameras at different positions, orientations, and
focal lengths. The results show that training with the seven-DoF augmentation
can improve the model's accuracy and robustness against differently distorted
fisheye data. This seven-DoF augmentation provides a universal semantic
segmentation solution for fisheye cameras in different autonomous driving
applications. We also provide specific parameter settings of the augmentation
for autonomous driving. Finally, we tested our universal semantic segmentation
model on real fisheye images and obtained satisfactory results. The code and
configurations are released at https://github.com/Yaozhuwa/FisheyeSeg.
|
[
{
"created": "Fri, 31 Jan 2020 11:19:00 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Aug 2020 13:02:09 GMT",
"version": "v2"
}
] |
2020-08-25
|
[
[
"Ye",
"Yaozu",
""
],
[
"Yang",
"Kailun",
""
],
[
"Xiang",
"Kaite",
""
],
[
"Wang",
"Juan",
""
],
[
"Wang",
"Kaiwei",
""
]
] |
Semantic segmentation is a critical method in the field of autonomous driving. When performing semantic image segmentation, a wider field of view (FoV) helps to obtain more information about the surrounding environment, making automated driving safer and more reliable; such a wide FoV can be offered by fisheye cameras. However, large public fisheye datasets are not available, and the fisheye images captured by fisheye cameras with large FoV come with large distortion, so commonly used semantic segmentation models cannot be directly utilized. In this paper, a seven-degrees-of-freedom (DoF) augmentation method is proposed to transform rectilinear images into fisheye images in a more comprehensive way. In the training process, rectilinear images are transformed into fisheye images in seven DoF, which simulates fisheye images taken by cameras at different positions, orientations, and focal lengths. The results show that training with the seven-DoF augmentation can improve the model's accuracy and robustness against differently distorted fisheye data. This seven-DoF augmentation provides a universal semantic segmentation solution for fisheye cameras in different autonomous driving applications. We also provide specific parameter settings of the augmentation for autonomous driving. Finally, we tested our universal semantic segmentation model on real fisheye images and obtained satisfactory results. The code and configurations are released at https://github.com/Yaozhuwa/FisheyeSeg.
|
2001.01469
|
Vishwanath D
|
Shubham Paliwal, Vishwanath D, Rohit Rahul, Monika Sharma, Lovekesh
Vig
|
TableNet: Deep Learning model for end-to-end Table detection and Tabular
data extraction from Scanned Document Images
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the widespread use of mobile phones and scanners to photograph and
upload documents, the need for extracting the information trapped in
unstructured document images such as retail receipts, insurance claim forms and
financial invoices is becoming more acute. A major hurdle to this objective is
that these images often contain information in the form of tables and
extracting data from tabular sub-images presents a unique set of challenges.
This includes accurate detection of the tabular region within an image, and
subsequently detecting and extracting information from the rows and columns of
the detected table. While some progress has been made in table detection,
extracting the table contents is still a challenge since this involves more
fine-grained table structure (rows & columns) recognition. Prior approaches have
attempted to solve the table detection and structure recognition problems
independently using two separate models. In this paper, we propose TableNet: a
novel end-to-end deep learning model for both table detection and structure
recognition. The model exploits the interdependence between the twin tasks of
table detection and table structure recognition to segment out the table and
column regions. This is followed by semantic rule-based row extraction from the
identified tabular sub-regions. The proposed model and extraction approach were
evaluated on the publicly available ICDAR 2013 and Marmot Table datasets,
obtaining state-of-the-art results. Additionally, we demonstrate that feeding
additional semantic features further improves model performance and that the
model exhibits transfer learning across datasets. Another contribution of this
paper is to provide additional table structure annotations for the Marmot data,
which currently only has annotations for table detection.
|
[
{
"created": "Mon, 6 Jan 2020 10:25:32 GMT",
"version": "v1"
}
] |
2020-01-07
|
[
[
"Paliwal",
"Shubham",
""
],
[
"D",
"Vishwanath",
""
],
[
"Rahul",
"Rohit",
""
],
[
"Sharma",
"Monika",
""
],
[
"Vig",
"Lovekesh",
""
]
] |
With the widespread use of mobile phones and scanners to photograph and upload documents, the need for extracting the information trapped in unstructured document images such as retail receipts, insurance claim forms and financial invoices is becoming more acute. A major hurdle to this objective is that these images often contain information in the form of tables and extracting data from tabular sub-images presents a unique set of challenges. This includes accurate detection of the tabular region within an image, and subsequently detecting and extracting information from the rows and columns of the detected table. While some progress has been made in table detection, extracting the table contents is still a challenge since this involves more fine-grained table structure (rows & columns) recognition. Prior approaches have attempted to solve the table detection and structure recognition problems independently using two separate models. In this paper, we propose TableNet: a novel end-to-end deep learning model for both table detection and structure recognition. The model exploits the interdependence between the twin tasks of table detection and table structure recognition to segment out the table and column regions. This is followed by semantic rule-based row extraction from the identified tabular sub-regions. The proposed model and extraction approach were evaluated on the publicly available ICDAR 2013 and Marmot Table datasets, obtaining state-of-the-art results. Additionally, we demonstrate that feeding additional semantic features further improves model performance and that the model exhibits transfer learning across datasets. Another contribution of this paper is to provide additional table structure annotations for the Marmot data, which currently only has annotations for table detection.
|
1912.12214
|
Tin Huynh Van
|
Tin Van Huynh, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen, Anh Gia-Tuan
Nguyen
|
Job Prediction: From Deep Neural Network Models to Applications
|
Accepted by IEEE RIVF 2020 Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Determining which job is suitable for a student or a person looking for work
based on a job's description, such as the required knowledge and skills, is
difficult, just as employers must find ways to choose the candidates that
match the jobs they offer. In this paper, we focus on studying job prediction
using different deep neural network models including TextCNN, Bi-GRU-LSTM-CNN,
and Bi-GRU-CNN with various pre-trained word embeddings on the IT Job dataset.
In addition, we also propose a simple and effective ensemble model combining
different deep neural network models. The experimental results show that our
proposed ensemble model achieved the highest result with an F1 score of
72.71%. Moreover, we analyze these experimental results to gain insights into
this problem and find better solutions in the future.
|
[
{
"created": "Fri, 27 Dec 2019 16:13:43 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jan 2020 09:36:49 GMT",
"version": "v2"
}
] |
2020-02-03
|
[
[
"Van Huynh",
"Tin",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
],
[
"Nguyen",
"Anh Gia-Tuan",
""
]
] |
Determining which job is suitable for a student or a person looking for work based on a job's description, such as the required knowledge and skills, is difficult, just as employers must find ways to choose the candidates that match the jobs they offer. In this paper, we focus on studying job prediction using different deep neural network models including TextCNN, Bi-GRU-LSTM-CNN, and Bi-GRU-CNN with various pre-trained word embeddings on the IT Job dataset. In addition, we also propose a simple and effective ensemble model combining different deep neural network models. The experimental results show that our proposed ensemble model achieved the highest result with an F1 score of 72.71%. Moreover, we analyze these experimental results to gain insights into this problem and find better solutions in the future.
|
1912.02820
|
Vikram Sharma
|
Prashant Batra, Vikram Sharma
|
Complexity of a Root Clustering Algorithm
|
52 pages, 1 figure
| null | null | null |
cs.DS cs.CC cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximating the roots of a holomorphic function in an input box is a
fundamental problem in many domains. Most algorithms in the literature for
solving this problem are conditional, i.e., they make some simplifying
assumptions, such as that all the roots are simple or there are no roots on the
boundary of the input box, or the underlying machine model is Real RAM. Root
clustering is a generalization of the root approximation problem that allows
for errors in the computation and makes no assumption on the multiplicity of
the roots. An unconditional algorithm for computing a root clustering of a
holomorphic function was given by Yap, Sagraloff and Sharma in 2013. They
proposed a subdivision based algorithm using effective predicates based on
Pellet's test while avoiding any comparison with zeros (using soft zero
comparisons instead). In this paper, we analyze the running time of their
algorithm. We use the continuous amortization framework to derive an upper
bound on the size of the subdivision tree. We specialize this bound to the case
of polynomials and some simple transcendental functions such as exponential and
trigonometric sine. We show that the algorithm takes exponential time even for
these simple functions, unlike the case of polynomials. We also derive a bound
on the bit-precision used by the algorithm. To the best of our knowledge, this
is the first such result for holomorphic functions. We introduce new geometric
parameters, such as the relative growth of the function on the input box, for
analyzing the algorithm. Thus, our estimates naturally generalize the known
results, i.e., for the case of polynomials.
|
[
{
"created": "Thu, 5 Dec 2019 08:59:16 GMT",
"version": "v1"
}
] |
2019-12-09
|
[
[
"Batra",
"Prashant",
""
],
[
"Sharma",
"Vikram",
""
]
] |
Approximating the roots of a holomorphic function in an input box is a fundamental problem in many domains. Most algorithms in the literature for solving this problem are conditional, i.e., they make some simplifying assumptions, such as that all the roots are simple or there are no roots on the boundary of the input box, or the underlying machine model is Real RAM. Root clustering is a generalization of the root approximation problem that allows for errors in the computation and makes no assumption on the multiplicity of the roots. An unconditional algorithm for computing a root clustering of a holomorphic function was given by Yap, Sagraloff and Sharma in 2013. They proposed a subdivision based algorithm using effective predicates based on Pellet's test while avoiding any comparison with zeros (using soft zero comparisons instead). In this paper, we analyze the running time of their algorithm. We use the continuous amortization framework to derive an upper bound on the size of the subdivision tree. We specialize this bound to the case of polynomials and some simple transcendental functions such as exponential and trigonometric sine. We show that the algorithm takes exponential time even for these simple functions, unlike the case of polynomials. We also derive a bound on the bit-precision used by the algorithm. To the best of our knowledge, this is the first such result for holomorphic functions. We introduce new geometric parameters, such as the relative growth of the function on the input box, for analyzing the algorithm. Thus, our estimates naturally generalize the known results, i.e., for the case of polynomials.
|
2311.04924
|
Andrej Lucny
|
Andrej Lucny, Pavel Petrovic
|
Tuning-less Object Naming with a Foundation Model
|
This work was funded (or co-funded) by the Horizon-Widera-2021 European
Twinning project TERAIS, G.A. n. 101079338. World Symposium on Digital
Intelligence for Systems and Machines (DISA2023), Kosice, September 21-22,
2023. Citations: https://ieeexplore.ieee.org/document/10308905; code:
https://github.com/andylucny/whatisthis,
https://doi.org/10.5281/zenodo.10702868. 7 pages, 9 figures, 0 tables
|
2023 World Symposium on Digital Intelligence for Systems and
Machines (DISA) https://ieeexplore.ieee.org/xpl/conhome/10308901/proceeding
pages 154-160
|
10.1109/DISA59116.2023
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We implement a real-time object naming system that enables learning a set of
named entities never seen before. Our approach employs an existing foundation
model that we consider ready to see anything before starting. It turns seen
images into relatively small feature vectors that we associate with indices
into a gradually built vocabulary, without any training or fine-tuning of the
model. Our contribution is using the association mechanism known from
transformers as attention. It has features that support abstraction from
information irrelevant for distinguishing the entities, and it potentially
enables associating with much more than indices into a vocabulary. As a
result, the system can work in a one-shot manner and correctly name objects
presented in different contexts. We also outline implementation details of the
system modules, which are integrated by a blackboard architecture. Finally, we
investigate the system's quality, mainly how many objects it can handle in
this way.
|
[
{
"created": "Fri, 3 Nov 2023 09:11:49 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2024 13:08:43 GMT",
"version": "v2"
}
] |
2024-02-27
|
[
[
"Lucny",
"Andrej",
""
],
[
"Petrovic",
"Pavel",
""
]
] |
We implement a real-time object naming system that enables learning a set of named entities never seen before. Our approach employs an existing foundation model that we consider ready to see anything before starting. It turns seen images into relatively small feature vectors that we associate with indices into a gradually built vocabulary, without any training or fine-tuning of the model. Our contribution is using the association mechanism known from transformers as attention. It has features that support abstraction from information irrelevant for distinguishing the entities, and it potentially enables associating with much more than indices into a vocabulary. As a result, the system can work in a one-shot manner and correctly name objects presented in different contexts. We also outline implementation details of the system modules, which are integrated by a blackboard architecture. Finally, we investigate the system's quality, mainly how many objects it can handle in this way.
|
1209.3356
|
Rajkumar Buyya
|
Rajkumar Buyya, Rodrigo N. Calheiros, and Xiaorong Li
|
Autonomic Cloud Computing: Open Challenges and Architectural Elements
|
8 pages, 6 figures, conference keynote paper
|
Proceedings of the Third International Conference of Emerging
Applications of Information Technology (EAIT 2012, IEEE Press, USA), Kolkata,
India, November 29-December 01, 2012
|
10.1109/EAIT.2012.6407847
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As Clouds are complex, large-scale, and heterogeneous distributed systems,
management of their resources is a challenging task. They need automated and
integrated intelligent strategies for provisioning of resources to offer
services that are secure, reliable, and cost-efficient. Hence, effective
management of services becomes fundamental in software platforms that
constitute the fabric of computing Clouds. In this direction, this paper
identifies open issues in autonomic resource provisioning and presents
innovative management techniques for supporting SaaS applications hosted on
Clouds. We present a conceptual architecture and early results evidencing the
benefits of autonomic management of Clouds.
|
[
{
"created": "Sat, 15 Sep 2012 04:40:46 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Buyya",
"Rajkumar",
""
],
[
"Calheiros",
"Rodrigo N.",
""
],
[
"Li",
"Xiaorong",
""
]
] |
As Clouds are complex, large-scale, and heterogeneous distributed systems, management of their resources is a challenging task. They need automated and integrated intelligent strategies for provisioning of resources to offer services that are secure, reliable, and cost-efficient. Hence, effective management of services becomes fundamental in software platforms that constitute the fabric of computing Clouds. In this direction, this paper identifies open issues in autonomic resource provisioning and presents innovative management techniques for supporting SaaS applications hosted on Clouds. We present a conceptual architecture and early results evidencing the benefits of autonomic management of Clouds.
|
1706.05476
|
Zijian Li
|
Zijian Li, Xun Jian, Xiang Lian, Lei Chen
|
An Efficient Probabilistic Approach for Graph Similarity Search
| null | null | null | null |
cs.DB cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph similarity search is a common and fundamental operation in graph
databases. One of the most popular graph similarity measures is the Graph Edit
Distance (GED) mainly because of its broad applicability and high
interpretability. Despite its prevalence, exact GED computation has been proven
NP-hard, which could result in unsatisfactory computational efficiency on large
graphs. However, exactly accurate search results are usually unnecessary for
real-world applications especially when the responsiveness is far more
important than the accuracy. Thus, in this paper, we propose a novel
probabilistic approach to efficiently estimate GED, which is further leveraged
for the graph similarity search. Specifically, we first take branches as
elementary structures in graphs, and introduce a novel graph similarity measure
by comparing branches between graphs, i.e., Graph Branch Distance (GBD), which
can be efficiently calculated in polynomial time. Then, we formulate the
relationship between GED and GBD by considering branch variations as the result
ascribed to graph edit operations, and model this process by probabilistic
approaches. By applying our model, the GED between any two graphs can be
efficiently estimated by their GBD, and these estimations are finally utilized
in the graph similarity search. Extensive experiments show that our approach
has better accuracy, efficiency and scalability than other comparable methods
in the graph similarity search over real and synthetic data sets.
|
[
{
"created": "Sat, 17 Jun 2017 05:25:10 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jan 2018 19:42:42 GMT",
"version": "v2"
}
] |
2018-01-25
|
[
[
"Li",
"Zijian",
""
],
[
"Jian",
"Xun",
""
],
[
"Lian",
"Xiang",
""
],
[
"Chen",
"Lei",
""
]
] |
Graph similarity search is a common and fundamental operation in graph databases. One of the most popular graph similarity measures is the Graph Edit Distance (GED) mainly because of its broad applicability and high interpretability. Despite its prevalence, exact GED computation has been proven to be NP-hard, which could result in unsatisfactory computational efficiency on large graphs. However, exactly accurate search results are usually unnecessary for real-world applications especially when the responsiveness is far more important than the accuracy. Thus, in this paper, we propose a novel probabilistic approach to efficiently estimate GED, which is further leveraged for the graph similarity search. Specifically, we first take branches as elementary structures in graphs, and introduce a novel graph similarity measure by comparing branches between graphs, i.e., Graph Branch Distance (GBD), which can be efficiently calculated in polynomial time. Then, we formulate the relationship between GED and GBD by considering branch variations as the result ascribed to graph edit operations, and model this process by probabilistic approaches. By applying our model, the GED between any two graphs can be efficiently estimated by their GBD, and these estimations are finally utilized in the graph similarity search. Extensive experiments show that our approach has better accuracy, efficiency and scalability than other comparable methods in the graph similarity search over real and synthetic data sets.
|
2302.11506
|
Pranav Kadam
|
Pranav Kadam, Hardik Prajapati, Min Zhang, Jintang Xue, Shan Liu,
C.-C. Jay Kuo
|
S3I-PointHop: SO(3)-Invariant PointHop for 3D Point Cloud Classification
|
5 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many point cloud classification methods are developed under the assumption
that all point clouds in the dataset are well aligned with the canonical axes
so that the 3D Cartesian point coordinates can be employed to learn features.
When input point clouds are not aligned, the classification performance drops
significantly. In this work, we focus on a mathematically transparent point
cloud classification method called PointHop, analyze its reason for failure due
to pose variations, and solve the problem by replacing its pose dependent
modules with rotation invariant counterparts. The proposed method is named
SO(3)-Invariant PointHop (or S3I-PointHop in short). We also significantly
simplify the PointHop pipeline using only one single hop along with multiple
spatial aggregation techniques. The idea of exploiting more spatial information
is novel. Experiments on the ModelNet40 dataset demonstrate the superiority of
S3I-PointHop over traditional PointHop-like methods.
|
[
{
"created": "Wed, 22 Feb 2023 17:23:33 GMT",
"version": "v1"
}
] |
2023-02-23
|
[
[
"Kadam",
"Pranav",
""
],
[
"Prajapati",
"Hardik",
""
],
[
"Zhang",
"Min",
""
],
[
"Xue",
"Jintang",
""
],
[
"Liu",
"Shan",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
Many point cloud classification methods are developed under the assumption that all point clouds in the dataset are well aligned with the canonical axes so that the 3D Cartesian point coordinates can be employed to learn features. When input point clouds are not aligned, the classification performance drops significantly. In this work, we focus on a mathematically transparent point cloud classification method called PointHop, analyze its reason for failure due to pose variations, and solve the problem by replacing its pose dependent modules with rotation invariant counterparts. The proposed method is named SO(3)-Invariant PointHop (or S3I-PointHop in short). We also significantly simplify the PointHop pipeline using only one single hop along with multiple spatial aggregation techniques. The idea of exploiting more spatial information is novel. Experiments on the ModelNet40 dataset demonstrate the superiority of S3I-PointHop over traditional PointHop-like methods.
|
2203.10789
|
Junbum Cha
|
Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun
|
Domain Generalization by Mutual-Information Regularization with
Pre-trained Models
|
ECCV 2022 camera-ready
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain generalization (DG) aims to learn a model that generalizes to an unseen
target domain using only limited source domains. Previous attempts at DG fail
to learn domain-invariant representations from the source domains alone due to
the significant domain shifts between training and test domains. Instead, we
re-formulate the DG objective using mutual information with the oracle model, a
model generalized to any possible domain. We derive a tractable variational
lower bound via approximating the oracle model by a pre-trained model, called
Mutual Information Regularization with Oracle (MIRO). Our extensive experiments
show that MIRO significantly improves the out-of-distribution performance.
Furthermore, our scaling experiments show that the larger the scale of the
pre-trained model, the greater the performance improvement of MIRO. Source code
is available at https://github.com/kakaobrain/miro.
|
[
{
"created": "Mon, 21 Mar 2022 08:07:46 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jul 2022 06:16:48 GMT",
"version": "v2"
}
] |
2022-07-25
|
[
[
"Cha",
"Junbum",
""
],
[
"Lee",
"Kyungjae",
""
],
[
"Park",
"Sungrae",
""
],
[
"Chun",
"Sanghyuk",
""
]
] |
Domain generalization (DG) aims to learn a model that generalizes to an unseen target domain using only limited source domains. Previous attempts at DG fail to learn domain-invariant representations from the source domains alone due to the significant domain shifts between training and test domains. Instead, we re-formulate the DG objective using mutual information with the oracle model, a model generalized to any possible domain. We derive a tractable variational lower bound via approximating the oracle model by a pre-trained model, called Mutual Information Regularization with Oracle (MIRO). Our extensive experiments show that MIRO significantly improves the out-of-distribution performance. Furthermore, our scaling experiments show that the larger the scale of the pre-trained model, the greater the performance improvement of MIRO. Source code is available at https://github.com/kakaobrain/miro.
|
2012.11153
|
Xavier Porte
|
Xavier Porte, Anas Skalli, Nasibeh Haghighi, Stephan Reitzenstein,
James A. Lott, Daniel Brunner
|
A complete, parallel and autonomous photonic neural network in a
semiconductor multimode laser
|
16 pages, 4 figures
| null | null | null |
cs.NE cs.ET cs.LG physics.optics
|
http://creativecommons.org/licenses/by/4.0/
|
Neural networks are one of the disruptive computing concepts of our time.
However, they fundamentally differ from classical, algorithmic computing in a
number of fundamental aspects. These differences result in equally fundamental,
severe and relevant challenges for neural network computing using current
computing substrates. Neural networks urge for parallelism across the entire
processor and for a co-location of memory and arithmetic, i.e. beyond von
Neumann architectures. Parallelism in particular made photonics a highly
promising platform, yet until now scalable and integratable concepts are
scarce. Here, we demonstrate for the first time how a fully parallel and fully
implemented photonic neural network can be realized using spatially distributed
modes of an efficient and fast semiconductor laser. Importantly, all neural
network connections are realized in hardware, and our processor produces
results without pre- or post-processing. 130+ nodes are implemented in a
large-area vertical cavity surface emitting laser, input and output weights are
realized via the complex transmission matrix of a multimode fiber and a digital
micro-mirror array, respectively. We train the readout weights to perform 2-bit
header recognition, a 2-bit XOR and 2-bit digital analog conversion, and obtain
< 0.9 10^-3 and 2.9 10^-2 error rates for digit recognition and XOR,
respectively. Finally, the digital analog conversion can be realized with a
standard deviation of only 5.4 10^-2. Our system is scalable to much larger
sizes and to bandwidths in excess of 20 GHz.
|
[
{
"created": "Mon, 21 Dec 2020 07:03:43 GMT",
"version": "v1"
}
] |
2020-12-22
|
[
[
"Porte",
"Xavier",
""
],
[
"Skalli",
"Anas",
""
],
[
"Haghighi",
"Nasibeh",
""
],
[
"Reitzenstein",
"Stephan",
""
],
[
"Lott",
"James A.",
""
],
[
"Brunner",
"Daniel",
""
]
] |
Neural networks are one of the disruptive computing concepts of our time. However, they fundamentally differ from classical, algorithmic computing in a number of fundamental aspects. These differences result in equally fundamental, severe and relevant challenges for neural network computing using current computing substrates. Neural networks urge for parallelism across the entire processor and for a co-location of memory and arithmetic, i.e. beyond von Neumann architectures. Parallelism in particular made photonics a highly promising platform, yet until now scalable and integratable concepts are scarce. Here, we demonstrate for the first time how a fully parallel and fully implemented photonic neural network can be realized using spatially distributed modes of an efficient and fast semiconductor laser. Importantly, all neural network connections are realized in hardware, and our processor produces results without pre- or post-processing. 130+ nodes are implemented in a large-area vertical cavity surface emitting laser, input and output weights are realized via the complex transmission matrix of a multimode fiber and a digital micro-mirror array, respectively. We train the readout weights to perform 2-bit header recognition, a 2-bit XOR and 2-bit digital analog conversion, and obtain < 0.9 10^-3 and 2.9 10^-2 error rates for digit recognition and XOR, respectively. Finally, the digital analog conversion can be realized with a standard deviation of only 5.4 10^-2. Our system is scalable to much larger sizes and to bandwidths in excess of 20 GHz.
|
2204.08332
|
Luo Ziwei
|
Ziwei Luo, Youwei Li, Shen Cheng, Lei Yu, Qi Wu, Zhihong Wen, Haoqiang
Fan, Jian Sun, Shuaicheng Liu
|
BSRT: Improving Burst Super-Resolution with Swin Transformer and
Flow-Guided Deformable Alignment
|
CVPRW, Winner method in NTIRE 2022 Burst Super-Resolution Challenge
Real-World Track
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work addresses the Burst Super-Resolution (BurstSR) task using a new
architecture, which requires restoring a high-quality image from a sequence of
noisy, misaligned, and low-resolution RAW bursts. To overcome the challenges in
BurstSR, we propose a Burst Super-Resolution Transformer (BSRT), which can
significantly improve the capability of extracting inter-frame information and
reconstruction. To achieve this goal, we propose a Pyramid Flow-Guided
Deformable Convolution Network (Pyramid FG-DCN) and incorporate Swin
Transformer Blocks and Groups as our main backbone. More specifically, we
combine optical flows and deformable convolutions, hence our BSRT can handle
misalignment and aggregate the potential texture information in multi-frames
more efficiently. In addition, our Transformer-based structure can capture
long-range dependency to further improve the performance. The evaluation on
both synthetic and real-world tracks demonstrates that our approach achieves a
new state-of-the-art in BurstSR task. Further, our BSRT wins the championship
in the NTIRE2022 Burst Super-Resolution Challenge.
|
[
{
"created": "Mon, 18 Apr 2022 14:23:10 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2022 15:02:42 GMT",
"version": "v2"
}
] |
2022-04-25
|
[
[
"Luo",
"Ziwei",
""
],
[
"Li",
"Youwei",
""
],
[
"Cheng",
"Shen",
""
],
[
"Yu",
"Lei",
""
],
[
"Wu",
"Qi",
""
],
[
"Wen",
"Zhihong",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Sun",
"Jian",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
This work addresses the Burst Super-Resolution (BurstSR) task using a new architecture, which requires restoring a high-quality image from a sequence of noisy, misaligned, and low-resolution RAW bursts. To overcome the challenges in BurstSR, we propose a Burst Super-Resolution Transformer (BSRT), which can significantly improve the capability of extracting inter-frame information and reconstruction. To achieve this goal, we propose a Pyramid Flow-Guided Deformable Convolution Network (Pyramid FG-DCN) and incorporate Swin Transformer Blocks and Groups as our main backbone. More specifically, we combine optical flows and deformable convolutions, hence our BSRT can handle misalignment and aggregate the potential texture information in multi-frames more efficiently. In addition, our Transformer-based structure can capture long-range dependency to further improve the performance. The evaluation on both synthetic and real-world tracks demonstrates that our approach achieves a new state-of-the-art in BurstSR task. Further, our BSRT wins the championship in the NTIRE2022 Burst Super-Resolution Challenge.
|
2103.06172
|
Sam Corbett-Davies
|
Chlo\'e Bakalar, Renata Barreto, Stevie Bergman, Miranda Bogen, Bobbie
Chern, Sam Corbett-Davies, Melissa Hall, Isabel Kloumann, Michelle Lam,
Joaquin Qui\~nonero Candela, Manish Raghavan, Joshua Simons, Jonathan Tannen,
Edmund Tong, Kate Vredenburgh, Jiejing Zhao
|
Fairness On The Ground: Applying Algorithmic Fairness Approaches to
Production Systems
|
12 pages, 2 figures
| null | null | null |
cs.LG cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Many technical approaches have been proposed for ensuring that decisions made
by machine learning systems are fair, but few of these proposals have been
stress-tested in real-world systems. This paper presents an example of one
team's approach to the challenge of applying algorithmic fairness approaches to
complex production systems within the context of a large technology company. We
discuss how we disentangle normative questions of product and policy design
(like, "how should the system trade off between different stakeholders'
interests and needs?") from empirical questions of system implementation (like,
"is the system achieving the desired tradeoff in practice?"). We also present
an approach for answering questions of the latter sort, which allows us to
measure how machine learning systems and human labelers are making these
tradeoffs across different relevant groups. We hope our experience integrating
fairness tools and approaches into large-scale and complex production systems
will be useful to other practitioners facing similar challenges, and
illuminating to academics and researchers looking to better address the needs
of practitioners.
|
[
{
"created": "Wed, 10 Mar 2021 16:42:20 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Mar 2021 17:15:40 GMT",
"version": "v2"
}
] |
2021-03-25
|
[
[
"Bakalar",
"Chloé",
""
],
[
"Barreto",
"Renata",
""
],
[
"Bergman",
"Stevie",
""
],
[
"Bogen",
"Miranda",
""
],
[
"Chern",
"Bobbie",
""
],
[
"Corbett-Davies",
"Sam",
""
],
[
"Hall",
"Melissa",
""
],
[
"Kloumann",
"Isabel",
""
],
[
"Lam",
"Michelle",
""
],
[
"Candela",
"Joaquin Quiñonero",
""
],
[
"Raghavan",
"Manish",
""
],
[
"Simons",
"Joshua",
""
],
[
"Tannen",
"Jonathan",
""
],
[
"Tong",
"Edmund",
""
],
[
"Vredenburgh",
"Kate",
""
],
[
"Zhao",
"Jiejing",
""
]
] |
Many technical approaches have been proposed for ensuring that decisions made by machine learning systems are fair, but few of these proposals have been stress-tested in real-world systems. This paper presents an example of one team's approach to the challenge of applying algorithmic fairness approaches to complex production systems within the context of a large technology company. We discuss how we disentangle normative questions of product and policy design (like, "how should the system trade off between different stakeholders' interests and needs?") from empirical questions of system implementation (like, "is the system achieving the desired tradeoff in practice?"). We also present an approach for answering questions of the latter sort, which allows us to measure how machine learning systems and human labelers are making these tradeoffs across different relevant groups. We hope our experience integrating fairness tools and approaches into large-scale and complex production systems will be useful to other practitioners facing similar challenges, and illuminating to academics and researchers looking to better address the needs of practitioners.
|
1510.06623
|
Rafael Dowsley
|
Rafael Dowsley, Felipe Lacerda, Anderson C. A. Nascimento
|
Commitment and Oblivious Transfer in the Bounded Storage Model with
Errors
| null | null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The bounded storage model restricts the memory of an adversary in a
cryptographic protocol, rather than restricting its computational power, making
information theoretically secure protocols feasible. We present the first
protocols for commitment and oblivious transfer in the bounded storage model
with errors, i.e., the model where the public random sources available to the
two parties are not exactly the same, but instead are only required to have a
small Hamming distance between themselves. Commitment and oblivious transfer
protocols were known previously only for the error-free variant of the bounded
storage model, which is harder to realize.
|
[
{
"created": "Thu, 22 Oct 2015 13:44:12 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Oct 2017 15:36:27 GMT",
"version": "v2"
}
] |
2017-10-25
|
[
[
"Dowsley",
"Rafael",
""
],
[
"Lacerda",
"Felipe",
""
],
[
"Nascimento",
"Anderson C. A.",
""
]
] |
The bounded storage model restricts the memory of an adversary in a cryptographic protocol, rather than restricting its computational power, making information theoretically secure protocols feasible. We present the first protocols for commitment and oblivious transfer in the bounded storage model with errors, i.e., the model where the public random sources available to the two parties are not exactly the same, but instead are only required to have a small Hamming distance between themselves. Commitment and oblivious transfer protocols were known previously only for the error-free variant of the bounded storage model, which is harder to realize.
|
2010.01423
|
Merav Parter
|
Yael Hitron, Cameron Musco and Merav Parter
|
Spiking Neural Networks Through the Lens of Streaming Algorithms
|
To appear in DISC'20, shorten abstract
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We initiate the study of biological neural networks from the perspective of
streaming algorithms. Like computers, human brains suffer from memory
limitations which pose a significant obstacle when processing large scale and
dynamically changing data. In computer science, these challenges are captured
by the well-known streaming model, which can be traced back to Munro and
Paterson `78 and has had significant impact in theory and beyond. In the
classical streaming setting, one must compute some function $f$ of a stream of
updates $\mathcal{S} = \{u_1,\ldots,u_m\}$, given restricted single-pass access
to the stream. The primary complexity measure is the space used by the
algorithm.
We take the first steps towards understanding the connection between
streaming and neural algorithms. On the upper bound side, we design neural
algorithms based on known streaming algorithms for fundamental tasks, including
distinct elements, approximate median, heavy hitters, and more. The number of
neurons in our neural solutions almost matches the space bounds of the
corresponding streaming algorithms. As a general algorithmic primitive, we show
how to implement the important streaming technique of linear sketching
efficiently in spiking neural networks. On the lower bound side, we give a
generic reduction, showing that any space-efficient spiking neural network can
be simulated by a space-efficient streaming algorithm. This reduction lets us
translate streaming-space lower bounds into nearly matching neural-space lower
bounds, establishing a close connection between these two models.
|
[
{
"created": "Sat, 3 Oct 2020 20:31:52 GMT",
"version": "v1"
}
] |
2020-10-06
|
[
[
"Hitron",
"Yael",
""
],
[
"Musco",
"Cameron",
""
],
[
"Parter",
"Merav",
""
]
] |
We initiate the study of biological neural networks from the perspective of streaming algorithms. Like computers, human brains suffer from memory limitations which pose a significant obstacle when processing large scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson `78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute some function $f$ of a stream of updates $\mathcal{S} = \{u_1,\ldots,u_m\}$, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm. We take the first steps towards understanding the connection between streaming and neural algorithms. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, heavy hitters, and more. The number of neurons in our neural solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks. On the lower bound side, we give a generic reduction, showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly matching neural-space lower bounds, establishing a close connection between these two models.
|
1811.00246
|
Jinwon An
|
Jinwon An, Sungwon Lyu, Sungzoon Cho
|
SARN: Relational Reasoning through Sequential Attention
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an attention module augmented relational network called
SARN (Sequential Attention Relational Network) that can carry out relational
reasoning by extracting reference objects and making efficient pairing between
objects. SARN greatly reduces the computational and memory requirements of the
relational network, which computes all object pairs. It also shows high
accuracy on the Sort-of-CLEVR dataset compared to other models, especially on
relational questions.
|
[
{
"created": "Thu, 1 Nov 2018 05:45:43 GMT",
"version": "v1"
}
] |
2018-11-02
|
[
[
"An",
"Jinwon",
""
],
[
"Lyu",
"Sungwon",
""
],
[
"Cho",
"Sungzoon",
""
]
] |
This paper proposes an attention module augmented relational network called SARN (Sequential Attention Relational Network) that can carry out relational reasoning by extracting reference objects and making efficient pairing between objects. SARN greatly reduces the computational and memory requirements of the relational network, which computes all object pairs. It also shows high accuracy on the Sort-of-CLEVR dataset compared to other models, especially on relational questions.
|
2311.03016
|
Andrea Bombarda
|
Andrea Bombarda and Angelo Gargantini
|
Design, implementation, and validation of a benchmark generator for
combinatorial interaction testing tools
| null | null |
10.1016/j.jss.2023.111920
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combinatorial testing is a widely adopted technique for efficiently detecting
faults in software. The quality of combinatorial test generators plays a
crucial role in achieving effective test coverage. Evaluating combinatorial
test generators remains a challenging task that requires diverse and
representative benchmarks. Having such benchmarks might help developers to test
their tools, and improve their performance. For this reason, in this paper, we
present BenCIGen, a highly configurable generator of benchmarks to be used by
combinatorial test generators, empowering users to customize the type of
benchmarks generated, including constraints and parameters, as well as their
complexity. An initial version of such a tool has been used during the
CT-Competition, held yearly during the International Workshop on Combinatorial
Testing. This paper describes the requirements, the design, the implementation,
and the validation of BenCIGen. Tests for the validation of BenCIGen are
derived from its requirements by using a combinatorial interaction approach.
Moreover, we demonstrate the tool's ability to generate benchmarks that reflect
the characteristics of real software systems. BenCIGen not only facilitates the
evaluation of existing generators but also serves as a valuable resource for
researchers and practitioners seeking to enhance the quality and effectiveness
of combinatorial testing methodologies.
|
[
{
"created": "Mon, 6 Nov 2023 10:44:48 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Bombarda",
"Andrea",
""
],
[
"Gargantini",
"Angelo",
""
]
] |
Combinatorial testing is a widely adopted technique for efficiently detecting faults in software. The quality of combinatorial test generators plays a crucial role in achieving effective test coverage. Evaluating combinatorial test generators remains a challenging task that requires diverse and representative benchmarks. Having such benchmarks might help developers to test their tools, and improve their performance. For this reason, in this paper, we present BenCIGen, a highly configurable generator of benchmarks to be used by combinatorial test generators, empowering users to customize the type of benchmarks generated, including constraints and parameters, as well as their complexity. An initial version of such a tool has been used during the CT-Competition, held yearly during the International Workshop on Combinatorial Testing. This paper describes the requirements, the design, the implementation, and the validation of BenCIGen. Tests for the validation of BenCIGen are derived from its requirements by using a combinatorial interaction approach. Moreover, we demonstrate the tool's ability to generate benchmarks that reflect the characteristics of real software systems. BenCIGen not only facilitates the evaluation of existing generators but also serves as a valuable resource for researchers and practitioners seeking to enhance the quality and effectiveness of combinatorial testing methodologies.
|
1511.06316
|
Zinelabidine Boulkenafet Mr
|
Zinelabidine Boulkenafet, Jukka Komulainen, Abdenour Hadid
|
Face Anti-Spoofing Based on Color Texture Analysis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on face spoofing detection has mainly been focused on analyzing the
luminance of the face images, hence discarding the chrominance information
which can be useful for discriminating fake faces from genuine ones. In this
work, we propose a new face anti-spoofing method based on color texture
analysis. We analyze the joint color-texture information from the luminance and
the chrominance channels using a color local binary pattern descriptor. More
specifically, the feature histograms are extracted from each image band
separately. Extensive experiments on two benchmark datasets, namely CASIA face
anti-spoofing and Replay-Attack databases, showed excellent results compared to
the state-of-the-art. Most importantly, our inter-database evaluation depicts
that the proposed approach showed very promising generalization capabilities.
|
[
{
"created": "Thu, 19 Nov 2015 19:28:20 GMT",
"version": "v1"
}
] |
2015-11-20
|
[
[
"Boulkenafet",
"Zinelabidine",
""
],
[
"Komulainen",
"Jukka",
""
],
[
"Hadid",
"Abdenour",
""
]
] |
Research on face spoofing detection has mainly been focused on analyzing the luminance of the face images, hence discarding the chrominance information which can be useful for discriminating fake faces from genuine ones. In this work, we propose a new face anti-spoofing method based on color texture analysis. We analyze the joint color-texture information from the luminance and the chrominance channels using a color local binary pattern descriptor. More specifically, the feature histograms are extracted from each image band separately. Extensive experiments on two benchmark datasets, namely CASIA face anti-spoofing and Replay-Attack databases, showed excellent results compared to the state-of-the-art. Most importantly, our inter-database evaluation depicts that the proposed approach showed very promising generalization capabilities.
|
1405.7520
|
Gianluca Della Vedova
|
Paola Bonizzoni, Gianluca Della Vedova, Yuri Pirola, Marco Previtali,
Raffaella Rizzi
|
An External-Memory Algorithm for String Graph Construction
| null | null | null | null |
cs.DS q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some recent results have introduced external-memory algorithms to compute
self-indexes of a set of strings, mainly via computing the Burrows-Wheeler
Transform (BWT) of the input strings. The motivations for those results stem
from Bioinformatics, where a large number of short strings (called reads) are
routinely produced and analyzed. In that field, a fundamental problem is to
assemble a genome from a large set of much shorter samples extracted from the
unknown genome. The approaches that are currently used to tackle this problem
are memory-intensive. This fact does not bode well with the ongoing increase in
the availability of genomic data. A data structure that is used in genome
assembly is the string graph, where vertices correspond to samples and arcs
represent two overlapping samples. In this paper we address an open problem: to
design an external-memory algorithm to compute the string graph.
|
[
{
"created": "Thu, 29 May 2014 11:09:55 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jun 2015 15:08:26 GMT",
"version": "v2"
}
] |
2015-06-12
|
[
[
"Bonizzoni",
"Paola",
""
],
[
"Della Vedova",
"Gianluca",
""
],
[
"Pirola",
"Yuri",
""
],
[
"Previtali",
"Marco",
""
],
[
"Rizzi",
"Raffaella",
""
]
] |
Some recent results have introduced external-memory algorithms to compute self-indexes of a set of strings, mainly via computing the Burrows-Wheeler Transform (BWT) of the input strings. The motivations for those results stem from Bioinformatics, where a large number of short strings (called reads) are routinely produced and analyzed. In that field, a fundamental problem is to assemble a genome from a large set of much shorter samples extracted from the unknown genome. The approaches that are currently used to tackle this problem are memory-intensive. This fact does not bode well with the ongoing increase in the availability of genomic data. A data structure that is used in genome assembly is the string graph, where vertices correspond to samples and arcs represent two overlapping samples. In this paper we address an open problem: to design an external-memory algorithm to compute the string graph.
|
2108.12189
|
Diego Molla Aliod
|
Diego Moll\'a, Urvashi Khanna, Dima Galat, Vincent Nguyen, Maciej
Rybinski
|
Query-Focused Extractive Summarisation for Finding Ideal Answers to
Biomedical and COVID-19 Questions
|
12 pages, 2 figures, 6 tables. Accepted at BioASQ workshop, CLEF 2021
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents Macquarie University's participation in the BioASQ
Synergy Task and in BioASQ9b Phase B. In each of these tasks, our participation
focused on the use of query-focused extractive summarisation to obtain the
ideal answers to medical questions. The Synergy Task is an end-to-end question
answering task on COVID-19 where systems are required to return relevant
documents, snippets, and answers to a given question. Given the absence of
training data, we used a query-focused summarisation system that was trained
with the BioASQ8b training data set and we experimented with methods to
retrieve the documents and snippets. Considering the poor quality of the
documents and snippets retrieved by our system, we observed reasonably good
quality in the answers returned. For phase B of the BioASQ9b task, the relevant
documents and snippets were already included in the test data. Our system split
the snippets into candidate sentences and used BERT variants under a sentence
classification setup. The system used the question and candidate sentence as
input and was trained to predict the likelihood of the candidate sentence being
part of the ideal answer. The runs obtained either the best or second best
ROUGE-F1 results of all participants to all batches of BioASQ9b. This shows
that using BERT in a classification setup is a very strong baseline for the
identification of ideal answers.
|
[
{
"created": "Fri, 27 Aug 2021 09:19:42 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Aug 2021 01:31:39 GMT",
"version": "v2"
}
] |
2021-09-01
|
[
[
"Mollá",
"Diego",
""
],
[
"Khanna",
"Urvashi",
""
],
[
"Galat",
"Dima",
""
],
[
"Nguyen",
"Vincent",
""
],
[
"Rybinski",
"Maciej",
""
]
] |
This paper presents Macquarie University's participation in the BioASQ Synergy Task and in BioASQ9b Phase B. In each of these tasks, our participation focused on the use of query-focused extractive summarisation to obtain the ideal answers to medical questions. The Synergy Task is an end-to-end question answering task on COVID-19 where systems are required to return relevant documents, snippets, and answers to a given question. Given the absence of training data, we used a query-focused summarisation system that was trained with the BioASQ8b training data set and we experimented with methods to retrieve the documents and snippets. Considering the poor quality of the documents and snippets retrieved by our system, we observed reasonably good quality in the answers returned. For phase B of the BioASQ9b task, the relevant documents and snippets were already included in the test data. Our system split the snippets into candidate sentences and used BERT variants under a sentence classification setup. The system used the question and candidate sentence as input and was trained to predict the likelihood of the candidate sentence being part of the ideal answer. The runs obtained either the best or second best ROUGE-F1 results of all participants to all batches of BioASQ9b. This shows that using BERT in a classification setup is a very strong baseline for the identification of ideal answers.
|
1202.2465
|
Jierui Xie
|
Jierui Xie and Boleslaw K. Szymanski
|
Towards Linear Time Overlapping Community Detection in Social Networks
|
PAKDD 2012
|
Proc. 16th PAKDD Pacific-Asia Conference on Knowledge Discovery
and Data Mining, Kuala Lumpur, Malaysia, 2012, Lecture Notes AI vol. 7302,
Part II, Springer, Berlin, Germany, 2012, pp. 25-36
| null | null |
cs.SI cs.CY cs.DS physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Membership diversity is a characteristic aspect of social networks in which a
person may belong to more than one social group. For this reason, discovering
overlapping structures is necessary for realistic social analysis. In this
paper, we present a fast algorithm, called SLPA, for overlapping community
detection in large-scale networks. SLPA spreads labels according to dynamic
interaction rules. It can be applied to both unipartite and bipartite networks.
It is also able to uncover overlapping nested hierarchy. The time complexity of
SLPA scales linearly with the number of edges in the network. Experiments in
both synthetic and real-world networks show that SLPA has an excellent
performance in identifying both node and community level overlapping
structures.
|
[
{
"created": "Sat, 11 Feb 2012 20:07:45 GMT",
"version": "v1"
}
] |
2013-05-15
|
[
[
"Xie",
"Jierui",
""
],
[
"Szymanski",
"Boleslaw K.",
""
]
] |
Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and real-world networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.
|
1807.04868
|
Suhad Faisal Behadili
|
Suhad Faisal Behadili, Cyrille Bertelle, Loay E. George
|
Human Mobility Patterns Modelling using CDRs
| null | null |
10.5121/iju.2016.7102
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The research objectives are exploring characteristics of human mobility
patterns, subsequently modelling them mathematically depending on inter-event
time and traveled distances parameters using CDRs (Call Detailed Records). The
observations are obtained from the Armada festival in France. Understanding,
modelling and simulating human mobility among urban regions is an exciting
approach, due to its importance in rescue situations for various events, either
indoor events like the evacuation of buildings or outdoor ones like public
assemblies and community evacuation in cases emerging during emergency
situations; moreover, it serves urban planning and smart cities.
|
[
{
"created": "Thu, 12 Jul 2018 23:52:05 GMT",
"version": "v1"
}
] |
2018-07-16
|
[
[
"Behadili",
"Suhad Faisal",
""
],
[
"Bertelle",
"Cyrille",
""
],
[
"George",
"Loay E.",
""
]
] |
The research objectives are exploring characteristics of human mobility patterns, subsequently modelling them mathematically depending on inter-event time and traveled distances parameters using CDRs (Call Detailed Records). The observations are obtained from the Armada festival in France. Understanding, modelling and simulating human mobility among urban regions is an exciting approach, due to its importance in rescue situations for various events, either indoor events like the evacuation of buildings or outdoor ones like public assemblies and community evacuation in cases emerging during emergency situations; moreover, it serves urban planning and smart cities.
|
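The two quantities this study models, inter-event times and traveled distances, can be extracted from raw CDR events as follows. The record layout `(timestamp, lat, lon)` is an assumption for illustration; real CDRs carry more fields and tower positions rather than exact user locations.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two positions, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mobility_features(cdr):
    """cdr: list of (unix_seconds, lat, lon) events of one user, unsorted.

    Returns the two empirical distributions the abstract models:
    inter-event times (seconds) and traveled distances (km) between
    consecutive events.
    """
    events = sorted(cdr)                       # order by timestamp
    waits, hops = [], []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(events, events[1:]):
        waits.append(t1 - t0)
        hops.append(haversine_km(la0, lo0, la1, lo1))
    return waits, hops
```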
2406.15742
|
Alexander Lew
|
McCoy R. Becker, Alexander K. Lew, Xiaoyan Wang, Matin Ghavami,
Mathieu Huot, Martin C. Rinard, Vikash K. Mansinghka
|
Probabilistic Programming with Programmable Variational Inference
| null |
PLDI 2024
|
10.1145/3656463
| null |
cs.PL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Compared to the wide array of advanced Monte Carlo methods supported by
modern probabilistic programming languages (PPLs), PPL support for variational
inference (VI) is less developed: users are typically limited to a predefined
selection of variational objectives and gradient estimators, which are
implemented monolithically (and without formal correctness arguments) in PPL
backends. In this paper, we propose a more modular approach to supporting
variational inference in PPLs, based on compositional program transformation.
In our approach, variational objectives are expressed as programs that may
employ first-class constructs for computing densities of and expected values
under user-defined models and variational families. We then transform these
programs systematically into unbiased gradient estimators for optimizing the
objectives they define. Our design enables modular reasoning about many
interacting concerns, including automatic differentiation, density
accumulation, tracing, and the application of unbiased gradient estimation
strategies. Additionally, relative to existing support for VI in PPLs, our
design increases expressiveness along three axes: (1) it supports an open-ended
set of user-defined variational objectives, rather than a fixed menu of
options; (2) it supports a combinatorial space of gradient estimation
strategies, many not automated by today's PPLs; and (3) it supports a broader
class of models and variational families, because it supports constructs for
approximate marginalization and normalization (previously introduced only for
Monte Carlo inference). We implement our approach in an extension to the Gen
probabilistic programming system (genjax.vi, implemented in JAX), and evaluate
on several deep generative modeling tasks, showing minimal performance overhead
vs. hand-coded implementations and performance competitive with
well-established open-source PPLs.
|
[
{
"created": "Sat, 22 Jun 2024 05:49:37 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Becker",
"McCoy R.",
""
],
[
"Lew",
"Alexander K.",
""
],
[
"Wang",
"Xiaoyan",
""
],
[
"Ghavami",
"Matin",
""
],
[
"Huot",
"Mathieu",
""
],
[
"Rinard",
"Martin C.",
""
],
[
"Mansinghka",
"Vikash K.",
""
]
] |
Compared to the wide array of advanced Monte Carlo methods supported by modern probabilistic programming languages (PPLs), PPL support for variational inference (VI) is less developed: users are typically limited to a predefined selection of variational objectives and gradient estimators, which are implemented monolithically (and without formal correctness arguments) in PPL backends. In this paper, we propose a more modular approach to supporting variational inference in PPLs, based on compositional program transformation. In our approach, variational objectives are expressed as programs that may employ first-class constructs for computing densities of and expected values under user-defined models and variational families. We then transform these programs systematically into unbiased gradient estimators for optimizing the objectives they define. Our design enables modular reasoning about many interacting concerns, including automatic differentiation, density accumulation, tracing, and the application of unbiased gradient estimation strategies. Additionally, relative to existing support for VI in PPLs, our design increases expressiveness along three axes: (1) it supports an open-ended set of user-defined variational objectives, rather than a fixed menu of options; (2) it supports a combinatorial space of gradient estimation strategies, many not automated by today's PPLs; and (3) it supports a broader class of models and variational families, because it supports constructs for approximate marginalization and normalization (previously introduced only for Monte Carlo inference). We implement our approach in an extension to the Gen probabilistic programming system (genjax.vi, implemented in JAX), and evaluate on several deep generative modeling tasks, showing minimal performance overhead vs. hand-coded implementations and performance competitive with well-established open-source PPLs.
|
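As a minimal illustration of a "variational objective expressed as a program" being turned into an unbiased gradient estimator, here is a hand-derived pathwise (reparameterization) estimator for a one-parameter Gaussian family against a toy target. The model, step size, and sample count are invented for the sketch; the paper's system derives such estimators automatically from user programs.

```python
import random

def elbo_grad_pathwise(mu, rng, n=200):
    """Unbiased pathwise gradient of the ELBO for q(z) = N(mu, 1)
    against the unnormalized target log p(z) = -0.5 * (z - 3)**2,
    a toy stand-in for a user-written model program.

    With z = mu + eps, eps ~ N(0, 1), the entropy of q is constant in
    mu, so d(ELBO)/d(mu) = E_eps[ d/dz log p(z) ] = E_eps[ -(z - 3) ].
    """
    total = 0.0
    for _ in range(n):
        z = mu + rng.gauss(0.0, 1.0)  # reparameterized sample
        total += -(z - 3.0)           # dlog p/dz times dz/dmu (= 1)
    return total / n

# Stochastic gradient ascent drives mu toward the target mean 3.
rng = random.Random(0)
mu = 0.0
for _ in range(300):
    mu += 0.1 * elbo_grad_pathwise(mu, rng)
```

A score-function (REINFORCE) estimator for the same objective would be another point in the "combinatorial space of gradient estimation strategies" the abstract refers to.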
1703.04171
|
Oliver Gutsche
|
Oliver Gutsche (1), Matteo Cremonesi (1), Peter Elmer (2), Bo
Jayatilaka (1), Jim Kowalkowski (1), Jim Pivarski (2), Saba Sehrish (1),
Cristina Mantilla Suárez (3), Alexey Svyatkovskiy (2), Nhan Tran (1) ((1)
Fermi National Accelerator Laboratory, (2) Princeton University, (3) Fermi
National Accelerator Laboratory now Johns Hopkins University)
|
Big Data in HEP: A comprehensive use case study
|
Proceedings for 22nd International Conference on Computing in High
Energy and Nuclear Physics (CHEP 2016)
| null |
10.1088/1742-6596/898/7/072012
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Experimental Particle Physics has been at the forefront of analyzing the
world's largest datasets for decades. The HEP community was the first to develop
suitable software and computing tools for this task. In recent times, new
toolkits and systems collectively called Big Data technologies have emerged to
support the analysis of Petabyte and Exabyte datasets in industry. While the
principles of data analysis in HEP have not changed (filtering and transforming
experiment-specific data formats), these new technologies use different
approaches and promise a fresh look at analysis of very large datasets and
could potentially reduce the time-to-physics with increased interactivity. In
this talk, we present an active LHC Run 2 analysis, searching for dark matter
with the CMS detector, as a testbed for Big Data technologies. We directly
compare the traditional NTuple-based analysis with an equivalent analysis using
Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the
analysis with the official experiment data formats and produce publication
physics plots. We will discuss advantages and disadvantages of each approach
and give an outlook on further studies needed.
|
[
{
"created": "Sun, 12 Mar 2017 20:37:29 GMT",
"version": "v1"
}
] |
2017-11-23
|
[
[
"Gutsche",
"Oliver",
""
],
[
"Cremonesi",
"Matteo",
""
],
[
"Elmer",
"Peter",
""
],
[
"Jayatilaka",
"Bo",
""
],
[
"Kowalkowski",
"Jim",
""
],
[
"Pivarski",
"Jim",
""
],
[
"Sehrish",
"Saba",
""
],
[
"Suárez",
"Cristina Mantilla",
""
],
[
"Svyatkovskiy",
"Alexey",
""
],
[
"Tran",
"Nhan",
""
]
] |
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems collectively called Big Data technologies have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and promise a fresh look at analysis of very large datasets and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication physics plots. We will discuss advantages and disadvantages of each approach and give an outlook on further studies needed.
|
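The filter-and-histogram shape of such an analysis, common to both the NTuple loop and the Spark pipeline, can be sketched as follows. The event fields, MET cut, and binning are placeholders for illustration, not the CMS data format or the actual dark-matter selection.

```python
def missing_et_selection(events, met_cut=200.0):
    """Sketch of a dark-matter-style selection: filter events on
    missing transverse energy (MET) and histogram the survivors.

    `events` is a list of dicts standing in for either an NTuple loop
    or a distributed Spark collection; in Spark the same two steps
    would be a `filter` followed by an aggregation.
    """
    passed = [e for e in events if e["met"] > met_cut]          # filter step
    hist = [0] * 5                                              # 100-GeV-wide bins
    for e in passed:                                            # fill step
        b = min(int((e["met"] - met_cut) // 100), 4)            # overflow in last bin
        hist[b] += 1
    return hist
```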
2404.13130
|
Jayasri Dontabhaktuni Prof
|
Sreeraj Rajan Warrier, D Sri Harshavardhan Reddy, Sriya Bada, Rohith
Achampeta, Sebastian Uppapalli and Jayasri Dontabhaktuni
|
On-board classification of underwater images using hybrid
classical-quantum CNN based method
| null | null | null | null |
cs.CV quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Underwater images taken from autonomous underwater vehicles (AUVs) often
suffer from low light, high turbidity, poor contrast, motion blur and excessive
light scattering, and hence require image enhancement techniques for object
recognition. Machine learning methods are being increasingly used for object
recognition under such adverse conditions. These enhanced object recognition
methods for images taken from AUVs have potential applications in underwater
pipeline and optical fibre surveillance, ocean bed resource extraction, ocean
floor mapping, underwater species exploration, etc. While the classical machine
learning methods are very efficient in terms of accuracy, they require large
datasets and high computational time for image classification. In the current
work, we use quantum-classical hybrid machine learning methods for real-time
underwater object recognition on-board an AUV for the first time. We use
real-time motion-blurred and low-light images taken from the on-board camera of
an AUV built in-house and apply existing hybrid machine learning methods for
object recognition. Our hybrid methods consist of quantum encoding and
flattening of classical images using quantum circuits, with the results sent to
classical neural networks for image classification. The results of hybrid
methods, carried out using PennyLane-based quantum simulators both on a GPU and
using pre-trained models on an on-board NVIDIA GPU chipset, are compared with
results from corresponding classical machine learning methods. We observe that
the hybrid quantum machine learning methods show an efficiency greater than
65\%, a reduction in run-time by one-third, and require 50\% smaller dataset
sizes for training the models compared to classical machine learning methods.
We hope that our work opens up further possibilities in quantum-enhanced
real-time computer vision in autonomous vehicles.
|
[
{
"created": "Fri, 19 Apr 2024 18:34:52 GMT",
"version": "v1"
}
] |
2024-04-23
|
[
[
"Warrier",
"Sreeraj Rajan",
""
],
[
"Reddy",
"D Sri Harshavardhan",
""
],
[
"Bada",
"Sriya",
""
],
[
"Achampeta",
"Rohith",
""
],
[
"Uppapalli",
"Sebastian",
""
],
[
"Dontabhaktuni",
"Jayasri",
""
]
] |
Underwater images taken from autonomous underwater vehicles (AUVs) often suffer from low light, high turbidity, poor contrast, motion blur and excessive light scattering, and hence require image enhancement techniques for object recognition. Machine learning methods are being increasingly used for object recognition under such adverse conditions. These enhanced object recognition methods for images taken from AUVs have potential applications in underwater pipeline and optical fibre surveillance, ocean bed resource extraction, ocean floor mapping, underwater species exploration, etc. While the classical machine learning methods are very efficient in terms of accuracy, they require large datasets and high computational time for image classification. In the current work, we use quantum-classical hybrid machine learning methods for real-time underwater object recognition on-board an AUV for the first time. We use real-time motion-blurred and low-light images taken from the on-board camera of an AUV built in-house and apply existing hybrid machine learning methods for object recognition. Our hybrid methods consist of quantum encoding and flattening of classical images using quantum circuits, with the results sent to classical neural networks for image classification. The results of hybrid methods, carried out using PennyLane-based quantum simulators both on a GPU and using pre-trained models on an on-board NVIDIA GPU chipset, are compared with results from corresponding classical machine learning methods. We observe that the hybrid quantum machine learning methods show an efficiency greater than 65\%, a reduction in run-time by one-third, and require 50\% smaller dataset sizes for training the models compared to classical machine learning methods. We hope that our work opens up further possibilities in quantum-enhanced real-time computer vision in autonomous vehicles.
|
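The hybrid pipeline's interface, quantum encoding producing features that feed a classical network, can be illustrated with the single-qubit analytic result: after an RY(theta) rotation on |0>, the Z expectation is cos(theta). This is not a PennyLane circuit; the encoding rule and the linear head are illustrative stand-ins for the paper's architecture.

```python
import math

def angle_encode(pixels):
    """Toy stand-in for the quantum-encoding front end: each pixel p
    in [0, 1] drives an RY(pi * p) rotation on its own qubit, and the
    feature handed to the classical network is the Z expectation
    value, <Z> = cos(pi * p) (the analytic single-qubit result)."""
    return [math.cos(math.pi * p) for p in pixels]

def classify(pixels, weights, bias=0.0):
    """Classical head: a linear layer on the quantum features."""
    feats = angle_encode(pixels)
    score = sum(w * f for w, f in zip(weights, feats)) + bias
    return 1 if score > 0 else 0
```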
2404.07976
|
Zhiqiang Shen
|
Muxin Zhou and Zeyuan Yin and Shitong Shao and Zhiqiang Shen
|
Self-supervised Dataset Distillation: A Good Compression Is All You Need
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Dataset distillation aims to compress information from a large-scale original
dataset to a new compact dataset while striving to preserve the utmost degree
of the original data informational essence. Previous studies have predominantly
concentrated on aligning the intermediate statistics between the original and
distilled data, such as weight trajectory, features, gradient, BatchNorm, etc.
In this work, we consider addressing this task through the new lens of model
informativeness in the compression stage on the original dataset pretraining.
We observe that with the prior state-of-the-art SRe$^2$L, as model sizes
increase, it becomes increasingly challenging for supervised pretrained models
to recover learned information during data synthesis, as the channel-wise mean
and variance inside the model are flattening and becoming less informative. We further
notice that larger variances in BN statistics from self-supervised models
enable larger loss signals to update the recovered data by gradients, enjoying
more informativeness during synthesis. Building on this observation, we
introduce SC-DD, a simple yet effective Self-supervised Compression framework
for Dataset Distillation that facilitates diverse information compression and
recovery compared to traditional supervised learning schemes, and further reaps the
potential of large pretrained models with enhanced capabilities. Extensive
experiments are conducted on CIFAR-100, Tiny-ImageNet and ImageNet-1K datasets
to demonstrate the superiority of our proposed approach. The proposed SC-DD
outperforms all previous state-of-the-art supervised dataset distillation
methods when employing larger models, such as SRe$^2$L, MTT, TESLA, DC, CAFE,
etc., by large margins under the same recovery and post-training budgets. Code
is available at https://github.com/VILA-Lab/SRe2L/tree/main/SCDD/.
|
[
{
"created": "Thu, 11 Apr 2024 17:56:40 GMT",
"version": "v1"
}
] |
2024-04-12
|
[
[
"Zhou",
"Muxin",
""
],
[
"Yin",
"Zeyuan",
""
],
[
"Shao",
"Shitong",
""
],
[
"Shen",
"Zhiqiang",
""
]
] |
Dataset distillation aims to compress information from a large-scale original dataset to a new compact dataset while striving to preserve the utmost degree of the original data informational essence. Previous studies have predominantly concentrated on aligning the intermediate statistics between the original and distilled data, such as weight trajectory, features, gradient, BatchNorm, etc. In this work, we consider addressing this task through the new lens of model informativeness in the compression stage on the original dataset pretraining. We observe that with the prior state-of-the-art SRe$^2$L, as model sizes increase, it becomes increasingly challenging for supervised pretrained models to recover learned information during data synthesis, as the channel-wise mean and variance inside the model are flattening and becoming less informative. We further notice that larger variances in BN statistics from self-supervised models enable larger loss signals to update the recovered data by gradients, enjoying more informativeness during synthesis. Building on this observation, we introduce SC-DD, a simple yet effective Self-supervised Compression framework for Dataset Distillation that facilitates diverse information compression and recovery compared to traditional supervised learning schemes, and further reaps the potential of large pretrained models with enhanced capabilities. Extensive experiments are conducted on CIFAR-100, Tiny-ImageNet and ImageNet-1K datasets to demonstrate the superiority of our proposed approach. The proposed SC-DD outperforms all previous state-of-the-art supervised dataset distillation methods when employing larger models, such as SRe$^2$L, MTT, TESLA, DC, CAFE, etc., by large margins under the same recovery and post-training budgets. Code is available at https://github.com/VILA-Lab/SRe2L/tree/main/SCDD/.
|
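The channel-wise BatchNorm statistics the argument turns on are straightforward to compute. The "spread of the per-channel variances" heuristic below is an illustrative reading of the paper's observation about informativeness, not its exact criterion.

```python
def channelwise_stats(feature_map):
    """feature_map: [channels][positions] activations. Returns the
    per-channel (mean, variance) pairs -- the BatchNorm statistics
    whose spread the paper argues carries the recovery signal."""
    stats = []
    for ch in feature_map:
        m = sum(ch) / len(ch)
        v = sum((x - m) ** 2 for x in ch) / len(ch)
        stats.append((m, v))
    return stats

def variance_spread(stats):
    """Spread of the per-channel variances; in the paper's reading,
    a larger spread means more informative statistics for driving
    gradients during data synthesis (an illustrative proxy)."""
    vs = [v for _, v in stats]
    mv = sum(vs) / len(vs)
    return sum((v - mv) ** 2 for v in vs) / len(vs)
```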
1906.04034
|
Sebastien Gros Prof.
|
Sebastien Gros, Mario Zanon
|
Towards Safe Reinforcement Learning Using NMPC and Policy Gradients:
Part II - Deterministic Case
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a methodology to deploy the deterministic policy
gradient method, using actor-critic techniques, when the optimal policy is
approximated using a parametric optimization problem, where safety is enforced
via hard constraints. For continuous input spaces, imposing safety restrictions
on the exploration needed to deploy the deterministic policy gradient method
poses some technical difficulties, which we address here. We will investigate
in particular policy approximations based on robust Nonlinear Model Predictive
Control (NMPC), where safety can be treated explicitly. For the sake of
brevity, we will detail the construction of the safe scheme in the robust
linear MPC context only. The extension to the nonlinear case is possible but
more complex. We will additionally present a technique to maintain the system
safety throughout the learning process in the context of robust linear MPC.
This paper has a companion paper treating the stochastic policy gradient case.
|
[
{
"created": "Mon, 10 Jun 2019 14:45:03 GMT",
"version": "v1"
}
] |
2019-06-11
|
[
[
"Gros",
"Sebastien",
""
],
[
"Zanon",
"Mario",
""
]
] |
In this paper, we present a methodology to deploy the deterministic policy gradient method, using actor-critic techniques, when the optimal policy is approximated using a parametric optimization problem, where safety is enforced via hard constraints. For continuous input spaces, imposing safety restrictions on the exploration needed to deploy the deterministic policy gradient method poses some technical difficulties, which we address here. We will investigate in particular policy approximations based on robust Nonlinear Model Predictive Control (NMPC), where safety can be treated explicitly. For the sake of brevity, we will detail the construction of the safe scheme in the robust linear MPC context only. The extension to the nonlinear case is possible but more complex. We will additionally present a technique to maintain the system safety throughout the learning process in the context of robust linear MPC. This paper has a companion paper treating the stochastic policy gradient case.
|
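A minimal sketch of constraint-respecting exploration, assuming simple box bounds on the input: perturb the MPC input, then project the result back into the feasible set. The paper's robust-MPC construction additionally keeps the *state* inside a safe tube, which this toy projection ignores.

```python
def safe_explore(u_mpc, noise, u_min, u_max):
    """Exploratory input that cannot violate hard input constraints.

    u_mpc: input computed by the (robust) MPC policy, one entry per
    input channel; noise: exploration perturbation; u_min/u_max:
    element-wise bounds. For a box, projection is a per-channel clip.
    """
    u = [m + n for m, n in zip(u_mpc, noise)]
    return [min(max(ui, lo), hi) for ui, lo, hi in zip(u, u_min, u_max)]
```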
1509.01938
|
Katrin Kirchhoff
|
Katrin Kirchhoff, Bing Zhao, Wen Wang
|
Exploiting Out-of-Domain Data Sources for Dialectal Arabic Statistical
Machine Translation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Statistical machine translation for dialectal Arabic is characterized by a
lack of data since data acquisition involves the transcription and translation
of spoken language. In this study we develop techniques for extracting parallel
data for one particular dialect of Arabic (Iraqi Arabic) from out-of-domain
corpora in different dialects of Arabic or in Modern Standard Arabic. We
compare two different data selection strategies (cross-entropy based and
submodular selection) and demonstrate that a very small but highly targeted
amount of found data can improve the performance of a baseline machine
translation system. We furthermore report on preliminary experiments on using
automatically translated speech data as additional training data.
|
[
{
"created": "Mon, 7 Sep 2015 07:54:17 GMT",
"version": "v1"
}
] |
2015-09-08
|
[
[
"Kirchhoff",
"Katrin",
""
],
[
"Zhao",
"Bing",
""
],
[
"Wang",
"Wen",
""
]
] |
Statistical machine translation for dialectal Arabic is characterized by a lack of data since data acquisition involves the transcription and translation of spoken language. In this study we develop techniques for extracting parallel data for one particular dialect of Arabic (Iraqi Arabic) from out-of-domain corpora in different dialects of Arabic or in Modern Standard Arabic. We compare two different data selection strategies (cross-entropy based and submodular selection) and demonstrate that a very small but highly targeted amount of found data can improve the performance of a baseline machine translation system. We furthermore report on preliminary experiments on using automatically translated speech data as additional training data.
|
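The cross-entropy-based strategy, the first of the two selection methods compared in the abstract, can be sketched with unigram language models in the style of cross-entropy difference scoring. The smoothing scheme and toy sentences are illustrative assumptions, not the paper's setup.

```python
import math
from collections import Counter

def unigram_lm(corpus):
    """Add-one-smoothed unigram LM over a list of sentences; returns
    a function giving P(word)."""
    counts = Counter(w for s in corpus for w in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1                       # +1 for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy_select(pool, in_domain, general, k):
    """Keep the k pool sentences whose per-word in-domain cross-entropy
    minus general-domain cross-entropy is lowest, i.e. sentences that
    look like the target dialect but not like the background corpus."""
    p_in, p_gen = unigram_lm(in_domain), unigram_lm(general)
    def score(s):
        ws = s.split()
        return sum(-math.log(p_in(w)) + math.log(p_gen(w)) for w in ws) / len(ws)
    return sorted(pool, key=score)[:k]
```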
1302.1575
|
Nevin Lianwen Zhang
|
Nevin Lianwen Zhang, Weihong Zhang
|
Fast Value Iteration for Goal-Directed Markov Decision Processes
|
Appears in Proceedings of the Thirteenth Conference on Uncertainty in
Artificial Intelligence (UAI1997)
| null | null |
UAI-P-1997-PG-489-494
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Planning problems where effects of actions are non-deterministic can be
modeled as Markov decision processes. Planning problems are usually
goal-directed. This paper proposes several techniques for exploiting the
goal-directedness to accelerate value iteration, a standard algorithm for
solving Markov decision processes. Empirical studies have shown that the
techniques can bring about significant speedups.
|
[
{
"created": "Wed, 6 Feb 2013 15:59:41 GMT",
"version": "v1"
}
] |
2013-02-08
|
[
[
"Zhang",
"Nevin Lianwen",
""
],
[
"Zhang",
"Weihong",
""
]
] |
Planning problems where effects of actions are non-deterministic can be modeled as Markov decision processes. Planning problems are usually goal-directed. This paper proposes several techniques for exploiting the goal-directedness to accelerate value iteration, a standard algorithm for solving Markov decision processes. Empirical studies have shown that the techniques can bring about significant speedups.
|
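The baseline the paper accelerates, value iteration on a goal-directed MDP with an absorbing zero-cost goal, looks like the sketch below. The Gauss-Seidel sweep and the assumption that `states` is already ordered goal-outward are illustrative stand-ins for the paper's specific speedup techniques.

```python
def value_iteration(states, actions, trans, cost, goal, eps=1e-6):
    """Gauss-Seidel value iteration for a goal-directed MDP.

    trans[s][a]: list of (prob, next_state); cost[s][a]: immediate
    cost; the goal state is absorbing with value 0. Sweeping states
    roughly goal-outward propagates values faster -- one flavour of
    exploiting goal-directedness.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best = min(
                cost[s][a] + sum(p * V[t] for p, t in trans[s][a])
                for a in actions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best                      # in-place: later states see it
        if delta < eps:
            return V
```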
2402.09113
|
Aditya Gilra
|
Reabetswe M. Nkhumise, Debabrota Basu, Tony J. Prescott, Aditya Gilra
|
Measuring Exploration in Reinforcement Learning via Optimal Transport in
Policy Space
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Exploration is the key ingredient of reinforcement learning (RL) that
determines the speed and success of learning. Here, we quantify and compare the
amount of exploration and learning accomplished by an RL algorithm.
Specifically, we propose a novel measure, named Exploration
Index, that quantifies the relative effort of knowledge transfer
(transferability) by an RL algorithm in comparison to supervised learning (SL)
that transforms the initial data distribution of RL to the corresponding final
data distribution. The comparison is established by formulating learning in RL
as a sequence of SL tasks, and using optimal transport based metrics to compare
the total path traversed by the RL and SL algorithms in the data distribution
space. We perform extensive empirical analysis on various environments and with
multiple algorithms to demonstrate that the exploration index yields insights
about the exploration behaviour of any RL algorithm, and also allows us to
compare the exploratory behaviours of different RL algorithms.
|
[
{
"created": "Wed, 14 Feb 2024 11:55:50 GMT",
"version": "v1"
}
] |
2024-02-15
|
[
[
"Nkhumise",
"Reabetswe M.",
""
],
[
"Basu",
"Debabrota",
""
],
[
"Prescott",
"Tony J.",
""
],
[
"Gilra",
"Aditya",
""
]
] |
Exploration is the key ingredient of reinforcement learning (RL) that determines the speed and success of learning. Here, we quantify and compare the amount of exploration and learning accomplished by an RL algorithm. Specifically, we propose a novel measure, named Exploration Index, that quantifies the relative effort of knowledge transfer (transferability) by an RL algorithm in comparison to supervised learning (SL) that transforms the initial data distribution of RL to the corresponding final data distribution. The comparison is established by formulating learning in RL as a sequence of SL tasks, and using optimal transport based metrics to compare the total path traversed by the RL and SL algorithms in the data distribution space. We perform extensive empirical analysis on various environments and with multiple algorithms to demonstrate that the exploration index yields insights about the exploration behaviour of any RL algorithm, and also allows us to compare the exploratory behaviours of different RL algorithms.
|
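In one dimension, the optimal-transport machinery reduces to sorted matching, which is enough to sketch the path-length comparison behind the measure. The ratio at the end is an illustrative reading of "path traversed by RL vs the direct path", not the paper's exact definition of the Exploration Index.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D
    samples: sort both and average the coordinate gaps (the optimal
    coupling in 1-D is the sorted matching)."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def path_length(snapshots):
    """Total OT path through a sequence of data distributions --
    summing distances between consecutive snapshots of the learner."""
    return sum(wasserstein_1d(a, b) for a, b in zip(snapshots, snapshots[1:]))

def exploration_ratio(snapshots):
    """Path actually traversed divided by the direct initial-to-final
    distance; > 1 indicates detours, i.e. extra exploration effort."""
    return path_length(snapshots) / wasserstein_1d(snapshots[0], snapshots[-1])
```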
1712.06302
|
Jose Oramas
|
Jose Oramas, Kaili Wang, Tinne Tuytelaars
|
Visual Explanation by Interpretation: Improving Visual Feedback
Capabilities of Deep Neural Networks
|
Accepted at International Conference on Learning Representations
(ICLR) 2019. Project website:
http://homes.esat.kuleuven.be/~joramas/projects/visualExplanationByInterpretation
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interpretation and explanation of deep models is critical towards wide
adoption of systems that rely on them. In this paper, we propose a novel scheme
for both interpretation as well as explanation in which, given a pretrained
model, we automatically identify internal features relevant for the set of
classes considered by the model, without relying on additional annotations. We
interpret the model through average visualizations of this reduced set of
features. Then, at test time, we explain the network prediction by accompanying
the predicted class label with supporting visualizations derived from the
identified features. In addition, we propose a method to address the artifacts
introduced by strided operations in deconvNet-based visualizations. Moreover,
we introduce an8Flower, a dataset specifically designed for objective
quantitative evaluation of methods for visual explanation. Experiments on the
MNIST, ILSVRC12, Fashion144k and an8Flower datasets show that our method
produces detailed explanations with good coverage of relevant features of the
classes of interest.
|
[
{
"created": "Mon, 18 Dec 2017 09:17:44 GMT",
"version": "v1"
},
{
"created": "Tue, 22 May 2018 15:04:25 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Mar 2019 12:11:15 GMT",
"version": "v3"
}
] |
2019-03-11
|
[
[
"Oramas",
"Jose",
""
],
[
"Wang",
"Kaili",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] |
Interpretation and explanation of deep models is critical towards wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation as well as explanation in which, given a pretrained model, we automatically identify internal features relevant for the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of methods for visual explanation. Experiments on the MNIST, ILSVRC12, Fashion144k and an8Flower datasets show that our method produces detailed explanations with good coverage of relevant features of the classes of interest.
|
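The annotation-free identification step can be caricatured as ranking internal features by how class-specific their activations are. The ranking criterion below (class mean minus overall mean) is an assumption for illustration, not the paper's actual selection method.

```python
def relevant_features(activations, labels, cls, top_k=2):
    """Pick the internal features most relevant to class `cls`.

    activations: [examples][features] recorded from a pretrained
    model; labels: predicted class per example. Features are ranked
    by mean activation on `cls` minus mean activation overall -- no
    extra annotations are used, matching the paper's setting.
    """
    n_feat = len(activations[0])
    cls_rows = [a for a, y in zip(activations, labels) if y == cls]
    scores = []
    for j in range(n_feat):
        mean_cls = sum(r[j] for r in cls_rows) / len(cls_rows)
        mean_all = sum(r[j] for r in activations) / len(activations)
        scores.append((mean_cls - mean_all, j))
    return [j for _, j in sorted(scores, reverse=True)[:top_k]]
```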
2210.03948
|
Manne Pavan Reddy
|
Pavan Reddy M. and SaiDhiraj Amuru and Kiran Kuchi
|
Optimizing the Placement and Beamforming of RIS in Cellular Networks: A
System-Level Modeling Perspective
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this letter, we present in detail the system-level modeling of
reconfigurable intelligent surface (RIS)-assisted cellular systems by
considering a 3-dimensional channel model between base station, RIS, and user.
We prove that the optimal placement of RIS to achieve wider coverage is exactly
opposite to the base station, under the constraint of a single RIS in each
sector. We propose a novel beamforming design for RIS-assisted cellular systems
and derive the achievable sum rate in the presence of ideal, discrete, and
random phase shifters at RIS. Through extensive system-level evaluations, we
then show that the proposed beamforming design achieves significant
improvements as compared to the state-of-the-art algorithms.
|
[
{
"created": "Sat, 8 Oct 2022 07:33:54 GMT",
"version": "v1"
},
{
"created": "Tue, 2 May 2023 17:41:09 GMT",
"version": "v2"
}
] |
2023-05-03
|
[
[
"M.",
"Pavan Reddy",
""
],
[
"Amuru",
"SaiDhiraj",
""
],
[
"Kuchi",
"Kiran",
""
]
] |
In this letter, we present in detail the system-level modeling of reconfigurable intelligent surface (RIS)-assisted cellular systems by considering a 3-dimensional channel model between base station, RIS, and user. We prove that the optimal placement of RIS to achieve wider coverage is exactly opposite to the base station, under the constraint of a single RIS in each sector. We propose a novel beamforming design for RIS-assisted cellular systems and derive the achievable sum rate in the presence of ideal, discrete, and random phase shifters at RIS. Through extensive system-level evaluations, we then show that the proposed beamforming design achieves significant improvements as compared to the state-of-the-art algorithms.
|
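For a single-antenna link, the classic co-phasing rule that makes all cascaded RIS paths add coherently is easy to state and illustrates the role of ideal phase shifters; the discrete and random shifters in the abstract would quantize or randomize these phases. The paper's beamforming design for full cellular systems is more involved than this sketch.

```python
import cmath

def align_ris_phases(h_bs_ris, h_ris_ue):
    """Co-phasing for a single-antenna BS-RIS-UE link: choose each
    element's phase shift theta_n so that every cascaded term
    h_n * exp(j*theta_n) * g_n is real and positive, i.e.
    theta_n = -arg(h_n * g_n). With ideal (continuous) shifters this
    maximizes the received power."""
    return [-cmath.phase(h * g) for h, g in zip(h_bs_ris, h_ris_ue)]

def received_gain(h_bs_ris, h_ris_ue, thetas):
    """|sum_n h_n * exp(j*theta_n) * g_n|^2, the combining gain."""
    s = sum(h * cmath.exp(1j * t) * g
            for h, g, t in zip(h_bs_ris, h_ris_ue, thetas))
    return abs(s) ** 2
```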
2106.10464
|
Stanis{\l}aw Ka\'zmierczak
|
Stanis{\l}aw Ka\'zmierczak, Zofia Juszka, Piotr Fudalej, Jacek
Ma\'ndziuk
|
Prediction of the facial growth direction with Machine Learning methods
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
First attempts at predicting the facial growth (FG) direction were made
over half a century ago. Despite numerous attempts and the elapsed time, a
satisfactory method has not been established yet and the problem still poses a
challenge for medical experts. To our knowledge, this paper is the first
Machine Learning approach to the prediction of FG direction. The conducted data
analysis reveals the inherent complexity of the problem and explains the
reasons for the difficulty of FG direction prediction based on 2D X-ray images.
To perform growth forecasting, we employ a wide range of algorithms, from
logistic regression through tree ensembles to neural networks, and consider
three slightly different problem formulations. The resulting classification
accuracy varies between 71% and 75%.
|
[
{
"created": "Sat, 19 Jun 2021 10:12:12 GMT",
"version": "v1"
}
] |
2021-06-22
|
[
[
"Kaźmierczak",
"Stanisław",
""
],
[
"Juszka",
"Zofia",
""
],
[
"Fudalej",
"Piotr",
""
],
[
"Mańdziuk",
"Jacek",
""
]
] |
First attempts at predicting the facial growth (FG) direction were made over half a century ago. Despite numerous attempts and the elapsed time, a satisfactory method has not been established yet and the problem still poses a challenge for medical experts. To our knowledge, this paper is the first Machine Learning approach to the prediction of FG direction. The conducted data analysis reveals the inherent complexity of the problem and explains the reasons for the difficulty of FG direction prediction based on 2D X-ray images. To perform growth forecasting, we employ a wide range of algorithms, from logistic regression through tree ensembles to neural networks, and consider three slightly different problem formulations. The resulting classification accuracy varies between 71% and 75%.
|
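The simplest end of the model range the study sweeps, logistic regression, fits in a few lines of plain Python. The toy one-dimensional data below is illustrative; it stands in for the cephalometric features, which are not described here.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Binary logistic regression trained by stochastic gradient
    descent on the log-loss. X: [examples][features], y: 0/1 labels.
    Returns the weight vector and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # predicted probability
            g = p - yi                             # dloss/dz
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0
```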
1709.10008
|
Valentin Touzeau
|
Valentin Touzeau (1), Claire Ma\"iza (1), David Monniaux (1), Jan
Reineke (2) ((1) VERIMAG-IMAG, (2) Saarland University)
|
Ascertaining Uncertainty for Efficient Exact Cache Analysis
| null |
Rupak Majumdar; Viktor Kuncak. Computer Aided Verification - 29th
International Conference, Jul 2017, Heidelberg, France. Springer, 10427 (2),
pp.20 - 40, 2017, Lecture notes in computer science.
http://cavconference.org/2017/
|
10.1007/978-3-319-63390-9_2
| null |
cs.PL cs.AR cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Static cache analysis characterizes a program's cache behavior by determining
in a sound but approximate manner which memory accesses result in cache hits
and which result in cache misses. Such information is valuable in optimizing
compilers, worst-case execution time analysis, and side-channel attack
quantification and mitigation. Cache analysis is usually performed as a
combination of `must' and `may' abstract interpretations, classifying
instructions as either `always hit', `always miss', or `unknown'. Instructions
classified as `unknown' might result in a hit or a miss depending on program
inputs or the initial cache state. It is equally possible that they do in fact
always hit or always miss, but the cache analysis is too coarse to see it. Our
approach to eliminate this uncertainty consists in (i) a novel abstract
interpretation able to ascertain that a particular instruction may definitely
cause a hit and a miss on different paths, and (ii) an exact analysis, removing
all remaining uncertainty, based on model checking, using
abstract-interpretation results to prune down the model for scalability. We
evaluated our approach on a variety of examples; it notably improves precision
upon classical abstract interpretation at reasonable cost.
|
[
{
"created": "Thu, 28 Sep 2017 15:05:54 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Dec 2018 08:38:34 GMT",
"version": "v2"
}
] |
2021-08-23
|
[
[
"Touzeau",
"Valentin",
"",
"VERIMAG-IMAG"
],
[
"Maïza",
"Claire",
"",
"VERIMAG-IMAG"
],
[
"Monniaux",
"David",
"",
"VERIMAG-IMAG"
],
[
"Reineke",
"Jan",
"",
"Saarland University"
]
] |
Static cache analysis characterizes a program's cache behavior by determining in a sound but approximate manner which memory accesses result in cache hits and which result in cache misses. Such information is valuable in optimizing compilers, worst-case execution time analysis, and side-channel attack quantification and mitigation. Cache analysis is usually performed as a combination of `must' and `may' abstract interpretations, classifying instructions as either `always hit', `always miss', or `unknown'. Instructions classified as `unknown' might result in a hit or a miss depending on program inputs or the initial cache state. It is equally possible that they do in fact always hit or always miss, but the cache analysis is too coarse to see it. Our approach to eliminate this uncertainty consists in (i) a novel abstract interpretation able to ascertain that a particular instruction may definitely cause a hit and a miss on different paths, and (ii) an exact analysis, removing all remaining uncertainty, based on model checking, using abstract-interpretation results to prune down the model for scalability. We evaluated our approach on a variety of examples; it notably improves precision upon classical abstract interpretation at reasonable cost.
|
2106.14885
|
Anastasios Nentidis
|
Anastasios Nentidis, Georgios Katsimpras, Eirini Vandorou, Anastasia
Krithara, Luis Gasco, Martin Krallinger, Georgios Paliouras
|
Overview of BioASQ 2021: The ninth BioASQ challenge on Large-Scale
Biomedical Semantic Indexing and Question Answering
|
25 pages, 15 tables, 3 figures. arXiv admin note: text overlap with
arXiv:2106.14618
|
Candan K.S. et al. (eds) Experimental IR Meets Multilinguality,
Multimodality, and Interaction. CLEF 2021. Lecture Notes in Computer Science,
vol 12880. Springer, Cham
|
10.1007/978-3-030-85251-1_18
| null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advancing the state-of-the-art in large-scale biomedical semantic indexing
and question answering is the main focus of the BioASQ challenge. BioASQ
organizes respective tasks where different teams develop systems that are
evaluated on the same benchmark datasets that represent the real information
needs of experts in the biomedical domain. This paper presents an overview of
the ninth edition of the BioASQ challenge in the context of the Conference and
Labs of the Evaluation Forum (CLEF) 2021. In this year, a new question
answering task, named Synergy, is introduced to support researchers studying
the COVID-19 disease and measure the ability of the participating teams to
discern information while the problem is still developing. In total, 42 teams
with more than 170 systems were registered to participate in the four tasks of
the challenge. The evaluation results, similarly to previous years, show a
performance gain against the baselines which indicates the continuous
improvement of the state-of-the-art in this field.
|
[
{
"created": "Mon, 28 Jun 2021 10:03:11 GMT",
"version": "v1"
}
] |
2021-09-16
|
[
[
"Nentidis",
"Anastasios",
""
],
[
"Katsimpras",
"Georgios",
""
],
[
"Vandorou",
"Eirini",
""
],
[
"Krithara",
"Anastasia",
""
],
[
"Gasco",
"Luis",
""
],
[
"Krallinger",
"Martin",
""
],
[
"Paliouras",
"Georgios",
""
]
] |
Advancing the state-of-the-art in large-scale biomedical semantic indexing and question answering is the main focus of the BioASQ challenge. BioASQ organizes respective tasks where different teams develop systems that are evaluated on the same benchmark datasets that represent the real information needs of experts in the biomedical domain. This paper presents an overview of the ninth edition of the BioASQ challenge in the context of the Conference and Labs of the Evaluation Forum (CLEF) 2021. This year, a new question answering task, named Synergy, is introduced to support researchers studying the COVID-19 disease and measure the ability of the participating teams to discern information while the problem is still developing. In total, 42 teams with more than 170 systems were registered to participate in the four tasks of the challenge. The evaluation results, similarly to previous years, show a performance gain against the baselines which indicates the continuous improvement of the state-of-the-art in this field.
|
1411.3107
|
Xiaoqiang Ren
|
Xiaoqiang Ren, Jiming Chen, Karl H. Johansson and Ling Shi
|
Quickest Change Detection with a Censoring Sensor in the Minimax Setting
| null | null | null | null |
cs.SY cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of quickest change detection with a wireless sensor node is
studied in this paper. The sensor deployed to monitor the environment has
limited energy, which adds an energy constraint to the classical quickest
change detection problem. We consider the "censoring" strategy at the sensor
side, i.e., the
sensor selectively sends its observations to the decision maker. The quickest
change detection problem is formulated in a minimax way. In particular, our
goal is to find the optimal censoring strategy and stopping time such that the
detection delay is minimized subject to constraints on both average run length
(ARL) and average energy cost before the change. We show that the censoring
strategy that has the maximal post-censoring Kullback-Leibler (K-L) divergence
coupled with the Cumulative Sum (CuSum) and Shiryaev-Roberts-Pollak (SRP)
detection procedures is asymptotically optimal for Lorden's and Pollak's
problems as
the ARL goes to infinity, respectively. We also show that the asymptotically
optimal censoring strategy should use up the available energy and has a very
special structure, i.e., the likelihood ratio of the no send region is a single
interval, which can be utilized to significantly reduce the computational
complexity. Numerical examples are shown to illustrate our results.
|
[
{
"created": "Wed, 12 Nov 2014 09:10:01 GMT",
"version": "v1"
}
] |
2014-11-13
|
[
[
"Ren",
"Xiaoqiang",
""
],
[
"Chen",
"Jiming",
""
],
[
"Johansson",
"Karl H.",
""
],
[
"Shi",
"Ling",
""
]
] |
The problem of quickest change detection with a wireless sensor node is studied in this paper. The sensor deployed to monitor the environment has limited energy, which adds an energy constraint to the classical quickest change detection problem. We consider the "censoring" strategy at the sensor side, i.e., the sensor selectively sends its observations to the decision maker. The quickest change detection problem is formulated in a minimax way. In particular, our goal is to find the optimal censoring strategy and stopping time such that the detection delay is minimized subject to constraints on both average run length (ARL) and average energy cost before the change. We show that the censoring strategy that has the maximal post-censoring Kullback-Leibler (K-L) divergence coupled with the Cumulative Sum (CuSum) and Shiryaev-Roberts-Pollak (SRP) detection procedures is asymptotically optimal for Lorden's and Pollak's problems as the ARL goes to infinity, respectively. We also show that the asymptotically optimal censoring strategy should use up the available energy and has a very special structure, i.e., the likelihood ratio of the no send region is a single interval, which can be utilized to significantly reduce the computational complexity. Numerical examples are shown to illustrate our results.
|
2005.07926
|
Savvas Zannettou
|
Savvas Zannettou, Mai ElSherief, Elizabeth Belding, Shirin Nilizadeh,
Gianluca Stringhini
|
Measuring and Characterizing Hate Speech on News Websites
|
Accepted at WebSci'20
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Web has become the main source for news acquisition. At the same time,
news discussion has become more social: users can post comments on news
articles or discuss news articles on other platforms like Reddit. These
features empower and enable discussions among the users; however, they also act
as the medium for the dissemination of toxic discourse and hate speech. The
research community lacks a general understanding of what type of content
attracts hateful discourse and the possible effects of social networks on the
commenting activity on news articles. In this work, we perform a large-scale
quantitative analysis of 125M comments posted on 412K news articles over the
course of 19 months. We analyze the content of the collected articles and their
comments using temporal analysis, user-based analysis, and linguistic analysis,
to shed light on what elements attract hateful comments on news articles. We
also investigate commenting activity when an article is posted on either
4chan's Politically Incorrect board (/pol/) or six selected subreddits. We find
statistically significant increases in hateful commenting activity around
real-world divisive events like the "Unite the Right" rally in Charlottesville
and political events like the second and third 2016 US presidential debates.
Also, we find that articles that attract a substantial number of hateful
comments have different linguistic characteristics when compared to articles
that do not attract hateful comments. Furthermore, we observe that the posting
of a news article on either /pol/ or the six subreddits is correlated with an
increase in (hateful) commenting activity on the news articles.
|
[
{
"created": "Sat, 16 May 2020 09:59:01 GMT",
"version": "v1"
}
] |
2020-05-19
|
[
[
"Zannettou",
"Savvas",
""
],
[
"ElSherief",
"Mai",
""
],
[
"Belding",
"Elizabeth",
""
],
[
"Nilizadeh",
"Shirin",
""
],
[
"Stringhini",
"Gianluca",
""
]
] |
The Web has become the main source for news acquisition. At the same time, news discussion has become more social: users can post comments on news articles or discuss news articles on other platforms like Reddit. These features empower and enable discussions among the users; however, they also act as the medium for the dissemination of toxic discourse and hate speech. The research community lacks a general understanding of what type of content attracts hateful discourse and the possible effects of social networks on the commenting activity on news articles. In this work, we perform a large-scale quantitative analysis of 125M comments posted on 412K news articles over the course of 19 months. We analyze the content of the collected articles and their comments using temporal analysis, user-based analysis, and linguistic analysis, to shed light on what elements attract hateful comments on news articles. We also investigate commenting activity when an article is posted on either 4chan's Politically Incorrect board (/pol/) or six selected subreddits. We find statistically significant increases in hateful commenting activity around real-world divisive events like the "Unite the Right" rally in Charlottesville and political events like the second and third 2016 US presidential debates. Also, we find that articles that attract a substantial number of hateful comments have different linguistic characteristics when compared to articles that do not attract hateful comments. Furthermore, we observe that the posting of a news article on either /pol/ or the six subreddits is correlated with an increase in (hateful) commenting activity on the news articles.
|
0804.1840
|
Sudhir Singh
|
Aditya Ramamoorthy, Vwani Roychowdhury, Sudhir Kumar Singh
|
Selfish Distributed Compression over Networks: Correlation Induces
Anarchy
|
replaced with revised version, 32 pages, 2 figures
| null | null | null |
cs.GT cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the min-cost multicast problem (under network coding) with
multiple correlated sources where each terminal wants to losslessly reconstruct
all the sources. We study the inefficiency brought forth by the selfish
behavior of the terminals in this scenario by modeling it as a noncooperative
game among the terminals. The degradation in performance due to the lack of
regulation is measured by the {\it Price of Anarchy} (POA), which is defined as
the ratio between the cost of the worst possible \textit{Wardrop equilibrium}
and the socially optimum cost. Our main result is that in contrast with the
case of independent sources, the presence of source correlations can
significantly increase the price of anarchy. Towards establishing this result,
we first characterize the socially optimal flow and rate allocation in terms of
four intuitive conditions. Next, we show that the Wardrop equilibrium is a
socially optimal solution for a different set of (related) cost functions.
Using this, we construct explicit examples that demonstrate that the POA $> 1$
and determine near-tight upper bounds on the POA as well. The main techniques
in our analysis are Lagrangian duality theory and the usage of the
supermodularity of conditional entropy.
|
[
{
"created": "Fri, 11 Apr 2008 07:22:39 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Mar 2009 21:23:31 GMT",
"version": "v2"
}
] |
2009-03-01
|
[
[
"Ramamoorthy",
"Aditya",
""
],
[
"Roychowdhury",
"Vwani",
""
],
[
"Singh",
"Sudhir Kumar",
""
]
] |
We consider the min-cost multicast problem (under network coding) with multiple correlated sources where each terminal wants to losslessly reconstruct all the sources. We study the inefficiency brought forth by the selfish behavior of the terminals in this scenario by modeling it as a noncooperative game among the terminals. The degradation in performance due to the lack of regulation is measured by the {\it Price of Anarchy} (POA), which is defined as the ratio between the cost of the worst possible \textit{Wardrop equilibrium} and the socially optimum cost. Our main result is that in contrast with the case of independent sources, the presence of source correlations can significantly increase the price of anarchy. Towards establishing this result, we first characterize the socially optimal flow and rate allocation in terms of four intuitive conditions. Next, we show that the Wardrop equilibrium is a socially optimal solution for a different set of (related) cost functions. Using this, we construct explicit examples that demonstrate that the POA $> 1$ and determine near-tight upper bounds on the POA as well. The main techniques in our analysis are Lagrangian duality theory and the usage of the supermodularity of conditional entropy.
|
2401.15071
|
Zhenfei Yin
|
Chaochao Lu, Chen Qian, Guodong Zheng, Hongxing Fan, Hongzhi Gao, Jie
Zhang, Jing Shao, Jingyi Deng, Jinlan Fu, Kexin Huang, Kunchang Li, Lijun Li,
Limin Wang, Lu Sheng, Meiqi Chen, Ming Zhang, Qibing Ren, Sirui Chen, Tao
Gui, Wanli Ouyang, Yali Wang, Yan Teng, Yaru Wang, Yi Wang, Yinan He,
Yingchun Wang, Yixu Wang, Yongting Zhang, Yu Qiao, Yujiong Shen, Yurong Mou,
Yuxi Chen, Zaibin Zhang, Zhelun Shi, Zhenfei Yin, Zhipin Wang
|
From GPT-4 to Gemini and Beyond: Assessing the Landscape of MLLMs on
Generalizability, Trustworthiness and Causality through Four Modalities
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal Large Language Models (MLLMs) have shown impressive abilities in
generating reasonable responses with respect to multi-modal contents. However,
there is still a wide gap between the performance of recent MLLM-based
applications and the expectation of the broad public, even though the most
powerful OpenAI's GPT-4 and Google's Gemini have been deployed. This paper
strives to enhance understanding of the gap through the lens of a qualitative
study on the generalizability, trustworthiness, and causal reasoning
capabilities of recent proprietary and open-source MLLMs across four
modalities: i.e., text, code, image, and video, ultimately aiming to improve the
transparency of MLLMs. We believe these properties are several representative
factors that define the reliability of MLLMs, in supporting various downstream
applications. To be specific, we evaluate the closed-source GPT-4 and Gemini
and 6 open-source LLMs and MLLMs. Overall we evaluate 230 manually designed
cases, where the qualitative results are then summarized into 12 scores (i.e., 4
modalities times 3 properties). In total, we uncover 14 empirical findings that
are useful to understand the capabilities and limitations of both proprietary
and open-source MLLMs, towards more reliable downstream multi-modal
applications.
|
[
{
"created": "Fri, 26 Jan 2024 18:53:03 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jan 2024 15:18:45 GMT",
"version": "v2"
}
] |
2024-01-30
|
[
[
"Lu",
"Chaochao",
""
],
[
"Qian",
"Chen",
""
],
[
"Zheng",
"Guodong",
""
],
[
"Fan",
"Hongxing",
""
],
[
"Gao",
"Hongzhi",
""
],
[
"Zhang",
"Jie",
""
],
[
"Shao",
"Jing",
""
],
[
"Deng",
"Jingyi",
""
],
[
"Fu",
"Jinlan",
""
],
[
"Huang",
"Kexin",
""
],
[
"Li",
"Kunchang",
""
],
[
"Li",
"Lijun",
""
],
[
"Wang",
"Limin",
""
],
[
"Sheng",
"Lu",
""
],
[
"Chen",
"Meiqi",
""
],
[
"Zhang",
"Ming",
""
],
[
"Ren",
"Qibing",
""
],
[
"Chen",
"Sirui",
""
],
[
"Gui",
"Tao",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Yali",
""
],
[
"Teng",
"Yan",
""
],
[
"Wang",
"Yaru",
""
],
[
"Wang",
"Yi",
""
],
[
"He",
"Yinan",
""
],
[
"Wang",
"Yingchun",
""
],
[
"Wang",
"Yixu",
""
],
[
"Zhang",
"Yongting",
""
],
[
"Qiao",
"Yu",
""
],
[
"Shen",
"Yujiong",
""
],
[
"Mou",
"Yurong",
""
],
[
"Chen",
"Yuxi",
""
],
[
"Zhang",
"Zaibin",
""
],
[
"Shi",
"Zhelun",
""
],
[
"Yin",
"Zhenfei",
""
],
[
"Wang",
"Zhipin",
""
]
] |
Multi-modal Large Language Models (MLLMs) have shown impressive abilities in generating reasonable responses with respect to multi-modal contents. However, there is still a wide gap between the performance of recent MLLM-based applications and the expectation of the broad public, even though the most powerful OpenAI's GPT-4 and Google's Gemini have been deployed. This paper strives to enhance understanding of the gap through the lens of a qualitative study on the generalizability, trustworthiness, and causal reasoning capabilities of recent proprietary and open-source MLLMs across four modalities: i.e., text, code, image, and video, ultimately aiming to improve the transparency of MLLMs. We believe these properties are several representative factors that define the reliability of MLLMs, in supporting various downstream applications. To be specific, we evaluate the closed-source GPT-4 and Gemini and 6 open-source LLMs and MLLMs. Overall we evaluate 230 manually designed cases, where the qualitative results are then summarized into 12 scores (i.e., 4 modalities times 3 properties). In total, we uncover 14 empirical findings that are useful to understand the capabilities and limitations of both proprietary and open-source MLLMs, towards more reliable downstream multi-modal applications.
|
2306.11426
|
Ioannis Panopoulos
|
Ioannis Panopoulos, Sokratis Nikolaidis, Stylianos I. Venieris,
Iakovos S. Venieris
|
Exploring the Performance and Efficiency of Transformer Models for NLP
on Mobile Devices
|
Accepted at the 3rd IEEE International Workshop on Distributed
Intelligent Systems (DistInSys), 2023
| null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning (DL) is characterised by its dynamic nature, with new deep
neural network (DNN) architectures and approaches emerging every few years,
driving the field's advancement. At the same time, the ever-increasing use of
mobile devices (MDs) has resulted in a surge of DNN-based mobile applications.
Although traditional architectures, like CNNs and RNNs, have been successfully
integrated into MDs, this is not the case for Transformers, a relatively new
model family that has achieved new levels of accuracy across AI tasks, but
poses significant computational challenges. In this work, we aim to make steps
towards bridging this gap by examining the current state of Transformers'
on-device execution. To this end, we construct a benchmark of representative
models and thoroughly evaluate their performance across MDs with different
computational capabilities. Our experimental results show that Transformers are
not accelerator-friendly and indicate the need for software and hardware
optimisations to achieve efficient deployment.
|
[
{
"created": "Tue, 20 Jun 2023 10:15:01 GMT",
"version": "v1"
}
] |
2023-07-25
|
[
[
"Panopoulos",
"Ioannis",
""
],
[
"Nikolaidis",
"Sokratis",
""
],
[
"Venieris",
"Stylianos I.",
""
],
[
"Venieris",
"Iakovos S.",
""
]
] |
Deep learning (DL) is characterised by its dynamic nature, with new deep neural network (DNN) architectures and approaches emerging every few years, driving the field's advancement. At the same time, the ever-increasing use of mobile devices (MDs) has resulted in a surge of DNN-based mobile applications. Although traditional architectures, like CNNs and RNNs, have been successfully integrated into MDs, this is not the case for Transformers, a relatively new model family that has achieved new levels of accuracy across AI tasks, but poses significant computational challenges. In this work, we aim to make steps towards bridging this gap by examining the current state of Transformers' on-device execution. To this end, we construct a benchmark of representative models and thoroughly evaluate their performance across MDs with different computational capabilities. Our experimental results show that Transformers are not accelerator-friendly and indicate the need for software and hardware optimisations to achieve efficient deployment.
|
1801.06267
|
Kevin Moran P
|
Mario Linares Vasquez, Kevin Moran, and Denys Poshyvanyk
|
Continuous, Evolutionary and Large-Scale: A New Perspective for
Automated Mobile App Testing
|
12 pages, accepted to the Proceedings of 33rd IEEE International
Conference on Software Maintenance and Evolution (ICSME'17)
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile app development involves a unique set of challenges including device
fragmentation and rapidly evolving platforms, making testing a difficult task.
The design space for a comprehensive mobile testing strategy includes features,
inputs, potential contextual app states, and large combinations of devices and
underlying platforms. Therefore, automated testing is an essential activity of
the development process. However, the current state of the art in automated
testing tools for mobile apps poses limitations that have driven a preference
for manual
testing in practice. As of today, there is no comprehensive automated solution
for mobile testing that overcomes fundamental issues such as automated oracles,
history awareness in test cases, or automated evolution of test cases.
In this perspective paper we survey the current state of the art in terms of
the frameworks, tools, and services available to developers to aid in mobile
testing, highlighting present shortcomings. Next, we provide commentary on
current key challenges that restrict the possibility of a comprehensive,
effective, and practical automated testing solution. Finally, we offer our
vision of a comprehensive mobile app testing framework, complete with research
agenda, that is succinctly summarized along three principles: Continuous,
Evolutionary and Large-scale (CEL).
|
[
{
"created": "Fri, 19 Jan 2018 01:58:56 GMT",
"version": "v1"
}
] |
2018-01-22
|
[
[
"Vasquez",
"Mario Linares",
""
],
[
"Moran",
"Kevin",
""
],
[
"Poshyvanyk",
"Denys",
""
]
] |
Mobile app development involves a unique set of challenges including device fragmentation and rapidly evolving platforms, making testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Therefore, automated testing is an essential activity of the development process. However, the current state of the art in automated testing tools for mobile apps poses limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases. In this perspective paper we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we provide commentary on current key challenges that restrict the possibility of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with research agenda, that is succinctly summarized along three principles: Continuous, Evolutionary and Large-scale (CEL).
|
2106.13306
|
Dingwen Tao
|
Chengming Zhang, Sian Jin, Tong Geng, Jiannan Tian, Ang Li, Dingwen
Tao
|
CEAZ: Accelerating Parallel I/O via Hardware-Algorithm Co-Designed
Adaptive Lossy Compression
|
13 pages, 15 figures, 8 tables, accepted by ACM ICS '22
| null |
10.1145/3524059.3532362
| null |
cs.DC cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
As HPC systems continue to grow to exascale, the amount of data that needs to
be saved or transmitted is exploding. To this end, many previous works have
studied using error-bounded lossy compressors to reduce the data size and
improve the I/O performance. However, little work has been done for effectively
offloading lossy compression onto FPGA-based SmartNICs to reduce the
compression overhead. In this paper, we propose a hardware-algorithm codesign
for an efficient and adaptive lossy compressor for scientific data on FPGAs
(called CEAZ), which is the first lossy compressor that can achieve high
compression ratios and throughputs simultaneously. Specifically, we propose an
efficient Huffman coding approach that can adaptively update Huffman codewords
online based on codewords generated offline, from a variety of representative
scientific datasets. Moreover, we derive a theoretical analysis to support a
precise control of compression ratio under an error-bounded compression mode,
enabling accurate offline Huffman codewords generation. This also helps us
create a fixed-ratio compression mode for consistent throughput. In addition,
we develop an efficient compression pipeline by adopting cuSZ's
dual-quantization algorithm to our hardware use cases. Finally, we evaluate
CEAZ on five real-world datasets with both a single FPGA board and 128 nodes
(to accelerate parallel I/O). Experiments show that CEAZ outperforms the
second-best FPGA-based lossy compressor by 2.3X of throughput and 3.0X of
ratio. It also improves MPI_File_write and MPI_Gather throughputs by up to
28.9X and 37.8X, respectively.
|
[
{
"created": "Thu, 24 Jun 2021 20:26:52 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Oct 2021 22:17:53 GMT",
"version": "v2"
},
{
"created": "Fri, 13 May 2022 04:22:58 GMT",
"version": "v3"
}
] |
2022-05-16
|
[
[
"Zhang",
"Chengming",
""
],
[
"Jin",
"Sian",
""
],
[
"Geng",
"Tong",
""
],
[
"Tian",
"Jiannan",
""
],
[
"Li",
"Ang",
""
],
[
"Tao",
"Dingwen",
""
]
] |
As HPC systems continue to grow to exascale, the amount of data that needs to be saved or transmitted is exploding. To this end, many previous works have studied using error-bounded lossy compressors to reduce the data size and improve the I/O performance. However, little work has been done for effectively offloading lossy compression onto FPGA-based SmartNICs to reduce the compression overhead. In this paper, we propose a hardware-algorithm codesign for an efficient and adaptive lossy compressor for scientific data on FPGAs (called CEAZ), which is the first lossy compressor that can achieve high compression ratios and throughputs simultaneously. Specifically, we propose an efficient Huffman coding approach that can adaptively update Huffman codewords online based on codewords generated offline, from a variety of representative scientific datasets. Moreover, we derive a theoretical analysis to support a precise control of compression ratio under an error-bounded compression mode, enabling accurate offline Huffman codewords generation. This also helps us create a fixed-ratio compression mode for consistent throughput. In addition, we develop an efficient compression pipeline by adopting cuSZ's dual-quantization algorithm to our hardware use cases. Finally, we evaluate CEAZ on five real-world datasets with both a single FPGA board and 128 nodes (to accelerate parallel I/O). Experiments show that CEAZ outperforms the second-best FPGA-based lossy compressor by 2.3X of throughput and 3.0X of ratio. It also improves MPI_File_write and MPI_Gather throughputs by up to 28.9X and 37.8X, respectively.
|
1904.09029
|
Spyros Chatzivasileiadis
|
Jos\'e-Mar\'ia Hidalgo-Arteaga, Fiodar Hancharou, Florian Thams,
Spyros Chatzivasileiadis
|
Deep Learning for Power System Security Assessment
|
Accepted at IEEE Powertech 2019, Milan, Italy
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security assessment is among the most fundamental functions of a power system
operator. The sheer complexity of power systems exceeding a few buses, however,
makes it an extremely computationally demanding task. The emergence of deep
learning methods that are able to handle immense amounts of data, and infer
valuable information appears as a promising alternative. This paper has two
main contributions. First, inspired by the remarkable performance of
convolutional neural networks for image processing, we represent for the first
time power system snapshots as 2-dimensional images, thus taking advantage of
the wide range of deep learning methods available for image processing. Second,
we train deep neural networks on a large database for the NESTA 162-bus system
to assess both N-1 security and small-signal stability. We find that our
approach is over 255 times faster than a standard small-signal stability
assessment, and it can correctly determine unsafe points with over 99%
accuracy.
|
[
{
"created": "Sun, 31 Mar 2019 20:07:44 GMT",
"version": "v1"
}
] |
2019-04-22
|
[
[
"Hidalgo-Arteaga",
"José-María",
""
],
[
"Hancharou",
"Fiodar",
""
],
[
"Thams",
"Florian",
""
],
[
"Chatzivasileiadis",
"Spyros",
""
]
] |
Security assessment is among the most fundamental functions of a power system operator. The sheer complexity of power systems exceeding a few buses, however, makes it an extremely computationally demanding task. The emergence of deep learning methods that are able to handle immense amounts of data, and infer valuable information appears as a promising alternative. This paper has two main contributions. First, inspired by the remarkable performance of convolutional neural networks for image processing, we represent for the first time power system snapshots as 2-dimensional images, thus taking advantage of the wide range of deep learning methods available for image processing. Second, we train deep neural networks on a large database for the NESTA 162-bus system to assess both N-1 security and small-signal stability. We find that our approach is over 255 times faster than a standard small-signal stability assessment, and it can correctly determine unsafe points with over 99% accuracy.
|
2306.16394
|
Zihan Zhang
|
Zihan Zhang and Qiaomin Xie
|
Sharper Model-free Reinforcement Learning for Average-reward Markov
Decision Processes
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop several provably efficient model-free reinforcement learning (RL)
algorithms for infinite-horizon average-reward Markov Decision Processes
(MDPs). We consider both online setting and the setting with access to a
simulator. In the online setting, we propose model-free RL algorithms based on
reference-advantage decomposition. Our algorithm achieves
$\widetilde{O}(S^5A^2\mathrm{sp}(h^*)\sqrt{T})$ regret after $T$ steps, where
$S\times A$ is the size of the state-action space, and
$\mathrm{sp}(h^*)$ is the span of the optimal bias function. Our results are the
first to achieve optimal dependence in $T$ for weakly communicating MDPs.
In the simulator setting, we propose a model-free RL algorithm that finds an
$\epsilon$-optimal policy using $\widetilde{O}
\left(\frac{SA\mathrm{sp}^2(h^*)}{\epsilon^2}+\frac{S^2A\mathrm{sp}(h^*)}{\epsilon}
\right)$ samples, whereas the minimax lower bound is
$\Omega\left(\frac{SA\mathrm{sp}(h^*)}{\epsilon^2}\right)$.
Our results are based on two new techniques that are unique in the
average-reward setting: 1) better discounted approximation by value-difference
estimation; 2) efficient construction of confidence region for the optimal bias
function with space complexity $O(SA)$.
|
[
{
"created": "Wed, 28 Jun 2023 17:43:19 GMT",
"version": "v1"
}
] |
2023-06-29
|
[
[
"Zhang",
"Zihan",
""
],
[
"Xie",
"Qiaomin",
""
]
] |
We develop several provably efficient model-free reinforcement learning (RL) algorithms for infinite-horizon average-reward Markov Decision Processes (MDPs). We consider both the online setting and the setting with access to a simulator. In the online setting, we propose model-free RL algorithms based on reference-advantage decomposition. Our algorithm achieves $\widetilde{O}(S^5A^2\mathrm{sp}(h^*)\sqrt{T})$ regret after $T$ steps, where $S\times A$ is the size of the state-action space, and $\mathrm{sp}(h^*)$ is the span of the optimal bias function. Our results are the first to achieve optimal dependence in $T$ for weakly communicating MDPs. In the simulator setting, we propose a model-free RL algorithm that finds an $\epsilon$-optimal policy using $\widetilde{O} \left(\frac{SA\mathrm{sp}^2(h^*)}{\epsilon^2}+\frac{S^2A\mathrm{sp}(h^*)}{\epsilon} \right)$ samples, whereas the minimax lower bound is $\Omega\left(\frac{SA\mathrm{sp}(h^*)}{\epsilon^2}\right)$. Our results are based on two new techniques that are unique in the average-reward setting: 1) better discounted approximation by value-difference estimation; 2) efficient construction of confidence region for the optimal bias function with space complexity $O(SA)$.
|
2406.12084
|
Yebowen Hu
|
Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Wenlin Yao,
Hassan Foroosh, Dong Yu, Fei Liu
|
When Reasoning Meets Information Aggregation: A Case Study with Sports
Narratives
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reasoning is most powerful when an LLM accurately aggregates relevant
information. We examine the critical role of information aggregation in
reasoning by requiring the LLM to analyze sports narratives. To succeed at this
task, an LLM must infer points from actions, identify related entities,
attribute points accurately to players and teams, and compile key statistics to
draw conclusions. We conduct comprehensive experiments with real NBA basketball
data and present SportsGen, a new method to synthesize game narratives. By
synthesizing data, we can rigorously evaluate LLMs' reasoning capabilities
under complex scenarios with varying narrative lengths and density of
information. Our findings show that most models, including GPT-4o, often fail
to accurately aggregate basketball scores due to frequent scoring patterns.
Open-source models like Llama-3 further suffer from significant score
hallucinations. Finally, the effectiveness of reasoning is influenced by
narrative complexity, information density, and domain-specific terms,
highlighting the challenges in analytical reasoning tasks.
|
[
{
"created": "Mon, 17 Jun 2024 20:49:35 GMT",
"version": "v1"
}
] |
2024-06-19
|
[
[
"Hu",
"Yebowen",
""
],
[
"Song",
"Kaiqiang",
""
],
[
"Cho",
"Sangwoo",
""
],
[
"Wang",
"Xiaoyang",
""
],
[
"Yao",
"Wenlin",
""
],
[
"Foroosh",
"Hassan",
""
],
[
"Yu",
"Dong",
""
],
[
"Liu",
"Fei",
""
]
] |
Reasoning is most powerful when an LLM accurately aggregates relevant information. We examine the critical role of information aggregation in reasoning by requiring the LLM to analyze sports narratives. To succeed at this task, an LLM must infer points from actions, identify related entities, attribute points accurately to players and teams, and compile key statistics to draw conclusions. We conduct comprehensive experiments with real NBA basketball data and present SportsGen, a new method to synthesize game narratives. By synthesizing data, we can rigorously evaluate LLMs' reasoning capabilities under complex scenarios with varying narrative lengths and density of information. Our findings show that most models, including GPT-4o, often fail to accurately aggregate basketball scores due to frequent scoring patterns. Open-source models like Llama-3 further suffer from significant score hallucinations. Finally, the effectiveness of reasoning is influenced by narrative complexity, information density, and domain-specific terms, highlighting the challenges in analytical reasoning tasks.
|
2109.01254
|
Krishna Kant
|
Sanjeev Sondur and Krishna Kant
|
Performance Health Index for Complex Cyber Infrastructures
|
27 pages
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Most IT systems depend on a set of configuration variables (CVs), expressed
as name/value pairs, that collectively define the resource allocation for the
system. While the ill-effects of misconfiguration or improper resource
allocation are well-known, there is no effective a priori metric to quantify
the impact of the configuration on the desired system attributes such as
performance, availability, etc. In this paper, we propose a
\textit{Configuration Health Index} (CHI) framework specifically attuned to the
performance attribute to capture the influence of CVs on the performance
aspects of the system. We show how CHI, which is defined as a configuration
scoring system, can take advantage of the domain knowledge and the available
(but rather limited) performance data to produce important insights into the
configuration settings. We compare the CHI with both well-advertised segmented
non-linear models and state-of-the-art data-driven models, and show that the
CHI not only consistently provides better results but also avoids the dangers
of a purely data-driven approach, which may predict incorrect behavior or
eliminate some essential configuration variables from consideration.
|
[
{
"created": "Fri, 3 Sep 2021 00:37:42 GMT",
"version": "v1"
}
] |
2021-09-06
|
[
[
"Sondur",
"Sanjeev",
""
],
[
"Kant",
"Krishna",
""
]
] |
Most IT systems depend on a set of configuration variables (CVs), expressed as name/value pairs, that collectively define the resource allocation for the system. While the ill-effects of misconfiguration or improper resource allocation are well-known, there is no effective a priori metric to quantify the impact of the configuration on the desired system attributes such as performance, availability, etc. In this paper, we propose a \textit{Configuration Health Index} (CHI) framework specifically attuned to the performance attribute to capture the influence of CVs on the performance aspects of the system. We show how CHI, which is defined as a configuration scoring system, can take advantage of the domain knowledge and the available (but rather limited) performance data to produce important insights into the configuration settings. We compare the CHI with both well-advertised segmented non-linear models and state-of-the-art data-driven models, and show that the CHI not only consistently provides better results but also avoids the dangers of a purely data-driven approach, which may predict incorrect behavior or eliminate some essential configuration variables from consideration.
|
2010.04985
|
Marcel Dall'Agnol
|
Marcel Dall'Agnol, Tom Gur and Oded Lachish
|
A Structural Theorem for Local Algorithms with Applications to Coding,
Testing, and Verification
| null |
SIAM J. Comput., 52 (2023), pp. 1413-1463
|
10.1137/21M1422781
| null |
cs.CC cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove a general structural theorem for a wide family of local algorithms,
which includes property testers, local decoders, and PCPs of proximity. Namely,
we show that the structure of every algorithm that makes $q$ adaptive queries
and satisfies a natural robustness condition admits a sample-based algorithm
with $n^{1- 1/O(q^2 \log^2 q)}$ sample complexity, following the definition of
Goldreich and Ron (TOCT 2016). We prove that this transformation is nearly
optimal. Our theorem also admits a scheme for constructing privacy-preserving
local algorithms. Using the unified view that our structural theorem provides,
we obtain results regarding various types of local algorithms, including the
following.
- We strengthen the state-of-the-art lower bound for relaxed locally
decodable codes, obtaining an exponential improvement on the dependency in
query complexity; this resolves an open problem raised by Gur and Lachish
(SICOMP 2021).
- We show that any (constant-query) testable property admits a sample-based
tester with sublinear sample complexity; this resolves a problem left open in a
work of Fischer, Lachish, and Vasudev (FOCS 2015) by extending their main
result to adaptive testers.
- We prove that the known separation between proofs of proximity and testers
is essentially maximal; this resolves a problem left open by Gur and Rothblum
(ECCC 2013, Computational Complexity 2018) regarding sublinear-time delegation
of computation.
Our techniques strongly rely on relaxed sunflower lemmas and the
Hajnal-Szemer\'edi theorem.
|
[
{
"created": "Sat, 10 Oct 2020 12:46:42 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Dec 2023 16:00:36 GMT",
"version": "v2"
}
] |
2023-12-13
|
[
[
"Dall'Agnol",
"Marcel",
""
],
[
"Gur",
"Tom",
""
],
[
"Lachish",
"Oded",
""
]
] |
We prove a general structural theorem for a wide family of local algorithms, which includes property testers, local decoders, and PCPs of proximity. Namely, we show that the structure of every algorithm that makes $q$ adaptive queries and satisfies a natural robustness condition admits a sample-based algorithm with $n^{1- 1/O(q^2 \log^2 q)}$ sample complexity, following the definition of Goldreich and Ron (TOCT 2016). We prove that this transformation is nearly optimal. Our theorem also admits a scheme for constructing privacy-preserving local algorithms. Using the unified view that our structural theorem provides, we obtain results regarding various types of local algorithms, including the following. - We strengthen the state-of-the-art lower bound for relaxed locally decodable codes, obtaining an exponential improvement on the dependency in query complexity; this resolves an open problem raised by Gur and Lachish (SICOMP 2021). - We show that any (constant-query) testable property admits a sample-based tester with sublinear sample complexity; this resolves a problem left open in a work of Fischer, Lachish, and Vasudev (FOCS 2015) by extending their main result to adaptive testers. - We prove that the known separation between proofs of proximity and testers is essentially maximal; this resolves a problem left open by Gur and Rothblum (ECCC 2013, Computational Complexity 2018) regarding sublinear-time delegation of computation. Our techniques strongly rely on relaxed sunflower lemmas and the Hajnal-Szemer\'edi theorem.
|
2206.13396
|
Brandon Trabucco
|
Brandon Trabucco, Gunnar Sigurdsson, Robinson Piramuthu, Gaurav S.
Sukhatme, Ruslan Salakhutdinov
|
A Simple Approach for Visual Rearrangement: 3D Mapping and Semantic
Search
|
Winner of the Rearrangement Challenge at CVPR 2022
| null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physically rearranging objects is an important capability for embodied
agents. Visual room rearrangement evaluates an agent's ability to rearrange
objects in a room to a desired goal based solely on visual input. We propose a
simple yet effective method for this problem: (1) search for and map which
objects need to be rearranged, and (2) rearrange each object until the task is
complete. Our approach consists of an off-the-shelf semantic segmentation
model, voxel-based semantic map, and semantic search policy to efficiently find
objects that need to be rearranged. On the AI2-THOR Rearrangement Challenge,
our method improves on current state-of-the-art end-to-end reinforcement
learning-based methods that learn visual rearrangement policies from 0.53%
correct rearrangement to 16.56%, using only 2.7% as many samples from the
environment.
|
[
{
"created": "Tue, 21 Jun 2022 02:33:57 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Aug 2022 20:47:35 GMT",
"version": "v2"
}
] |
2022-08-11
|
[
[
"Trabucco",
"Brandon",
""
],
[
"Sigurdsson",
"Gunnar",
""
],
[
"Piramuthu",
"Robinson",
""
],
[
"Sukhatme",
"Gaurav S.",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] |
Physically rearranging objects is an important capability for embodied agents. Visual room rearrangement evaluates an agent's ability to rearrange objects in a room to a desired goal based solely on visual input. We propose a simple yet effective method for this problem: (1) search for and map which objects need to be rearranged, and (2) rearrange each object until the task is complete. Our approach consists of an off-the-shelf semantic segmentation model, voxel-based semantic map, and semantic search policy to efficiently find objects that need to be rearranged. On the AI2-THOR Rearrangement Challenge, our method improves on current state-of-the-art end-to-end reinforcement learning-based methods that learn visual rearrangement policies from 0.53% correct rearrangement to 16.56%, using only 2.7% as many samples from the environment.
|
1611.01962
|
Emmanuel Maggiori
|
Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat and Pierre
Alliez
|
High-Resolution Semantic Labeling with Convolutional Neural Networks
| null | null |
10.1109/TGRS.2017.2740362
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks (CNNs) have received increasing attention over
the last few years. They were initially conceived for image categorization,
i.e., the problem of assigning a semantic label to an entire input image.
In this paper we address the problem of dense semantic labeling, which
consists of assigning a semantic label to every pixel in an image. Since this
requires a high spatial accuracy to determine where labels are assigned,
categorization CNNs, intended to be highly robust to local deformations, are
not directly applicable.
By adapting categorization networks, many semantic labeling CNNs have been
recently proposed. Our first contribution is an in-depth analysis of these
architectures. We establish the desired properties of an ideal semantic
labeling CNN, and assess how those methods stand with regard to these
properties. We observe that even though they provide competitive results, these
CNNs often underexploit properties of semantic labeling that could lead to more
effective and efficient architectures.
Out of these observations, we then derive a CNN framework specifically
adapted to the semantic labeling problem. In addition to learning features at
different resolutions, it learns how to combine these features. By integrating
local and global information in an efficient and flexible manner, it
outperforms previous techniques. We evaluate the proposed framework and compare
it with state-of-the-art architectures on public benchmarks of high-resolution
aerial image labeling.
|
[
{
"created": "Mon, 7 Nov 2016 10:02:49 GMT",
"version": "v1"
}
] |
2018-02-14
|
[
[
"Maggiori",
"Emmanuel",
""
],
[
"Tarabalka",
"Yuliya",
""
],
[
"Charpiat",
"Guillaume",
""
],
[
"Alliez",
"Pierre",
""
]
] |
Convolutional neural networks (CNNs) have received increasing attention over the last few years. They were initially conceived for image categorization, i.e., the problem of assigning a semantic label to an entire input image. In this paper we address the problem of dense semantic labeling, which consists of assigning a semantic label to every pixel in an image. Since this requires a high spatial accuracy to determine where labels are assigned, categorization CNNs, intended to be highly robust to local deformations, are not directly applicable. By adapting categorization networks, many semantic labeling CNNs have been recently proposed. Our first contribution is an in-depth analysis of these architectures. We establish the desired properties of an ideal semantic labeling CNN, and assess how those methods stand with regard to these properties. We observe that even though they provide competitive results, these CNNs often underexploit properties of semantic labeling that could lead to more effective and efficient architectures. Out of these observations, we then derive a CNN framework specifically adapted to the semantic labeling problem. In addition to learning features at different resolutions, it learns how to combine these features. By integrating local and global information in an efficient and flexible manner, it outperforms previous techniques. We evaluate the proposed framework and compare it with state-of-the-art architectures on public benchmarks of high-resolution aerial image labeling.
|
2303.09044
|
Soufiane Belharbi
|
Soufiane Belharbi, Shakeeb Murtaza, Marco Pedersoli, Ismail Ben Ayed,
Luke McCaffrey, Eric Granger
|
CoLo-CAM: Class Activation Mapping for Object Co-Localization in
Weakly-Labeled Unconstrained Videos
|
18 pages, 6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Leveraging spatiotemporal information in videos is critical for weakly
supervised video object localization (WSVOL) tasks. However, state-of-the-art
methods only rely on visual and motion cues, while discarding discriminative
information, making them susceptible to inaccurate localizations. Recently,
discriminative models have been explored for WSVOL tasks using a temporal class
activation mapping (CAM) method. Although their results are promising, objects
are assumed to have limited movement from frame to frame, leading to
degradation in performance for relatively long-term dependencies. This paper
proposes a novel CAM method for WSVOL that exploits spatiotemporal information
in activation maps during training without constraining an object's position.
Its training relies on Co-Localization, hence, the name CoLo-CAM. Given a
sequence of frames, localization is jointly learned based on color cues
extracted across the corresponding maps, by assuming that an object has similar
color in consecutive frames. CAM activations are constrained to respond
similarly over pixels with similar colors, achieving co-localization. This
improves localization performance because the joint learning creates direct
communication among pixels across all image locations and over all frames,
allowing for transfer, aggregation, and correction of localizations.
Co-localization is integrated into training by minimizing the color term of a
conditional random field (CRF) loss over a sequence of frames/CAMs. Extensive
experiments on two challenging YouTube-Objects datasets of unconstrained videos
show the merits of our CoLo-CAM method, and its robustness to long-term
dependencies, leading to new state-of-the-art performance for the WSVOL task.
|
[
{
"created": "Thu, 16 Mar 2023 02:29:53 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Sep 2023 00:01:48 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Feb 2024 06:24:31 GMT",
"version": "v3"
},
{
"created": "Wed, 28 Feb 2024 13:53:28 GMT",
"version": "v4"
}
] |
2024-02-29
|
[
[
"Belharbi",
"Soufiane",
""
],
[
"Murtaza",
"Shakeeb",
""
],
[
"Pedersoli",
"Marco",
""
],
[
"Ayed",
"Ismail Ben",
""
],
[
"McCaffrey",
"Luke",
""
],
[
"Granger",
"Eric",
""
]
] |
Leveraging spatiotemporal information in videos is critical for weakly supervised video object localization (WSVOL) tasks. However, state-of-the-art methods only rely on visual and motion cues, while discarding discriminative information, making them susceptible to inaccurate localizations. Recently, discriminative models have been explored for WSVOL tasks using a temporal class activation mapping (CAM) method. Although their results are promising, objects are assumed to have limited movement from frame to frame, leading to degradation in performance for relatively long-term dependencies. This paper proposes a novel CAM method for WSVOL that exploits spatiotemporal information in activation maps during training without constraining an object's position. Its training relies on Co-Localization, hence, the name CoLo-CAM. Given a sequence of frames, localization is jointly learned based on color cues extracted across the corresponding maps, by assuming that an object has similar color in consecutive frames. CAM activations are constrained to respond similarly over pixels with similar colors, achieving co-localization. This improves localization performance because the joint learning creates direct communication among pixels across all image locations and over all frames, allowing for transfer, aggregation, and correction of localizations. Co-localization is integrated into training by minimizing the color term of a conditional random field (CRF) loss over a sequence of frames/CAMs. Extensive experiments on two challenging YouTube-Objects datasets of unconstrained videos show the merits of our CoLo-CAM method, and its robustness to long-term dependencies, leading to new state-of-the-art performance for the WSVOL task.
|
1911.08802
|
Gangxiang Shen
|
Shifeng Ding, Kevin X. Pan, Sanjay K. Bose, Qiong Zhang, and Gangxiang
Shen
|
Blockchain-Assisted Spectrum Trading between Elastic Virtual Optical
Networks
|
7 pages, 5 figures
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In communication networks, network virtualization can usually provide better
capacity utilization and quality of service (QoS) than what can be achieved
otherwise. However, conventional resource allocation for virtualized networks
would still follow a fixed pattern based on the predicted capacity needs of the
users, even though, in reality, the actual traffic demand of a user will always
tend to fluctuate. The mismatch between the fixed capacity allocation and the
actual fluctuating traffic would lead to degradation of provisioned network
services and inefficiency in the assigned network capacity. To overcome this,
we propose a new spectrum trading (ST) scheme between virtual optical networks
(VONs) in the context of an elastic optical network (EON). The key idea here is
to allow different VONs to trade their spectrum resources according to the
actual capacity they need at different time instants. A VON with unused spectra
can then trade away its unused spectra to other VONs that are short of spectrum
resources at that time. In exchange, it is rewarded with a certain amount of
credit for its contribution to the ST community, which it can then use later to
get extra bandwidth, if needed. The trustworthiness of the trading records
between the VONs is ensured in a distributed fashion through a
blockchain-assisted account book that is updated whenever a new trade occurs.
For this, we develop a software-defined control plane to enable spectrum
trading in an EON. The performance of the ST scheme is evaluated and compared
with a scenario without such trading. Our results show that the proposed ST
scheme is effective in improving the QoS of each VON and significantly improves
the overall network capacity utilization.
|
[
{
"created": "Wed, 20 Nov 2019 10:22:37 GMT",
"version": "v1"
}
] |
2019-11-21
|
[
[
"Ding",
"Shifeng",
""
],
[
"Pan",
"Kevin X.",
""
],
[
"Bose",
"Sanjay K.",
""
],
[
"Zhang",
"Qiong",
""
],
[
"Shen",
"Gangxiang",
""
]
] |
In communication networks, network virtualization can usually provide better capacity utilization and quality of service (QoS) than what can be achieved otherwise. However, conventional resource allocation for virtualized networks would still follow a fixed pattern based on the predicted capacity needs of the users, even though, in reality, the actual traffic demand of a user will always tend to fluctuate. The mismatch between the fixed capacity allocation and the actual fluctuating traffic would lead to degradation of provisioned network services and inefficiency in the assigned network capacity. To overcome this, we propose a new spectrum trading (ST) scheme between virtual optical networks (VONs) in the context of an elastic optical network (EON). The key idea here is to allow different VONs to trade their spectrum resources according to the actual capacity they need at different time instants. A VON with unused spectra can then trade away its unused spectra to other VONs that are short of spectrum resources at that time. In exchange, it is rewarded with a certain amount of credit for its contribution to the ST community, which it can then use later to get extra bandwidth, if needed. The trustworthiness of the trading records between the VONs is ensured in a distributed fashion through a blockchain-assisted account book that is updated whenever a new trade occurs. For this, we develop a software-defined control plane to enable spectrum trading in an EON. The performance of the ST scheme is evaluated and compared with a scenario without such trading. Our results show that the proposed ST scheme is effective in improving the QoS of each VON and significantly improves the overall network capacity utilization.
|
2407.09083
|
Zekai Xu
|
Zekai Xu, Kang You, Qinghai Guo, Xiang Wang and Zhezhi He
|
BKDSNN: Enhancing the Performance of Learning-based Spiking Neural
Networks Training with Blurred Knowledge Distillation
|
accepted by European Conference on Computer Vision (ECCV) 2024
| null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Spiking neural networks (SNNs), which mimic the biological neural system by
conveying information via discrete spikes, are well known as brain-inspired
models with excellent computing efficiency. By utilizing surrogate gradient
estimation for discrete spikes, learning-based SNN training methods that can
achieve ultra-low inference latency (number of time-steps) have emerged
recently. Nevertheless, due to the difficulty of deriving precise gradient
estimates for discrete spikes with learning-based methods, a distinct accuracy
gap persists between SNNs and their artificial neural network (ANN)
counterparts. To address this issue, we propose a blurred knowledge
distillation (BKD) technique, which leverages randomly blurred SNN features to
restore and imitate the ANN features. Note that our BKD is applied to the
feature map right before the last layer of the SNN, and can also be combined
with prior logits-based knowledge distillation for a maximal accuracy boost. To
the best of our knowledge, among learning-based methods, our work achieves
state-of-the-art performance for training SNNs on both static and neuromorphic
datasets. On the ImageNet dataset, BKDSNN outperforms the prior best results by
4.51% and 0.93% with CNN and Transformer network topologies, respectively.
|
[
{
"created": "Fri, 12 Jul 2024 08:17:24 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2024 02:19:34 GMT",
"version": "v2"
}
] |
2024-07-16
|
[
[
"Xu",
"Zekai",
""
],
[
"You",
"Kang",
""
],
[
"Guo",
"Qinghai",
""
],
[
"Wang",
"Xiang",
""
],
[
"He",
"Zhezhi",
""
]
] |
Spiking neural networks (SNNs), which mimic the biological neural system by conveying information via discrete spikes, are well known as brain-inspired models with excellent computing efficiency. By utilizing surrogate gradient estimation for discrete spikes, learning-based SNN training methods that can achieve ultra-low inference latency (number of time-steps) have emerged recently. Nevertheless, due to the difficulty of deriving precise gradient estimates for discrete spikes with learning-based methods, a distinct accuracy gap persists between SNNs and their artificial neural network (ANN) counterparts. To address this issue, we propose a blurred knowledge distillation (BKD) technique, which leverages randomly blurred SNN features to restore and imitate the ANN features. Note that our BKD is applied to the feature map right before the last layer of the SNN, and can also be combined with prior logits-based knowledge distillation for a maximal accuracy boost. To the best of our knowledge, among learning-based methods, our work achieves state-of-the-art performance for training SNNs on both static and neuromorphic datasets. On the ImageNet dataset, BKDSNN outperforms the prior best results by 4.51% and 0.93% with CNN and Transformer network topologies, respectively.
|
1607.05447
|
Stephen Gould
|
Stephen Gould and Basura Fernando and Anoop Cherian and Peter Anderson
and Rodrigo Santa Cruz and Edison Guo
|
On Differentiating Parameterized Argmin and Argmax Problems with
Application to Bi-level Optimization
|
16 pages, 6 figures
| null | null | null |
cs.CV math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some recent works in machine learning and computer vision involve the
solution of a bi-level optimization problem. Here the solution of a
parameterized lower-level problem binds variables that appear in the objective
of an upper-level problem. The lower-level problem typically appears as an
argmin or argmax optimization problem. Many techniques have been proposed to
solve bi-level optimization problems, including gradient descent, which is
popular with current end-to-end learning approaches. In this technical report
we collect some results on differentiating argmin and argmax optimization
problems with and without constraints and provide some insightful motivating
examples.
|
[
{
"created": "Tue, 19 Jul 2016 08:09:30 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jul 2016 03:43:35 GMT",
"version": "v2"
}
] |
2016-07-22
|
[
[
"Gould",
"Stephen",
""
],
[
"Fernando",
"Basura",
""
],
[
"Cherian",
"Anoop",
""
],
[
"Anderson",
"Peter",
""
],
[
"Cruz",
"Rodrigo Santa",
""
],
[
"Guo",
"Edison",
""
]
] |
Some recent works in machine learning and computer vision involve the solution of a bi-level optimization problem. Here the solution of a parameterized lower-level problem binds variables that appear in the objective of an upper-level problem. The lower-level problem typically appears as an argmin or argmax optimization problem. Many techniques have been proposed to solve bi-level optimization problems, including gradient descent, which is popular with current end-to-end learning approaches. In this technical report we collect some results on differentiating argmin and argmax optimization problems with and without constraints and provide some insightful motivating examples.
|
2306.15612
|
Peng Xu
|
Peng Xu, Zhiyu Xiang, Chenyu Qiao, Jingyun Fu, Tianyu Pu
|
Adaptive Multi-Modal Cross-Entropy Loss for Stereo Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the great success of deep learning in stereo matching, recovering
accurate disparity maps is still challenging. Currently, L1 and cross-entropy
are the two most widely used losses for stereo network training. Compared with
the former, the latter usually performs better thanks to its probability
modeling and direct supervision to the cost volume. However, how to accurately
model the stereo ground-truth for cross-entropy loss remains largely
under-explored. Existing works simply assume that the ground-truth
distributions are uni-modal, which ignores the fact that most of the edge
pixels can be multi-modal. In this paper, a novel adaptive multi-modal
cross-entropy loss (ADL) is proposed to guide the networks to learn different
distribution patterns for each pixel. Moreover, we optimize the disparity
estimator to further alleviate the bleeding or misalignment artifacts in
inference. Extensive experimental results show that our method is generic and
can help classic stereo networks regain state-of-the-art performance. In
particular, GANet with our method ranks $1^{st}$ on both the KITTI 2015 and
2012 benchmarks among the published methods. Meanwhile, excellent
synthetic-to-realistic generalization performance can be achieved by simply
replacing the traditional loss with ours.
|
[
{
"created": "Tue, 27 Jun 2023 16:53:35 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 10:04:38 GMT",
"version": "v2"
}
] |
2024-03-18
|
[
[
"Xu",
"Peng",
""
],
[
"Xiang",
"Zhiyu",
""
],
[
"Qiao",
"Chenyu",
""
],
[
"Fu",
"Jingyun",
""
],
[
"Pu",
"Tianyu",
""
]
] |
Despite the great success of deep learning in stereo matching, recovering accurate disparity maps is still challenging. Currently, L1 and cross-entropy are the two most widely used losses for stereo network training. Compared with the former, the latter usually performs better thanks to its probability modeling and direct supervision to the cost volume. However, how to accurately model the stereo ground-truth for cross-entropy loss remains largely under-explored. Existing works simply assume that the ground-truth distributions are uni-modal, which ignores the fact that most of the edge pixels can be multi-modal. In this paper, a novel adaptive multi-modal cross-entropy loss (ADL) is proposed to guide the networks to learn different distribution patterns for each pixel. Moreover, we optimize the disparity estimator to further alleviate the bleeding or misalignment artifacts in inference. Extensive experimental results show that our method is generic and can help classic stereo networks regain state-of-the-art performance. In particular, GANet with our method ranks $1^{st}$ on both the KITTI 2015 and 2012 benchmarks among the published methods. Meanwhile, excellent synthetic-to-realistic generalization performance can be achieved by simply replacing the traditional loss with ours.
|
2405.09355
|
Gary Sarwin
|
Gary Sarwin, Alessandro Carretta, Victor Staartjes, Matteo Zoli, Diego
Mazzatenta, Luca Regli, Carlo Serra, Ender Konukoglu
|
Vision-Based Neurosurgical Guidance: Unsupervised Localization and
Camera-Pose Prediction
|
Early Accept at MICCAI 2024
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Localizing oneself during endoscopic procedures can be problematic due to the
lack of distinguishable textures and landmarks, as well as difficulties due to
the endoscopic device such as a limited field of view and challenging lighting
conditions. Expert knowledge shaped by years of experience is required for
localization within the human body during endoscopic procedures. In this work,
we present a deep learning method based on anatomy recognition, that constructs
a surgical path in an unsupervised manner from surgical videos, modelling
relative location and variations due to different viewing angles. At inference
time, the model can map an unseen video's frames on the path and estimate the
viewing angle, aiming to provide guidance, for instance, to reach a particular
destination. We test the method on a dataset consisting of surgical videos of
transsphenoidal adenomectomies, as well as on a synthetic dataset. An online
tool that lets researchers upload their surgical videos to obtain anatomy
detections and the weights of the trained YOLOv7 model are available at:
https://surgicalvision.bmic.ethz.ch.
|
[
{
"created": "Wed, 15 May 2024 14:09:11 GMT",
"version": "v1"
}
] |
2024-05-16
|
[
[
"Sarwin",
"Gary",
""
],
[
"Carretta",
"Alessandro",
""
],
[
"Staartjes",
"Victor",
""
],
[
"Zoli",
"Matteo",
""
],
[
"Mazzatenta",
"Diego",
""
],
[
"Regli",
"Luca",
""
],
[
"Serra",
"Carlo",
""
],
[
"Konukoglu",
"Ender",
""
]
] |
Localizing oneself during endoscopic procedures can be problematic due to the lack of distinguishable textures and landmarks, as well as difficulties due to the endoscopic device such as a limited field of view and challenging lighting conditions. Expert knowledge shaped by years of experience is required for localization within the human body during endoscopic procedures. In this work, we present a deep learning method based on anatomy recognition, that constructs a surgical path in an unsupervised manner from surgical videos, modelling relative location and variations due to different viewing angles. At inference time, the model can map an unseen video's frames on the path and estimate the viewing angle, aiming to provide guidance, for instance, to reach a particular destination. We test the method on a dataset consisting of surgical videos of transsphenoidal adenomectomies, as well as on a synthetic dataset. An online tool that lets researchers upload their surgical videos to obtain anatomy detections and the weights of the trained YOLOv7 model are available at: https://surgicalvision.bmic.ethz.ch.
|
2006.15904
|
Hazel Murray
|
Hazel Murray and David Malone
|
Multi-armed bandit approach to password guessing
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-armed bandit is a mathematical interpretation of the problem a
gambler faces when confronted with a number of different machines (bandits).
The gambler wants to explore different machines to discover which machine
offers the best rewards, but simultaneously wants to exploit the most
profitable machine. A password guesser is faced with a similar dilemma. They
have lists of leaked password sets, dictionaries of words, and demographic
information about the users, but they don't know which dictionary will reap the
best rewards. In this paper we provide a framework for using the multi-armed
bandit problem in the context of the password guesser and use some examples to
show that it can be effective.
|
[
{
"created": "Mon, 29 Jun 2020 09:50:55 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jul 2020 10:43:50 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Aug 2020 16:59:09 GMT",
"version": "v3"
}
] |
2020-08-06
|
[
[
"Murray",
"Hazel",
""
],
[
"Malone",
"David",
""
]
] |
The multi-armed bandit is a mathematical interpretation of the problem a gambler faces when confronted with a number of different machines (bandits). The gambler wants to explore different machines to discover which machine offers the best rewards, but simultaneously wants to exploit the most profitable machine. A password guesser is faced with a similar dilemma. They have lists of leaked password sets, dictionaries of words, and demographic information about the users, but they don't know which dictionary will reap the best rewards. In this paper we provide a framework for using the multi-armed bandit problem in the context of the password guesser and use some examples to show that it can be effective.
|
2206.03487
|
Evgenii Vityaev
|
E.E. Vityaev, A.G. Kolonin, A.V. Kurpatov, A.A. Molchanov
|
Formalization of the principles of brain Programming (Brain Principles
Programming)
|
28 pages, in Russian, 4 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The monograph "Strong artificial intelligence. On the Approaches to
Superintelligence" contains an overview of artificial general intelligence
(AGI). As an anthropomorphic research area, it includes Brain Principles
Programming (BPP) -- the formalization of universal mechanisms (principles) of
the brain's work with information, which are implemented at all levels of the
organization of nervous tissue. The monograph formalizes these principles in
terms of category theory. However, this formalization is not sufficient to
develop algorithms for working with information. In this paper, to describe and
model BPP, we propose to apply mathematical models and algorithms developed
earlier, which model cognitive functions and are based on well-known
physiological, psychological and other natural science theories. The paper uses
mathematical models and algorithms of the following theories: P.K. Anokhin's
theory of functional brain systems, Eleanor Rosch's prototypical categorization
theory, and Bob Rehder's theory of causal models and "natural" classification.
As a result, a formalization of BPP is obtained and computer experiments
demonstrating the operation of the algorithms are presented.
|
[
{
"created": "Fri, 13 May 2022 13:16:34 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Jun 2022 13:45:08 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Jun 2022 02:26:12 GMT",
"version": "v3"
}
] |
2022-06-16
|
[
[
"Vityaev",
"E. E.",
""
],
[
"Kolonin",
"A. G.",
""
],
[
"Kurpatov",
"A. V.",
""
],
[
"Molchanov",
"A. A.",
""
]
] |
The monograph "Strong artificial intelligence. On the Approaches to Superintelligence" contains an overview of artificial general intelligence (AGI). As an anthropomorphic research area, it includes Brain Principles Programming (BPP) -- the formalization of universal mechanisms (principles) of the brain's work with information, which are implemented at all levels of the organization of nervous tissue. The monograph formalizes these principles in terms of category theory. However, this formalization is not sufficient to develop algorithms for working with information. In this paper, to describe and model BPP, we propose to apply mathematical models and algorithms developed earlier, which model cognitive functions and are based on well-known physiological, psychological and other natural science theories. The paper uses mathematical models and algorithms of the following theories: P.K. Anokhin's theory of functional brain systems, Eleanor Rosch's prototypical categorization theory, and Bob Rehder's theory of causal models and "natural" classification. As a result, a formalization of BPP is obtained and computer experiments demonstrating the operation of the algorithms are presented.
|
2402.01515
|
Chiwun Yang
|
Yichuan Deng, Zhao Song, Chiwun Yang
|
Enhancing Stochastic Gradient Descent: A Unified Framework and Novel
Acceleration Methods for Faster Convergence
| null | null | null | null |
cs.LG cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Based on SGD, previous works have proposed many algorithms that have improved
convergence speed and generalization in stochastic optimization, such as SGDm,
AdaGrad, Adam, etc. However, their convergence analysis under non-convex
conditions is challenging. In this work, we propose a unified framework to
address this issue. For any first-order methods, we interpret the updated
direction $g_t$ as the sum of the stochastic subgradient $\nabla f_t(x_t)$ and
an additional acceleration term $\frac{2|\langle v_t, \nabla f_t(x_t)
\rangle|}{\|v_t\|_2^2} v_t$, thus we can discuss the convergence by analyzing
$\langle v_t, \nabla f_t(x_t) \rangle$. Through our framework, we have
discovered two plug-and-play acceleration methods: \textbf{Reject Accelerating}
and \textbf{Random Vector Accelerating}, we theoretically demonstrate that
these two methods can directly lead to an improvement in convergence rate.
|
[
{
"created": "Fri, 2 Feb 2024 15:55:25 GMT",
"version": "v1"
}
] |
2024-02-05
|
[
[
"Deng",
"Yichuan",
""
],
[
"Song",
"Zhao",
""
],
[
"Yang",
"Chiwun",
""
]
] |
Based on SGD, previous works have proposed many algorithms that have improved convergence speed and generalization in stochastic optimization, such as SGDm, AdaGrad, Adam, etc. However, their convergence analysis under non-convex conditions is challenging. In this work, we propose a unified framework to address this issue. For any first-order methods, we interpret the updated direction $g_t$ as the sum of the stochastic subgradient $\nabla f_t(x_t)$ and an additional acceleration term $\frac{2|\langle v_t, \nabla f_t(x_t) \rangle|}{\|v_t\|_2^2} v_t$, thus we can discuss the convergence by analyzing $\langle v_t, \nabla f_t(x_t) \rangle$. Through our framework, we have discovered two plug-and-play acceleration methods: \textbf{Reject Accelerating} and \textbf{Random Vector Accelerating}, we theoretically demonstrate that these two methods can directly lead to an improvement in convergence rate.
|
2108.10565
|
Sebastian Wolf
|
Sebastian Wolf and Martin Galis and Carsten Uphoff and Alice-Agnes
Gabriel and Peter Moczo and David Gregor and Michael Bader
|
An Efficient ADER-DG Local Time Stepping Scheme for 3D HPC Simulation of
Seismic Waves in Poroelastic Media
|
37 pages, 18 figures, published in the Journal of Computational
Physics
|
Journal of Computational Physics: Volume 455, 2022
|
10.1016/j.jcp.2021.110886
| null |
cs.DC cs.MS physics.comp-ph physics.geo-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many applications from geosciences require simulations of seismic waves in
porous media. Biot's theory of poroelasticity describes the coupling between
solid and fluid phases and introduces a stiff source term, thereby increasing
computational cost and motivating efficient methods utilising High-Performance
Computing. We present a novel realisation of the discontinuous Galerkin scheme
with Arbitrary DERivative time stepping (ADER-DG) that copes with stiff source
terms.
To integrate this source term with a reasonable time step size, we use an
element-local space-time predictor, which needs to solve medium-sized linear
systems - with 1000 to 10000 unknowns - in each element update (i.e., billions
of times). We present a novel block-wise back-substitution algorithm for
solving these systems efficiently. In comparison to LU decomposition, we reduce
the number of floating-point operations by a factor of up to 25. The block-wise
back-substitution is mapped to a sequence of small matrix-matrix
multiplications, for which code generators are available to generate highly
optimised code.
We verify the new solver thoroughly in problems of increasing complexity. We
demonstrate high-order convergence for 3D problems. We verify the correct
treatment of point sources, material interfaces and traction-free boundary
conditions. In addition, we compare against a finite difference code for a
newly defined layer over half-space problem. We find that extremely high
accuracy is required to resolve the slow P-wave at a free surface, while solid
particle velocities are not affected by coarser resolutions. By using a
clustered local time stepping scheme, we reduce time to solution by a factor of
6 to 10 compared to global time stepping. We conclude our study with a scaling
and performance analysis, demonstrating our implementation's efficiency and its
potential for extreme-scale simulations.
|
[
{
"created": "Tue, 24 Aug 2021 08:04:13 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Aug 2021 07:14:38 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Mar 2022 08:30:49 GMT",
"version": "v3"
}
] |
2022-03-02
|
[
[
"Wolf",
"Sebastian",
""
],
[
"Galis",
"Martin",
""
],
[
"Uphoff",
"Carsten",
""
],
[
"Gabriel",
"Alice-Agnes",
""
],
[
"Moczo",
"Peter",
""
],
[
"Gregor",
"David",
""
],
[
"Bader",
"Michael",
""
]
] |
Many applications from geosciences require simulations of seismic waves in porous media. Biot's theory of poroelasticity describes the coupling between solid and fluid phases and introduces a stiff source term, thereby increasing computational cost and motivating efficient methods utilising High-Performance Computing. We present a novel realisation of the discontinuous Galerkin scheme with Arbitrary DERivative time stepping (ADER-DG) that copes with stiff source terms. To integrate this source term with a reasonable time step size, we use an element-local space-time predictor, which needs to solve medium-sized linear systems - with 1000 to 10000 unknowns - in each element update (i.e., billions of times). We present a novel block-wise back-substitution algorithm for solving these systems efficiently. In comparison to LU decomposition, we reduce the number of floating-point operations by a factor of up to 25. The block-wise back-substitution is mapped to a sequence of small matrix-matrix multiplications, for which code generators are available to generate highly optimised code. We verify the new solver thoroughly in problems of increasing complexity. We demonstrate high-order convergence for 3D problems. We verify the correct treatment of point sources, material interfaces and traction-free boundary conditions. In addition, we compare against a finite difference code for a newly defined layer over half-space problem. We find that extremely high accuracy is required to resolve the slow P-wave at a free surface, while solid particle velocities are not affected by coarser resolutions. By using a clustered local time stepping scheme, we reduce time to solution by a factor of 6 to 10 compared to global time stepping. We conclude our study with a scaling and performance analysis, demonstrating our implementation's efficiency and its potential for extreme-scale simulations.
|
2107.05049
|
Attique Ur Rehman
|
Tahir Mohammad Ali, Attique Ur Rehman, Ali Nawaz, Wasi Haider Butt
|
An Adaptive E-Learning System Using Justification Based Truth
Maintenance System
| null |
Pakistan Journal of Engineering and Technology, Vol. 4, no. 2,
June 2021, pp. 44-48
| null | null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In most e-learning systems, educational activities are presented in a static
way without taking into account the particulars of individual students, such as
their levels and skills. Personalization and adaptation of an e-learning
management system depend on the flexibility of the system in providing
different learning and content models to individual students based on their
characteristics. In this paper we propose an adaptive e-learning system that
provides adaptability with the support of a justification-based truth
maintenance system. The system is capable of presenting students with suitable
knowledge content and customized learning paths based on the student's profile,
interests and previous results. The proposed framework is validated by means of
a meta-model.
|
[
{
"created": "Sun, 11 Jul 2021 13:49:45 GMT",
"version": "v1"
}
] |
2021-07-13
|
[
[
"Ali",
"Tahir Mohammad",
""
],
[
"Rehman",
"Attique Ur",
""
],
[
"Nawaz",
"Ali",
""
],
[
"Butt",
"Wasi Haider",
""
]
] |
In most e-learning systems, educational activities are presented in a static way without taking into account the particulars of individual students, such as their levels and skills. Personalization and adaptation of an e-learning management system depend on the flexibility of the system in providing different learning and content models to individual students based on their characteristics. In this paper we propose an adaptive e-learning system that provides adaptability with the support of a justification-based truth maintenance system. The system is capable of presenting students with suitable knowledge content and customized learning paths based on the student's profile, interests and previous results. The proposed framework is validated by means of a meta-model.
|
2307.02007
|
Liu Chenglong
|
Chenglong Liu
|
Remote Sensing Image Change Detection with Graph Interaction
| null | null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern remote sensing image change detection has witnessed substantial
advancements by harnessing the potent feature extraction capabilities of CNNs
and Transformers. Yet, prevailing change detection techniques consistently
prioritize extracting semantic features related to significant alterations,
overlooking the viability of directly interacting with bitemporal image
features. In this letter, we propose a bitemporal image graph interaction
network for remote sensing change detection, namely BGINet-CD. More
specifically, by leveraging the concept of non-local operations and mapping the
features obtained from the backbone network to the graph structure space, we
propose a unified self-focus mechanism for bitemporal images. This approach
enhances the information coupling between the two temporal images while
effectively suppressing task-irrelevant interference. Based on a streamlined
backbone architecture, namely ResNet18, our model demonstrates superior
performance compared to other state-of-the-art (SOTA) methods on the GZ CD
dataset. Moreover, the model exhibits an enhanced trade-off between accuracy
and computational efficiency, further improving its overall effectiveness.
|
[
{
"created": "Wed, 5 Jul 2023 03:32:49 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Liu",
"Chenglong",
""
]
] |
Modern remote sensing image change detection has witnessed substantial advancements by harnessing the potent feature extraction capabilities of CNNs and Transformers. Yet, prevailing change detection techniques consistently prioritize extracting semantic features related to significant alterations, overlooking the viability of directly interacting with bitemporal image features. In this letter, we propose a bitemporal image graph interaction network for remote sensing change detection, namely BGINet-CD. More specifically, by leveraging the concept of non-local operations and mapping the features obtained from the backbone network to the graph structure space, we propose a unified self-focus mechanism for bitemporal images. This approach enhances the information coupling between the two temporal images while effectively suppressing task-irrelevant interference. Based on a streamlined backbone architecture, namely ResNet18, our model demonstrates superior performance compared to other state-of-the-art (SOTA) methods on the GZ CD dataset. Moreover, the model exhibits an enhanced trade-off between accuracy and computational efficiency, further improving its overall effectiveness.
|
1410.0176
|
David Lillis
|
David Lillis, Rem Collier, Mauro Dragone, G. M. P. O'Hare
|
An Agent-Based Approach to Component Management
|
In Proceedings of the 8th International Conference on Autonomous
Agents and Multi-Agent Systems (AAMAS '09), Budapest, Hungary, 2009
| null |
10.1145/1558013.1558086
| null |
cs.MA cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper details the implementation of a software framework that aids the
development of distributed and self-configurable software systems. This
framework is an instance of a novel integration strategy called SoSAA (SOcially
Situated Agent Architecture), which combines Component-Based Software
Engineering and Agent-Oriented Software Engineering, drawing its inspiration
from hybrid agent control architectures. The framework defines a complete
construction process by enhancing a simple component-based framework with
reasoning and self-awareness capabilities through a standardized interface.
The capabilities of the resulting framework are demonstrated through its
application to a non-trivial Multi Agent System (MAS). The system in question
is a pre-existing Information Retrieval (IR) system that has not previously
taken advantage of CBSE principles. In this paper we contrast these two systems
so as to highlight the benefits of using this new hybrid approach. We also
outline how component-based elements may be integrated into the Agent Factory
agent-oriented application framework.
|
[
{
"created": "Wed, 1 Oct 2014 10:55:44 GMT",
"version": "v1"
}
] |
2014-10-02
|
[
[
"Lillis",
"David",
""
],
[
"Collier",
"Rem",
""
],
[
"Dragone",
"Mauro",
""
],
[
"O'Hare",
"G. M. P.",
""
]
] |
This paper details the implementation of a software framework that aids the development of distributed and self-configurable software systems. This framework is an instance of a novel integration strategy called SoSAA (SOcially Situated Agent Architecture), which combines Component-Based Software Engineering and Agent-Oriented Software Engineering, drawing its inspiration from hybrid agent control architectures. The framework defines a complete construction process by enhancing a simple component-based framework with reasoning and self-awareness capabilities through a standardized interface. The capabilities of the resulting framework are demonstrated through its application to a non-trivial Multi Agent System (MAS). The system in question is a pre-existing Information Retrieval (IR) system that has not previously taken advantage of CBSE principles. In this paper we contrast these two systems so as to highlight the benefits of using this new hybrid approach. We also outline how component-based elements may be integrated into the Agent Factory agent-oriented application framework.
|
2310.20425
|
Marcus Haywood-Alexander
|
Marcus Haywood-Alexander, Wei Liu, Kiran Bacsa, Zhilu Lai, Eleni
Chatzi
|
Discussing the Spectrum of Physics-Enhanced Machine Learning; a Survey
on Structural Mechanics Applications
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The intersection of physics and machine learning has given rise to the
physics-enhanced machine learning (PEML) paradigm, aiming to improve the
capabilities and reduce the individual shortcomings of data- or physics-only
methods. In this paper, the spectrum of physics-enhanced machine learning
methods, expressed across the defining axes of physics and data, is discussed
by engaging in a comprehensive exploration of its characteristics, usage, and
motivations. In doing so, we present a survey of recent applications and
developments of PEML techniques, revealing the potency of PEML in addressing
complex challenges. We further demonstrate the application of selected such
schemes on the simple working example of a single degree-of-freedom Duffing
oscillator, which allows us to highlight the individual characteristics and
motivations of
different `genres' of PEML approaches. To promote collaboration and
transparency, and to provide practical examples for the reader, the code
generating these working examples is provided alongside this paper. As a
foundational contribution, this paper underscores the significance of PEML in
pushing the boundaries of scientific and engineering research, underpinned by
the synergy of physical insights and machine learning capabilities.
|
[
{
"created": "Tue, 31 Oct 2023 12:50:25 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Nov 2023 08:21:02 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Apr 2024 11:42:02 GMT",
"version": "v3"
}
] |
2024-04-23
|
[
[
"Haywood-Alexander",
"Marcus",
""
],
[
"Liu",
"Wei",
""
],
[
"Bacsa",
"Kiran",
""
],
[
"Lai",
"Zhilu",
""
],
[
"Chatzi",
"Eleni",
""
]
] |
The intersection of physics and machine learning has given rise to the physics-enhanced machine learning (PEML) paradigm, aiming to improve the capabilities and reduce the individual shortcomings of data- or physics-only methods. In this paper, the spectrum of physics-enhanced machine learning methods, expressed across the defining axes of physics and data, is discussed by engaging in a comprehensive exploration of its characteristics, usage, and motivations. In doing so, we present a survey of recent applications and developments of PEML techniques, revealing the potency of PEML in addressing complex challenges. We further demonstrate the application of selected such schemes on the simple working example of a single degree-of-freedom Duffing oscillator, which allows us to highlight the individual characteristics and motivations of different `genres' of PEML approaches. To promote collaboration and transparency, and to provide practical examples for the reader, the code generating these working examples is provided alongside this paper. As a foundational contribution, this paper underscores the significance of PEML in pushing the boundaries of scientific and engineering research, underpinned by the synergy of physical insights and machine learning capabilities.
|
2010.13972
|
Rory Mitchell
|
Rory Mitchell, Eibe Frank, Geoffrey Holmes
|
GPUTreeShap: Massively Parallel Exact Calculation of SHAP Scores for
Tree Ensembles
| null | null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SHAP (SHapley Additive exPlanation) values provide a game theoretic
interpretation of the predictions of machine learning models based on Shapley
values. While exact calculation of SHAP values is computationally intractable
in general, a recursive polynomial-time algorithm called TreeShap is available
for decision tree models. However, despite its polynomial time complexity,
TreeShap can become a significant bottleneck in practical machine learning
pipelines when applied to large decision tree ensembles. Unfortunately, the
complicated TreeShap algorithm is difficult to map to hardware accelerators
such as GPUs. In this work, we present GPUTreeShap, a reformulated TreeShap
algorithm suitable for massively parallel computation on graphics processing
units. Our approach first preprocesses each decision tree to isolate variable
sized sub-problems from the original recursive algorithm, then solves a bin
packing problem, and finally maps sub-problems to single-instruction,
multiple-thread (SIMT) tasks for parallel execution with specialised hardware
instructions. With a single NVIDIA Tesla V100-32 GPU, we achieve speedups of up
to 19x for SHAP values, and speedups of up to 340x for SHAP interaction values,
over a state-of-the-art multi-core CPU implementation executed on two 20-core
Xeon E5-2698 v4 2.2 GHz CPUs. We also experiment with multi-GPU computing using
eight V100 GPUs, demonstrating throughput of 1.2M rows per second -- equivalent
CPU-based performance is estimated to require 6850 CPU cores.
|
[
{
"created": "Tue, 27 Oct 2020 00:55:07 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Jul 2021 22:54:35 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Feb 2022 11:53:13 GMT",
"version": "v3"
}
] |
2022-02-04
|
[
[
"Mitchell",
"Rory",
""
],
[
"Frank",
"Eibe",
""
],
[
"Holmes",
"Geoffrey",
""
]
] |
SHAP (SHapley Additive exPlanation) values provide a game theoretic interpretation of the predictions of machine learning models based on Shapley values. While exact calculation of SHAP values is computationally intractable in general, a recursive polynomial-time algorithm called TreeShap is available for decision tree models. However, despite its polynomial time complexity, TreeShap can become a significant bottleneck in practical machine learning pipelines when applied to large decision tree ensembles. Unfortunately, the complicated TreeShap algorithm is difficult to map to hardware accelerators such as GPUs. In this work, we present GPUTreeShap, a reformulated TreeShap algorithm suitable for massively parallel computation on graphics processing units. Our approach first preprocesses each decision tree to isolate variable sized sub-problems from the original recursive algorithm, then solves a bin packing problem, and finally maps sub-problems to single-instruction, multiple-thread (SIMT) tasks for parallel execution with specialised hardware instructions. With a single NVIDIA Tesla V100-32 GPU, we achieve speedups of up to 19x for SHAP values, and speedups of up to 340x for SHAP interaction values, over a state-of-the-art multi-core CPU implementation executed on two 20-core Xeon E5-2698 v4 2.2 GHz CPUs. We also experiment with multi-GPU computing using eight V100 GPUs, demonstrating throughput of 1.2M rows per second -- equivalent CPU-based performance is estimated to require 6850 CPU cores.
|
1908.11527
|
Le Fang
|
Le Fang, Chunyuan Li, Jianfeng Gao, Wen Dong and Changyou Chen
|
Implicit Deep Latent Variable Models for Text Generation
|
13 pages, 8 Tables, 1 Figure, Accepted at 2019 Conference on
Empirical Methods in Natural Language Processing (EMNLP 2019)
| null | null | null |
cs.LG cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep latent variable models (LVM) such as variational auto-encoder (VAE) have
recently played an important role in text generation. One key factor is the
exploitation of smooth latent structures to guide the generation. However, the
representation power of VAEs is limited due to two reasons: (1) the Gaussian
assumption is often made on the variational posteriors; and meanwhile (2) a
notorious "posterior collapse" issue occurs. In this paper, we advocate
sample-based representations of variational distributions for natural language,
leading to implicit latent features, which can provide flexible representation
power compared with Gaussian-based posteriors. We further develop an LVM to
directly match the aggregated posterior to the prior. It can be viewed as a
natural extension of VAEs with a regularization of maximizing mutual
information, mitigating the "posterior collapse" issue. We demonstrate the
effectiveness and versatility of our models in various text generation
scenarios, including language modeling, unaligned style transfer, and dialog
response generation. The source code to reproduce our experimental results is
available on GitHub.
|
[
{
"created": "Fri, 30 Aug 2019 04:12:08 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Sep 2019 05:48:05 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Nov 2019 19:53:57 GMT",
"version": "v3"
}
] |
2019-12-02
|
[
[
"Fang",
"Le",
""
],
[
"Li",
"Chunyuan",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Dong",
"Wen",
""
],
[
"Chen",
"Changyou",
""
]
] |
Deep latent variable models (LVM) such as variational auto-encoder (VAE) have recently played an important role in text generation. One key factor is the exploitation of smooth latent structures to guide the generation. However, the representation power of VAEs is limited due to two reasons: (1) the Gaussian assumption is often made on the variational posteriors; and meanwhile (2) a notorious "posterior collapse" issue occurs. In this paper, we advocate sample-based representations of variational distributions for natural language, leading to implicit latent features, which can provide flexible representation power compared with Gaussian-based posteriors. We further develop an LVM to directly match the aggregated posterior to the prior. It can be viewed as a natural extension of VAEs with a regularization of maximizing mutual information, mitigating the "posterior collapse" issue. We demonstrate the effectiveness and versatility of our models in various text generation scenarios, including language modeling, unaligned style transfer, and dialog response generation. The source code to reproduce our experimental results is available on GitHub.
|
2403.01427
|
Shangquan Sun
|
Shangquan Sun, Wenqi Ren, Jingzhi Li, Rui Wang and Xiaochun Cao
|
Logit Standardization in Knowledge Distillation
|
10 pages, 5 figures, accepted by The The IEEE / CVF Computer Vision
and Pattern Recognition Conference (CVPR 2024)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge distillation involves transferring soft labels from a teacher to a
student using a shared temperature-based softmax function. However, the
assumption of a shared temperature between teacher and student implies a
mandatory exact match between their logits in terms of logit range and
variance. This side-effect limits the performance of student, considering the
capacity discrepancy between them and the finding that the innate logit
relations of teacher are sufficient for student to learn. To address this
issue, we propose setting the temperature as the weighted standard deviation of
logit and performing a plug-and-play Z-score pre-process of logit
standardization before applying softmax and Kullback-Leibler divergence. Our
pre-process enables student to focus on essential logit relations from teacher
rather than requiring a magnitude match, and can improve the performance of
existing logit-based distillation methods. We also show a typical case where
the conventional setting of sharing temperature between teacher and student
cannot reliably yield the authentic distillation evaluation; nonetheless, this
challenge is successfully alleviated by our Z-score. We extensively evaluate
our method for various student and teacher models on CIFAR-100 and ImageNet,
showing its significant superiority. The vanilla knowledge distillation powered
by our pre-process can achieve favorable performance against state-of-the-art
methods, and other distillation variants can obtain considerable gain with the
assistance of our pre-process.
|
[
{
"created": "Sun, 3 Mar 2024 07:54:03 GMT",
"version": "v1"
}
] |
2024-03-05
|
[
[
"Sun",
"Shangquan",
""
],
[
"Ren",
"Wenqi",
""
],
[
"Li",
"Jingzhi",
""
],
[
"Wang",
"Rui",
""
],
[
"Cao",
"Xiaochun",
""
]
] |
Knowledge distillation involves transferring soft labels from a teacher to a student using a shared temperature-based softmax function. However, the assumption of a shared temperature between teacher and student implies a mandatory exact match between their logits in terms of logit range and variance. This side-effect limits the performance of student, considering the capacity discrepancy between them and the finding that the innate logit relations of teacher are sufficient for student to learn. To address this issue, we propose setting the temperature as the weighted standard deviation of logit and performing a plug-and-play Z-score pre-process of logit standardization before applying softmax and Kullback-Leibler divergence. Our pre-process enables student to focus on essential logit relations from teacher rather than requiring a magnitude match, and can improve the performance of existing logit-based distillation methods. We also show a typical case where the conventional setting of sharing temperature between teacher and student cannot reliably yield the authentic distillation evaluation; nonetheless, this challenge is successfully alleviated by our Z-score. We extensively evaluate our method for various student and teacher models on CIFAR-100 and ImageNet, showing its significant superiority. The vanilla knowledge distillation powered by our pre-process can achieve favorable performance against state-of-the-art methods, and other distillation variants can obtain considerable gain with the assistance of our pre-process.
|
1909.07654
|
Tomaso Fontanini
|
Tomaso Fontanini, Eleonora Iotti and Andrea Prati
|
MetalGAN: a Cluster-based Adaptive Training for Few-Shot Adversarial
Colorization
| null | null | null | null |
cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the majority of works on deep-learning-based image
colorization have focused on how to make a good use of the enormous datasets
currently available. What about when the data at disposal are scarce? The main
objective of this work is to prove that a network can be trained and can
provide excellent colorization results even without a large quantity of data.
The adopted approach is a mixed one, which uses an adversarial method for the
actual colorization, and a meta-learning technique to enhance the generator
model. Also, a clusterization a-priori of the training dataset ensures a
task-oriented division useful for meta-learning, and at the same time reduces
the per-step number of images. This paper describes in detail the method and
its main motivations, and a discussion of results and future developments is
provided.
|
[
{
"created": "Tue, 17 Sep 2019 08:54:12 GMT",
"version": "v1"
}
] |
2019-09-18
|
[
[
"Fontanini",
"Tomaso",
""
],
[
"Iotti",
"Eleonora",
""
],
[
"Prati",
"Andrea",
""
]
] |
In recent years, the majority of works on deep-learning-based image colorization have focused on how to make a good use of the enormous datasets currently available. What about when the data at disposal are scarce? The main objective of this work is to prove that a network can be trained and can provide excellent colorization results even without a large quantity of data. The adopted approach is a mixed one, which uses an adversarial method for the actual colorization, and a meta-learning technique to enhance the generator model. Also, a clusterization a-priori of the training dataset ensures a task-oriented division useful for meta-learning, and at the same time reduces the per-step number of images. This paper describes in detail the method and its main motivations, and a discussion of results and future developments is provided.
|
1701.01911
|
Nannan Wang
|
Nannan Wang and Xinbo Gao and Jie Li
|
Random Sampling for Fast Face Sketch Synthesis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exemplar-based face sketch synthesis plays an important role in both digital
entertainment and law enforcement. It generally consists of two parts: neighbor
selection and reconstruction weight representation. The most time-consuming or
main computation complexity for exemplar-based face sketch synthesis methods
lies in the neighbor selection process. State-of-the-art face sketch synthesis
methods perform neighbor selection online in a data-driven manner by $K$
nearest neighbor ($K$-NN) searching. Actually, the online search increases the
time consumption of synthesis. Moreover, since these methods need to traverse
the whole training dataset for neighbor selection, the computational complexity
increases with the scale of the training database and hence these methods have
limited scalability. In this paper, we proposed a simple but effective offline
random sampling in place of online $K$-NN search to improve the synthesis
efficiency. Extensive experiments on public face sketch databases demonstrate
the superiority of the proposed method in comparison to state-of-the-art
methods, in terms of both synthesis quality and time consumption. The proposed
method could be extended to other heterogeneous face image transformation
problems such as face hallucination. We release the source codes of our
proposed methods and the evaluation metrics for future study online:
http://www.ihitworld.com/RSLCR.html.
|
[
{
"created": "Sun, 8 Jan 2017 03:47:59 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Aug 2017 02:43:48 GMT",
"version": "v2"
}
] |
2017-08-14
|
[
[
"Wang",
"Nannan",
""
],
[
"Gao",
"Xinbo",
""
],
[
"Li",
"Jie",
""
]
] |
Exemplar-based face sketch synthesis plays an important role in both digital entertainment and law enforcement. It generally consists of two parts: neighbor selection and reconstruction weight representation. The most time-consuming or main computation complexity for exemplar-based face sketch synthesis methods lies in the neighbor selection process. State-of-the-art face sketch synthesis methods perform neighbor selection online in a data-driven manner by $K$ nearest neighbor ($K$-NN) searching. Actually, the online search increases the time consumption of synthesis. Moreover, since these methods need to traverse the whole training dataset for neighbor selection, the computational complexity increases with the scale of the training database and hence these methods have limited scalability. In this paper, we proposed a simple but effective offline random sampling in place of online $K$-NN search to improve the synthesis efficiency. Extensive experiments on public face sketch databases demonstrate the superiority of the proposed method in comparison to state-of-the-art methods, in terms of both synthesis quality and time consumption. The proposed method could be extended to other heterogeneous face image transformation problems such as face hallucination. We release the source codes of our proposed methods and the evaluation metrics for future study online: http://www.ihitworld.com/RSLCR.html.
|
2008.13284
|
Weichen Li
|
Weichen Li and Xiaojia Shelly Zhang
|
Momentum-based Accelerated Mirror Descent Stochastic Approximation for
Robust Topology Optimization under Stochastic Loads
|
38 pages (including reference)
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust topology optimization (RTO) improves the robustness of designs with
respect to random sources in real-world structures, yet an accurate sensitivity
analysis requires the solution of many systems of equations at each
optimization step, leading to a high computational cost. To open up the full
potential of RTO under a variety of random sources, this paper presents a
momentum-based accelerated mirror descent stochastic approximation (AC-MDSA)
approach to efficiently solve RTO problems involving various types of load
uncertainties. The proposed framework can perform high-quality design updates
with highly noisy stochastic gradients. We reduce the sample size to two
(minimum for unbiased variance estimation) and show only two samples are
sufficient for evaluating stochastic gradients to obtain robust designs, thus
drastically reducing the computational cost. We derive the AC-MDSA update
formula based on $\ell_1$-norm with entropy function, which is tailored to the
geometry of the feasible domain. To accelerate and stabilize the algorithm, we
integrate a momentum-based acceleration scheme, which also alleviates the step
size sensitivity. Several 2D and 3D examples with various sizes are presented
to demonstrate the effectiveness and efficiency of the proposed AC-MDSA
framework to handle RTO involving various types of loading uncertainties.
|
[
{
"created": "Sun, 30 Aug 2020 21:51:51 GMT",
"version": "v1"
}
] |
2020-09-01
|
[
[
"Li",
"Weichen",
""
],
[
"Zhang",
"Xiaojia Shelly",
""
]
] |
Robust topology optimization (RTO) improves the robustness of designs with respect to random sources in real-world structures, yet an accurate sensitivity analysis requires the solution of many systems of equations at each optimization step, leading to a high computational cost. To open up the full potential of RTO under a variety of random sources, this paper presents a momentum-based accelerated mirror descent stochastic approximation (AC-MDSA) approach to efficiently solve RTO problems involving various types of load uncertainties. The proposed framework can perform high-quality design updates with highly noisy stochastic gradients. We reduce the sample size to two (minimum for unbiased variance estimation) and show only two samples are sufficient for evaluating stochastic gradients to obtain robust designs, thus drastically reducing the computational cost. We derive the AC-MDSA update formula based on $\ell_1$-norm with entropy function, which is tailored to the geometry of the feasible domain. To accelerate and stabilize the algorithm, we integrate a momentum-based acceleration scheme, which also alleviates the step size sensitivity. Several 2D and 3D examples with various sizes are presented to demonstrate the effectiveness and efficiency of the proposed AC-MDSA framework to handle RTO involving various types of loading uncertainties.
|
1508.01927
|
Keehang Kwon
|
Keehang Kwon
|
Incorporating Inductions and Game Semantics into Logic Programming
|
11 pages. arXiv admin note: substantial text overlap with
arXiv:1507.07228
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inductions and game semantics are two useful extensions to traditional logic
programming. To be specific, inductions can capture a wider class of provable
formulas in logic programming. Adopting game semantics can make logic
programming more interactive.
In this paper, we propose an execution model for a logic language with these
features. This execution model follows closely the reasoning process in real
life.
|
[
{
"created": "Sat, 8 Aug 2015 17:14:09 GMT",
"version": "v1"
}
] |
2015-08-11
|
[
[
"Kwon",
"Keehang",
""
]
] |
Inductions and game semantics are two useful extensions to traditional logic programming. To be specific, inductions can capture a wider class of provable formulas in logic programming. Adopting game semantics can make logic programming more interactive. In this paper, we propose an execution model for a logic language with these features. This execution model follows closely the reasoning process in real life.
|
2008.00302
|
Radu Tudor Ionescu
|
Mihail Burduja, Radu Tudor Ionescu and Nicolae Verga
|
Accurate and Efficient Intracranial Hemorrhage Detection and Subtype
Classification in 3D CT Scans with Convolutional and Long Short-Term Memory
Neural Networks
|
Accepted at Sensors
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present our system for the RSNA Intracranial Hemorrhage
Detection challenge. The proposed system is based on a lightweight deep neural
network architecture composed of a convolutional neural network (CNN) that
takes as input individual CT slices, and a Long Short-Term Memory (LSTM)
network that takes as input feature embeddings provided by the CNN. For
efficient processing, we consider various feature selection methods to produce
a subset of useful CNN features for the LSTM. Furthermore, we reduce the CT
slices by a factor of 2x, allowing ourselves to train the model faster. Even if
our model is designed to balance speed and accuracy, we report a weighted mean
log loss of 0.04989 on the final test set, which places us in the top 30
ranking (2%) from a total of 1345 participants. Although our computing
infrastructure does not allow it, processing CT slices at their original scale
is likely to improve performance. In order to enable others to reproduce our
results, we provide our code as open source at
https://github.com/warchildmd/ihd. After the challenge, we conducted a
subjective intracranial hemorrhage detection assessment by radiologists,
indicating that the performance of our deep model is on par with that of
doctors specialized in reading CT scans. Another contribution of our work is to
integrate Grad-CAM visualizations in our system, providing useful explanations
for its predictions. We therefore consider our system as a viable option when a
fast diagnosis or a second opinion on intracranial hemorrhage detection are
needed.
|
[
{
"created": "Sat, 1 Aug 2020 17:28:25 GMT",
"version": "v1"
},
{
"created": "Sun, 27 Sep 2020 08:05:21 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Sep 2020 14:55:07 GMT",
"version": "v3"
}
] |
2020-09-30
|
[
[
"Burduja",
"Mihail",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Verga",
"Nicolae",
""
]
] |
In this paper, we present our system for the RSNA Intracranial Hemorrhage Detection challenge. The proposed system is based on a lightweight deep neural network architecture composed of a convolutional neural network (CNN) that takes as input individual CT slices, and a Long Short-Term Memory (LSTM) network that takes as input feature embeddings provided by the CNN. For efficient processing, we consider various feature selection methods to produce a subset of useful CNN features for the LSTM. Furthermore, we reduce the CT slices by a factor of 2x, allowing ourselves to train the model faster. Even if our model is designed to balance speed and accuracy, we report a weighted mean log loss of 0.04989 on the final test set, which places us in the top 30 ranking (2%) from a total of 1345 participants. Although our computing infrastructure does not allow it, processing CT slices at their original scale is likely to improve performance. In order to enable others to reproduce our results, we provide our code as open source at https://github.com/warchildmd/ihd. After the challenge, we conducted a subjective intracranial hemorrhage detection assessment by radiologists, indicating that the performance of our deep model is on par with that of doctors specialized in reading CT scans. Another contribution of our work is to integrate Grad-CAM visualizations in our system, providing useful explanations for its predictions. We therefore consider our system as a viable option when a fast diagnosis or a second opinion on intracranial hemorrhage detection are needed.
|
2207.09080
|
Qun Li
|
Hua Ma, Qun Li, Yifeng Zheng, Zhi Zhang, Xiaoning Liu, Yansong Gao,
Said F. Al-Sarawi, Derek Abbott
|
MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving
Quantized Federated Learning
|
13 pages,13 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning (FL), a distributed machine learning paradigm, has been
adapted to mitigate privacy concerns for customers. Despite their appeal, there
are various inference attacks that can exploit shared-plaintext model updates
to embed traces of customer private information, leading to serious privacy
concerns. To alleviate this privacy issue, cryptographic techniques such as
Secure Multi-Party Computation and Homomorphic Encryption have been used for
privacy-preserving FL. However, such security issues in privacy-preserving FL
are poorly elucidated and underexplored. This work is the first attempt to
elucidate the triviality of performing model corruption attacks on
privacy-preserving FL based on lightweight secret sharing. We consider
scenarios in which model updates are quantized to reduce communication overhead
in this case, where an adversary can simply provide local parameters outside
the legal range to corrupt the model. We then propose the MUD-PQFed protocol,
which can precisely detect malicious clients performing attacks and enforce
fair penalties. By removing the contributions of detected malicious clients,
the global model utility is preserved to be comparable to the baseline global
model without the attack. Extensive experiments validate effectiveness in
maintaining baseline accuracy and detecting malicious clients in a fine-grained
manner.
|
[
{
"created": "Tue, 19 Jul 2022 05:30:25 GMT",
"version": "v1"
}
] |
2022-07-20
|
[
[
"Ma",
"Hua",
""
],
[
"Li",
"Qun",
""
],
[
"Zheng",
"Yifeng",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Liu",
"Xiaoning",
""
],
[
"Gao",
"Yansong",
""
],
[
"Al-Sarawi",
"Said F.",
""
],
[
"Abbott",
"Derek",
""
]
] |
Federated Learning (FL), a distributed machine learning paradigm, has been adapted to mitigate privacy concerns for customers. Despite their appeal, there are various inference attacks that can exploit shared-plaintext model updates to embed traces of customer private information, leading to serious privacy concerns. To alleviate this privacy issue, cryptographic techniques such as Secure Multi-Party Computation and Homomorphic Encryption have been used for privacy-preserving FL. However, such security issues in privacy-preserving FL are poorly elucidated and underexplored. This work is the first attempt to elucidate the triviality of performing model corruption attacks on privacy-preserving FL based on lightweight secret sharing. We consider scenarios in which model updates are quantized to reduce communication overhead in this case, where an adversary can simply provide local parameters outside the legal range to corrupt the model. We then propose the MUD-PQFed protocol, which can precisely detect malicious clients performing attacks and enforce fair penalties. By removing the contributions of detected malicious clients, the global model utility is preserved to be comparable to the baseline global model without the attack. Extensive experiments validate effectiveness in maintaining baseline accuracy and detecting malicious clients in a fine-grained manner.
|
2103.09072
|
Giulia Belgiovine
|
Jonas Gonzalez-Billandon, Giulia Belgiovine, Alessandra Sciutti,
Giulio Sandini, Francesco Rea
|
Cognitive architecture aided by working-memory for self-supervised
multi-modal humans recognition
|
Submitted to the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to recognize human partners is an important social skill to build
personalized and long-term human-robot interactions, especially in scenarios
like education, care-giving, and rehabilitation. Faces and voices constitute
two important sources of information to enable artificial systems to reliably
recognize individuals. Deep learning networks have achieved state-of-the-art
results and demonstrated to be suitable tools to address such a task. However,
when those networks are applied to different and unprecedented scenarios not
included in the training set, they can suffer a drop in performance. For
example, with robotic platforms in ever-changing and realistic environments,
where always new sensory evidence is acquired, the performance of those models
degrades. One solution is to make robots learn from their first-hand sensory
data with self-supervision. This allows coping with the inherent variability of
the data gathered in realistic and interactive contexts. To this aim, we
propose a cognitive architecture integrating low-level perceptual processes
with a spatial working memory mechanism. The architecture autonomously
organizes the robot's sensory experience into a structured dataset suitable for
human recognition. Our results demonstrate the effectiveness of our
architecture and show that it is a promising solution in the quest of making
robots more autonomous in their learning process.
|
[
{
"created": "Tue, 16 Mar 2021 13:50:24 GMT",
"version": "v1"
}
] |
2021-03-17
|
[
[
"Gonzalez-Billandon",
"Jonas",
""
],
[
"Belgiovine",
"Giulia",
""
],
[
"Sciutti",
"Alessandra",
""
],
[
"Sandini",
"Giulio",
""
],
[
"Rea",
"Francesco",
""
]
] |
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions, especially in scenarios like education, care-giving, and rehabilitation. Faces and voices constitute two important sources of information to enable artificial systems to reliably recognize individuals. Deep learning networks have achieved state-of-the-art results and demonstrated to be suitable tools to address such a task. However, when those networks are applied to different and unprecedented scenarios not included in the training set, they can suffer a drop in performance. For example, with robotic platforms in ever-changing and realistic environments, where always new sensory evidence is acquired, the performance of those models degrades. One solution is to make robots learn from their first-hand sensory data with self-supervision. This allows coping with the inherent variability of the data gathered in realistic and interactive contexts. To this aim, we propose a cognitive architecture integrating low-level perceptual processes with a spatial working memory mechanism. The architecture autonomously organizes the robot's sensory experience into a structured dataset suitable for human recognition. Our results demonstrate the effectiveness of our architecture and show that it is a promising solution in the quest of making robots more autonomous in their learning process.
|
2407.03922
|
Antoine Legouhy
|
Antoine Legouhy, Ross Callaghan, Hojjat Azadbakht and Hui Zhang
|
POLAFFINI: Efficient feature-based polyaffine initialization for
improved non-linear image registration
|
submitted and accepted to IPMI 2023
|
Information Processing in Medical Imaging. IPMI 2023. Lecture
Notes in Computer Science, vol 13939. Springer, Cham
|
10.1007/978-3-031-34048-2_47
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an efficient feature-based approach to initialize
non-linear image registration. Today, nonlinear image registration is dominated
by methods relying on intensity-based similarity measures. A good estimate of
the initial transformation is essential, both for traditional iterative
algorithms and for recent one-shot deep learning (DL)-based alternatives. The
established approach to estimate this starting point is to perform affine
registration, but this may be insufficient due to its parsimonious, global, and
non-bending nature. We propose an improved initialization method that takes
advantage of recent advances in DL-based segmentation techniques able to
instantly estimate fine-grained regional delineations with state-of-the-art
accuracies. Those segmentations are used to produce local, anatomically
grounded, feature-based affine matchings using iteration-free closed-form
expressions. Estimated local affine transformations are then fused, with the
log-Euclidean polyaffine framework, into an overall dense diffeomorphic
transformation. We show that, compared to its affine counterpart, the proposed
initialization leads to significantly better alignment for both traditional and
DL-based non-linear registration algorithms. The proposed approach is also more
robust and significantly faster than commonly used affine registration
algorithms such as FSL FLIRT.
|
[
{
"created": "Thu, 4 Jul 2024 13:36:29 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jul 2024 08:47:44 GMT",
"version": "v2"
}
] |
2024-07-10
|
[
[
"Legouhy",
"Antoine",
""
],
[
"Callaghan",
"Ross",
""
],
[
"Azadbakht",
"Hojjat",
""
],
[
"Zhang",
"Hui",
""
]
] |
This paper presents an efficient feature-based approach to initialize non-linear image registration. Today, nonlinear image registration is dominated by methods relying on intensity-based similarity measures. A good estimate of the initial transformation is essential, both for traditional iterative algorithms and for recent one-shot deep learning (DL)-based alternatives. The established approach to estimate this starting point is to perform affine registration, but this may be insufficient due to its parsimonious, global, and non-bending nature. We propose an improved initialization method that takes advantage of recent advances in DL-based segmentation techniques able to instantly estimate fine-grained regional delineations with state-of-the-art accuracies. Those segmentations are used to produce local, anatomically grounded, feature-based affine matchings using iteration-free closed-form expressions. Estimated local affine transformations are then fused, with the log-Euclidean polyaffine framework, into an overall dense diffeomorphic transformation. We show that, compared to its affine counterpart, the proposed initialization leads to significantly better alignment for both traditional and DL-based non-linear registration algorithms. The proposed approach is also more robust and significantly faster than commonly used affine registration algorithms such as FSL FLIRT.
|
1705.08632
|
J\"urgen Koslowski
|
Horatiu Cirstea, Serguei Lenglet, Pierre-Etienne Moreau
|
Faithful (meta-)encodings of programmable strategies into term rewriting
systems
| null |
Logical Methods in Computer Science, Volume 13, Issue 4 (November
28, 2017) lmcs:4096
|
10.23638/LMCS-13(4:16)2017
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rewriting is a formalism widely used in computer science and mathematical
logic. When using rewriting as a programming or modeling paradigm, the rewrite
rules describe the transformations one wants to operate and rewriting
strategies are used to control their application. The operational semantics
of these strategies are generally accepted and approaches for analyzing the
termination of specific strategies have been studied. We propose in this paper
a generic encoding of classic control and traversal strategies used in rewrite
based languages such as Maude, Stratego and Tom into a plain term rewriting
system. The encoding is proven sound and complete and, as a direct consequence,
established termination methods used for term rewriting systems can be
applied to analyze the termination of strategy controlled term rewriting
systems. We show that the encoding of strategies into term rewriting systems
can be easily adapted to handle many-sorted signatures and we use a
meta-level representation of terms to reduce the size of the encodings. The
corresponding implementation in Tom generates term rewriting systems compatible
with the syntax of termination tools such as AProVE and TTT2, tools which
turned out to be very effective in (dis)proving the termination of the
generated term rewriting systems. The approach can also be seen as a generic
strategy compiler which can be integrated into languages providing pattern
matching primitives; experiments in Tom show that applying our encoding leads
to performances comparable to the native Tom strategies.
|
[
{
"created": "Wed, 24 May 2017 07:06:41 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Nov 2017 01:07:37 GMT",
"version": "v2"
}
] |
2019-03-14
|
[
[
"Cirstea",
"Horatiu",
""
],
[
"Lenglet",
"Serguei",
""
],
[
"Moreau",
"Pierre-Etienne",
""
]
] |
Rewriting is a formalism widely used in computer science and mathematical logic. When using rewriting as a programming or modeling paradigm, the rewrite rules describe the transformations one wants to operate and rewriting strategies are used to control their application. The operational semantics of these strategies are generally accepted and approaches for analyzing the termination of specific strategies have been studied. We propose in this paper a generic encoding of classic control and traversal strategies used in rewrite based languages such as Maude, Stratego and Tom into a plain term rewriting system. The encoding is proven sound and complete and, as a direct consequence, established termination methods used for term rewriting systems can be applied to analyze the termination of strategy controlled term rewriting systems. We show that the encoding of strategies into term rewriting systems can be easily adapted to handle many-sorted signatures and we use a meta-level representation of terms to reduce the size of the encodings. The corresponding implementation in Tom generates term rewriting systems compatible with the syntax of termination tools such as AProVE and TTT2, tools which turned out to be very effective in (dis)proving the termination of the generated term rewriting systems. The approach can also be seen as a generic strategy compiler which can be integrated into languages providing pattern matching primitives; experiments in Tom show that applying our encoding leads to performances comparable to the native Tom strategies.
|
2202.09587
|
Shiliang Zhang
|
Shiliang Zhang, Anton Hagermalm, Sanjin Slavnic, Elad Michael
Schiller, Magnus Almgren
|
Evaluation of Open-source Tools for Differential Privacy
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Differential privacy (DP) defines privacy protection by promising quantified
indistinguishability between individuals that consent to share their
privacy-sensitive information and the ones that do not. DP aims to deliver this
promise by including well-crafted elements of random noise in the published
data and thus there is an inherent trade-off between the degree of privacy
protection and the ability to utilize the protected data. Currently, several
open-source tools were proposed for DP provision. To the best of our knowledge,
there is no comprehensive study for comparing these open-source tools with
respect to their ability to balance DP's inherent trade-off as well as the use
of system resources. This work proposes an open-source evaluation framework for
privacy protection solutions and offers evaluation for OpenDP Smartnoise,
Google DP, PyTorch Opacus, Tensorflow Privacy, and Diffprivlib. In addition to
studying their ability to balance the above trade-off, we consider discrete and
continuous attributes by quantifying their performance under different data
sizes. Our results reveal several patterns that developers should have in mind
when selecting tools under different application needs and criteria. This
evaluation survey can be the basis for an improved selection of open-source DP
tools and quicker adaptation of DP.
|
[
{
"created": "Sat, 19 Feb 2022 12:14:13 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2022 13:19:29 GMT",
"version": "v2"
},
{
"created": "Tue, 24 May 2022 13:08:49 GMT",
"version": "v3"
}
] |
2022-05-25
|
[
[
"Zhang",
"Shiliang",
""
],
[
"Hagermalm",
"Anton",
""
],
[
"Slavnic",
"Sanjin",
""
],
[
"Schiller",
"Elad Michael",
""
],
[
"Almgren",
"Magnus",
""
]
] |
Differential privacy (DP) defines privacy protection by promising quantified indistinguishability between individuals that consent to share their privacy-sensitive information and the ones that do not. DP aims to deliver this promise by including well-crafted elements of random noise in the published data and thus there is an inherent trade-off between the degree of privacy protection and the ability to utilize the protected data. Currently, several open-source tools were proposed for DP provision. To the best of our knowledge, there is no comprehensive study for comparing these open-source tools with respect to their ability to balance DP's inherent trade-off as well as the use of system resources. This work proposes an open-source evaluation framework for privacy protection solutions and offers evaluation for OpenDP Smartnoise, Google DP, PyTorch Opacus, Tensorflow Privacy, and Diffprivlib. In addition to studying their ability to balance the above trade-off, we consider discrete and continuous attributes by quantifying their performance under different data sizes. Our results reveal several patterns that developers should have in mind when selecting tools under different application needs and criteria. This evaluation survey can be the basis for an improved selection of open-source DP tools and quicker adaptation of DP.
|
2303.01091
|
Gaochao Song
|
Gaochao Song, Luo Zhang, Ran Su, Jianfeng Shi, Ying He, Qian Sun
|
OPE-SR: Orthogonal Position Encoding for Designing a Parameter-free
Upsampling Module in Arbitrary-scale Image Super-Resolution
|
Accepted by CVPR 2023. 11 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implicit neural representation (INR) is a popular approach for
arbitrary-scale image super-resolution (SR), as a key component of INR,
position encoding improves its representation ability. Motivated by position
encoding, we propose orthogonal position encoding (OPE) - an extension of
position encoding - and an OPE-Upscale module to replace the INR-based
upsampling module for arbitrary-scale image super-resolution. Same as INR, our
OPE-Upscale Module takes 2D coordinates and latent code as inputs; however it
does not require training parameters. This parameter-free feature allows the
OPE-Upscale Module to directly perform linear combination operations to
reconstruct an image in a continuous manner, achieving an arbitrary-scale image
reconstruction. As a concise SR framework, our method has high computing
efficiency and consumes less memory compared to the state-of-the-art (SOTA),
which has been confirmed by extensive experiments and evaluations. In addition,
our method has comparable results with SOTA in arbitrary scale image
super-resolution. Last but not least, we show that OPE corresponds to a set
of orthogonal bases, justifying our design principle.
|
[
{
"created": "Thu, 2 Mar 2023 09:26:14 GMT",
"version": "v1"
}
] |
2023-03-03
|
[
[
"Song",
"Gaochao",
""
],
[
"Zhang",
"Luo",
""
],
[
"Su",
"Ran",
""
],
[
"Shi",
"Jianfeng",
""
],
[
"He",
"Ying",
""
],
[
"Sun",
"Qian",
""
]
] |
Implicit neural representation (INR) is a popular approach for arbitrary-scale image super-resolution (SR), as a key component of INR, position encoding improves its representation ability. Motivated by position encoding, we propose orthogonal position encoding (OPE) - an extension of position encoding - and an OPE-Upscale module to replace the INR-based upsampling module for arbitrary-scale image super-resolution. Same as INR, our OPE-Upscale Module takes 2D coordinates and latent code as inputs; however it does not require training parameters. This parameter-free feature allows the OPE-Upscale Module to directly perform linear combination operations to reconstruct an image in a continuous manner, achieving an arbitrary-scale image reconstruction. As a concise SR framework, our method has high computing efficiency and consumes less memory compared to the state-of-the-art (SOTA), which has been confirmed by extensive experiments and evaluations. In addition, our method has comparable results with SOTA in arbitrary scale image super-resolution. Last but not least, we show that OPE corresponds to a set of orthogonal bases, justifying our design principle.
|
2404.01730
|
Ahmad Beirami
|
Joy Qiping Yang and Salman Salamatian and Ziteng Sun and Ananda
Theertha Suresh and Ahmad Beirami
|
Asymptotics of Language Model Alignment
| null | null | null | null |
cs.LG cs.IT math.IT stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Let $p$ denote a generative language model. Let $r$ denote a reward model
that returns a scalar that captures the degree at which a draw from $p$ is
preferred. The goal of language model alignment is to alter $p$ to a new
distribution $\phi$ that results in a higher expected reward while keeping
$\phi$ close to $p.$ A popular alignment method is the KL-constrained
reinforcement learning (RL), which chooses a distribution $\phi_\Delta$ that
maximizes $E_{\phi_{\Delta}} r(y)$ subject to a relative entropy constraint
$KL(\phi_\Delta || p) \leq \Delta.$ Another simple alignment method is
best-of-$N$, where $N$ samples are drawn from $p$ and one with highest reward
is selected. In this paper, we offer a closed-form characterization of the
optimal KL-constrained RL solution. We demonstrate that any alignment method
that achieves a comparable trade-off between KL divergence and reward must
approximate the optimal KL-constrained RL solution in terms of relative
entropy. To further analyze the properties of alignment methods, we introduce
two simplifying assumptions: we let the language model be memoryless, and the
reward model be linear. Although these assumptions may not reflect complex
real-world scenarios, they enable a precise characterization of the asymptotic
behavior of both the best-of-$N$ alignment, and the KL-constrained RL method,
in terms of information-theoretic quantities. We prove that the reward of the
optimal KL-constrained RL solution satisfies a large deviation principle, and
we fully characterize its rate function. We also show that the rate of growth
of the scaled cumulants of the reward is characterized by a proper Renyi cross
entropy. Finally, we show that best-of-$N$ is asymptotically equivalent to
KL-constrained RL solution by proving that their expected rewards are
asymptotically equal, and concluding that the two distributions must be close
in KL divergence.
|
[
{
"created": "Tue, 2 Apr 2024 08:40:07 GMT",
"version": "v1"
}
] |
2024-04-03
|
[
[
"Yang",
"Joy Qiping",
""
],
[
"Salamatian",
"Salman",
""
],
[
"Sun",
"Ziteng",
""
],
[
"Suresh",
"Ananda Theertha",
""
],
[
"Beirami",
"Ahmad",
""
]
] |
Let $p$ denote a generative language model. Let $r$ denote a reward model that returns a scalar that captures the degree at which a draw from $p$ is preferred. The goal of language model alignment is to alter $p$ to a new distribution $\phi$ that results in a higher expected reward while keeping $\phi$ close to $p.$ A popular alignment method is the KL-constrained reinforcement learning (RL), which chooses a distribution $\phi_\Delta$ that maximizes $E_{\phi_{\Delta}} r(y)$ subject to a relative entropy constraint $KL(\phi_\Delta || p) \leq \Delta.$ Another simple alignment method is best-of-$N$, where $N$ samples are drawn from $p$ and one with highest reward is selected. In this paper, we offer a closed-form characterization of the optimal KL-constrained RL solution. We demonstrate that any alignment method that achieves a comparable trade-off between KL divergence and reward must approximate the optimal KL-constrained RL solution in terms of relative entropy. To further analyze the properties of alignment methods, we introduce two simplifying assumptions: we let the language model be memoryless, and the reward model be linear. Although these assumptions may not reflect complex real-world scenarios, they enable a precise characterization of the asymptotic behavior of both the best-of-$N$ alignment, and the KL-constrained RL method, in terms of information-theoretic quantities. We prove that the reward of the optimal KL-constrained RL solution satisfies a large deviation principle, and we fully characterize its rate function. We also show that the rate of growth of the scaled cumulants of the reward is characterized by a proper Renyi cross entropy. Finally, we show that best-of-$N$ is asymptotically equivalent to KL-constrained RL solution by proving that their expected rewards are asymptotically equal, and concluding that the two distributions must be close in KL divergence.
|
1708.05908
|
Stojan Trajanovski
|
Stojan Trajanovski, Fernando A. Kuipers, Yezekael Hayel, Eitan Altman
and Piet Van Mieghem
|
Designing virus-resistant, high-performance networks: a game-formation
approach
|
accepted for publication in IEEE Transactions on Control of Network
Systems
| null |
10.1109/TCNS.2017.2747840
| null |
cs.GT cs.NI cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing an optimal network topology while balancing multiple, possibly
conflicting objectives like cost, performance, and resiliency to viruses is a
challenging endeavor, let alone in the case of decentralized network formation.
We therefore propose a game-formation technique where each player aims to
minimize its cost in installing links, the probability of being infected by a
virus and the sum of hopcounts on its shortest paths to all other nodes.
In this article, we (1) determine the Nash Equilibria and the Price of
Anarchy for our novel network formation game, (2) demonstrate that the Price of
Anarchy (PoA) is usually low, which suggests that (near-)optimal topologies can
be formed in a decentralized way, and (3) give suggestions for practitioners
for those cases where the PoA is high and some centralized control/incentives
are advisable.
|
[
{
"created": "Sat, 19 Aug 2017 22:48:39 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Aug 2017 23:12:48 GMT",
"version": "v2"
},
{
"created": "Sun, 1 Oct 2017 16:26:07 GMT",
"version": "v3"
}
] |
2017-10-03
|
[
[
"Trajanovski",
"Stojan",
""
],
[
"Kuipers",
"Fernando A.",
""
],
[
"Hayel",
"Yezekael",
""
],
[
"Altman",
"Eitan",
""
],
[
"Van Mieghem",
"Piet",
""
]
] |
Designing an optimal network topology while balancing multiple, possibly conflicting objectives like cost, performance, and resiliency to viruses is a challenging endeavor, let alone in the case of decentralized network formation. We therefore propose a game-formation technique where each player aims to minimize its cost in installing links, the probability of being infected by a virus and the sum of hopcounts on its shortest paths to all other nodes. In this article, we (1) determine the Nash Equilibria and the Price of Anarchy for our novel network formation game, (2) demonstrate that the Price of Anarchy (PoA) is usually low, which suggests that (near-)optimal topologies can be formed in a decentralized way, and (3) give suggestions for practitioners for those cases where the PoA is high and some centralized control/incentives are advisable.
|
1709.05262
|
Vikas Garg
|
Vikas K. Garg, Adam Kalai
|
Supervising Unsupervised Learning
|
11 two column pages. arXiv admin note: substantial text overlap with
arXiv:1612.09030
| null | null | null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a framework to leverage knowledge acquired from a repository of
(heterogeneous) supervised datasets to new unsupervised datasets. Our
perspective avoids the subjectivity inherent in unsupervised learning by
reducing it to supervised learning, and provides a principled way to evaluate
unsupervised algorithms. We demonstrate the versatility of our framework via
simple agnostic bounds on unsupervised problems. In the context of clustering,
our approach helps choose the number of clusters and the clustering algorithm,
remove the outliers, and provably circumvent Kleinberg's impossibility
result. Experimental results across hundreds of problems demonstrate improved
performance on unsupervised data with simple algorithms, despite the fact that
our problems come from heterogeneous domains. Additionally, our framework lets
us leverage deep networks to learn common features from many such small
datasets, and perform zero shot learning.
|
[
{
"created": "Thu, 14 Sep 2017 14:42:41 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2018 14:08:39 GMT",
"version": "v2"
}
] |
2018-02-19
|
[
[
"Garg",
"Vikas K.",
""
],
[
"Kalai",
"Adam",
""
]
] |
We introduce a framework to leverage knowledge acquired from a repository of (heterogeneous) supervised datasets to new unsupervised datasets. Our perspective avoids the subjectivity inherent in unsupervised learning by reducing it to supervised learning, and provides a principled way to evaluate unsupervised algorithms. We demonstrate the versatility of our framework via simple agnostic bounds on unsupervised problems. In the context of clustering, our approach helps choose the number of clusters and the clustering algorithm, remove the outliers, and provably circumvent Kleinberg's impossibility result. Experimental results across hundreds of problems demonstrate improved performance on unsupervised data with simple algorithms, despite the fact that our problems come from heterogeneous domains. Additionally, our framework lets us leverage deep networks to learn common features from many such small datasets, and perform zero shot learning.
|
2209.10791
|
Guangyu Chen
|
Guangyu Chen
|
Homophone Reveals the Truth: A Reality Check for Speech2Vec
|
Corrected typos
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Generating spoken word embeddings that possess semantic information is a
fascinating topic. Compared with text-based embeddings, they cover both
phonetic and semantic characteristics, which can provide richer information and
are potentially helpful for improving ASR and speech translation systems. In
this paper, we review and examine the authenticity of a seminal work in this
field: Speech2Vec. First, a homophone-based inspection method is proposed to
check the speech embeddings released by the author of Speech2Vec. There is no
indication that these embeddings are generated by the Speech2Vec model.
Moreover, through further analysis of the vocabulary composition, we suspect
that a text-based model fabricates these embeddings. Finally, we reproduce the
Speech2Vec model, referring to the official code and optimal settings in the
original paper. Experiments showed that this model failed to learn effective
semantic embeddings. In word similarity benchmarks, it gets a correlation score
of 0.08 in MEN and 0.15 in WS-353-SIM tests, which is over 0.5 lower than those
described in the original paper. Our data and code are available.
|
[
{
"created": "Thu, 22 Sep 2022 05:32:09 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Sep 2022 11:10:15 GMT",
"version": "v2"
}
] |
2022-09-26
|
[
[
"Chen",
"Guangyu",
""
]
] |
Generating spoken word embeddings that possess semantic information is a fascinating topic. Compared with text-based embeddings, they cover both phonetic and semantic characteristics, which can provide richer information and are potentially helpful for improving ASR and speech translation systems. In this paper, we review and examine the authenticity of a seminal work in this field: Speech2Vec. First, a homophone-based inspection method is proposed to check the speech embeddings released by the author of Speech2Vec. There is no indication that these embeddings are generated by the Speech2Vec model. Moreover, through further analysis of the vocabulary composition, we suspect that a text-based model fabricates these embeddings. Finally, we reproduce the Speech2Vec model, referring to the official code and optimal settings in the original paper. Experiments showed that this model failed to learn effective semantic embeddings. In word similarity benchmarks, it gets a correlation score of 0.08 in MEN and 0.15 in WS-353-SIM tests, which is over 0.5 lower than those described in the original paper. Our data and code are available.
|
1201.2995
|
Rajathilagam Bijoy
|
B.Rajathilagam, Murali Rangarajan, K.P.Soman
|
G-Lets: Signal Processing Using Transformation Groups
|
20 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an algorithm using transformation groups and their irreducible
representations to generate an orthogonal basis for a signal in the vector
space of the signal. It is shown that multiresolution analysis can be done with
amplitudes using a transformation group. G-lets is thus not a single transform,
but a group of linear transformations related by group theory. The algorithm
also specifies that a multiresolution and multiscale analysis for each
resolution is possible in terms of frequencies. Separation of low and high
frequency components of each amplitude resolution is facilitated by G-lets.
Using conjugacy classes of the transformation group, more than one set of basis
may be generated, giving a different perspective of the signal through each
basis. Applications for this algorithm include edge detection, feature
extraction, denoising, face recognition, compression, and more. We analyze this
algorithm using dihedral groups as an example. We demonstrate the results with
an ECG signal and the standard `Lena' image.
|
[
{
"created": "Sat, 14 Jan 2012 07:18:06 GMT",
"version": "v1"
}
] |
2012-01-17
|
[
[
"Rajathilagam",
"B.",
""
],
[
"Rangarajan",
"Murali",
""
],
[
"Soman",
"K. P.",
""
]
] |
We present an algorithm using transformation groups and their irreducible representations to generate an orthogonal basis for a signal in the vector space of the signal. It is shown that multiresolution analysis can be done with amplitudes using a transformation group. G-lets is thus not a single transform, but a group of linear transformations related by group theory. The algorithm also specifies that a multiresolution and multiscale analysis for each resolution is possible in terms of frequencies. Separation of low and high frequency components of each amplitude resolution is facilitated by G-lets. Using conjugacy classes of the transformation group, more than one set of basis may be generated, giving a different perspective of the signal through each basis. Applications for this algorithm include edge detection, feature extraction, denoising, face recognition, compression, and more. We analyze this algorithm using dihedral groups as an example. We demonstrate the results with an ECG signal and the standard `Lena' image.
|
1902.10388
|
Itsikiantsoa Randrianantenaina
|
Itsikiantsoa Randrianantenaina, Megumi Kaneko, Hayssam Dahrouj, Hesham
ElSawy, and Mohamed-Slim Alouini
|
Interference Management in NOMA-based Fog-Radio Access Networks via
Joint Scheduling and Power Adaptation
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-Orthogonal Multiple Access (NOMA) and Fog Radio Access Networks (FRAN)
are promising candidates within the 5G and beyond systems. This work examines
the benefit of adopting NOMA in an FRAN architecture with constrained capacity
fronthaul. The paper proposes methods for optimizing joint scheduling and power
adaptation in the downlink of a NOMA-based FRAN with multiple resource blocks
(RB). We consider a mixed-integer optimization problem which maximizes a
network-wide rate-based utility function subject to fronthaul-capacity
constraints, so as to determine i) the user-to-RB assignment, ii) the allocated
power to each RB, and iii) the power split levels of the NOMA users in each RB.
The paper proposes a feasible decoupled solution for such non-convex
optimization problem using a three-step hybrid centralized/distributed
approach. The proposed solution complies with FRAN operation that aims to
partially shift the network control to the FAPs, so as to overcome delays due
to fronthaul rate constraints. The paper proposes and compares two distinct
methods for solving the assignment problem, namely the Hungarian method, and
the Multiple Choice Knapsack method. The power allocation and the NOMA power
split optimization, on the other hand, are solved using the alternating
direction method of multipliers (ADMM). Simulation results illustrate the
advantages of the proposed methods compared to different baseline schemes
including the conventional Orthogonal Multiple Access (OMA), for different
utility functions and different network environments.
|
[
{
"created": "Wed, 27 Feb 2019 08:38:20 GMT",
"version": "v1"
}
] |
2019-02-28
|
[
[
"Randrianantenaina",
"Itsikiantsoa",
""
],
[
"Kaneko",
"Megumi",
""
],
[
"Dahrouj",
"Hayssam",
""
],
[
"ElSawy",
"Hesham",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] |
Non-Orthogonal Multiple Access (NOMA) and Fog Radio Access Networks (FRAN) are promising candidates within the 5G and beyond systems. This work examines the benefit of adopting NOMA in an FRAN architecture with constrained capacity fronthaul. The paper proposes methods for optimizing joint scheduling and power adaptation in the downlink of a NOMA-based FRAN with multiple resource blocks (RB). We consider a mixed-integer optimization problem which maximizes a network-wide rate-based utility function subject to fronthaul-capacity constraints, so as to determine i) the user-to-RB assignment, ii) the allocated power to each RB, and iii) the power split levels of the NOMA users in each RB. The paper proposes a feasible decoupled solution for such non-convex optimization problem using a three-step hybrid centralized/distributed approach. The proposed solution complies with FRAN operation that aims to partially shift the network control to the FAPs, so as to overcome delays due to fronthaul rate constraints. The paper proposes and compares two distinct methods for solving the assignment problem, namely the Hungarian method, and the Multiple Choice Knapsack method. The power allocation and the NOMA power split optimization, on the other hand, are solved using the alternating direction method of multipliers (ADMM). Simulation results illustrate the advantages of the proposed methods compared to different baseline schemes including the conventional Orthogonal Multiple Access (OMA), for different utility functions and different network environments.
|
2401.11287
|
Mikael Bisgaard Dahlsen-Jensen
|
Mikael Bisgaard Dahlsen-Jensen (1), Baptiste Fievet (2), Laure
Petrucci (2), Jaco van de Pol (1) ((1) Aarhus University, Aarhus, Denmark,
(2) Universit\'e Sorbonne Paris Nord CNRS, Villetaneuse, France)
|
On-The-Fly Algorithm for Reachability in Parametric Timed Games
(Extended Version)
|
26 pages, 4 figures
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Parametric Timed Games (PTG) are an extension of the model of Timed Automata.
They allow for the verification and synthesis of real-time systems, reactive to
their environment and depending on adjustable parameters. Given a PTG and a
reachability objective, we synthesize the values of the parameters such that
the game is winning for the controller. We adapt and implement the On-The-Fly
algorithm for parameter synthesis for PTG. Several pruning heuristics are
introduced, to improve termination and speed of the algorithm. We evaluate the
feasibility of parameter synthesis for PTG on two large case studies. Finally,
we investigate the correctness guarantee of the algorithm: though the problem
is undecidable, our semi-algorithm produces all correct parameter valuations
``in the limit''.
|
[
{
"created": "Sat, 20 Jan 2024 17:38:43 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Dahlsen-Jensen",
"Mikael Bisgaard",
""
],
[
"Fievet",
"Baptiste",
""
],
[
"Petrucci",
"Laure",
""
],
[
"van de Pol",
"Jaco",
""
]
] |
Parametric Timed Games (PTG) are an extension of the model of Timed Automata. They allow for the verification and synthesis of real-time systems, reactive to their environment and depending on adjustable parameters. Given a PTG and a reachability objective, we synthesize the values of the parameters such that the game is winning for the controller. We adapt and implement the On-The-Fly algorithm for parameter synthesis for PTG. Several pruning heuristics are introduced, to improve termination and speed of the algorithm. We evaluate the feasibility of parameter synthesis for PTG on two large case studies. Finally, we investigate the correctness guarantee of the algorithm: though the problem is undecidable, our semi-algorithm produces all correct parameter valuations ``in the limit''.
|
2205.12594
|
Fatemeh Hadaeghi
|
Zohreh Ansari, Farzin Pourhoseini, Fatemeh Hadaeghi
|
Heterogeneous Reservoir Computing Models for Persian Speech Recognition
|
This paper was accepted for oral presentation in IEEE WCCI 2022 +
IJCNN 2022, special session on Reservoir Computing: algorithms,
implementations and applications
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Over the last decade, deep-learning methods have been gradually incorporated
into conventional automatic speech recognition (ASR) frameworks to create
acoustic, pronunciation, and language models. Although it led to significant
improvements in ASRs' recognition accuracy, due to their hard constraints
related to hardware requirements (e.g., computing power and memory usage), it
is unclear if such approaches are the most computationally- and
energy-efficient options for embedded ASR applications. Reservoir computing
(RC) models (e.g., echo state networks (ESNs) and liquid state machines
(LSMs)), on the other hand, have been proven inexpensive to train, have vastly
fewer parameters, and are compatible with emergent hardware technologies.
However, their performance in speech processing tasks is relatively inferior to
that of the deep-learning-based models. To enhance the accuracy of the RC in
ASR applications, we propose heterogeneous single and multi-layer ESNs to
create non-linear transformations of the inputs that capture temporal context
at different scales. To test our models, we performed a speech recognition task
on the Farsdat Persian dataset. Since, to the best of our knowledge, standard
RC has not yet been employed to conduct any Persian ASR tasks, we also trained
conventional single-layer and deep ESNs to provide baselines for comparison.
Besides, we compared the RC performance with a standard long-short-term memory
(LSTM) model. Heterogeneous RC models (1) show improved performance compared
to the
standard RC models; (2) perform on par in terms of recognition accuracy with
the LSTM, and (3) reduce the training time considerably.
|
[
{
"created": "Wed, 25 May 2022 09:15:15 GMT",
"version": "v1"
}
] |
2022-05-26
|
[
[
"Ansari",
"Zohreh",
""
],
[
"Pourhoseini",
"Farzin",
""
],
[
"Hadaeghi",
"Fatemeh",
""
]
] |
Over the last decade, deep-learning methods have been gradually incorporated into conventional automatic speech recognition (ASR) frameworks to create acoustic, pronunciation, and language models. Although it led to significant improvements in ASRs' recognition accuracy, due to their hard constraints related to hardware requirements (e.g., computing power and memory usage), it is unclear if such approaches are the most computationally- and energy-efficient options for embedded ASR applications. Reservoir computing (RC) models (e.g., echo state networks (ESNs) and liquid state machines (LSMs)), on the other hand, have been proven inexpensive to train, have vastly fewer parameters, and are compatible with emergent hardware technologies. However, their performance in speech processing tasks is relatively inferior to that of the deep-learning-based models. To enhance the accuracy of the RC in ASR applications, we propose heterogeneous single and multi-layer ESNs to create non-linear transformations of the inputs that capture temporal context at different scales. To test our models, we performed a speech recognition task on the Farsdat Persian dataset. Since, to the best of our knowledge, standard RC has not yet been employed to conduct any Persian ASR tasks, we also trained conventional single-layer and deep ESNs to provide baselines for comparison. Besides, we compared the RC performance with a standard long-short-term memory (LSTM) model. Heterogeneous RC models (1) show improved performance compared to the standard RC models; (2) perform on par in terms of recognition accuracy with the LSTM, and (3) reduce the training time considerably.
|
2406.09960
|
Jannis Kiesel
|
Jannis Kiesel, Elias Gr\"unewald
|
Extending Business Process Management for Regulatory Transparency
|
Preprint, accepted to the BPM Forum 2024
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ever-increasingly complex business processes are enabled by loosely coupled
cloud-native systems. In such fast-paced development environments, data
controllers face the challenge of capturing and updating all personal data
processing activities due to considerable communication overhead between
development teams and data protection staff. To date, established business
process management methods generate valuable insights about systems, however,
they do not account for all regulatory transparency obligations. For instance,
data controllers need to record all information about data categories, legal
purpose specifications, third-country transfers, etc. Therefore, we propose to
bridge the gap between business processes and application systems by providing
three contributions that assist in modeling, discovering, and checking personal
data transparency through a process-oriented perspective. We enable
transparency modeling for relevant business activities by providing a plug-in
extension to BPMN featuring regulatory transparency information. Furthermore,
we utilize event logs to record regulatory transparency information in
realistic cloud-native systems. On this basis, we leverage process mining
techniques to discover and analyze personal data flows in business processes,
e.g., through transparency conformance checking. We design and implement
prototypes for all contributions, emphasizing the appropriate integration and
modeling effort required to create business-process-oriented transparency.
Altogether, we connect current business process engineering techniques with
regulatory needs as imposed by the GDPR and other legal frameworks.
|
[
{
"created": "Fri, 14 Jun 2024 12:08:34 GMT",
"version": "v1"
}
] |
2024-06-17
|
[
[
"Kiesel",
"Jannis",
""
],
[
"Grünewald",
"Elias",
""
]
] |
Ever-increasingly complex business processes are enabled by loosely coupled cloud-native systems. In such fast-paced development environments, data controllers face the challenge of capturing and updating all personal data processing activities due to considerable communication overhead between development teams and data protection staff. To date, established business process management methods generate valuable insights about systems, however, they do not account for all regulatory transparency obligations. For instance, data controllers need to record all information about data categories, legal purpose specifications, third-country transfers, etc. Therefore, we propose to bridge the gap between business processes and application systems by providing three contributions that assist in modeling, discovering, and checking personal data transparency through a process-oriented perspective. We enable transparency modeling for relevant business activities by providing a plug-in extension to BPMN featuring regulatory transparency information. Furthermore, we utilize event logs to record regulatory transparency information in realistic cloud-native systems. On this basis, we leverage process mining techniques to discover and analyze personal data flows in business processes, e.g., through transparency conformance checking. We design and implement prototypes for all contributions, emphasizing the appropriate integration and modeling effort required to create business-process-oriented transparency. Altogether, we connect current business process engineering techniques with regulatory needs as imposed by the GDPR and other legal frameworks.
|
2307.03319
|
Amir Globerson
|
Roni Rabin, Alexandre Djerbetian, Roee Engelberg, Lidan Hackmon, Gal
Elidan, Reut Tsarfaty, Amir Globerson
|
Covering Uncommon Ground: Gap-Focused Question Generation for Answer
Assessment
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Human communication often involves information gaps between the
interlocutors. For example, in an educational dialogue, a student often
provides an answer that is incomplete, and there is a gap between this answer
and the perfect one expected by the teacher. Successful dialogue then hinges on
the teacher asking about this gap in an effective manner, thus creating a rich
and interactive educational experience. We focus on the problem of generating
such gap-focused questions (GFQs) automatically. We define the task, highlight
key desired aspects of a good GFQ, and propose a model that satisfies these.
Finally, we provide an evaluation by human annotators of our generated
questions compared against human generated ones, demonstrating competitive
performance.
|
[
{
"created": "Thu, 6 Jul 2023 22:21:42 GMT",
"version": "v1"
}
] |
2023-07-10
|
[
[
"Rabin",
"Roni",
""
],
[
"Djerbetian",
"Alexandre",
""
],
[
"Engelberg",
"Roee",
""
],
[
"Hackmon",
"Lidan",
""
],
[
"Elidan",
"Gal",
""
],
[
"Tsarfaty",
"Reut",
""
],
[
"Globerson",
"Amir",
""
]
] |
Human communication often involves information gaps between the interlocutors. For example, in an educational dialogue, a student often provides an answer that is incomplete, and there is a gap between this answer and the perfect one expected by the teacher. Successful dialogue then hinges on the teacher asking about this gap in an effective manner, thus creating a rich and interactive educational experience. We focus on the problem of generating such gap-focused questions (GFQs) automatically. We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these. Finally, we provide an evaluation by human annotators of our generated questions compared against human generated ones, demonstrating competitive performance.
|
cs/0603023
|
Marcus Hutter
|
Viktor Zhumatiy and Faustino Gomez and Marcus Hutter and Juergen
Schmidhuber
|
Metric State Space Reinforcement Learning for a Vision-Capable Mobile
Robot
|
14 pages, 8 figures
|
Proc. 9th International Conf. on Intelligent Autonomous Systems
(IAS 2006) pages 272-281
| null |
IDSIA-05-06
|
cs.RO cs.LG
| null |
We address the problem of autonomously learning controllers for
vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence
Memory algorithm to allow for general metrics over state-action trajectories.
We demonstrate the feasibility of our approach by successfully running our
algorithm on a real mobile robot. The algorithm is novel and unique in that it
(a) explores the environment and learns directly on a mobile robot without
using a hand-made computer model as an intermediate step, (b) does not require
manual discretization of the sensor input space, (c) works in piecewise
continuous perceptual spaces, and (d) copes with partial observability.
Together this allows learning from much less experience compared to previous
methods.
|
[
{
"created": "Tue, 7 Mar 2006 08:44:29 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Zhumatiy",
"Viktor",
""
],
[
"Gomez",
"Faustino",
""
],
[
"Hutter",
"Marcus",
""
],
[
"Schmidhuber",
"Juergen",
""
]
] |
We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods.
|
2211.06385
|
Md Vasimuddin
|
Md Vasimuddin, Ramanarayan Mohanty, Sanchit Misra, Sasikanth Avancha
|
DistGNN-MB: Distributed Large-Scale Graph Neural Network Training on x86
via Minibatch Sampling
| null | null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Training Graph Neural Networks, on graphs containing billions of vertices and
edges, at scale using minibatch sampling poses a key challenge: strong-scaling
graphs and training examples results in lower compute and higher communication
volume and potential performance loss. DistGNN-MB employs a novel Historical
Embedding Cache combined with compute-communication overlap to address this
challenge. On a 32-node (64-socket) cluster of $3^{rd}$ generation Intel Xeon
Scalable Processors with 36 cores per socket, DistGNN-MB trains 3-layer
GraphSAGE and GAT models on OGBN-Papers100M to convergence with epoch times of
2 seconds and 4.9 seconds, respectively, on 32 compute nodes. At this scale,
DistGNN-MB trains GraphSAGE 5.2x faster than the widely-used DistDGL.
DistGNN-MB trains GraphSAGE and GAT 10x and 17.2x faster, respectively, as
compute nodes scale from 2 to 32.
|
[
{
"created": "Fri, 11 Nov 2022 18:07:33 GMT",
"version": "v1"
}
] |
2022-11-14
|
[
[
"Vasimuddin",
"Md",
""
],
[
"Mohanty",
"Ramanarayan",
""
],
[
"Misra",
"Sanchit",
""
],
[
"Avancha",
"Sasikanth",
""
]
] |
Training Graph Neural Networks, on graphs containing billions of vertices and edges, at scale using minibatch sampling poses a key challenge: strong-scaling graphs and training examples results in lower compute and higher communication volume and potential performance loss. DistGNN-MB employs a novel Historical Embedding Cache combined with compute-communication overlap to address this challenge. On a 32-node (64-socket) cluster of $3^{rd}$ generation Intel Xeon Scalable Processors with 36 cores per socket, DistGNN-MB trains 3-layer GraphSAGE and GAT models on OGBN-Papers100M to convergence with epoch times of 2 seconds and 4.9 seconds, respectively, on 32 compute nodes. At this scale, DistGNN-MB trains GraphSAGE 5.2x faster than the widely-used DistDGL. DistGNN-MB trains GraphSAGE and GAT 10x and 17.2x faster, respectively, as compute nodes scale from 2 to 32.
|
2307.02105
|
Matthias Barkowsky
|
Matthias Barkowsky and Holger Giese
|
Incremental Model Transformations with Triple Graph Grammars for
Multi-version Models
|
arXiv admin note: substantial text overlap with arXiv:2301.00623
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Like conventional software projects, projects in model-driven software
engineering require adequate management of multiple versions of development
artifacts, importantly allowing living with temporary inconsistencies. In
previous work, multi-version models for model-driven software engineering have
been introduced, which allow checking well-formedness and finding merge
conflicts for multiple versions of a model at once. However, also for
multi-version models, situations where different artifacts, that is, different
models, are linked via automatic model transformations have to be handled.
In this paper, we propose a technique for jointly handling the transformation
of multiple versions of a source model into corresponding versions of a target
model, which enables the use of a more compact representation that may afford
improved execution time of both the transformation and further analysis
operations. Our approach is based on the well-known formalism of triple graph
grammars and the aforementioned encoding of model version histories called
multi-version models. In addition to batch transformation of an entire model
version history, the technique also covers incremental synchronization of
changes in the framework of multi-version models.
We show the correctness of our approach with respect to the standard
semantics of triple graph grammars and conduct an empirical evaluation to
investigate the performance of our technique regarding execution time and
memory consumption. Our results indicate that the proposed technique affords
lower memory consumption and may improve execution time for batch
transformation of large version histories, but can also come with computational
overhead in unfavorable cases.
|
[
{
"created": "Wed, 5 Jul 2023 08:26:18 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jul 2023 12:49:21 GMT",
"version": "v2"
}
] |
2023-07-10
|
[
[
"Barkowsky",
"Matthias",
""
],
[
"Giese",
"Holger",
""
]
] |
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing living with temporary inconsistencies. In previous work, multi-version models for model-driven software engineering have been introduced, which allow checking well-formedness and finding merge conflicts for multiple versions of a model at once. However, also for multi-version models, situations where different artifacts, that is, different models, are linked via automatic model transformations have to be handled. In this paper, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and the aforementioned encoding of model version histories called multi-version models. In addition to batch transformation of an entire model version history, the technique also covers incremental synchronization of changes in the framework of multi-version models. We show the correctness of our approach with respect to the standard semantics of triple graph grammars and conduct an empirical evaluation to investigate the performance of our technique regarding execution time and memory consumption. Our results indicate that the proposed technique affords lower memory consumption and may improve execution time for batch transformation of large version histories, but can also come with computational overhead in unfavorable cases.
|
2307.07313
|
Oscar Carlsson
|
Oscar Carlsson, Jan E. Gerken, Hampus Linander, Heiner Spie{\ss},
Fredrik Ohlsson, Christoffer Petersson, Daniel Persson
|
HEAL-SWIN: A Vision Transformer On The Sphere
|
Accepted as poster to CVPR 2024. Main body: 10 pages, 7 figures.
Appendices: 9 pages, 6 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-resolution wide-angle fisheye images are becoming more and more
important for robotics applications such as autonomous driving. However, using
ordinary convolutional neural networks or vision transformers on this data is
problematic due to projection and distortion losses introduced when projecting
to a rectangular grid on the plane. We introduce the HEAL-SWIN transformer,
which combines the highly uniform Hierarchical Equal Area iso-Latitude
Pixelation (HEALPix) grid used in astrophysics and cosmology with the
Hierarchical Shifted-Window (SWIN) transformer to yield an efficient and
flexible model capable of training on high-resolution, distortion-free
spherical data. In HEAL-SWIN, the nested structure of the HEALPix grid is used
to perform the patching and windowing operations of the SWIN transformer,
enabling the network to process spherical representations with minimal
computational overhead. We demonstrate the superior performance of our model on
both synthetic and real automotive datasets, as well as a selection of other
image datasets, for semantic segmentation, depth regression and classification
tasks. Our code is publicly available at
https://github.com/JanEGerken/HEAL-SWIN.
|
[
{
"created": "Fri, 14 Jul 2023 12:46:59 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2024 15:49:58 GMT",
"version": "v2"
}
] |
2024-05-09
|
[
[
"Carlsson",
"Oscar",
""
],
[
"Gerken",
"Jan E.",
""
],
[
"Linander",
"Hampus",
""
],
[
"Spieß",
"Heiner",
""
],
[
"Ohlsson",
"Fredrik",
""
],
[
"Petersson",
"Christoffer",
""
],
[
"Persson",
"Daniel",
""
]
] |
High-resolution wide-angle fisheye images are becoming more and more important for robotics applications such as autonomous driving. However, using ordinary convolutional neural networks or vision transformers on this data is problematic due to projection and distortion losses introduced when projecting to a rectangular grid on the plane. We introduce the HEAL-SWIN transformer, which combines the highly uniform Hierarchical Equal Area iso-Latitude Pixelation (HEALPix) grid used in astrophysics and cosmology with the Hierarchical Shifted-Window (SWIN) transformer to yield an efficient and flexible model capable of training on high-resolution, distortion-free spherical data. In HEAL-SWIN, the nested structure of the HEALPix grid is used to perform the patching and windowing operations of the SWIN transformer, enabling the network to process spherical representations with minimal computational overhead. We demonstrate the superior performance of our model on both synthetic and real automotive datasets, as well as a selection of other image datasets, for semantic segmentation, depth regression and classification tasks. Our code is publicly available at https://github.com/JanEGerken/HEAL-SWIN.
|
1701.00806
|
Monique Laurent
|
Monique Laurent, Matteo Seminaroti, Shin-ichi Tanigawa
|
A Structural Characterization for Certifying Robinsonian Matrices
|
21 pages, 1 figure
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A symmetric matrix is Robinsonian if its rows and columns can be
simultaneously reordered in such a way that entries are monotone nondecreasing
in rows and columns when moving toward the diagonal. The adjacency matrix of a
graph is Robinsonian precisely when the graph is a unit interval graph, so that
Robinsonian matrices form a matrix analogue of the class of unit interval
graphs. Here we provide a structural characterization for Robinsonian matrices
in terms of forbidden substructures, extending the notion of asteroidal triples
to weighted graphs. This implies the known characterization of unit interval
graphs and leads to an efficient algorithm for certifying that a matrix is not
Robinsonian.
|
[
{
"created": "Tue, 3 Jan 2017 19:59:17 GMT",
"version": "v1"
}
] |
2018-11-20
|
[
[
"Laurent",
"Monique",
""
],
[
"Seminaroti",
"Matteo",
""
],
[
"Tanigawa",
"Shin-ichi",
""
]
] |
A symmetric matrix is Robinsonian if its rows and columns can be simultaneously reordered in such a way that entries are monotone nondecreasing in rows and columns when moving toward the diagonal. The adjacency matrix of a graph is Robinsonian precisely when the graph is a unit interval graph, so that Robinsonian matrices form a matrix analogue of the class of unit interval graphs. Here we provide a structural characterization for Robinsonian matrices in terms of forbidden substructures, extending the notion of asteroidal triples to weighted graphs. This implies the known characterization of unit interval graphs and leads to an efficient algorithm for certifying that a matrix is not Robinsonian.
|
1402.5979
|
Renato J Cintra
|
V. A. Coutinho, R. J. Cintra, F. M. Bayer, S. Kulasekera, A.
Madanayake
|
A Multiplierless Pruned DCT-like Transformation for Image and Video
Compression that Requires 10 Additions Only
|
13 pages, 4 figures, 5 tables
|
Journal of Real-Time Image Processing, August 2016, Volume 12,
Issue 2, pp 247-255
|
10.1007/s11554-015-0492-8
| null |
cs.MM cs.CV stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A multiplierless pruned approximate 8-point discrete cosine transform (DCT)
requiring only 10 additions is introduced. The proposed algorithm was assessed
in image and video compression, showing competitive performance with
state-of-the-art methods. Digital implementation in 45 nm CMOS technology up to
place-and-route level indicates clock speed of 288 MHz at a 1.1 V supply. The
8x8 block rate is 36 MHz. The DCT approximation was embedded into HEVC reference
software; resulting video frames, at up to 327 Hz for 8-bit RGB HEVC, presented
negligible image degradation.
|
[
{
"created": "Mon, 24 Feb 2014 21:04:41 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Dec 2016 19:23:57 GMT",
"version": "v2"
}
] |
2016-12-13
|
[
[
"Coutinho",
"V. A.",
""
],
[
"Cintra",
"R. J.",
""
],
[
"Bayer",
"F. M.",
""
],
[
"Kulasekera",
"S.",
""
],
[
"Madanayake",
"A.",
""
]
] |
A multiplierless pruned approximate 8-point discrete cosine transform (DCT) requiring only 10 additions is introduced. The proposed algorithm was assessed in image and video compression, showing competitive performance with state-of-the-art methods. Digital implementation in 45 nm CMOS technology up to place-and-route level indicates clock speed of 288 MHz at a 1.1 V supply. The 8x8 block rate is 36 MHz. The DCT approximation was embedded into HEVC reference software; resulting video frames, at up to 327 Hz for 8-bit RGB HEVC, presented negligible image degradation.
|
1807.06446
|
Haoyu Yang
|
Haoyu Yang, Shuhe Li, Cyrus Tabery, Bingqing Lin, Bei Yu
|
Bridging the Gap Between Layout Pattern Sampling and Hotspot Detection
via Batch Active Learning
|
8 pages, 7 figures
| null | null | null |
cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Layout hotspot detection is one of the main steps in modern VLSI design. A
typical hotspot detection flow is extremely time consuming due to the
computationally expensive mask optimization and lithographic simulation. Recent
research tries to facilitate the procedure with a reduced flow including
feature extraction, training set generation and hotspot detection, where
feature extraction methods and hotspot detection engines are deeply studied.
However, the performance of hotspot detectors relies highly on the quality of
reference layout libraries which are costly to obtain and usually predetermined
or randomly sampled in previous works. In this paper, we propose an active
learning-based layout pattern sampling and hotspot detection flow, which
simultaneously optimizes the machine learning model and the training set that
aims to achieve similar or better hotspot detection performance with much
smaller number of training instances. Experimental results show that our
proposed method can significantly reduce lithography simulation overhead while
attaining satisfactory detection accuracy on designs under both DUV and EUV
lithography technologies.
|
[
{
"created": "Fri, 13 Jul 2018 17:51:42 GMT",
"version": "v1"
}
] |
2018-07-18
|
[
[
"Yang",
"Haoyu",
""
],
[
"Li",
"Shuhe",
""
],
[
"Tabery",
"Cyrus",
""
],
[
"Lin",
"Bingqing",
""
],
[
"Yu",
"Bei",
""
]
] |
Layout hotspot detection is one of the main steps in modern VLSI design. A typical hotspot detection flow is extremely time consuming due to the computationally expensive mask optimization and lithographic simulation. Recent research tries to facilitate the procedure with a reduced flow including feature extraction, training set generation and hotspot detection, where feature extraction methods and hotspot detection engines are deeply studied. However, the performance of hotspot detectors relies highly on the quality of reference layout libraries which are costly to obtain and usually predetermined or randomly sampled in previous works. In this paper, we propose an active learning-based layout pattern sampling and hotspot detection flow, which simultaneously optimizes the machine learning model and the training set that aims to achieve similar or better hotspot detection performance with much smaller number of training instances. Experimental results show that our proposed method can significantly reduce lithography simulation overhead while attaining satisfactory detection accuracy on designs under both DUV and EUV lithography technologies.
|
1101.0302
|
Tsachy Weissman
|
Rami Atar and Tsachy Weissman
|
Mutual Information, Relative Entropy, and Estimation in the Poisson
Channel
|
24 pages, 4 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $X$ be a non-negative random variable and let the conditional
distribution of a random variable $Y$, given $X$, be ${Poisson}(\gamma \cdot
X)$, for a parameter $\gamma \geq 0$. We identify a natural loss function such
that: 1) The derivative of the mutual information between $X$ and $Y$ with
respect to $\gamma$ is equal to the \emph{minimum} mean loss in estimating $X$
based on $Y$, regardless of the distribution of $X$. 2) When $X \sim P$ is
estimated based on $Y$ by a mismatched estimator that would have minimized the
expected loss had $X \sim Q$, the integral over all values of $\gamma$ of the
excess mean loss is equal to the relative entropy between $P$ and $Q$.
For a continuous time setting where $X^T = \{X_t, 0 \leq t \leq T \}$ is a
non-negative stochastic process and the conditional law of $Y^T=\{Y_t, 0\le
t\le T\}$, given $X^T$, is that of a non-homogeneous Poisson process with
intensity function $\gamma \cdot X^T$, under the same loss function: 1) The
minimum mean loss in \emph{causal} filtering when $\gamma = \gamma_0$ is equal
to the expected value of the minimum mean loss in \emph{non-causal} filtering
(smoothing) achieved with a channel whose parameter $\gamma$ is uniformly
distributed between 0 and $\gamma_0$. Bridging the two quantities is the mutual
information between $X^T$ and $Y^T$. 2) This relationship between the mean
losses in causal and non-causal filtering holds also in the case where the
filters employed are mismatched, i.e., optimized assuming a law on $X^T$ which
is not the true one. Bridging the two quantities in this case is the sum of the
mutual information and the relative entropy between the true and the mismatched
distribution of $Y^T$. Thus, relative entropy quantifies the excess estimation
loss due to mismatch in this setting.
These results parallel those recently found for the Gaussian channel.
|
[
{
"created": "Fri, 31 Dec 2010 21:28:43 GMT",
"version": "v1"
}
] |
2015-03-17
|
[
[
"Atar",
"Rami",
""
],
[
"Weissman",
"Tsachy",
""
]
] |
Let $X$ be a non-negative random variable and let the conditional distribution of a random variable $Y$, given $X$, be ${Poisson}(\gamma \cdot X)$, for a parameter $\gamma \geq 0$. We identify a natural loss function such that: 1) The derivative of the mutual information between $X$ and $Y$ with respect to $\gamma$ is equal to the \emph{minimum} mean loss in estimating $X$ based on $Y$, regardless of the distribution of $X$. 2) When $X \sim P$ is estimated based on $Y$ by a mismatched estimator that would have minimized the expected loss had $X \sim Q$, the integral over all values of $\gamma$ of the excess mean loss is equal to the relative entropy between $P$ and $Q$. For a continuous time setting where $X^T = \{X_t, 0 \leq t \leq T \}$ is a non-negative stochastic process and the conditional law of $Y^T=\{Y_t, 0\le t\le T\}$, given $X^T$, is that of a non-homogeneous Poisson process with intensity function $\gamma \cdot X^T$, under the same loss function: 1) The minimum mean loss in \emph{causal} filtering when $\gamma = \gamma_0$ is equal to the expected value of the minimum mean loss in \emph{non-causal} filtering (smoothing) achieved with a channel whose parameter $\gamma$ is uniformly distributed between 0 and $\gamma_0$. Bridging the two quantities is the mutual information between $X^T$ and $Y^T$. 2) This relationship between the mean losses in causal and non-causal filtering holds also in the case where the filters employed are mismatched, i.e., optimized assuming a law on $X^T$ which is not the true one. Bridging the two quantities in this case is the sum of the mutual information and the relative entropy between the true and the mismatched distribution of $Y^T$. Thus, relative entropy quantifies the excess estimation loss due to mismatch in this setting. These results parallel those recently found for the Gaussian channel.
|
1806.02867
|
Guy Lorberbom
|
Guy Lorberbom (Technion), Andreea Gane (MIT), Tommi Jaakkola (MIT),
Tamir Hazan (Technion)
|
Direct Optimization through $\arg \max$ for Discrete Variational
Auto-Encoder
|
Accepted by Neural Information Processing Systems (NeurIPS 2019)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reparameterization of variational auto-encoders with continuous random
variables is an effective method for reducing the variance of their gradient
estimates. In the discrete case, one can perform reparametrization using the
Gumbel-Max trick, but the resulting objective relies on an $\arg \max$
operation and is non-differentiable. In contrast to previous works which resort
to softmax-based relaxations, we propose to optimize it directly by applying
the direct loss minimization approach. Our proposal extends naturally to
structured discrete latent variable models when evaluating the $\arg \max$
operation is tractable. We demonstrate empirically the effectiveness of the
direct loss minimization technique in variational autoencoders with both
unstructured and structured discrete latent variables.
|
[
{
"created": "Thu, 7 Jun 2018 19:09:21 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Oct 2018 17:07:53 GMT",
"version": "v2"
},
{
"created": "Sat, 9 Feb 2019 19:34:43 GMT",
"version": "v3"
},
{
"created": "Thu, 30 May 2019 13:49:37 GMT",
"version": "v4"
},
{
"created": "Sun, 8 Dec 2019 08:59:53 GMT",
"version": "v5"
}
] |
2019-12-10
|
[
[
"Lorberbom",
"Guy",
"",
"Technion"
],
[
"Gane",
"Andreea",
"",
"MIT"
],
[
"Jaakkola",
"Tommi",
"",
"MIT"
],
[
"Hazan",
"Tamir",
"",
"Technion"
]
] |
Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In the discrete case, one can perform reparametrization using the Gumbel-Max trick, but the resulting objective relies on an $\arg \max$ operation and is non-differentiable. In contrast to previous works which resort to softmax-based relaxations, we propose to optimize it directly by applying the direct loss minimization approach. Our proposal extends naturally to structured discrete latent variable models when evaluating the $\arg \max$ operation is tractable. We demonstrate empirically the effectiveness of the direct loss minimization technique in variational autoencoders with both unstructured and structured discrete latent variables.
|
2103.15596
|
Thiago Gomes
|
Thiago L. Gomes and Renato Martins and Jo\~ao Ferreira and Rafael
Azevedo and Guilherme Torres and Erickson R. Nascimento
|
A Shape-Aware Retargeting Approach to Transfer Human Motion and
Appearance in Monocular Videos
|
19 pages, 13 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Transferring human motion and appearance between videos of human actors
remains one of the key challenges in Computer Vision. Despite the advances from
recent image-to-image translation approaches, there are several transferring
contexts where most end-to-end learning-based retargeting methods still perform
poorly. Transferring human appearance from one actor to another is only ensured
when a strict setup is complied with, one generally built around the
specificities of their training regime. In this work, we propose a shape-aware
approach based on a hybrid image-based rendering technique that exhibits
competitive visual retargeting quality compared to state-of-the-art neural
rendering approaches. The formulation leverages the user body shape into the
retargeting while considering physical constraints of the motion in 3D and the
2D image domain. We also present a new video retargeting benchmark dataset
composed of different videos with annotated human motions to evaluate the task
of synthesizing people's videos, which can be used as a common base to improve
tracking the progress in the field. The dataset and its evaluation protocols
are designed to evaluate retargeting methods in more general and challenging
conditions. Our method is validated in several experiments, comprising publicly
available videos of actors with different shapes, motion types, and camera
setups. The dataset and retargeting code are publicly available to the
community at: https://www.verlab.dcc.ufmg.br/retargeting-motion.
|
[
{
"created": "Mon, 29 Mar 2021 13:17:41 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Apr 2021 15:56:27 GMT",
"version": "v2"
}
] |
2021-04-29
|
[
[
"Gomes",
"Thiago L.",
""
],
[
"Martins",
"Renato",
""
],
[
"Ferreira",
"João",
""
],
[
"Azevedo",
"Rafael",
""
],
[
"Torres",
"Guilherme",
""
],
[
"Nascimento",
"Erickson R.",
""
]
] |
Transferring human motion and appearance between videos of human actors remains one of the key challenges in Computer Vision. Despite the advances from recent image-to-image translation approaches, there are several transferring contexts where most end-to-end learning-based retargeting methods still perform poorly. Transferring human appearance from one actor to another is only ensured when a strict setup has been complied with, which is generally built considering their training regime's specificities. In this work, we propose a shape-aware approach based on a hybrid image-based rendering technique that exhibits competitive visual retargeting quality compared to state-of-the-art neural rendering approaches. The formulation leverages the user body shape into the retargeting while considering physical constraints of the motion in 3D and the 2D image domain. We also present a new video retargeting benchmark dataset composed of different videos with annotated human motions to evaluate the task of synthesizing people's videos, which can be used as a common base to improve tracking the progress in the field. The dataset and its evaluation protocols are designed to evaluate retargeting methods in more general and challenging conditions. Our method is validated in several experiments, comprising publicly available videos of actors with different shapes, motion types, and camera setups. The dataset and retargeting code are publicly available to the community at: https://www.verlab.dcc.ufmg.br/retargeting-motion.
|
1805.07966
|
Sopan Khosla
|
Sopan Khosla, Niyati Chhaya, Kushal Chawla
|
Aff2Vec: Affect--Enriched Distributional Word Representations
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Human communication includes information, opinions, and reactions. Reactions
are often captured by the affective-messages in written as well as verbal
communications. While there has been work in affect modeling and to some extent
affective content generation, the area of affective word distributions is not
well studied. Synsets and lexica capture semantic relationships across words.
These models however lack in encoding affective or emotional word
interpretations. Our proposed model, Aff2Vec provides a method for enriched
word embeddings that are representative of affective interpretations of words.
Aff2Vec outperforms the state--of--the--art in intrinsic word-similarity tasks.
Further, the use of Aff2Vec representations outperforms baseline embeddings in
downstream natural language understanding tasks including sentiment analysis,
personality detection, and frustration prediction.
|
[
{
"created": "Mon, 21 May 2018 10:10:16 GMT",
"version": "v1"
}
] |
2018-05-22
|
[
[
"Khosla",
"Sopan",
""
],
[
"Chhaya",
"Niyati",
""
],
[
"Chawla",
"Kushal",
""
]
] |
Human communication includes information, opinions, and reactions. Reactions are often captured by the affective-messages in written as well as verbal communications. While there has been work in affect modeling and to some extent affective content generation, the area of affective word distributions is not well studied. Synsets and lexica capture semantic relationships across words. These models however lack in encoding affective or emotional word interpretations. Our proposed model, Aff2Vec provides a method for enriched word embeddings that are representative of affective interpretations of words. Aff2Vec outperforms the state--of--the--art in intrinsic word-similarity tasks. Further, the use of Aff2Vec representations outperforms baseline embeddings in downstream natural language understanding tasks including sentiment analysis, personality detection, and frustration prediction.
|
2202.12650
|
Javier L\'opez-Randulfe
|
Javier L\'opez-Randulfe, Nico Reeb, Negin Karimi, Chen Liu, Hector A.
Gonzalez, Robin Dietrich, Bernhard Vogginger, Christian Mayr, Alois Knoll
|
Time-coded Spiking Fourier Transform in Neuromorphic Hardware
|
Accepted version on IEEE Transactions on Computers (early access).
Added copyright notice
| null |
10.1109/TC.2022.3162708
| null |
cs.NE eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
After several decades of continuously optimizing computing systems, Moore's
law is reaching its end. However, there is an increasing demand for fast
and efficient processing systems that can handle large streams of data while
decreasing system footprints. Neuromorphic computing answers this need by
creating decentralized architectures that communicate with binary events over
time. Despite its rapid growth in the last few years, novel algorithms are
needed that can leverage the potential of this emerging computing paradigm and
can stimulate the design of advanced neuromorphic chips. In this work, we
propose a time-based spiking neural network that is mathematically equivalent
to the Fourier transform. We implemented the network in the neuromorphic chip
Loihi and conducted experiments on five different real scenarios with an
automotive frequency modulated continuous wave radar. Experimental results
validate the algorithm, and we hope they prompt the design of ad hoc
neuromorphic chips that can improve the efficiency of state-of-the-art digital
signal processors and encourage research on neuromorphic computing for signal
processing.
|
[
{
"created": "Fri, 25 Feb 2022 12:15:46 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Mar 2022 10:34:13 GMT",
"version": "v2"
}
] |
2022-04-01
|
[
[
"López-Randulfe",
"Javier",
""
],
[
"Reeb",
"Nico",
""
],
[
"Karimi",
"Negin",
""
],
[
"Liu",
"Chen",
""
],
[
"Gonzalez",
"Hector A.",
""
],
[
"Dietrich",
"Robin",
""
],
[
"Vogginger",
"Bernhard",
""
],
[
"Mayr",
"Christian",
""
],
[
"Knoll",
"Alois",
""
]
] |
After several decades of continuously optimizing computing systems, Moore's law is reaching its end. However, there is an increasing demand for fast and efficient processing systems that can handle large streams of data while decreasing system footprints. Neuromorphic computing answers this need by creating decentralized architectures that communicate with binary events over time. Despite its rapid growth in the last few years, novel algorithms are needed that can leverage the potential of this emerging computing paradigm and can stimulate the design of advanced neuromorphic chips. In this work, we propose a time-based spiking neural network that is mathematically equivalent to the Fourier transform. We implemented the network in the neuromorphic chip Loihi and conducted experiments on five different real scenarios with an automotive frequency modulated continuous wave radar. Experimental results validate the algorithm, and we hope they prompt the design of ad hoc neuromorphic chips that can improve the efficiency of state-of-the-art digital signal processors and encourage research on neuromorphic computing for signal processing.
|