Each record below has fifteen fields (⌀ marks columns that may be null):

id: string (9–10 chars)
submitter: string (1–64 chars) ⌀
authors: string (4–20.7k chars)
title: string (4–246 chars)
comments: string (1–523 chars) ⌀
journal-ref: string (4–404 chars) ⌀
doi: string (11–153 chars) ⌀
report-no: string (2–254 chars) ⌀
categories: string (5–98 chars)
license: string (9 distinct values)
orig_abstract: string (14–3.35k chars)
versions: list (1–60 items)
update_date: string (10 chars)
authors_parsed: list (1–1.35k items)
abstract: string (11–3.34k chars)
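As a rough illustration of this schema (not part of the dataset itself), a record can be modeled as a plain dictionary keyed by the column names above. The helper below is a hypothetical sketch, assuming `versions` and `authors_parsed` arrive as already-parsed JSON lists; `make_record` and `NULLABLE` are names introduced here for illustration only:

```python
from datetime import datetime

# Column names from the schema above; the "⌀" columns may be None.
COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]
NULLABLE = {"submitter", "comments", "journal-ref", "doi", "report-no"}

def make_record(values):
    """Zip one row of 15 field values into a record dict and sanity-check it."""
    if len(values) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(values)}")
    record = dict(zip(COLUMNS, values))
    for col, val in record.items():
        if val is None and col not in NULLABLE:
            raise ValueError(f"column {col!r} must not be null")
    # update_date is a 10-character ISO date, e.g. "2022-02-02"
    datetime.strptime(record["update_date"], "%Y-%m-%d")
    return record

# Example row built from the first record in this dump (abstracts elided).
row = [
    "2202.00155", "Hattie Zhou",
    "Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville",
    "Fortuitous Forgetting in Connectionist Networks",
    "ICLR Camera Ready", "ICLR 2022", None, None,
    "cs.LG cs.AI cs.NE",
    "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "Forgetting is often seen as ...",
    [{"created": "Tue, 1 Feb 2022 00:15:58 GMT", "version": "v1"}],
    "2022-02-02",
    [["Zhou", "Hattie", ""]],
    "Forgetting is often seen as ...",
]
record = make_record(row)
```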
id: 2202.00155
submitter: Hattie Zhou
authors: Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville
title: Fortuitous Forgetting in Connectionist Networks
comments: ICLR Camera Ready
journal-ref: ICLR 2022
doi: null
report-no: null
categories: cs.LG cs.AI cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce "forget-and-relearn" as a powerful paradigm for shaping the learning trajectories of artificial neural networks. In this process, the forgetting step selectively removes undesirable information from the model, and the relearning step reinforces features that are consistently useful under different conditions. The forget-and-relearn framework unifies many existing iterative training algorithms in the image classification and language emergence literature, and allows us to understand the success of these algorithms in terms of the disproportionate forgetting of undesirable information. We leverage this understanding to improve upon existing algorithms by designing more targeted forgetting operations. Insights from our analysis provide a coherent view on the dynamics of iterative training in neural networks and offer a clear path towards performance improvements.
versions: [{"created": "Tue, 1 Feb 2022 00:15:58 GMT", "version": "v1"}]
update_date: 2022-02-02
authors_parsed: [["Zhou", "Hattie", ""], ["Vani", "Ankit", ""], ["Larochelle", "Hugo", ""], ["Courville", "Aaron", ""]]

id: 1901.07366
submitter: James Hahn
authors: James Hahn, Adriana Kovashka
title: Measuring Effectiveness of Video Advertisements
comments: 9 pages, 7 figures, 2 tables
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): Advertisements are unavoidable in modern society. Times Square is notorious for its incessant display of advertisements. Its popularity is worldwide, and smaller cities possess miniature versions of the display, such as Pittsburgh and its digital works in Oakland on Forbes Avenue. Tokyo's Ginza district recently rose to popularity due to its upscale shops and constant onslaught of advertisements to pedestrians. Advertisements arise in other mediums as well. For example, they help popular streaming services, such as Spotify, Hulu, and YouTube TV, gather significant streams of revenue to reduce the cost of monthly subscriptions for consumers. Ads provide an additional source of money for companies and entire industries to allocate resources toward alternative business motives. They are attractive to companies and nearly unavoidable for consumers. One challenge for advertisers is examining an advertisement's effectiveness or usefulness in conveying a message to their targeted demographics. Rather than constructing a single, static image of content, a video advertisement possesses hundreds of frames of data with varying scenes, actors, objects, and complexity. Therefore, measuring the effectiveness of video advertisements is important to a billion-dollar industry. This paper explores the combination of human-annotated features and common video processing techniques to predict effectiveness ratings of advertisements collected from YouTube. This task is treated as a binary (effective vs. non-effective), four-way, and five-way machine learning classification task. Some of the first findings on this dataset, in terms of accuracy and inference, as well as some of the first ad research on a small dataset, are presented. Accuracies of 84%, 65%, and 55% are reached on the binary, four-way, and five-way tasks respectively.
versions: [{"created": "Tue, 15 Jan 2019 03:41:37 GMT", "version": "v1"}, {"created": "Mon, 28 Jan 2019 20:15:03 GMT", "version": "v2"}]
update_date: 2019-01-30
authors_parsed: [["Hahn", "James", ""], ["Kovashka", "Adriana", ""]]

id: 2209.02156
submitter: Farhad Aghili
authors: Farhad Aghili
title: Adaptive Visual Servo Control for Autonomous Robots
comments: null
journal-ref: null
doi: 10.1109/TMECH.2021.3087729
report-no: null
categories: cs.RO cs.SY eess.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): This paper focuses on an adaptive and fault-tolerant vision-guided robotic system that can choose the most appropriate control action if partial or complete failure of the vision system occurs in the short term. Moreover, the autonomous robotic system takes physical and operational constraints into account to perform the demands of a specific visual servoing task in a way that minimizes a cost function. A hierarchical control architecture is developed based on interwoven integration of a variant of the iterative closest point (ICP) image registration, a constrained noise-adaptive Kalman filter, fault detection and recovery logic, together with a constrained optimal path planner. The dynamic estimator estimates unknown states and uncertain parameters required for motion prediction while imposing a set of inequality constraints for consistency of the estimation process and adaptively adjusting the Kalman filter parameters in the face of unexpected vision errors. This is followed by the implementation of a fault recovery strategy based on a fault detection logic that monitors the health of the visual feedback using the metric fit error of the image registration. Subsequently, the estimated/predicted pose and parameters are passed to an optimal path planner in order to bring the robot end-effector to the grasping point of a moving target as quickly as possible, subject to multiple constraints such as an acceleration limit, smooth capture, and the line-of-sight angle of the target.
versions: [{"created": "Mon, 5 Sep 2022 22:22:29 GMT", "version": "v1"}]
update_date: 2022-09-07
authors_parsed: [["Aghili", "Farhad", ""]]

id: 2307.08671
submitter: Jaehyun Choi
authors: Gyojin Han, Dong-Jae Lee, Jiwan Hur, Jaehyun Choi, Junmo Kim
title: Deep Cross-Modal Steganography Using Neural Representations
comments: ICIP 2023 Oral
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): Steganography is the process of embedding secret data into another message or data, in such a way that it is not easily noticeable. With the advancement of deep learning, Deep Neural Networks (DNNs) have recently been utilized in steganography. However, existing deep steganography techniques are limited in scope, as they focus on specific data types and are not effective for cross-modal steganography. Therefore, we propose a deep cross-modal steganography framework using Implicit Neural Representations (INRs) to hide secret data of various formats in cover images. The proposed framework employs INRs to represent the secret data, which can handle data of various modalities and resolutions. Experiments on various secret datasets of diverse types demonstrate that the proposed approach is expandable and capable of accommodating different modalities.
versions: [{"created": "Sun, 2 Jul 2023 08:08:02 GMT", "version": "v1"}, {"created": "Tue, 18 Jul 2023 08:12:14 GMT", "version": "v2"}, {"created": "Sat, 7 Oct 2023 05:45:58 GMT", "version": "v3"}]
update_date: 2023-10-10
authors_parsed: [["Han", "Gyojin", ""], ["Lee", "Dong-Jae", ""], ["Hur", "Jiwan", ""], ["Choi", "Jaehyun", ""], ["Kim", "Junmo", ""]]

id: 1810.04599
submitter: Hui Miao
authors: Hui Miao and Amol Deshpande
title: Understanding Data Science Lifecycle Provenance via Graph Segmentation and Summarization
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): Increasingly, modern data science platforms have non-intrusive and extensible provenance ingestion mechanisms to collect rich provenance and context information, handle modifications to the same file using distinguishable versions, and use graph data models (e.g., property graphs) and query languages (e.g., Cypher) to represent and manipulate the stored provenance/context information. Due to the schema-later nature of the metadata, multiple versions of the same files, and unfamiliar artifacts introduced by team members, the "provenance graph" is verbose and evolving, and hard to understand; using a standard graph query model, it is difficult to compose queries and utilize this valuable information. In this paper, we propose two high-level graph query operators to address the verbosity and evolving nature of such provenance graphs. First, we introduce a graph segmentation operator, which queries the retrospective provenance between a set of source vertices and a set of destination vertices via flexible boundary criteria to help users gain insight into the derivation relationships among those vertices. We show the semantics of such a query in terms of a context-free grammar, and develop efficient algorithms that run orders of magnitude faster than the state of the art. Second, we propose a graph summarization operator that combines similar segments together to query prospective provenance of the underlying project. The operator allows tuning the summary by ignoring vertex details and characterizing local structures, and ensures the provenance meaning using path constraints. We show the optimal summary problem is PSPACE-complete and develop effective approximation algorithms. The operators are implemented on top of a property graph backend. We evaluate our query methods extensively and show the effectiveness and efficiency of the proposed methods.
versions: [{"created": "Wed, 10 Oct 2018 15:40:27 GMT", "version": "v1"}, {"created": "Tue, 16 Oct 2018 05:05:07 GMT", "version": "v2"}]
update_date: 2018-10-17
authors_parsed: [["Miao", "Hui", ""], ["Deshpande", "Amol", ""]]

id: 2403.01412
submitter: Lingfeng Liu
authors: Lingfeng Liu, Dong Ni, Hangjie Yuan
title: LUM-ViT: Learnable Under-sampling Mask Vision Transformer for Bandwidth Limited Optical Signal Acquisition
comments: Accepted to ICLR 2024
journal-ref: null
doi: null
report-no: null
categories: cs.CV eess.IV eess.SP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): Bandwidth constraints during signal acquisition frequently impede real-time detection applications. Hyperspectral data is a notable example, whose vast volume compromises real-time hyperspectral detection. To tackle this hurdle, we introduce a novel approach leveraging pre-acquisition modulation to reduce the acquisition volume. This modulation process is governed by a deep learning model, utilizing prior information. Central to our approach is LUM-ViT, a Vision Transformer variant. Uniquely, LUM-ViT incorporates a learnable under-sampling mask tailored for pre-acquisition modulation. To further optimize for optical calculations, we propose a kernel-level weight binarization technique and a three-stage fine-tuning strategy. Our evaluations reveal that, by sampling a mere 10% of the original image pixels, LUM-ViT maintains the accuracy loss within 1.8% on the ImageNet classification task. The method sustains near-original accuracy when implemented on real-world optical hardware, demonstrating its practicality. Code will be available at https://github.com/MaxLLF/LUM-ViT.
versions: [{"created": "Sun, 3 Mar 2024 06:49:01 GMT", "version": "v1"}]
update_date: 2024-03-05
authors_parsed: [["Liu", "Lingfeng", ""], ["Ni", "Dong", ""], ["Yuan", "Hangjie", ""]]

id: 1611.08699
submitter: Colin J Brown
authors: Colin J Brown, Ghassan Hamarneh
title: Machine Learning on Human Connectome Data from MRI
comments: 51 pages, 6 figures. To be submitted to a journal
journal-ref: null
doi: null
report-no: null
categories: cs.LG q-bio.NC stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): Functional MRI (fMRI) and diffusion MRI (dMRI) are non-invasive imaging modalities that allow in-vivo analysis of a patient's brain network (known as a connectome). Use of these technologies has enabled faster and better diagnoses and treatments of neurological disorders and a deeper understanding of the human brain. Recently, researchers have been exploring the application of machine learning models to connectome data in order to predict clinical outcomes and analyze the importance of subnetworks in the brain. Connectome data has unique properties, which present both special challenges and opportunities when used for machine learning. The purpose of this work is to review the literature on the topic of applying machine learning models to MRI-based connectome data. This field is growing rapidly and now encompasses a large body of research. To summarize the research done to date, we provide a comparative, structured summary of 77 relevant works, tabulated according to different criteria, that represent the majority of the literature on this topic. (We also published a living version of this table online at http://connectomelearning.cs.sfu.ca that the community can continue to contribute to.) After giving an overview of how connectomes are constructed from dMRI and fMRI data, we discuss the variety of machine learning tasks that have been explored with connectome data. We then compare the advantages and drawbacks of different machine learning approaches that have been employed, discussing different feature selection and feature extraction schemes, as well as the learning models and regularization penalties themselves. Throughout this discussion, we focus particularly on how the methods are adapted to the unique nature of graphical connectome data. Finally, we conclude by summarizing the current state of the art and by outlining what we believe are strategic directions for future research.
versions: [{"created": "Sat, 26 Nov 2016 11:14:22 GMT", "version": "v1"}]
update_date: 2016-12-06
authors_parsed: [["Brown", "Colin J", ""], ["Hamarneh", "Ghassan", ""]]

id: 2003.01433
submitter: Dong Liu
authors: Dong Liu and Baptiste Cavarec and Lars K. Rasmussen and Jing Yue
title: On Dominant Interference in Random Networks and Communication Reliability
comments: null
journal-ref: ICC 2019
doi: null
report-no: null
categories: cs.IT eess.SP math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): In this paper, we study the characteristics of dominant interference power with directional reception in a random network modelled by a Poisson point process. Additionally, the Laplace functional of the cumulative interference excluding the $n$ dominant interferers is derived, which turns out to be a generalization of omni-directional reception and complete cumulative interference. As an application of these results, we study the impact of directional receivers in random networks in terms of outage probability and error probability under a queue length constraint.
versions: [{"created": "Tue, 3 Mar 2020 10:36:44 GMT", "version": "v1"}]
update_date: 2020-03-04
authors_parsed: [["Liu", "Dong", ""], ["Cavarec", "Baptiste", ""], ["Rasmussen", "Lars K.", ""], ["Yue", "Jing", ""]]

id: 1610.09726
submitter: Kirthevasan Kandasamy
authors: Kirthevasan Kandasamy and Gautam Dasarathy and Jeff Schneider and Barnabás Póczos
title: The Multi-fidelity Multi-armed Bandit
comments: To appear at NIPS 2016
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract (abstract field is identical): We study a variant of the classical stochastic $K$-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available. For example, in online advertising the performance of an ad can be approximated by displaying it for shorter time periods or to narrower audiences. We formalise this task as a multi-fidelity bandit, where, at each time step, the forecaster may choose to play an arm at any one of $M$ fidelities. The highest fidelity (desired outcome) expends cost $\lambda^{(M)}$. The $m^{\text{th}}$ fidelity (an approximation) expends $\lambda^{(m)} < \lambda^{(M)}$ and returns a biased estimate of the highest fidelity. We develop MF-UCB, a novel upper confidence bound procedure for this setting, and prove that it naturally adapts to the sequence of available approximations and costs, thus attaining better regret than naive strategies which ignore the approximations. For instance, in the above online advertising example, MF-UCB would use the lower fidelities to quickly eliminate suboptimal ads and reserve the larger, expensive experiments for a small set of promising candidates. We complement this result with a lower bound and show that MF-UCB is nearly optimal under certain conditions.
versions: [{"created": "Sun, 30 Oct 2016 23:07:49 GMT", "version": "v1"}]
update_date: 2016-11-01
authors_parsed: [["Kandasamy", "Kirthevasan", ""], ["Dasarathy", "Gautam", ""], ["Schneider", "Jeff", ""], ["Póczos", "Barnabás", ""]]

id: 2108.06696
submitter: Peter Hillmann
authors: Peter Hillmann, Erik Heiland, Andreas Karcher
title: Automated Enterprise Architecture Model Mining
comments: null
journal-ref: NISecurity 2021
doi: null
report-no: null
categories: cs.IR cs.CR cs.NI cs.SY eess.SY
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract (abstract field is identical): Metadata are like the steam engine of the 21st century, driving businesses and offering multiple enhancements. Nevertheless, many companies are unaware that these data can be used efficiently to improve their own operation. This is where the Enterprise Architecture Framework comes in. It empowers an organisation to get a clear view of its business, application, technical and physical layers. This modelling approach is an established method for organisations to take a deeper look into their structure and processes. The development of such models requires a great deal of effort, is carried out manually by interviewing stakeholders, and requires continuous maintenance. Our new approach enables the automated mining of Enterprise Architecture models. The system uses common technologies to collect the metadata based on network traffic, log files and other information in an organisation. Based on this, the new approach generates EA models with the desired viewpoints. Furthermore, rule- and knowledge-based reasoning is used to obtain a holistic overview. This offers strategic decision support from business structure over process design up to planning the appropriate support technology. Therefore, it forms the basis for organisations to act in an agile way. The modelling can be performed in different modelling languages, including ArchiMate and the NATO Architecture Framework (NAF). The designed approach has already been evaluated on a small company with multiple services and an infrastructure with several nodes.
versions: [{"created": "Sun, 15 Aug 2021 09:01:57 GMT", "version": "v1"}]
update_date: 2021-08-17
authors_parsed: [["Hillmann", "Peter", ""], ["Heiland", "Erik", ""], ["Karcher", "Andreas", ""]]

2306.08013
|
Pum Jun Kim
|
Pum Jun Kim, Yoojin Jang, Jisu Kim, Jaejun Yoo
|
TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and
Diversity in Generative Models
|
Accepted to NeurIPS 2023
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a robust and reliable evaluation metric for generative models by
introducing topological and statistical treatments for rigorous support
estimation. Existing metrics, such as Inception Score (IS), Frechet Inception
Distance (FID), and the variants of Precision and Recall (P&R), heavily rely on
supports that are estimated from sample features. However, the reliability of
their estimation has not been seriously discussed (and has instead been
overlooked), even though
the quality of the evaluation entirely depends on it. In this paper, we propose
Topological Precision and Recall (TopP&R, pronounced 'topper'), which provides
a systematic approach to estimating supports, retaining only topologically and
statistically important features with a certain level of confidence. This not
only makes TopP&R robust to noisy features, but also provides statistical
consistency. Our theoretical and experimental results show that TopP&R is
robust to outliers and non-independent and identically distributed (Non-IID)
perturbations, while accurately capturing the true trend of change in samples.
To the best of our knowledge, this is the first evaluation metric focused on
the robust estimation of the support and provides its statistical consistency
under noise.
|
[
{
"created": "Tue, 13 Jun 2023 11:46:00 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jun 2023 07:51:07 GMT",
"version": "v2"
},
{
"created": "Fri, 22 Sep 2023 08:41:31 GMT",
"version": "v3"
},
{
"created": "Wed, 8 Nov 2023 06:51:05 GMT",
"version": "v4"
},
{
"created": "Thu, 9 Nov 2023 05:51:52 GMT",
"version": "v5"
},
{
"created": "Wed, 24 Jan 2024 07:48:34 GMT",
"version": "v6"
}
] |
2024-01-25
|
[
[
"Kim",
"Pum Jun",
""
],
[
"Jang",
"Yoojin",
""
],
[
"Kim",
"Jisu",
""
],
[
"Yoo",
"Jaejun",
""
]
] |
We propose a robust and reliable evaluation metric for generative models by introducing topological and statistical treatments for rigorous support estimation. Existing metrics, such as Inception Score (IS), Frechet Inception Distance (FID), and the variants of Precision and Recall (P&R), heavily rely on supports that are estimated from sample features. However, the reliability of their estimation has not been seriously discussed (and has instead been overlooked), even though the quality of the evaluation entirely depends on it. In this paper, we propose Topological Precision and Recall (TopP&R, pronounced 'topper'), which provides a systematic approach to estimating supports, retaining only topologically and statistically important features with a certain level of confidence. This not only makes TopP&R robust to noisy features, but also provides statistical consistency. Our theoretical and experimental results show that TopP&R is robust to outliers and non-independent and identically distributed (Non-IID) perturbations, while accurately capturing the true trend of change in samples. To the best of our knowledge, this is the first evaluation metric focused on the robust estimation of the support and provides its statistical consistency under noise.
|
2407.10704
|
Tianxiang Hao
|
Tianxiang Hao, Xiaohan Ding, Juexiao Feng, Yuhong Yang, Hui Chen and
Guiguang Ding
|
Quantized Prompt for Efficient Generalization of Vision-Language Models
|
ECCV 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past few years, large-scale pre-trained vision-language models like
CLIP have achieved tremendous success in various fields. Naturally, how to
transfer the rich knowledge in such huge pre-trained models to downstream tasks
and datasets becomes a hot topic. During downstream adaptation, the most
challenging problems are overfitting and catastrophic forgetting, which can
cause the model to overly focus on the current data and lose more crucial
domain-general knowledge. Existing works use classic regularization techniques
to solve the problems. As solutions become increasingly complex, the
ever-growing storage and inference costs are also a significant problem that
urgently needs to be addressed. In this paper, we start from the observation
that proper random noise can suppress overfitting and catastrophic forgetting.
We then regard quantization error as a kind of noise and explore quantization
for regularizing vision-language models, which is both efficient and
effective. Furthermore, to improve the model's generalization capability
while maintaining its specialization capacity at minimal cost, we deeply
analyze the characteristics of the weight distribution in prompts, derive
several principles for quantization module design, and follow these principles to
create several competitive baselines. The proposed method is significantly
efficient due to its inherent lightweight nature, making it possible to adapt
on extremely resource-limited devices. Our method can be fruitfully integrated
into many existing approaches like MaPLe, enhancing accuracy while reducing
storage overhead, making it more powerful yet versatile. Extensive experiments
on 11 datasets clearly demonstrate the superiority of our method. Code is
available at https://github.com/beyondhtx/QPrompt.
|
[
{
"created": "Mon, 15 Jul 2024 13:19:56 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2024 22:52:27 GMT",
"version": "v2"
}
] |
2024-07-23
|
[
[
"Hao",
"Tianxiang",
""
],
[
"Ding",
"Xiaohan",
""
],
[
"Feng",
"Juexiao",
""
],
[
"Yang",
"Yuhong",
""
],
[
"Chen",
"Hui",
""
],
[
"Ding",
"Guiguang",
""
]
] |
In the past few years, large-scale pre-trained vision-language models like CLIP have achieved tremendous success in various fields. Naturally, how to transfer the rich knowledge in such huge pre-trained models to downstream tasks and datasets becomes a hot topic. During downstream adaptation, the most challenging problems are overfitting and catastrophic forgetting, which can cause the model to overly focus on the current data and lose more crucial domain-general knowledge. Existing works use classic regularization techniques to solve the problems. As solutions become increasingly complex, the ever-growing storage and inference costs are also a significant problem that urgently needs to be addressed. In this paper, we start from the observation that proper random noise can suppress overfitting and catastrophic forgetting. We then regard quantization error as a kind of noise and explore quantization for regularizing vision-language models, which is both efficient and effective. Furthermore, to improve the model's generalization capability while maintaining its specialization capacity at minimal cost, we deeply analyze the characteristics of the weight distribution in prompts, derive several principles for quantization module design, and follow these principles to create several competitive baselines. The proposed method is significantly efficient due to its inherent lightweight nature, making it possible to adapt on extremely resource-limited devices. Our method can be fruitfully integrated into many existing approaches like MaPLe, enhancing accuracy while reducing storage overhead, making it more powerful yet versatile. Extensive experiments on 11 datasets clearly demonstrate the superiority of our method. Code is available at https://github.com/beyondhtx/QPrompt.
|
1001.3213
|
Jean-Philippe Chancelier
|
Jean-Philippe Chancelier (CERMICS), J\'er\^ome Lelong (LJK), Bernard
Lapeyre (CERMICS)
|
Using Premia and Nsp for Constructing a Risk Management Benchmark for
Testing Parallel Architecture
| null | null | null | null |
cs.CE cs.DC cs.MS cs.NA q-fin.CP q-fin.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Financial institutions have massive computations to carry out overnight, which
are very demanding in terms of CPU consumption. The challenge is to price many
different products on a cluster-like architecture. We have used the Premia
software to value the financial derivatives. In this work, we explain how
Premia can be embedded into Nsp, a scientific software package similar to
Matlab, to provide a powerful tool for valuing a whole portfolio. Finally, we
have integrated an MPI toolbox into Nsp to enable the use of Premia to solve a
batch of pricing problems on a cluster. This unified framework can then be
used to test different parallel architectures.
|
[
{
"created": "Tue, 19 Jan 2010 07:54:16 GMT",
"version": "v1"
},
{
"created": "Mon, 21 May 2012 19:13:53 GMT",
"version": "v2"
}
] |
2012-05-23
|
[
[
"Chancelier",
"Jean-Philippe",
"",
"CERMICS"
],
[
"Lelong",
"Jérôme",
"",
"LJK"
],
[
"Lapeyre",
"Bernard",
"",
"CERMICS"
]
] |
Financial institutions have massive computations to carry out overnight, which are very demanding in terms of CPU consumption. The challenge is to price many different products on a cluster-like architecture. We have used the Premia software to value the financial derivatives. In this work, we explain how Premia can be embedded into Nsp, a scientific software package similar to Matlab, to provide a powerful tool for valuing a whole portfolio. Finally, we have integrated an MPI toolbox into Nsp to enable the use of Premia to solve a batch of pricing problems on a cluster. This unified framework can then be used to test different parallel architectures.
|
2309.09756
|
Ege Onat \"Ozs\"uer
|
Ege Onat \"Ozs\"uer, Bar{\i}\c{s} Akg\"un, Fatma G\"uney
|
Privileged to Predicted: Towards Sensorimotor Reinforcement Learning for
Urban Driving
|
7 pages
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Reinforcement Learning (RL) has the potential to surpass human performance in
driving without needing any expert supervision. Despite its promise, the
state-of-the-art in sensorimotor self-driving is dominated by imitation
learning methods due to the inherent shortcomings of RL algorithms.
Nonetheless, RL agents are able to discover highly successful policies when
provided with privileged ground truth representations of the environment. In
this work, we investigate what separates privileged RL agents from sensorimotor
agents for urban driving in order to bridge the gap between the two. We propose
vision-based deep learning models to approximate the privileged representations
from sensor data. In particular, we identify aspects of state representation
that are crucial for the success of the RL agent such as desired route
generation and stop zone prediction, and propose solutions to gradually develop
less privileged RL agents. We also observe that bird's-eye-view models trained
on offline datasets do not generalize to online RL training due to distribution
mismatch. Through rigorous evaluation on the CARLA simulation environment, we
shed light on the significance of the state representations in RL for
autonomous driving and point to unresolved challenges for future research.
|
[
{
"created": "Mon, 18 Sep 2023 13:34:41 GMT",
"version": "v1"
}
] |
2023-09-19
|
[
[
"Özsüer",
"Ege Onat",
""
],
[
"Akgün",
"Barış",
""
],
[
"Güney",
"Fatma",
""
]
] |
Reinforcement Learning (RL) has the potential to surpass human performance in driving without needing any expert supervision. Despite its promise, the state-of-the-art in sensorimotor self-driving is dominated by imitation learning methods due to the inherent shortcomings of RL algorithms. Nonetheless, RL agents are able to discover highly successful policies when provided with privileged ground truth representations of the environment. In this work, we investigate what separates privileged RL agents from sensorimotor agents for urban driving in order to bridge the gap between the two. We propose vision-based deep learning models to approximate the privileged representations from sensor data. In particular, we identify aspects of state representation that are crucial for the success of the RL agent such as desired route generation and stop zone prediction, and propose solutions to gradually develop less privileged RL agents. We also observe that bird's-eye-view models trained on offline datasets do not generalize to online RL training due to distribution mismatch. Through rigorous evaluation on the CARLA simulation environment, we shed light on the significance of the state representations in RL for autonomous driving and point to unresolved challenges for future research.
|
2012.11753
|
Qian Wang
|
Qian Wang, Toby P. Breckon
|
Contraband Materials Detection Within Volumetric 3D Computed Tomography
Baggage Security Screening Imagery
|
8 pages
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic prohibited object detection within 2D/3D X-ray Computed Tomography
(CT) has been studied in the literature to enhance aviation security screening
at checkpoints. Deep Convolutional Neural Networks (CNNs) have demonstrated
superior performance in 2D X-ray imagery. However, there exists very limited
evidence of how deep neural networks perform in materials detection within
volumetric 3D CT baggage screening imagery. We attempt to close this gap by
applying Deep Neural Networks in 3D contraband substance detection based on
their material signatures. Specifically, we formulate it as a 3D semantic
segmentation problem to identify material types for all voxels based on which
contraband materials can be detected. To this end, we first investigate 3D
CNN based semantic segmentation algorithms such as 3D U-Net and its variants.
In contrast to the original dense representation form of volumetric 3D CT data,
we propose to convert the CT volumes into sparse point clouds which allows the
use of point cloud processing approaches such as PointNet++ towards more
efficient processing. Experimental results on a publicly available dataset (NEU
ATR) demonstrate the effectiveness of both 3D U-Net and PointNet++ in materials
detection in 3D CT imagery for baggage security screening.
|
[
{
"created": "Mon, 21 Dec 2020 23:48:06 GMT",
"version": "v1"
}
] |
2020-12-23
|
[
[
"Wang",
"Qian",
""
],
[
"Breckon",
"Toby P.",
""
]
] |
Automatic prohibited object detection within 2D/3D X-ray Computed Tomography (CT) has been studied in the literature to enhance aviation security screening at checkpoints. Deep Convolutional Neural Networks (CNNs) have demonstrated superior performance in 2D X-ray imagery. However, there exists very limited evidence of how deep neural networks perform in materials detection within volumetric 3D CT baggage screening imagery. We attempt to close this gap by applying Deep Neural Networks in 3D contraband substance detection based on their material signatures. Specifically, we formulate it as a 3D semantic segmentation problem to identify material types for all voxels based on which contraband materials can be detected. To this end, we first investigate 3D CNN based semantic segmentation algorithms such as 3D U-Net and its variants. In contrast to the original dense representation form of volumetric 3D CT data, we propose to convert the CT volumes into sparse point clouds which allows the use of point cloud processing approaches such as PointNet++ towards more efficient processing. Experimental results on a publicly available dataset (NEU ATR) demonstrate the effectiveness of both 3D U-Net and PointNet++ in materials detection in 3D CT imagery for baggage security screening.
|
2304.04480
|
Yannis Stamatiou
|
V. Liagkou and P.E. Nastou and P. Spirakis and Y.C. Stamatiou
|
On the existence of highly organized communities in networks of locally
interacting agents
| null | null | null | null |
cs.CR cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we investigate phenomena of spontaneous emergence or purposeful
formation of highly organized structures in networks of related agents. We show
that the formation of large organized structures requires exponentially large,
in the size of the structures, networks. Our approach is based on Kolmogorov,
or descriptional, complexity of networks viewed as finite size strings. We
apply this approach to the study of the emergence or formation of simple
organized, hierarchical, structures based on Sierpinski Graphs and we prove a
Ramsey type theorem that bounds the number of vertices in Kolmogorov random
graphs that contain Sierpinski Graphs as subgraphs. Moreover, we show that
Sierpinski Graphs encompass close-knit relationships among their vertices that
facilitate fast spread and learning of information when agents in their
vertices are engaged in pairwise interactions modelled as two person games.
Finally, we generalize our findings for any organized structure with succinct
representations. Our work can be deployed, in particular, to study problems
related to the security of networks by identifying conditions which enable or
forbid the formation of sufficiently large insider subnetworks with a common
malicious goal of overtaking the network or disrupting its operation.
|
[
{
"created": "Mon, 10 Apr 2023 09:39:41 GMT",
"version": "v1"
}
] |
2023-04-11
|
[
[
"Liagkou",
"V.",
""
],
[
"Nastou",
"P. E.",
""
],
[
"Spirakis",
"P.",
""
],
[
"Stamatiou",
"Y. C.",
""
]
] |
In this paper we investigate phenomena of spontaneous emergence or purposeful formation of highly organized structures in networks of related agents. We show that the formation of large organized structures requires exponentially large, in the size of the structures, networks. Our approach is based on Kolmogorov, or descriptional, complexity of networks viewed as finite size strings. We apply this approach to the study of the emergence or formation of simple organized, hierarchical, structures based on Sierpinski Graphs and we prove a Ramsey type theorem that bounds the number of vertices in Kolmogorov random graphs that contain Sierpinski Graphs as subgraphs. Moreover, we show that Sierpinski Graphs encompass close-knit relationships among their vertices that facilitate fast spread and learning of information when agents in their vertices are engaged in pairwise interactions modelled as two person games. Finally, we generalize our findings for any organized structure with succinct representations. Our work can be deployed, in particular, to study problems related to the security of networks by identifying conditions which enable or forbid the formation of sufficiently large insider subnetworks with a common malicious goal of overtaking the network or disrupting its operation.
|
1709.02082
|
Romain Lopez
|
Romain Lopez, Jeffrey Regier, Michael Cole, Michael Jordan and Nir
Yosef
|
A deep generative model for gene expression profiles from single-cell
RNA sequencing
|
BayLearn2017, NIPS workshop MLCB 2017
| null | null | null |
cs.LG q-bio.GN stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a probabilistic model for interpreting gene expression levels that
are observed through single-cell RNA sequencing. In the model, each cell has a
low-dimensional latent representation. Additional latent variables account for
technical effects that may erroneously set some observations of gene expression
levels to zero. Conditional distributions are specified by neural networks,
giving the proposed model enough flexibility to fit the data well. We use
variational inference and stochastic optimization to approximate the posterior
distribution. The inference procedure scales to over one million cells, whereas
competing algorithms do not. Even for smaller datasets, for several tasks, the
proposed procedure outperforms state-of-the-art methods like ZIFA and
ZINB-WaVE. We also extend our framework to account for batch effects and other
confounding factors, and propose a Bayesian hypothesis test for differential
expression that outperforms DESeq2.
|
[
{
"created": "Thu, 7 Sep 2017 05:59:49 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2017 01:41:27 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Oct 2017 00:37:51 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Jan 2018 22:44:59 GMT",
"version": "v4"
}
] |
2018-01-18
|
[
[
"Lopez",
"Romain",
""
],
[
"Regier",
"Jeffrey",
""
],
[
"Cole",
"Michael",
""
],
[
"Jordan",
"Michael",
""
],
[
"Yosef",
"Nir",
""
]
] |
We propose a probabilistic model for interpreting gene expression levels that are observed through single-cell RNA sequencing. In the model, each cell has a low-dimensional latent representation. Additional latent variables account for technical effects that may erroneously set some observations of gene expression levels to zero. Conditional distributions are specified by neural networks, giving the proposed model enough flexibility to fit the data well. We use variational inference and stochastic optimization to approximate the posterior distribution. The inference procedure scales to over one million cells, whereas competing algorithms do not. Even for smaller datasets, for several tasks, the proposed procedure outperforms state-of-the-art methods like ZIFA and ZINB-WaVE. We also extend our framework to account for batch effects and other confounding factors, and propose a Bayesian hypothesis test for differential expression that outperforms DESeq2.
|
1610.07336
|
Steffen Urban
|
Steffen Urban and Stefan Hinz
|
MultiCol-SLAM - A Modular Real-Time Multi-Camera SLAM System
|
15 pages, 8 figures, 2 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The basis for most vision based applications like robotics, self-driving cars
and potentially augmented and virtual reality is a robust, continuous
estimation of the position and orientation of a camera system w.r.t. the
observed environment (scene). In recent years many vision based systems that
perform simultaneous localization and mapping (SLAM) have been presented and
released as open source. In this paper, we extend and improve upon a
state-of-the-art SLAM system to make it applicable to arbitrary, rigidly coupled
multi-camera systems (MCS) using the MultiCol model. In addition, we include a
performance evaluation on accurate ground truth and compare the robustness of
the proposed method to a single camera version of the SLAM system. An open
source implementation of the proposed multi-fisheye camera SLAM system can be
found online at https://github.com/urbste/MultiCol-SLAM.
|
[
{
"created": "Mon, 24 Oct 2016 09:27:47 GMT",
"version": "v1"
}
] |
2016-10-25
|
[
[
"Urban",
"Steffen",
""
],
[
"Hinz",
"Stefan",
""
]
] |
The basis for most vision based applications like robotics, self-driving cars and potentially augmented and virtual reality is a robust, continuous estimation of the position and orientation of a camera system w.r.t. the observed environment (scene). In recent years many vision based systems that perform simultaneous localization and mapping (SLAM) have been presented and released as open source. In this paper, we extend and improve upon a state-of-the-art SLAM system to make it applicable to arbitrary, rigidly coupled multi-camera systems (MCS) using the MultiCol model. In addition, we include a performance evaluation on accurate ground truth and compare the robustness of the proposed method to a single camera version of the SLAM system. An open source implementation of the proposed multi-fisheye camera SLAM system can be found online at https://github.com/urbste/MultiCol-SLAM.
|
2108.03206
|
Bo Yu
|
Shengzhao Wang, Meitang Li, Bo Yu, Shan Bao, Yuren Chen
|
Investigating The Impacting Factors on The Public's Attitudes Towards
Autonomous Vehicles Using Sentiment Analysis from Social Media Data
|
22 pages, 5 figures
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The public's attitudes play a critical role in the acceptance, purchase, use,
and research and development of autonomous vehicles (AVs). To date, the
public's attitudes towards AVs were mostly estimated through traditional survey
data with high labor costs and a low quantity of samples, which also might be
one of the reasons why the influencing factors on the public's attitudes towards AVs
have not been studied from multiple aspects in a comprehensive way yet. To
address the issue, this study aims to propose a method by using large-scale
social media data to investigate key factors that affect the public's attitudes
and acceptance of AVs. A total of 954,151 tweets related to AVs and 53
candidate independent variables from seven categories were extracted using the
web scraping method. Then, sentiment analysis was used to measure the public
attitudes towards AVs by calculating sentiment scores. A random forest algorithm
was employed to preliminarily select candidate independent variables according
to their importance, while a linear mixed model was performed to explore the
impacting factors considering the unobserved heterogeneities caused by the
subjectivity level of tweets. The results showed that the overall attitude of
the public on AVs was slightly optimistic. Factors like "drunk", "blind spot",
and "mobility" had the largest impacts on public attitudes. In addition, people
were more likely to express positive feelings when talking about words such as
"lidar" and "Tesla" that relate to high technologies. Conversely, factors such
as "COVID-19", "pedestrian", "sleepy", and "highway" were found to have
significantly negative effects on the public's attitudes. The findings of this
study are beneficial for the development of AV technologies, the guidelines for
AV-related policy formulation, and the public's understanding and acceptance of
AVs.
|
[
{
"created": "Fri, 6 Aug 2021 17:07:29 GMT",
"version": "v1"
}
] |
2021-08-09
|
[
[
"Wang",
"Shengzhao",
""
],
[
"Li",
"Meitang",
""
],
[
"Yu",
"Bo",
""
],
[
"Bao",
"Shan",
""
],
[
"Chen",
"Yuren",
""
]
] |
The public's attitudes play a critical role in the acceptance, purchase, use, and research and development of autonomous vehicles (AVs). To date, the public's attitudes towards AVs were mostly estimated through traditional survey data with high labor costs and a low quantity of samples, which also might be one of the reasons why the influencing factors on the public's attitudes towards AVs have not been studied from multiple aspects in a comprehensive way yet. To address the issue, this study aims to propose a method by using large-scale social media data to investigate key factors that affect the public's attitudes and acceptance of AVs. A total of 954,151 tweets related to AVs and 53 candidate independent variables from seven categories were extracted using the web scraping method. Then, sentiment analysis was used to measure the public attitudes towards AVs by calculating sentiment scores. A random forest algorithm was employed to preliminarily select candidate independent variables according to their importance, while a linear mixed model was performed to explore the impacting factors considering the unobserved heterogeneities caused by the subjectivity level of tweets. The results showed that the overall attitude of the public on AVs was slightly optimistic. Factors like "drunk", "blind spot", and "mobility" had the largest impacts on public attitudes. In addition, people were more likely to express positive feelings when talking about words such as "lidar" and "Tesla" that relate to high technologies. Conversely, factors such as "COVID-19", "pedestrian", "sleepy", and "highway" were found to have significantly negative effects on the public's attitudes. The findings of this study are beneficial for the development of AV technologies, the guidelines for AV-related policy formulation, and the public's understanding and acceptance of AVs.
|
2108.11845
|
Bin Liu
|
Bin Liu
|
Consistent Relative Confidence and Label-Free Model Selection for
Convolutional Neural Networks
|
This paper has been accepted by 2022 International Conference on
Pattern Recognition and Machine Learning (PRML 2022)
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we are concerned with image classification with deep
convolutional neural networks (CNNs). We focus on the following question: given
a set of candidate CNN models, how to select the right one with the best
generalization property for the current task? Current model selection methods
all require access to a batch of labeled data for computing a pre-specified
performance metric, such as the cross-entropy loss, the classification error
rate and the negative log-likelihood. In many practical cases, labels are not
available in time as labeling itself is a time-consuming and expensive task. To
this end, we propose an approach to CNN model selection using only unlabeled
data. We develop this method based on a principle termed consistent relative
confidence. Experimental results on benchmark datasets demonstrate the
effectiveness and efficiency of our method.
|
[
{
"created": "Thu, 26 Aug 2021 15:14:38 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jan 2022 10:35:57 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Jan 2022 13:17:47 GMT",
"version": "v3"
},
{
"created": "Thu, 27 Jan 2022 02:45:29 GMT",
"version": "v4"
},
{
"created": "Fri, 28 Jan 2022 06:10:27 GMT",
"version": "v5"
},
{
"created": "Mon, 31 Jan 2022 11:46:08 GMT",
"version": "v6"
},
{
"created": "Thu, 28 Apr 2022 07:36:07 GMT",
"version": "v7"
},
{
"created": "Sat, 28 May 2022 08:27:53 GMT",
"version": "v8"
},
{
"created": "Tue, 31 May 2022 03:16:02 GMT",
"version": "v9"
}
] |
2022-06-01
|
[
[
"Liu",
"Bin",
""
]
] |
In this paper, we are concerned with image classification with deep convolutional neural networks (CNNs). We focus on the following question: given a set of candidate CNN models, how to select the right one with the best generalization property for the current task? Current model selection methods all require access to a batch of labeled data for computing a pre-specified performance metric, such as the cross-entropy loss, the classification error rate and the negative log-likelihood. In many practical cases, labels are not available in time as labeling itself is a time-consuming and expensive task. To this end, we propose an approach to CNN model selection using only unlabeled data. We develop this method based on a principle termed consistent relative confidence. Experimental results on benchmark datasets demonstrate the effectiveness and efficiency of our method.
|
2402.03202
|
Rashid Iqbal
|
Rashid Iqbal, Mauro Biagi, Ahmed Zoha, Muhammad Ali Imran, Hanaa
Abumarshoud
|
Leveraging IRS Induced Time Delay for Enhanced Physical Layer Security
in VLC Systems
| null | null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Indoor visible light communication (VLC) is considered secure against
attackers outside the confined area where the light propagates, but it is still
susceptible to interception from inside the coverage area. A new technology,
intelligent reflecting surfaces (IRS), has been recently introduced, offering a
way to enhance physical layer security (PLS). Most research on IRS-assisted VLC
assumes the same time of arrival from all reflecting elements and overlooks the
effect of time delay and the associated intersymbol interference. This paper
tackles, for the first time, the effect of time delay on the secrecy rate in
VLC systems. Our results show that, at a fixed light-emitting diode (LED) power
of 3W, the secrecy rate can be enhanced by up to 253\% at random positions for
the legitimate user when the eavesdropper is located within a 1-meter radius of
the LED. Our results also show that careful allocation of the IRS elements can
lead to enhanced PLS even when the eavesdropper has a more favourable position
and, thus, a better channel gain than the legitimate user.
|
[
{
"created": "Mon, 5 Feb 2024 17:13:12 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2024 15:03:43 GMT",
"version": "v2"
}
] |
2024-05-13
|
[
[
"Iqbal",
"Rashid",
""
],
[
"Biagi",
"Mauro",
""
],
[
"Zoha",
"Ahmed",
""
],
[
"Imran",
"Muhammad Ali",
""
],
[
"Abumarshoud",
"Hanaa",
""
]
] |
Indoor visible light communication (VLC) is considered secure against attackers outside the confined area where the light propagates, but it is still susceptible to interception from inside the coverage area. A new technology, intelligent reflecting surfaces (IRS), has been recently introduced, offering a way to enhance physical layer security (PLS). Most research on IRS-assisted VLC assumes the same time of arrival from all reflecting elements and overlooks the effect of time delay and the associated intersymbol interference. This paper tackles, for the first time, the effect of time delay on the secrecy rate in VLC systems. Our results show that, at a fixed light-emitting diode (LED) power of 3W, the secrecy rate can be enhanced by up to 253\% at random positions for the legitimate user when the eavesdropper is located within a 1-meter radius of the LED. Our results also show that careful allocation of the IRS elements can lead to enhanced PLS even when the eavesdropper has a more favourable position and, thus, a better channel gain than the legitimate user.
|
2109.04385
|
Maximilian Mozes
|
Maximilian Mozes, Max Bartolo, Pontus Stenetorp, Bennett Kleinberg,
Lewis D. Griffin
|
Contrasting Human- and Machine-Generated Word-Level Adversarial Examples
for Text Classification
|
EMNLP 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Research shows that natural language processing models are generally
considered to be vulnerable to adversarial attacks; but recent work has drawn
attention to the issue of validating these adversarial inputs against certain
criteria (e.g., the preservation of semantics and grammaticality). Enforcing
constraints to uphold such criteria may render attacks unsuccessful, raising
the question of whether valid attacks are actually feasible. In this work, we
investigate this through the lens of human language ability. We report on
crowdsourcing studies in which we task humans with iteratively modifying words
in an input text, while receiving immediate model feedback, with the aim of
causing a sentiment classification model to misclassify the example. Our
findings suggest that humans are capable of generating a substantial amount of
adversarial examples using semantics-preserving word substitutions. We analyze
how human-generated adversarial examples compare to the recently proposed
TextFooler, Genetic, BAE and SememePSO attack algorithms on the dimensions
naturalness, preservation of sentiment, grammaticality and substitution rate.
Our findings suggest that human-generated adversarial examples are not more
able than the best algorithms to generate natural-reading, sentiment-preserving
examples, though they do so by being much more computationally efficient.
|
[
{
"created": "Thu, 9 Sep 2021 16:16:04 GMT",
"version": "v1"
}
] |
2021-09-10
|
[
[
"Mozes",
"Maximilian",
""
],
[
"Bartolo",
"Max",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Kleinberg",
"Bennett",
""
],
[
"Griffin",
"Lewis D.",
""
]
] |
Research shows that natural language processing models are generally considered to be vulnerable to adversarial attacks; but recent work has drawn attention to the issue of validating these adversarial inputs against certain criteria (e.g., the preservation of semantics and grammaticality). Enforcing constraints to uphold such criteria may render attacks unsuccessful, raising the question of whether valid attacks are actually feasible. In this work, we investigate this through the lens of human language ability. We report on crowdsourcing studies in which we task humans with iteratively modifying words in an input text, while receiving immediate model feedback, with the aim of causing a sentiment classification model to misclassify the example. Our findings suggest that humans are capable of generating a substantial amount of adversarial examples using semantics-preserving word substitutions. We analyze how human-generated adversarial examples compare to the recently proposed TextFooler, Genetic, BAE and SememePSO attack algorithms on the dimensions naturalness, preservation of sentiment, grammaticality and substitution rate. Our findings suggest that human-generated adversarial examples are not more able than the best algorithms to generate natural-reading, sentiment-preserving examples, though they do so by being much more computationally efficient.
|
1409.3651
|
Dushyant Vaghela Mr.
|
Dushyant Vaghela
|
An Advanced Approach On Load Balancing in Grid Computing
|
We have applied our Research work on various servers, NGIX performs
better, VPS Hosting Godadday servers Representative for
http://explorequotes.com/ working fine, finally we have concluded that all
the experiments were satisfactory
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of wide area networks and the availability of
low-cost, powerful computational resources, grid computing has gained
popularity. With the advent of grid computing, the space limitations of
conventional distributed systems can be overcome, and underutilized computing
resources at different locations around the world can be put to use for
distributed jobs. Workload and resource management are the key grid services
at the service level of grid infrastructures, among which load balancing is
the main concern for grid developers. It has been found that load is the
major problem that servers face, especially as the number of users increases.
A lot of research is being done in the area of load management. This paper
presents the various mechanisms of load balancing in grid computing so that
readers will get an idea of which algorithm would be suitable in different
situations. Keywords: wide area network, distributed computing, load
balancing.
|
[
{
"created": "Fri, 12 Sep 2014 05:40:34 GMT",
"version": "v1"
}
] |
2014-09-15
|
[
[
"Vaghela",
"Dushyant",
""
]
] |
With the rapid development of wide area networks and the availability of low-cost, powerful computational resources, grid computing has gained popularity. With the advent of grid computing, the space limitations of conventional distributed systems can be overcome, and underutilized computing resources at different locations around the world can be put to use for distributed jobs. Workload and resource management are the key grid services at the service level of grid infrastructures, among which load balancing is the main concern for grid developers. It has been found that load is the major problem that servers face, especially as the number of users increases. A lot of research is being done in the area of load management. This paper presents the various mechanisms of load balancing in grid computing so that readers will get an idea of which algorithm would be suitable in different situations. Keywords: wide area network, distributed computing, load balancing.
|
1711.10839
|
Sevil Dr\"axler
|
Sevil Dr\"axler, Holger Karl, Zolt\'an \'Ad\'am Mann
|
JASPER: Joint Optimization of Scaling, Placement, and Routing of Virtual
Network Services
| null | null |
10.1109/TNSM.2018.2846572
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To adapt to continuously changing workloads in networks, components of the
running network services may need to be replicated (scaling the network
service) and allocated to physical resources (placement) dynamically, also
necessitating dynamic re-routing of flows between service components. In this
paper, we propose JASPER, a fully automated approach to jointly optimizing
scaling, placement, and routing for complex network services, consisting of
multiple (virtualized) components. JASPER handles multiple network services
that share the same substrate network; services can be dynamically added or
removed and dynamic workload changes are handled. Our approach lets service
designers specify their services on a high level of abstraction using service
templates. From the service templates and a description of the substrate
network, JASPER automatically makes scaling, placement and routing decisions,
enabling quick reaction to changes. We formalize the problem, analyze its
complexity, and develop two algorithms to solve it. Extensive empirical results
show the applicability and effectiveness of the proposed approach.
|
[
{
"created": "Wed, 29 Nov 2017 13:22:07 GMT",
"version": "v1"
}
] |
2018-06-15
|
[
[
"Dräxler",
"Sevil",
""
],
[
"Karl",
"Holger",
""
],
[
"Mann",
"Zoltán Ádám",
""
]
] |
To adapt to continuously changing workloads in networks, components of the running network services may need to be replicated (scaling the network service) and allocated to physical resources (placement) dynamically, also necessitating dynamic re-routing of flows between service components. In this paper, we propose JASPER, a fully automated approach to jointly optimizing scaling, placement, and routing for complex network services, consisting of multiple (virtualized) components. JASPER handles multiple network services that share the same substrate network; services can be dynamically added or removed and dynamic workload changes are handled. Our approach lets service designers specify their services on a high level of abstraction using service templates. From the service templates and a description of the substrate network, JASPER automatically makes scaling, placement and routing decisions, enabling quick reaction to changes. We formalize the problem, analyze its complexity, and develop two algorithms to solve it. Extensive empirical results show the applicability and effectiveness of the proposed approach.
|
2202.02864
|
Jerry Van Aken
|
Jerry R. Van Aken
|
Alpha Blending with No Division Operations
|
10 pages, 1 figure
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Highly accurate alpha blending can be performed entirely with integer
operations, and no divisions. To reduce the number of integer multiplications,
multiple color components can be blended in parallel in the same 32-bit or
64-bit register. This tutorial explains how to avoid division operations when
alpha blending with 32-bit RGBA pixels. An RGBA pixel contains four 8-bit
components (red, green, blue, and alpha) whose values range from 0 to 255.
Alpha blending requires multiplication of the color components by an alpha
value, after which (for greatest accuracy) each of these products is divided by
255 and then rounded to the nearest integer. This tutorial presents an
approximate alpha-blending formula that replaces the division operation with an
integer shift and add -- and also enables the number of multiplications to be
reduced. When the same blending calculation is carried out to high precision
using double-precision floating-point division operations, the results are
found to exactly match those produced by this approximation. C++ code examples
are included.
|
[
{
"created": "Sun, 6 Feb 2022 21:48:04 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Feb 2022 23:20:12 GMT",
"version": "v2"
}
] |
2022-02-21
|
[
[
"Van Aken",
"Jerry R.",
""
]
] |
Highly accurate alpha blending can be performed entirely with integer operations, and no divisions. To reduce the number of integer multiplications, multiple color components can be blended in parallel in the same 32-bit or 64-bit register. This tutorial explains how to avoid division operations when alpha blending with 32-bit RGBA pixels. An RGBA pixel contains four 8-bit components (red, green, blue, and alpha) whose values range from 0 to 255. Alpha blending requires multiplication of the color components by an alpha value, after which (for greatest accuracy) each of these products is divided by 255 and then rounded to the nearest integer. This tutorial presents an approximate alpha-blending formula that replaces the division operation with an integer shift and add -- and also enables the number of multiplications to be reduced. When the same blending calculation is carried out to high precision using double-precision floating-point division operations, the results are found to exactly match those produced by this approximation. C++ code examples are included.
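The shift-and-add replacement for division by 255 described in this abstract is a well-known integer trick. As an illustrative sketch only (a Python rendering, not the paper's own C++ code; the function names `div255` and `blend` are hypothetical), the approximation can be written as:

```python
def div255(n: int) -> int:
    """Approximate round(n / 255) using only an add and two shifts.

    For 0 <= n <= 65535 (the full range of an 8-bit * 8-bit product),
    this matches exact rounded division by 255.
    """
    t = n + 128          # add half the divisor for rounding
    return (t + (t >> 8)) >> 8

def blend(fg: int, bg: int, alpha: int) -> int:
    """Alpha-blend two 8-bit color components with an 8-bit alpha value."""
    return div255(alpha * fg + (255 - alpha) * bg)
```

Because `div255` is exact over the whole product range, comparing it against a floating-point computation finds no mismatches, consistent with the abstract's claim.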
|
2012.15754
|
Stefanos Tsimenidis
|
Stefanos Tsimenidis
|
Limitations of Deep Neural Networks: a discussion of G. Marcus' critical
appraisal of deep learning
|
16 pages
| null | null | null |
cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks have triggered a revolution in artificial intelligence,
having been applied with great results in medical imaging, semi-autonomous
vehicles, ecommerce, genetics research, speech recognition, particle physics,
experimental art, economic forecasting, environmental science, industrial
manufacturing, and a wide variety of applications in nearly every field. This
sudden success, though, may have intoxicated the research community and blinded
them to the potential pitfalls of assigning deep learning a higher status than
warranted. Also, research directed at alleviating the weaknesses of deep
learning may seem less attractive to scientists and engineers, who focus on the
low-hanging fruit of finding more and more applications for deep learning
models, thus letting short-term benefits hamper long-term scientific progress.
Gary Marcus wrote a paper entitled Deep Learning: A Critical Appraisal, and
here we discuss Marcus' core ideas, as well as attempt a general assessment of
the subject. This study examines some of the limitations of deep neural
networks, with the intention of pointing towards potential paths for future
research, and of clearing up some metaphysical misconceptions, held by numerous
researchers, that may misdirect them.
|
[
{
"created": "Tue, 22 Dec 2020 12:11:19 GMT",
"version": "v1"
}
] |
2021-01-01
|
[
[
"Tsimenidis",
"Stefanos",
""
]
] |
Deep neural networks have triggered a revolution in artificial intelligence, having been applied with great results in medical imaging, semi-autonomous vehicles, ecommerce, genetics research, speech recognition, particle physics, experimental art, economic forecasting, environmental science, industrial manufacturing, and a wide variety of applications in nearly every field. This sudden success, though, may have intoxicated the research community and blinded them to the potential pitfalls of assigning deep learning a higher status than warranted. Also, research directed at alleviating the weaknesses of deep learning may seem less attractive to scientists and engineers, who focus on the low-hanging fruit of finding more and more applications for deep learning models, thus letting short-term benefits hamper long-term scientific progress. Gary Marcus wrote a paper entitled Deep Learning: A Critical Appraisal, and here we discuss Marcus' core ideas, as well as attempt a general assessment of the subject. This study examines some of the limitations of deep neural networks, with the intention of pointing towards potential paths for future research, and of clearing up some metaphysical misconceptions, held by numerous researchers, that may misdirect them.
|
2009.11508
|
Yang Bai
|
Yang Bai and Yuyuan Zeng and Yong Jiang and Yisen Wang and Shu-Tao Xia
and Weiwei Guo
|
Improving Query Efficiency of Black-box Adversarial Attack
|
Accepted to ECCV2020
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks (DNNs) have demonstrated excellent performance on
various tasks, however they are under the risk of adversarial examples that can
be easily generated when the target model is accessible to an attacker
(white-box setting). As plenty of machine learning models have been deployed
via online services that only provide query outputs from inaccessible models
(e.g. Google Cloud Vision API2), black-box adversarial attacks (inaccessible
target model) are of critical security concerns in practice rather than
white-box ones. However, existing query-based black-box adversarial attacks
often require excessive model queries to maintain a high attack success rate.
Therefore, in order to improve query efficiency, we explore the distribution of
adversarial examples around benign inputs with the help of image structure
information characterized by a Neural Process, and propose a Neural Process
based black-box adversarial attack (NP-Attack) in this paper. Extensive
experiments show that NP-Attack could greatly decrease the query counts under
the black-box setting.
|
[
{
"created": "Thu, 24 Sep 2020 06:22:56 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Sep 2020 07:09:25 GMT",
"version": "v2"
}
] |
2020-09-28
|
[
[
"Bai",
"Yang",
""
],
[
"Zeng",
"Yuyuan",
""
],
[
"Jiang",
"Yong",
""
],
[
"Wang",
"Yisen",
""
],
[
"Xia",
"Shu-Tao",
""
],
[
"Guo",
"Weiwei",
""
]
] |
Deep neural networks (DNNs) have demonstrated excellent performance on various tasks, however they are under the risk of adversarial examples that can be easily generated when the target model is accessible to an attacker (white-box setting). As plenty of machine learning models have been deployed via online services that only provide query outputs from inaccessible models (e.g. Google Cloud Vision API2), black-box adversarial attacks (inaccessible target model) are of critical security concerns in practice rather than white-box ones. However, existing query-based black-box adversarial attacks often require excessive model queries to maintain a high attack success rate. Therefore, in order to improve query efficiency, we explore the distribution of adversarial examples around benign inputs with the help of image structure information characterized by a Neural Process, and propose a Neural Process based black-box adversarial attack (NP-Attack) in this paper. Extensive experiments show that NP-Attack could greatly decrease the query counts under the black-box setting.
|
2205.11343
|
M. Park
|
Minjae Park
|
Heterogeneous Graph Neural Network for Personalized Session-Based
Recommendation with User-Session Constraints
|
There is a fatal error in the derived experiment results
| null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems provide users with an appropriately limited selection
from the large amount of information available online. Session-based
recommendation, a sub-area of recommender systems, attempts to recommend
items by interpreting sessions that consist of sequences of items. Recently,
research on including user information in these sessions has been in
progress. However, it is difficult to generate a high-quality user
representation that incorporates the session representations generated by
the user. In this paper, we consider the various relationships in the graph
created by sessions through a heterogeneous attention network. Constraints
also force user representations to reflect the user's preferences presented
in the session. We seek to increase performance through additional
optimization in the training process. The proposed model outperformed other
methods on various real-world datasets.
|
[
{
"created": "Mon, 23 May 2022 14:35:26 GMT",
"version": "v1"
},
{
"created": "Tue, 24 May 2022 08:46:21 GMT",
"version": "v2"
},
{
"created": "Sun, 26 Jun 2022 14:35:10 GMT",
"version": "v3"
}
] |
2022-06-28
|
[
[
"Park",
"Minjae",
""
]
] |
Recommender systems provide users with an appropriately limited selection from the large amount of information available online. Session-based recommendation, a sub-area of recommender systems, attempts to recommend items by interpreting sessions that consist of sequences of items. Recently, research on including user information in these sessions has been in progress. However, it is difficult to generate a high-quality user representation that incorporates the session representations generated by the user. In this paper, we consider the various relationships in the graph created by sessions through a heterogeneous attention network. Constraints also force user representations to reflect the user's preferences presented in the session. We seek to increase performance through additional optimization in the training process. The proposed model outperformed other methods on various real-world datasets.
|
2309.02145
|
Patrick Eickhoff
|
Patrick Eickhoff, Matthias M\"oller, Theresa Pekarek Rosin, Johannes
Twiefel, Stefan Wermter
|
Bring the Noise: Introducing Noise Robustness to Pretrained Automatic
Speech Recognition
|
Submitted and accepted for ICANN 2023 (32nd International Conference
on Artificial Neural Networks)
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In recent research in the domain of speech processing, large End-to-End
(E2E) systems for Automatic Speech Recognition (ASR) have reported
state-of-the-art performance on various benchmarks. These systems
intrinsically learn how to handle and remove noise conditions from speech.
Previous research has shown that it is possible to extract the denoising
capabilities of these models into a preprocessor network, which can be used
as a frontend for downstream ASR models. However, the proposed methods were
limited to specific fully convolutional architectures. In this work, we
propose a novel method to extract the denoising capabilities that can be
applied to any encoder-decoder architecture.
hidden activations from the Conformer ASR model and feeds them to a decoder to
predict denoised spectrograms. We train our pre-processor on the Noisy Speech
Database (NSD) to reconstruct denoised spectrograms from noisy inputs. Then, we
evaluate our model as a frontend to a pretrained Conformer ASR model as well as
a frontend to train smaller Conformer ASR models from scratch. We show that the
Cleancoder is able to filter noise from speech and that it improves the total
Word Error Rate (WER) of the downstream model in noisy conditions for both
applications.
|
[
{
"created": "Tue, 5 Sep 2023 11:34:21 GMT",
"version": "v1"
}
] |
2023-09-06
|
[
[
"Eickhoff",
"Patrick",
""
],
[
"Möller",
"Matthias",
""
],
[
"Rosin",
"Theresa Pekarek",
""
],
[
"Twiefel",
"Johannes",
""
],
[
"Wermter",
"Stefan",
""
]
] |
In recent research in the domain of speech processing, large End-to-End (E2E) systems for Automatic Speech Recognition (ASR) have reported state-of-the-art performance on various benchmarks. These systems intrinsically learn how to handle and remove noise conditions from speech. Previous research has shown that it is possible to extract the denoising capabilities of these models into a preprocessor network, which can be used as a frontend for downstream ASR models. However, the proposed methods were limited to specific fully convolutional architectures. In this work, we propose a novel method to extract the denoising capabilities that can be applied to any encoder-decoder architecture. We propose the Cleancoder preprocessor architecture that extracts hidden activations from the Conformer ASR model and feeds them to a decoder to predict denoised spectrograms. We train our pre-processor on the Noisy Speech Database (NSD) to reconstruct denoised spectrograms from noisy inputs. Then, we evaluate our model as a frontend to a pretrained Conformer ASR model as well as a frontend to train smaller Conformer ASR models from scratch. We show that the Cleancoder is able to filter noise from speech and that it improves the total Word Error Rate (WER) of the downstream model in noisy conditions for both applications.
|
1508.04467
|
Zhao Kang
|
Zhao Kang, Chong Peng, Qiang Cheng
|
Robust Subspace Clustering via Smoothed Rank Approximation
|
Journal, code is available
|
IEEE Signal Processing Letters, 22(2015)2088-2092
|
10.1109/LSP.2015.2460737
| null |
cs.CV cs.IT cs.LG cs.NA math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matrix rank minimizing subject to affine constraints arises in many
application areas, ranging from signal processing to machine learning. Nuclear
norm is a convex relaxation for this problem which can recover the rank exactly
under some restricted and theoretically interesting conditions. However, for
many real-world applications, nuclear norm approximation to the rank function
can only produce a result far from the optimum. To seek a solution of higher
accuracy than the nuclear norm, in this paper, we propose a rank approximation
based on Logarithm-Determinant. We consider using this rank approximation for
subspace clustering application. Our framework can model different kinds of
errors and noise. Effective optimization strategy is developed with theoretical
guarantee to converge to a stationary point. The proposed method gives
promising results on face clustering and motion segmentation tasks compared to
the state-of-the-art subspace clustering algorithms.
|
[
{
"created": "Tue, 18 Aug 2015 21:54:03 GMT",
"version": "v1"
}
] |
2015-08-20
|
[
[
"Kang",
"Zhao",
""
],
[
"Peng",
"Chong",
""
],
[
"Cheng",
"Qiang",
""
]
] |
Matrix rank minimizing subject to affine constraints arises in many application areas, ranging from signal processing to machine learning. Nuclear norm is a convex relaxation for this problem which can recover the rank exactly under some restricted and theoretically interesting conditions. However, for many real-world applications, nuclear norm approximation to the rank function can only produce a result far from the optimum. To seek a solution of higher accuracy than the nuclear norm, in this paper, we propose a rank approximation based on Logarithm-Determinant. We consider using this rank approximation for subspace clustering application. Our framework can model different kinds of errors and noise. Effective optimization strategy is developed with theoretical guarantee to converge to a stationary point. The proposed method gives promising results on face clustering and motion segmentation tasks compared to the state-of-the-art subspace clustering algorithms.
|
2212.14181
|
Guangwei Gao
|
Wenjie Li, Juncheng Li, Guangwei Gao, Weihong Deng, Jian Yang, Guo-Jun
Qi and Chia-Wen Lin
|
Efficient Image Super-Resolution with Feature Interaction Weighted
Hybrid Network
|
15 pages, 14 figures, extention of our AAAI2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, great progress has been made in single-image super-resolution
(SISR) based on deep learning technology. However, the existing methods usually
require a large computational cost. Meanwhile, the activation function will
cause some features of the intermediate layer to be lost. Therefore, it is a
challenge to make the model lightweight while reducing the impact of
intermediate feature loss on the reconstruction quality. In this paper, we
propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the
above problem. Specifically, FIWHN consists of a series of novel Wide-residual
Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs
form a Feature shuffle Weighted Group (FSWG) by mutual information mixing and
fusion. In addition, to mitigate the adverse effects of intermediate feature
loss on the reconstruction results, we introduced a well-designed Wide
Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting
(WIRW) units in WDIB, and effectively cross-fused features of different
finenesses through a Wide-residual Distillation Connection (WRDC) framework and
a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global
features lacking in the CNN model, we introduced the Transformer into our model
and explored a new way of combining the CNN and Transformer. Extensive
quantitative and qualitative experiments on low-level and high-level tasks show
that our proposed FIWHN can achieve a good balance between performance and
efficiency, and is more conducive to downstream tasks to solve problems in
low-pixel scenarios.
|
[
{
"created": "Thu, 29 Dec 2022 05:57:29 GMT",
"version": "v1"
}
] |
2023-01-02
|
[
[
"Li",
"Wenjie",
""
],
[
"Li",
"Juncheng",
""
],
[
"Gao",
"Guangwei",
""
],
[
"Deng",
"Weihong",
""
],
[
"Yang",
"Jian",
""
],
[
"Qi",
"Guo-Jun",
""
],
[
"Lin",
"Chia-Wen",
""
]
] |
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning technology. However, the existing methods usually require a large computational cost. Meanwhile, the activation function will cause some features of the intermediate layer to be lost. Therefore, it is a challenge to make the model lightweight while reducing the impact of intermediate feature loss on the reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the above problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) by mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduced a well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fused features of different finenesses through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in the CNN model, we introduced the Transformer into our model and explored a new way of combining the CNN and Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN can achieve a good balance between performance and efficiency, and is more conducive to downstream tasks to solve problems in low-pixel scenarios.
|
1610.03437
|
Jo\~ao Oliveira
|
Jo\~ao P. Oliveira and Ana Bragan\c{c}a and Jos\'e Bioucas-Dias and
M\'ario Figueiredo and Lu\'is Alc\'acer and Jorge Morgado and Quirina
Ferreira
|
Restoring STM images via Sparse Coding: noise and artifact removal
|
14 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we present a denoising algorithm to improve the
interpretation and quality of scanning tunneling microscopy (STM) images. Given
the high level of self-similarity of STM images, we propose a denoising
algorithm by reformulating the true estimation problem as a sparse regression,
often termed sparse coding. We introduce modifications to the algorithm to cope
with the existence of artifacts, mainly dropouts, which appear in a structured
way as consecutive line segments on the scanning direction. The resulting
algorithm treats the artifacts as missing data, and the estimated values
outperform those algorithms that substitute the outliers by a local filtering.
We provide code implementations for both Matlab and Gwyddion.
|
[
{
"created": "Tue, 11 Oct 2016 17:37:47 GMT",
"version": "v1"
}
] |
2016-10-12
|
[
[
"Oliveira",
"João P.",
""
],
[
"Bragança",
"Ana",
""
],
[
"Bioucas-Dias",
"José",
""
],
[
"Figueiredo",
"Mário",
""
],
[
"Alcácer",
"Luís",
""
],
[
"Morgado",
"Jorge",
""
],
[
"Ferreira",
"Quirina",
""
]
] |
In this article, we present a denoising algorithm to improve the interpretation and quality of scanning tunneling microscopy (STM) images. Given the high level of self-similarity of STM images, we propose a denoising algorithm by reformulating the true estimation problem as a sparse regression, often termed sparse coding. We introduce modifications to the algorithm to cope with the existence of artifacts, mainly dropouts, which appear in a structured way as consecutive line segments on the scanning direction. The resulting algorithm treats the artifacts as missing data, and the estimated values outperform those algorithms that substitute the outliers by a local filtering. We provide code implementations for both Matlab and Gwyddion.
|
2207.04103
|
Dominik Lewy
|
Dominik Lewy, Jacek Ma\'ndziuk, Maria Ganzha, Marcin Paprzycki
|
StatMix: Data augmentation method that relies on image statistics in
federated learning
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The availability of large amounts of annotated data is one of the pillars of deep
learning success. Although numerous big datasets have been made available for
research, this is often not the case in real-life applications (e.g. companies
are not able to share data due to GDPR or concerns related to intellectual
property rights protection). Federated learning (FL) is a potential solution to
this problem, as it enables training a global model on data scattered across
multiple nodes, without sharing the local data itself. However, even FL methods
pose a threat to data privacy, if not handled properly. Therefore, we propose
StatMix, an augmentation approach that uses image statistics, to improve
results in FL scenarios. StatMix is empirically tested on CIFAR-10 and
CIFAR-100, using two neural network architectures. In all FL experiments,
application of StatMix improves the average accuracy, compared to the baseline
training (with no use of StatMix). Some improvement can also be observed in
non-FL setups.
|
[
{
"created": "Fri, 8 Jul 2022 19:02:41 GMT",
"version": "v1"
}
] |
2022-07-12
|
[
[
"Lewy",
"Dominik",
""
],
[
"Mańdziuk",
"Jacek",
""
],
[
"Ganzha",
"Maria",
""
],
[
"Paprzycki",
"Marcin",
""
]
] |
The availability of large amounts of annotated data is one of the pillars of deep learning success. Although numerous big datasets have been made available for research, this is often not the case in real-life applications (e.g. companies are not able to share data due to GDPR or concerns related to intellectual property rights protection). Federated learning (FL) is a potential solution to this problem, as it enables training a global model on data scattered across multiple nodes, without sharing the local data itself. However, even FL methods pose a threat to data privacy, if not handled properly. Therefore, we propose StatMix, an augmentation approach that uses image statistics, to improve results in FL scenarios. StatMix is empirically tested on CIFAR-10 and CIFAR-100, using two neural network architectures. In all FL experiments, application of StatMix improves the average accuracy, compared to the baseline training (with no use of StatMix). Some improvement can also be observed in non-FL setups.
|
1804.11162
|
Joaqu\'in Arias M.Sc.
|
Joaqu\'in Arias, Manuel Carro, Elmer Salazar, Kyle Marple and Gopal
Gupta
|
Constraint Answer Set Programming without Grounding
|
Paper presented at the 34nd International Conference on Logic
Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018 18 pages, LaTeX
| null | null | null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extending ASP with constraints (CASP) enhances its expressiveness and
performance. This extension is not straightforward as the grounding phase,
present in most ASP systems, removes variables and the links among them, and
also causes a combinatorial explosion in the size of the program. Several
methods to overcome this issue have been devised: restricting the constraint
domains (e.g., discrete instead of dense), or the type (or number) of models
that can be returned. In this paper we propose to incorporate constraints into
s(ASP), a goal-directed, top-down execution model which implements ASP while
retaining logical variables both during execution and in the answer sets. The
resulting model, s(CASP), can constrain variables that, as in CLP, are kept
during the execution and in the answer sets. s(CASP) inherits and generalizes
the execution model of s(ASP) and is parametric w.r.t. the constraint solver.
We describe this novel execution model and show through several examples the
enhanced expressiveness of s(CASP) w.r.t. ASP, CLP, and other CASP systems. We
also report improved performance w.r.t. other very mature, highly optimized ASP
systems in some benchmarks. This paper is under consideration for publication
in Theory and Practice of Logic Programming (TPLP).
|
[
{
"created": "Mon, 30 Apr 2018 12:50:28 GMT",
"version": "v1"
},
{
"created": "Thu, 31 May 2018 15:33:53 GMT",
"version": "v2"
}
] |
2018-06-01
|
[
[
"Arias",
"Joaquín",
""
],
[
"Carro",
"Manuel",
""
],
[
"Salazar",
"Elmer",
""
],
[
"Marple",
"Kyle",
""
],
[
"Gupta",
"Gopal",
""
]
] |
Extending ASP with constraints (CASP) enhances its expressiveness and performance. This extension is not straightforward as the grounding phase, present in most ASP systems, removes variables and the links among them, and also causes a combinatorial explosion in the size of the program. Several methods to overcome this issue have been devised: restricting the constraint domains (e.g., discrete instead of dense), or the type (or number) of models that can be returned. In this paper we propose to incorporate constraints into s(ASP), a goal-directed, top-down execution model which implements ASP while retaining logical variables both during execution and in the answer sets. The resulting model, s(CASP), can constrain variables that, as in CLP, are kept during the execution and in the answer sets. s(CASP) inherits and generalizes the execution model of s(ASP) and is parametric w.r.t. the constraint solver. We describe this novel execution model and show through several examples the enhanced expressiveness of s(CASP) w.r.t. ASP, CLP, and other CASP systems. We also report improved performance w.r.t. other very mature, highly optimized ASP systems in some benchmarks. This paper is under consideration for publication in Theory and Practice of Logic Programming (TPLP).
|
2403.14356
|
Xudong Sun
|
Xudong Sun, Carla Feistner, Alexej Gossmann, George Schwarz, Rao
Muhammad Umer, Lisa Beer, Patrick Rockenschaub, Rahul Babu Shrestha, Armin
Gruber, Nutan Chen, Sayedali Shetab Boushehri, Florian Buettner, Carsten Marr
|
DomainLab: A modular Python package for domain generalization in deep
learning
| null | null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Poor generalization performance caused by distribution shifts in unseen
domains often hinders the trustworthy deployment of deep neural networks. Many
domain generalization techniques address this problem by adding domain-invariant
regularization loss terms during training. However, there is a lack
of modular software that allows users to combine the advantages of different
methods with minimal effort for reproducibility. DomainLab is a modular Python
package for training user-specified neural networks with composable
regularization loss terms. Its decoupled design allows the separation of neural
networks from regularization loss construction. Hierarchical combinations of
neural networks, different domain generalization methods, and associated
hyperparameters, can all be specified together with other experimental setup in
a single configuration file. In addition, DomainLab offers powerful benchmarking
functionality to evaluate the generalization performance of neural networks in
out-of-distribution data. The package supports running the specified benchmark
on an HPC cluster or on a standalone machine. The package is well tested with
over 95 percent coverage and well documented. From the user perspective, it is
closed to modification but open to extension. The package is under the MIT
license, and its source code, tutorial and documentation can be found at
https://github.com/marrlab/DomainLab.
|
[
{
"created": "Thu, 21 Mar 2024 12:35:46 GMT",
"version": "v1"
}
] |
2024-03-22
|
[
[
"Sun",
"Xudong",
""
],
[
"Feistner",
"Carla",
""
],
[
"Gossmann",
"Alexej",
""
],
[
"Schwarz",
"George",
""
],
[
"Umer",
"Rao Muhammad",
""
],
[
"Beer",
"Lisa",
""
],
[
"Rockenschaub",
"Patrick",
""
],
[
"Shrestha",
"Rahul Babu",
""
],
[
"Gruber",
"Armin",
""
],
[
"Chen",
"Nutan",
""
],
[
"Boushehri",
"Sayedali Shetab",
""
],
[
"Buettner",
"Florian",
""
],
[
"Marr",
"Carsten",
""
]
] |
Poor generalization performance caused by distribution shifts in unseen domains often hinders the trustworthy deployment of deep neural networks. Many domain generalization techniques address this problem by adding domain-invariant regularization loss terms during training. However, there is a lack of modular software that allows users to combine the advantages of different methods with minimal effort for reproducibility. DomainLab is a modular Python package for training user-specified neural networks with composable regularization loss terms. Its decoupled design allows the separation of neural networks from regularization loss construction. Hierarchical combinations of neural networks, different domain generalization methods, and associated hyperparameters, can all be specified together with other experimental setup in a single configuration file. In addition, DomainLab offers powerful benchmarking functionality to evaluate the generalization performance of neural networks in out-of-distribution data. The package supports running the specified benchmark on an HPC cluster or on a standalone machine. The package is well tested with over 95 percent coverage and well documented. From the user perspective, it is closed to modification but open to extension. The package is under the MIT license, and its source code, tutorial and documentation can be found at https://github.com/marrlab/DomainLab.
|
2404.09408
|
Xinyu Liang
|
Xinyu Liang, Ruiying Du, Jing Chen, Yu Zhang, Meng Jia, Shuangxi Cao,
Yufeng Wei, Shixiong Yao
|
A Distributed Scalable Cross-chain State Channel Scheme Based on
Recursive State Synchronization
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
As cross-chain technology continues to advance, the scale of cross-chain
transactions is experiencing significant expansion. To improve scalability,
researchers have turned to the study of cross-chain state channels. However,
most of the existing schemes rely on trusted parties to support channel
operations. To address this issue, we present Interpipe: a distributed
cross-chain state channel scheme. Specifically, we propose a real-time
cross-chain synchronization scheme to ensure consistent operations between two
blockchains to a cross-chain state channel. Moreover, we propose a batch
transaction proof scheme based on recursive SNARK to meet the cross-chain
verification needs of large-scale users. Based on the above designs, Interpipe
offers protocols for opening, updating, closing, and disputing operations to
cross-chain state channels. Security analysis shows that Interpipe has
consistency and resistance, and experimental results demonstrate that a
cross-chain state channel can be nearly as efficient as an existing intra-chain
state channel.
|
[
{
"created": "Mon, 15 Apr 2024 01:50:28 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Liang",
"Xinyu",
""
],
[
"Du",
"Ruiying",
""
],
[
"Chen",
"Jing",
""
],
[
"Zhang",
"Yu",
""
],
[
"Jia",
"Meng",
""
],
[
"Cao",
"Shuangxi",
""
],
[
"Wei",
"Yufeng",
""
],
[
"Yao",
"Shixiong",
""
]
] |
As cross-chain technology continues to advance, the scale of cross-chain transactions is experiencing significant expansion. To improve scalability, researchers have turned to the study of cross-chain state channels. However, most of the existing schemes rely on trusted parties to support channel operations. To address this issue, we present Interpipe: a distributed cross-chain state channel scheme. Specifically, we propose a real-time cross-chain synchronization scheme to ensure consistent operations between two blockchains to a cross-chain state channel. Moreover, we propose a batch transaction proof scheme based on recursive SNARK to meet the cross-chain verification needs of large-scale users. Based on the above designs, Interpipe offers protocols for opening, updating, closing, and disputing operations to cross-chain state channels. Security analysis shows that Interpipe has consistency and resistance, and experimental results demonstrate that a cross-chain state channel can be nearly as efficient as an existing intra-chain state channel.
|
2301.05776
|
Iurii Medvedev
|
Iurii Medvedev and Farhad Shadmand and Nuno Gon\c{c}alves
|
Young Labeled Faces in the Wild (YLFW): A Dataset for Children Faces
Recognition
|
11 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face recognition has achieved outstanding performance in the last decade with
the development of deep learning techniques.
Nowadays, the challenges in face recognition are related to specific
scenarios, for instance, performance under diverse image quality, robustness
to aging and to edge cases of subject age (children and elders), and
distinguishing related identities.
In this set of problems, recognizing children's faces is one of the most
sensitive and important. One of the reasons for this problem is the existing
bias towards adults in existing face datasets.
In this work, we present a benchmark dataset for children's face recognition,
which is compiled similarly to the famous face recognition benchmarks LFW,
CALFW, CPLFW, XQLFW and AgeDB.
We also present a development dataset (separated into train and test parts)
for adapting face recognition models for face images of children.
The proposed data is balanced for African, Asian, Caucasian, and Indian
races. To the best of our knowledge, this is the first standardized data tool
set for benchmarking and the largest collection for development for children's
face recognition. Several face recognition experiments are presented to
demonstrate the performance of the proposed data tool set.
|
[
{
"created": "Fri, 13 Jan 2023 22:19:44 GMT",
"version": "v1"
}
] |
2023-01-18
|
[
[
"Medvedev",
"Iurii",
""
],
[
"Shadmand",
"Farhad",
""
],
[
"Gonçalves",
"Nuno",
""
]
] |
Face recognition has achieved outstanding performance in the last decade with the development of deep learning techniques. Nowadays, the challenges in face recognition are related to specific scenarios, for instance, performance under diverse image quality, robustness to aging and to edge cases of subject age (children and elders), and distinguishing related identities. In this set of problems, recognizing children's faces is one of the most sensitive and important. One of the reasons for this problem is the existing bias towards adults in existing face datasets. In this work, we present a benchmark dataset for children's face recognition, which is compiled similarly to the famous face recognition benchmarks LFW, CALFW, CPLFW, XQLFW and AgeDB. We also present a development dataset (separated into train and test parts) for adapting face recognition models for face images of children. The proposed data is balanced for African, Asian, Caucasian, and Indian races. To the best of our knowledge, this is the first standardized data tool set for benchmarking and the largest collection for development for children's face recognition. Several face recognition experiments are presented to demonstrate the performance of the proposed data tool set.
|
2202.13536
|
Geon-Hyeong Kim
|
Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, Kee-Eung
Kim
|
LobsDICE: Offline Learning from Observation via Stationary Distribution
Correction Estimation
|
33 pages, Accepted at NeurIPS 2022
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We consider the problem of learning from observation (LfO), in which the
agent aims to mimic the expert's behavior from state-only demonstrations. We
additionally assume that the agent cannot interact with the
environment but has access to the action-labeled transition data collected by
some agents with unknown qualities. This offline setting for LfO is appealing
in many real-world scenarios where the ground-truth expert actions are
inaccessible and the arbitrary environment interactions are costly or risky. In
this paper, we present LobsDICE, an offline LfO algorithm that learns to
imitate the expert policy via optimization in the space of stationary
distributions. Our algorithm solves a single convex minimization problem, which
minimizes the divergence between the two state-transition distributions induced
by the expert and the agent policy. Through an extensive set of offline LfO
tasks, we show that LobsDICE outperforms strong baseline methods.
|
[
{
"created": "Mon, 28 Feb 2022 04:24:30 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Oct 2022 02:31:21 GMT",
"version": "v2"
}
] |
2022-10-19
|
[
[
"Kim",
"Geon-Hyeong",
""
],
[
"Lee",
"Jongmin",
""
],
[
"Jang",
"Youngsoo",
""
],
[
"Yang",
"Hongseok",
""
],
[
"Kim",
"Kee-Eung",
""
]
] |
We consider the problem of learning from observation (LfO), in which the agent aims to mimic the expert's behavior from state-only demonstrations. We additionally assume that the agent cannot interact with the environment but has access to the action-labeled transition data collected by some agents with unknown qualities. This offline setting for LfO is appealing in many real-world scenarios where the ground-truth expert actions are inaccessible and the arbitrary environment interactions are costly or risky. In this paper, we present LobsDICE, an offline LfO algorithm that learns to imitate the expert policy via optimization in the space of stationary distributions. Our algorithm solves a single convex minimization problem, which minimizes the divergence between the two state-transition distributions induced by the expert and the agent policy. Through an extensive set of offline LfO tasks, we show that LobsDICE outperforms strong baseline methods.
|
2104.08942
|
Neel Kanwal
|
Neel Kanwal, Giuseppe Rizzo
|
Attention-based Clinical Note Summarization
|
Accepted at ACM SAC 2022, in Special Track "KNLP"
|
ACM SAC 2022
|
10.1145/3477314.3507256
|
978-1-4503-8713-2/22/04
|
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the trend of deploying digital systems in numerous
industries has risen sharply. The health sector has observed an extensive adoption of
digital systems and services that generate significant medical records.
Electronic health records contain valuable information for prospective and
retrospective analysis that is often not entirely exploited because of the
complicated dense information storage. The crude purpose of condensing health
records is to select the information that holds most characteristics of the
original documents based on a reported disease. These summaries may boost
diagnosis and save a doctor's time during a saturated workload situation like
the COVID-19 pandemic. In this paper, we are applying a multi-head
attention-based mechanism to perform extractive summarization of meaningful
phrases on clinical notes. Our method finds major sentences for a summary by
correlating tokens, segments, and positional embeddings of sentences in a
clinical note. The model outputs attention scores that are statistically
transformed to extract critical phrases for visualization on the heat-mapping
tool and for human use.
|
[
{
"created": "Sun, 18 Apr 2021 19:40:26 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Oct 2021 10:51:26 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Feb 2022 11:15:16 GMT",
"version": "v3"
}
] |
2022-03-01
|
[
[
"Kanwal",
"Neel",
""
],
[
"Rizzo",
"Giuseppe",
""
]
] |
In recent years, the trend of deploying digital systems in numerous industries has risen sharply. The health sector has observed an extensive adoption of digital systems and services that generate significant medical records. Electronic health records contain valuable information for prospective and retrospective analysis that is often not entirely exploited because of the complicated dense information storage. The crude purpose of condensing health records is to select the information that holds most characteristics of the original documents based on a reported disease. These summaries may boost diagnosis and save a doctor's time during a saturated workload situation like the COVID-19 pandemic. In this paper, we are applying a multi-head attention-based mechanism to perform extractive summarization of meaningful phrases on clinical notes. Our method finds major sentences for a summary by correlating tokens, segments, and positional embeddings of sentences in a clinical note. The model outputs attention scores that are statistically transformed to extract critical phrases for visualization on the heat-mapping tool and for human use.
|
2008.07397
|
Daniel Engelsman
|
Daniel Engelsman
|
A Study of a Genetic Algorithm for Polydisperse Spray Flames
|
Advisor : Prof. Barry J. Greenberg, 66 Pages, 65 figures
| null | null | null |
cs.NE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Modern technological advancements constantly push forward the human-machine
interaction. Evolutionary Algorithms (EA) are a machine learning (ML) subclass
inspired by the process of natural selection - Survival of the Fittest, as
stated by the Darwinian Theory of Evolution. The most notable algorithm in that
class is the Genetic Algorithm (GA) - a powerful heuristic tool which enables
the generation of high-quality solutions to optimization problems. In recent
decades the algorithm underwent remarkable improvement, which adapted it to a
wide range of engineering problems by heuristically searching for the optimal
solution. Despite being well-defined, many engineering problems may suffer from
heavy analytical entanglement when approaching the derivation process, as
required in classic optimization methods. Therefore, the main motivation here
is to work around that obstacle. In this piece of work, I would like to harness
the GA capabilities to examine optimality with respect to a unique combustion
problem, in a way that was never performed before. To be more precise, I would
like to utilize it to answer the question: what form of an initial droplet
size distribution (iDSD) will guarantee an optimal flame? To answer this
question, I will first provide a general introduction to the GA method, then
develop the combustion model, and eventually merge both into an optimization
problem.
|
[
{
"created": "Tue, 11 Aug 2020 10:17:42 GMT",
"version": "v1"
}
] |
2020-08-18
|
[
[
"Engelsman",
"Daniel",
""
]
] |
Modern technological advancements constantly push forward the human-machine interaction. Evolutionary Algorithms (EA) are a machine learning (ML) subclass inspired by the process of natural selection - Survival of the Fittest, as stated by the Darwinian Theory of Evolution. The most notable algorithm in that class is the Genetic Algorithm (GA) - a powerful heuristic tool which enables the generation of high-quality solutions to optimization problems. In recent decades the algorithm underwent remarkable improvement, which adapted it to a wide range of engineering problems by heuristically searching for the optimal solution. Despite being well-defined, many engineering problems may suffer from heavy analytical entanglement when approaching the derivation process, as required in classic optimization methods. Therefore, the main motivation here is to work around that obstacle. In this piece of work, I would like to harness the GA capabilities to examine optimality with respect to a unique combustion problem, in a way that was never performed before. To be more precise, I would like to utilize it to answer the question: what form of an initial droplet size distribution (iDSD) will guarantee an optimal flame? To answer this question, I will first provide a general introduction to the GA method, then develop the combustion model, and eventually merge both into an optimization problem.
|
2102.11262
|
Lei Ding
|
Lei Ding, Hao Tang, Yahui Liu, Yilei Shi, Xiao Xiang Zhu and Lorenzo
Bruzzone
|
Adversarial Shape Learning for Building Extraction in VHR Remote Sensing
Images
| null | null |
10.1109/TIP.2021.3134455
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building extraction in VHR RSIs remains a challenging task due to occlusion
and boundary ambiguity problems. Although conventional convolutional neural
networks (CNNs) based methods are capable of exploiting local texture and
context information, they fail to capture the shape patterns of buildings,
which is a necessary constraint in human recognition. To address this
issue, we propose an adversarial shape learning network (ASLNet) to model the
building shape patterns that improve the accuracy of building segmentation. In
the proposed ASLNet, we introduce the adversarial learning strategy to
explicitly model the shape constraints, as well as a CNN shape regularizer to
strengthen the embedding of shape features. To assess the geometric accuracy of
building segmentation results, we introduced several object-based quality
assessment metrics. Experiments on two open benchmark datasets show that the
proposed ASLNet improves both the pixel-based accuracy and the object-based
quality measurements by a large margin. The code is available at:
https://github.com/ggsDing/ASLNet
|
[
{
"created": "Mon, 22 Feb 2021 18:49:43 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Feb 2021 13:58:51 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Mar 2021 20:59:18 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Mar 2021 10:16:18 GMT",
"version": "v4"
},
{
"created": "Tue, 30 Mar 2021 22:12:26 GMT",
"version": "v5"
},
{
"created": "Sat, 18 Dec 2021 01:20:28 GMT",
"version": "v6"
}
] |
2021-12-21
|
[
[
"Ding",
"Lei",
""
],
[
"Tang",
"Hao",
""
],
[
"Liu",
"Yahui",
""
],
[
"Shi",
"Yilei",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Bruzzone",
"Lorenzo",
""
]
] |
Building extraction in VHR RSIs remains a challenging task due to occlusion and boundary ambiguity problems. Although conventional convolutional neural networks (CNNs) based methods are capable of exploiting local texture and context information, they fail to capture the shape patterns of buildings, which is a necessary constraint in human recognition. To address this issue, we propose an adversarial shape learning network (ASLNet) to model the building shape patterns that improve the accuracy of building segmentation. In the proposed ASLNet, we introduce the adversarial learning strategy to explicitly model the shape constraints, as well as a CNN shape regularizer to strengthen the embedding of shape features. To assess the geometric accuracy of building segmentation results, we introduced several object-based quality assessment metrics. Experiments on two open benchmark datasets show that the proposed ASLNet improves both the pixel-based accuracy and the object-based quality measurements by a large margin. The code is available at: https://github.com/ggsDing/ASLNet
|
1907.10937
|
Mohsen Ghaffari
|
V\'aclav Rozho\v{n} and Mohsen Ghaffari
|
Polylogarithmic-Time Deterministic Network Decomposition and Distributed
Derandomization
|
Extended version of an article that appears at the Symposium on
Theory of Computing (STOC) 2020
| null | null | null |
cs.DS cs.DC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a simple polylogarithmic-time deterministic distributed algorithm
for network decomposition. This improves on a celebrated $2^{O(\sqrt{\log
n})}$-time algorithm of Panconesi and Srinivasan [STOC'92] and settles a
central and long-standing question in distributed graph algorithms. It also
leads to the first polylogarithmic-time deterministic distributed algorithms
for numerous other problems, hence resolving several well-known and decades-old
open problems, including Linial's question about the deterministic complexity
of maximal independent set [FOCS'87; SICOMP'92]---which had been called the
most outstanding problem in the area.
The main implication is a more general distributed derandomization theorem:
Put together with the results of Ghaffari, Kuhn, and Maus [STOC'17] and
Ghaffari, Harris, and Kuhn [FOCS'18], our network decomposition implies that
$$\mathsf{P}\textit{-}\mathsf{RLOCAL} = \mathsf{P}\textit{-}\mathsf{LOCAL}.$$
That is, for any problem whose solution can be checked deterministically in
polylogarithmic-time, any polylogarithmic-time randomized algorithm can be
derandomized to a polylogarithmic-time deterministic algorithm. Informally, for
the standard first-order interpretation of efficiency as polylogarithmic-time,
distributed algorithms do not need randomness for efficiency.
By known connections, our result leads also to substantially faster
randomized distributed algorithms for a number of well-studied problems
including $(\Delta+1)$-coloring, maximal independent set, and Lov\'{a}sz Local
Lemma, as well as massively parallel algorithms for $(\Delta+1)$-coloring.
|
[
{
"created": "Thu, 25 Jul 2019 10:01:49 GMT",
"version": "v1"
},
{
"created": "Sun, 10 May 2020 18:24:18 GMT",
"version": "v2"
}
] |
2020-05-12
|
[
[
"Rozhoň",
"Václav",
""
],
[
"Ghaffari",
"Mohsen",
""
]
] |
We present a simple polylogarithmic-time deterministic distributed algorithm for network decomposition. This improves on a celebrated $2^{O(\sqrt{\log n})}$-time algorithm of Panconesi and Srinivasan [STOC'92] and settles a central and long-standing question in distributed graph algorithms. It also leads to the first polylogarithmic-time deterministic distributed algorithms for numerous other problems, hence resolving several well-known and decades-old open problems, including Linial's question about the deterministic complexity of maximal independent set [FOCS'87; SICOMP'92]---which had been called the most outstanding problem in the area. The main implication is a more general distributed derandomization theorem: Put together with the results of Ghaffari, Kuhn, and Maus [STOC'17] and Ghaffari, Harris, and Kuhn [FOCS'18], our network decomposition implies that $$\mathsf{P}\textit{-}\mathsf{RLOCAL} = \mathsf{P}\textit{-}\mathsf{LOCAL}.$$ That is, for any problem whose solution can be checked deterministically in polylogarithmic-time, any polylogarithmic-time randomized algorithm can be derandomized to a polylogarithmic-time deterministic algorithm. Informally, for the standard first-order interpretation of efficiency as polylogarithmic-time, distributed algorithms do not need randomness for efficiency. By known connections, our result leads also to substantially faster randomized distributed algorithms for a number of well-studied problems including $(\Delta+1)$-coloring, maximal independent set, and Lov\'{a}sz Local Lemma, as well as massively parallel algorithms for $(\Delta+1)$-coloring.
|
2401.10122
|
Zhechen Li
|
Zhechen Li, Zimai Guo, Lirong Xia, Yongzhi Cao, Hanpin Wang
|
Differentially Private Approval-Based Committee Voting
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we investigate tradeoffs between differential privacy (DP) and
several voting axioms for approval-based committee voting, including
proportionality, Pareto efficiency, Condorcet criterion, and strategyproofness.
For all the axioms except strategyproofness, we show their incompatibility with
DP, and provide both upper and lower bounds for their tradeoffs with DP.
Furthermore, we show that any $\epsilon$-DP mechanism satisfies
$e^{-\epsilon}$-cardinality strategyproofness, and the satisfaction can be
further improved if the mechanism satisfies monotonicity.
|
[
{
"created": "Thu, 18 Jan 2024 16:51:51 GMT",
"version": "v1"
}
] |
2024-01-19
|
[
[
"Li",
"Zhechen",
""
],
[
"Guo",
"Zimai",
""
],
[
"Xia",
"Lirong",
""
],
[
"Cao",
"Yongzhi",
""
],
[
"Wang",
"Hanpin",
""
]
] |
In this paper, we investigate tradeoffs between differential privacy (DP) and several voting axioms for approval-based committee voting, including proportionality, Pareto efficiency, Condorcet criterion, and strategyproofness. For all the axioms except strategyproofness, we show their incompatibility with DP, and provide both upper and lower bounds for their tradeoffs with DP. Furthermore, we show that any $\epsilon$-DP mechanism satisfies $e^{-\epsilon}$-cardinality strategyproofness, and the satisfaction can be further improved if the mechanism satisfies monotonicity.
|
2303.10280
|
Ketul Shah
|
Arun V. Reddy, Ketul Shah, William Paul, Rohita Mocharla, Judy
Hoffman, Kapil D. Katyal, Dinesh Manocha, Celso M. de Melo, Rama Chellappa
|
Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset
and Baseline Performances
|
ICRA 2023. The first two authors contributed equally. Dataset
available at: https://github.com/reddyav1/RoCoG-v2
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human action recognition is a challenging problem, particularly when there is
high variability in factors such as subject appearance, backgrounds and
viewpoint. While deep neural networks (DNNs) have been shown to perform well on
action recognition tasks, they typically require large amounts of high-quality
labeled data to achieve robust performance across a variety of conditions.
Synthetic data has shown promise as a way to avoid the substantial costs and
potential ethical concerns associated with collecting and labeling enormous
amounts of data in the real world. However, synthetic data may differ from real
data in important ways. This phenomenon, known as \textit{domain shift}, can
limit the utility of synthetic data in robotics applications. To mitigate the
effects of domain shift, substantial effort is being dedicated to the
development of domain adaptation (DA) techniques. Yet, much remains to be
understood about how best to develop these techniques. In this paper, we
introduce a new dataset called Robot Control Gestures (RoCoG-v2). The dataset
is composed of both real and synthetic videos from seven gesture classes, and
is intended to support the study of synthetic-to-real domain shift for
video-based action recognition. Our work expands upon existing datasets by
focusing the action classes on gestures for human-robot teaming, as well as by
enabling investigation of domain shift in both ground and aerial views. We
present baseline results using state-of-the-art action recognition and domain
adaptation algorithms and offer initial insight on tackling the
synthetic-to-real and ground-to-air domain shifts.
|
[
{
"created": "Fri, 17 Mar 2023 23:23:55 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2024 18:49:11 GMT",
"version": "v2"
}
] |
2024-08-05
|
[
[
"Reddy",
"Arun V.",
""
],
[
"Shah",
"Ketul",
""
],
[
"Paul",
"William",
""
],
[
"Mocharla",
"Rohita",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Katyal",
"Kapil D.",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"de Melo",
"Celso M.",
""
],
[
"Chellappa",
"Rama",
""
]
] |
Human action recognition is a challenging problem, particularly when there is high variability in factors such as subject appearance, backgrounds and viewpoint. While deep neural networks (DNNs) have been shown to perform well on action recognition tasks, they typically require large amounts of high-quality labeled data to achieve robust performance across a variety of conditions. Synthetic data has shown promise as a way to avoid the substantial costs and potential ethical concerns associated with collecting and labeling enormous amounts of data in the real world. However, synthetic data may differ from real data in important ways. This phenomenon, known as \textit{domain shift}, can limit the utility of synthetic data in robotics applications. To mitigate the effects of domain shift, substantial effort is being dedicated to the development of domain adaptation (DA) techniques. Yet, much remains to be understood about how best to develop these techniques. In this paper, we introduce a new dataset called Robot Control Gestures (RoCoG-v2). The dataset is composed of both real and synthetic videos from seven gesture classes, and is intended to support the study of synthetic-to-real domain shift for video-based action recognition. Our work expands upon existing datasets by focusing the action classes on gestures for human-robot teaming, as well as by enabling investigation of domain shift in both ground and aerial views. We present baseline results using state-of-the-art action recognition and domain adaptation algorithms and offer initial insight on tackling the synthetic-to-real and ground-to-air domain shifts.
|
2311.11396
|
Dmitry Kangin
|
Plamen Angelov, Dmitry Kangin, Ziyang Zhang
|
Towards interpretable-by-design deep learning algorithms
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The proposed framework named IDEAL (Interpretable-by-design DEep learning
ALgorithms) recasts the standard supervised classification problem into a
function of similarity to a set of prototypes derived from the training data,
while taking advantage of existing latent spaces of large neural networks
forming so-called Foundation Models (FM). This addresses the issue of
explainability (stage B) while retaining the benefits from the tremendous
achievements offered by DL models (e.g., visual transformers, ViT) pre-trained
on huge data sets such as IG-3.6B + ImageNet-1K or LVD-142M (stage A). We show
that one can turn such DL models into conceptually simpler,
explainable-through-prototypes ones.
The key findings can be summarized as follows: (1) the proposed models are
interpretable through prototypes, mitigating the issue of confounded
interpretations, (2) the proposed IDEAL framework circumvents the issue of
catastrophic forgetting, allowing efficient class-incremental learning, and (3)
the proposed IDEAL approach demonstrates that ViT architectures narrow the gap
between finetuned and non-finetuned models, allowing for transfer learning in a
fraction of time \textbf{without} finetuning of the feature space on a target
dataset with iterative supervised methods.
|
[
{
"created": "Sun, 19 Nov 2023 18:40:49 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Angelov",
"Plamen",
""
],
[
"Kangin",
"Dmitry",
""
],
[
"Zhang",
"Ziyang",
""
]
] |
The proposed framework named IDEAL (Interpretable-by-design DEep learning ALgorithms) recasts the standard supervised classification problem into a function of similarity to a set of prototypes derived from the training data, while taking advantage of existing latent spaces of large neural networks forming so-called Foundation Models (FM). This addresses the issue of explainability (stage B) while retaining the benefits from the tremendous achievements offered by DL models (e.g., visual transformers, ViT) pre-trained on huge data sets such as IG-3.6B + ImageNet-1K or LVD-142M (stage A). We show that one can turn such DL models into conceptually simpler, explainable-through-prototypes ones. The key findings can be summarized as follows: (1) the proposed models are interpretable through prototypes, mitigating the issue of confounded interpretations, (2) the proposed IDEAL framework circumvents the issue of catastrophic forgetting, allowing efficient class-incremental learning, and (3) the proposed IDEAL approach demonstrates that ViT architectures narrow the gap between finetuned and non-finetuned models, allowing for transfer learning in a fraction of time \textbf{without} finetuning of the feature space on a target dataset with iterative supervised methods.
|
2003.10895
|
Amir Livne
|
Amir Livne, Alex Bronstein, Ron Kimmel, Ziv Aviv, Shahaf Grofit
|
Do We Need Depth in State-Of-The-Art Face Authentication?
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some face recognition methods are designed to utilize geometric information
extracted from depth sensors to overcome the weaknesses of single-image based
recognition technologies. However, the accurate acquisition of the depth
profile is an expensive and challenging process. Here, we introduce a novel
method that learns to recognize faces from stereo camera systems without the
need to explicitly compute the facial surface or depth map. The raw face stereo
images along with the location in the image from which the face is extracted
allow the proposed CNN to improve the recognition task while avoiding the need
to explicitly handle the geometric structure of the face. This way, we keep the
simplicity and cost efficiency of identity authentication from a single image,
while enjoying the benefits of geometric data without explicitly reconstructing
it. We demonstrate that the suggested method outperforms both existing
single-image and explicit depth based methods on large-scale benchmarks, and
is even capable of recognizing spoofing attacks. We also provide an ablation study
that shows that the suggested method uses the face locations in the left and
right images to encode informative features that improve the overall
performance.
|
[
{
"created": "Tue, 24 Mar 2020 14:51:25 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Nov 2020 11:52:04 GMT",
"version": "v2"
}
] |
2020-11-11
|
[
[
"Livne",
"Amir",
""
],
[
"Bronstein",
"Alex",
""
],
[
"Kimmel",
"Ron",
""
],
[
"Aviv",
"Ziv",
""
],
[
"Grofit",
"Shahaf",
""
]
] |
Some face recognition methods are designed to utilize geometric information extracted from depth sensors to overcome the weaknesses of single-image based recognition technologies. However, the accurate acquisition of the depth profile is an expensive and challenging process. Here, we introduce a novel method that learns to recognize faces from stereo camera systems without the need to explicitly compute the facial surface or depth map. The raw face stereo images along with the location in the image from which the face is extracted allow the proposed CNN to improve the recognition task while avoiding the need to explicitly handle the geometric structure of the face. This way, we keep the simplicity and cost efficiency of identity authentication from a single image, while enjoying the benefits of geometric data without explicitly reconstructing it. We demonstrate that the suggested method outperforms both existing single-image and explicit depth based methods on large-scale benchmarks, and is even capable of recognizing spoofing attacks. We also provide an ablation study that shows that the suggested method uses the face locations in the left and right images to encode informative features that improve the overall performance.
|
1707.05982
|
Sergey Triputen
|
Sergey Triputen, Kristiaan Schreve, Viktor Tkachev and Matthias Ratsch
|
Closed-form Solution for IMU based LSD-SLAM Point Cloud Conversion into
the Scaled 3D World Environment
|
6 pages, 8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SLAM is a very popular research stream in computer vision and robotics
nowadays. For more effective SLAM implementation it is necessary to have
reliable information about the environment, and the data should be aligned
and scaled according to the real world coordinate system. Monocular SLAM
research is an attractive sub-stream because of its low equipment cost, small
size, and light weight. In this paper we present a way to build a conversion from LSD-SLAM
coordinate space to the real world coordinates using a true metric scale with
IMU sensor data implementation. The causes of differences between the real and
calculated spaces are explained and the possibility of conversions between the
spaces is proved. Additionally, a closed-form solution for the inter-space
transformation calculation is presented. A synthetic method for generating
highly accurate and well-controlled input data for the LSD-SLAM algorithm is
also presented. Finally, the reconstructed 3D environment representation is
delivered as an output of the implemented conversion.
|
[
{
"created": "Wed, 19 Jul 2017 08:56:04 GMT",
"version": "v1"
}
] |
2017-07-20
|
[
[
"Triputen",
"Sergey",
""
],
[
"Schreve",
"Kristiaan",
""
],
[
"Tkachev",
"Viktor",
""
],
[
"Ratsch",
"Matthias",
""
]
] |
SLAM is a very popular research stream in computer vision and robotics nowadays. For more effective SLAM implementation it is necessary to have reliable information about the environment, and the data should be aligned and scaled according to the real world coordinate system. Monocular SLAM research is an attractive sub-stream because of its low equipment cost, small size, and light weight. In this paper we present a way to build a conversion from LSD-SLAM coordinate space to the real world coordinates using a true metric scale with IMU sensor data implementation. The causes of differences between the real and calculated spaces are explained and the possibility of conversions between the spaces is proved. Additionally, a closed-form solution for the inter-space transformation calculation is presented. A synthetic method for generating highly accurate and well-controlled input data for the LSD-SLAM algorithm is also presented. Finally, the reconstructed 3D environment representation is delivered as an output of the implemented conversion.
|
2403.12945
|
Karl Pertsch
|
Alexander Khazatsky, Karl Pertsch, Suraj Nair, Ashwin Balakrishna,
Sudeep Dasari, Siddharth Karamcheti, Soroush Nasiriany, Mohan Kumar Srirama,
Lawrence Yunliang Chen, Kirsty Ellis, Peter David Fagan, Joey Hejna, Masha
Itkina, Marion Lepert, Yecheng Jason Ma, Patrick Tree Miller, Jimmy Wu,
Suneel Belkhale, Shivin Dass, Huy Ha, Arhan Jain, Abraham Lee, Youngwoon Lee,
Marius Memmel, Sungjae Park, Ilija Radosavovic, Kaiyuan Wang, Albert Zhan,
Kevin Black, Cheng Chi, Kyle Beltran Hatch, Shan Lin, Jingpei Lu, Jean
Mercat, Abdul Rehman, Pannag R Sanketi, Archit Sharma, Cody Simpson, Quan
Vuong, Homer Rich Walke, Blake Wulfe, Ted Xiao, Jonathan Heewon Yang, Arefeh
Yavary, Tony Z. Zhao, Christopher Agia, Rohan Baijal, Mateo Guaman Castro,
Daphne Chen, Qiuyu Chen, Trinity Chung, Jaimyn Drake, Ethan Paul Foster,
Jensen Gao, David Antonio Herrera, Minho Heo, Kyle Hsu, Jiaheng Hu, Donovon
Jackson, Charlotte Le, Yunshuang Li, Kevin Lin, Roy Lin, Zehan Ma, Abhiram
Maddukuri, Suvir Mirchandani, Daniel Morton, Tony Nguyen, Abigail O'Neill,
Rosario Scalise, Derick Seale, Victor Son, Stephen Tian, Emi Tran, Andrew E.
Wang, Yilin Wu, Annie Xie, Jingyun Yang, Patrick Yin, Yunchu Zhang, Osbert
Bastani, Glen Berseth, Jeannette Bohg, Ken Goldberg, Abhinav Gupta, Abhishek
Gupta, Dinesh Jayaraman, Joseph J Lim, Jitendra Malik, Roberto
Mart\'in-Mart\'in, Subramanian Ramamoorthy, Dorsa Sadigh, Shuran Song, Jiajun
Wu, Michael C. Yip, Yuke Zhu, Thomas Kollar, Sergey Levine, Chelsea Finn
|
DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset
|
Project website: https://droid-dataset.github.io/
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The creation of large, diverse, high-quality robot manipulation datasets is
an important stepping stone on the path toward more capable and robust robotic
manipulation policies. However, creating such datasets is challenging:
collecting robot manipulation data in diverse environments poses logistical and
safety challenges and requires substantial investments in hardware and human
labour. As a result, even the most general robot manipulation policies today
are mostly trained on data collected in a small number of environments with
limited scene and task diversity. In this work, we introduce DROID (Distributed
Robot Interaction Dataset), a diverse robot manipulation dataset with 76k
demonstration trajectories or 350 hours of interaction data, collected across
564 scenes and 84 tasks by 50 data collectors in North America, Asia, and
Europe over the course of 12 months. We demonstrate that training with DROID
leads to policies with higher performance and improved generalization ability.
We open source the full dataset, policy learning code, and a detailed guide for
reproducing our robot hardware setup.
|
[
{
"created": "Tue, 19 Mar 2024 17:48:38 GMT",
"version": "v1"
}
] |
2024-03-20
|
[
[
"Khazatsky",
"Alexander",
""
],
[
"Pertsch",
"Karl",
""
],
[
"Nair",
"Suraj",
""
],
[
"Balakrishna",
"Ashwin",
""
],
[
"Dasari",
"Sudeep",
""
],
[
"Karamcheti",
"Siddharth",
""
],
[
"Nasiriany",
"Soroush",
""
],
[
"Srirama",
"Mohan Kumar",
""
],
[
"Chen",
"Lawrence Yunliang",
""
],
[
"Ellis",
"Kirsty",
""
],
[
"Fagan",
"Peter David",
""
],
[
"Hejna",
"Joey",
""
],
[
"Itkina",
"Masha",
""
],
[
"Lepert",
"Marion",
""
],
[
"Ma",
"Yecheng Jason",
""
],
[
"Miller",
"Patrick Tree",
""
],
[
"Wu",
"Jimmy",
""
],
[
"Belkhale",
"Suneel",
""
],
[
"Dass",
"Shivin",
""
],
[
"Ha",
"Huy",
""
],
[
"Jain",
"Arhan",
""
],
[
"Lee",
"Abraham",
""
],
[
"Lee",
"Youngwoon",
""
],
[
"Memmel",
"Marius",
""
],
[
"Park",
"Sungjae",
""
],
[
"Radosavovic",
"Ilija",
""
],
[
"Wang",
"Kaiyuan",
""
],
[
"Zhan",
"Albert",
""
],
[
"Black",
"Kevin",
""
],
[
"Chi",
"Cheng",
""
],
[
"Hatch",
"Kyle Beltran",
""
],
[
"Lin",
"Shan",
""
],
[
"Lu",
"Jingpei",
""
],
[
"Mercat",
"Jean",
""
],
[
"Rehman",
"Abdul",
""
],
[
"Sanketi",
"Pannag R",
""
],
[
"Sharma",
"Archit",
""
],
[
"Simpson",
"Cody",
""
],
[
"Vuong",
"Quan",
""
],
[
"Walke",
"Homer Rich",
""
],
[
"Wulfe",
"Blake",
""
],
[
"Xiao",
"Ted",
""
],
[
"Yang",
"Jonathan Heewon",
""
],
[
"Yavary",
"Arefeh",
""
],
[
"Zhao",
"Tony Z.",
""
],
[
"Agia",
"Christopher",
""
],
[
"Baijal",
"Rohan",
""
],
[
"Castro",
"Mateo Guaman",
""
],
[
"Chen",
"Daphne",
""
],
[
"Chen",
"Qiuyu",
""
],
[
"Chung",
"Trinity",
""
],
[
"Drake",
"Jaimyn",
""
],
[
"Foster",
"Ethan Paul",
""
],
[
"Gao",
"Jensen",
""
],
[
"Herrera",
"David Antonio",
""
],
[
"Heo",
"Minho",
""
],
[
"Hsu",
"Kyle",
""
],
[
"Hu",
"Jiaheng",
""
],
[
"Jackson",
"Donovon",
""
],
[
"Le",
"Charlotte",
""
],
[
"Li",
"Yunshuang",
""
],
[
"Lin",
"Kevin",
""
],
[
"Lin",
"Roy",
""
],
[
"Ma",
"Zehan",
""
],
[
"Maddukuri",
"Abhiram",
""
],
[
"Mirchandani",
"Suvir",
""
],
[
"Morton",
"Daniel",
""
],
[
"Nguyen",
"Tony",
""
],
[
"O'Neill",
"Abigail",
""
],
[
"Scalise",
"Rosario",
""
],
[
"Seale",
"Derick",
""
],
[
"Son",
"Victor",
""
],
[
"Tian",
"Stephen",
""
],
[
"Tran",
"Emi",
""
],
[
"Wang",
"Andrew E.",
""
],
[
"Wu",
"Yilin",
""
],
[
"Xie",
"Annie",
""
],
[
"Yang",
"Jingyun",
""
],
[
"Yin",
"Patrick",
""
],
[
"Zhang",
"Yunchu",
""
],
[
"Bastani",
"Osbert",
""
],
[
"Berseth",
"Glen",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Gupta",
"Abhinav",
""
],
[
"Gupta",
"Abhishek",
""
],
[
"Jayaraman",
"Dinesh",
""
],
[
"Lim",
"Joseph J",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Martín-Martín",
"Roberto",
""
],
[
"Ramamoorthy",
"Subramanian",
""
],
[
"Sadigh",
"Dorsa",
""
],
[
"Song",
"Shuran",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Yip",
"Michael C.",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Kollar",
"Thomas",
""
],
[
"Levine",
"Sergey",
""
],
[
"Finn",
"Chelsea",
""
]
] |
The creation of large, diverse, high-quality robot manipulation datasets is an important stepping stone on the path toward more capable and robust robotic manipulation policies. However, creating such datasets is challenging: collecting robot manipulation data in diverse environments poses logistical and safety challenges and requires substantial investments in hardware and human labour. As a result, even the most general robot manipulation policies today are mostly trained on data collected in a small number of environments with limited scene and task diversity. In this work, we introduce DROID (Distributed Robot Interaction Dataset), a diverse robot manipulation dataset with 76k demonstration trajectories or 350 hours of interaction data, collected across 564 scenes and 84 tasks by 50 data collectors in North America, Asia, and Europe over the course of 12 months. We demonstrate that training with DROID leads to policies with higher performance and improved generalization ability. We open source the full dataset, policy learning code, and a detailed guide for reproducing our robot hardware setup.
|
0906.0798
|
Subhash Kak
|
Subhash Kak
|
Single Neuron Memories and the Network's Proximity Matrix
|
10 pages
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper extends the treatment of single-neuron memories obtained by the
B-matrix approach. The spreading of the activity within the network is
determined by the network's proximity matrix which represents the separations
amongst the neurons through the neural pathways.
|
[
{
"created": "Wed, 3 Jun 2009 23:10:25 GMT",
"version": "v1"
}
] |
2009-06-05
|
[
[
"Kak",
"Subhash",
""
]
] |
This paper extends the treatment of single-neuron memories obtained by the B-matrix approach. The spreading of the activity within the network is determined by the network's proximity matrix which represents the separations amongst the neurons through the neural pathways.
|
2305.07598
|
Hakjin Lee
|
Hakjin Lee, Minki Song, Jamyoung Koo, Junghoon Seo
|
Hausdorff Distance Matching with Adaptive Query Denoising for Rotated
Detection Transformer
|
Under review, 16 pages, 12 tables, 8 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The Detection Transformer (DETR) has come to play a pivotal role in object
detection tasks, setting new performance benchmarks due to its end-to-end
design and scalability. Despite its advancements, the application of DETR in
detecting rotated objects has demonstrated suboptimal performance relative to
established oriented object detectors. Our analysis identifies a key
limitation: the L1 cost used in Hungarian Matching leads to duplicate
predictions due to the square-like problem in oriented object detection,
thereby obstructing the training process of the detector. We introduce a
Hausdorff distance-based cost for Hungarian matching, which more accurately
quantifies the discrepancy between predictions and ground truths. Moreover, we
note that a static denoising approach hampers the training of rotated DETR,
particularly when the detector's predictions surpass the quality of noised
ground truths. We propose an adaptive query denoising technique, employing
Hungarian matching to selectively filter out superfluous noised queries that no
longer contribute to model improvement. Our proposed modifications to DETR have
resulted in superior performance, surpassing previous rotated DETR models and
other alternatives. This is evidenced by our model's state-of-the-art
achievements in benchmarks such as DOTA-v1.0/v1.5/v2.0, and DIOR-R.
|
[
{
"created": "Fri, 12 May 2023 16:42:54 GMT",
"version": "v1"
},
{
"created": "Mon, 15 May 2023 07:01:45 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Jun 2023 09:06:28 GMT",
"version": "v3"
},
{
"created": "Wed, 29 Nov 2023 08:56:29 GMT",
"version": "v4"
}
] |
2023-11-30
|
[
[
"Lee",
"Hakjin",
""
],
[
"Song",
"Minki",
""
],
[
"Koo",
"Jamyoung",
""
],
[
"Seo",
"Junghoon",
""
]
] |
The Detection Transformer (DETR) has come to play a pivotal role in object detection tasks, setting new performance benchmarks due to its end-to-end design and scalability. Despite its advancements, the application of DETR in detecting rotated objects has demonstrated suboptimal performance relative to established oriented object detectors. Our analysis identifies a key limitation: the L1 cost used in Hungarian Matching leads to duplicate predictions due to the square-like problem in oriented object detection, thereby obstructing the training process of the detector. We introduce a Hausdorff distance-based cost for Hungarian matching, which more accurately quantifies the discrepancy between predictions and ground truths. Moreover, we note that a static denoising approach hampers the training of rotated DETR, particularly when the detector's predictions surpass the quality of noised ground truths. We propose an adaptive query denoising technique, employing Hungarian matching to selectively filter out superfluous noised queries that no longer contribute to model improvement. Our proposed modifications to DETR have resulted in superior performance, surpassing previous rotated DETR models and other alternatives. This is evidenced by our model's state-of-the-art achievements in benchmarks such as DOTA-v1.0/v1.5/v2.0, and DIOR-R.
|
1510.04221
|
Venet Osmani
|
Enrique Garcia-Ceja, Venet Osmani, Oscar Mayora
|
Automatic Stress Detection in Working Environments from Smartphones'
Accelerometer Data: A First Step
|
in IEEE Journal of Biomedical and Health Informatics, 2015
| null |
10.1109/JBHI.2015.2446195
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increase in workload across many organisations and the consequent increase in
occupational stress is negatively affecting the health of the workforce.
Measuring stress and other human psychological dynamics is difficult due to
the subjective nature of self-reporting and variability between and within
individuals. With the advent of smartphones it is now possible to monitor
diverse aspects of human behaviour, including objectively measured behaviour
related to psychological state and consequently stress. We have used data from
the smartphone's built-in accelerometer to detect behaviour that correlates
with subjects' stress levels. The accelerometer sensor was chosen because it raises
fewer privacy concerns (in comparison to location, video or audio recording,
for example) and because its low power consumption makes it suitable to be
embedded in smaller wearable devices, such as fitness trackers. 30 subjects
from two different organizations were provided with smartphones. The study
lasted for 8 weeks and was conducted in real working environments, with no
constraints whatsoever placed upon smartphone usage. The subjects reported
their perceived stress levels three times during their working hours. Using
a combination of statistical models to classify self-reported stress levels, we
achieved a maximum overall accuracy of 71% for user-specific models and an
accuracy of 60% for the use of similar-users models, relying solely on data
from a single accelerometer.
|
[
{
"created": "Wed, 14 Oct 2015 18:10:28 GMT",
"version": "v1"
}
] |
2015-10-15
|
[
[
"Garcia-Ceja",
"Enrique",
""
],
[
"Osmani",
"Venet",
""
],
[
"Mayora",
"Oscar",
""
]
] |
The increase in workload across many organisations and the consequent increase in occupational stress is negatively affecting the health of the workforce. Measuring stress and other human psychological dynamics is difficult due to the subjective nature of self-reporting and variability between and within individuals. With the advent of smartphones it is now possible to monitor diverse aspects of human behaviour, including objectively measured behaviour related to psychological state and consequently stress. We have used data from the smartphone's built-in accelerometer to detect behaviour that correlates with subjects' stress levels. The accelerometer sensor was chosen because it raises fewer privacy concerns (in comparison to location, video or audio recording, for example) and because its low power consumption makes it suitable to be embedded in smaller wearable devices, such as fitness trackers. 30 subjects from two different organizations were provided with smartphones. The study lasted for 8 weeks and was conducted in real working environments, with no constraints whatsoever placed upon smartphone usage. The subjects reported their perceived stress levels three times during their working hours. Using a combination of statistical models to classify self-reported stress levels, we achieved a maximum overall accuracy of 71% for user-specific models and an accuracy of 60% for the use of similar-users models, relying solely on data from a single accelerometer.
|
1906.04777
|
Pieter Peers
|
Victoria L. Cooper, James C. Bieron, Pieter Peers
|
Estimating Homogeneous Data-driven BRDF Parameters from a Reflectance
Map under Known Natural Lighting
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we demonstrate robust estimation of the model parameters of a
fully-linear data-driven BRDF model from a reflectance map under known natural
lighting. To regularize the estimation of the model parameters, we leverage the
reflectance similarities within a material class. We approximate the space of
homogeneous BRDFs using a Gaussian mixture model, and assign a material class
to each Gaussian in the mixture model. We formulate the estimation of the model
parameters as a non-linear maximum a-posteriori optimization, and introduce a
linear approximation that estimates a solution per material class from which
the best solution is selected. We demonstrate the efficacy and robustness of
our method using the MERL BRDF database under a variety of natural lighting
conditions, and we provide a proof-of-concept real-world experiment.
|
[
{
"created": "Tue, 11 Jun 2019 19:28:09 GMT",
"version": "v1"
}
] |
2019-06-13
|
[
[
"Cooper",
"Victoria L.",
""
],
[
"Bieron",
"James C.",
""
],
[
"Peers",
"Pieter",
""
]
] |
In this paper we demonstrate robust estimation of the model parameters of a fully-linear data-driven BRDF model from a reflectance map under known natural lighting. To regularize the estimation of the model parameters, we leverage the reflectance similarities within a material class. We approximate the space of homogeneous BRDFs using a Gaussian mixture model, and assign a material class to each Gaussian in the mixture model. We formulate the estimation of the model parameters as a non-linear maximum a-posteriori optimization, and introduce a linear approximation that estimates a solution per material class from which the best solution is selected. We demonstrate the efficacy and robustness of our method using the MERL BRDF database under a variety of natural lighting conditions, and we provide a proof-of-concept real-world experiment.
|
1904.11563
|
Suayb Arslan
|
Suayb S. Arslan
|
Array BP-XOR Codes for Hierarchically Distributed Matrix Multiplication
|
22 pages, 5 figures, 4 tables. Accepted to IEEE Transactions on
Information Theory, 2021. arXiv admin note: text overlap with
arXiv:1709.07949
| null |
10.1109/TIT.2021.3132043
| null |
cs.IT cs.DC math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel fault-tolerant computation technique based on array Belief
Propagation (BP)-decodable XOR (BP-XOR) codes is proposed for distributed
matrix-matrix multiplication. The proposed scheme is shown to be configurable
and suited for modern hierarchical compute architectures such as Graphical
Processing Units (GPUs) equipped with multiple nodes, whereby each has many
small independent processing units with increased core-to-core communications.
The proposed scheme is shown to outperform a few of the well-known earlier
strategies in terms of total end-to-end execution time while in the presence of
slow nodes, called $stragglers$. This performance advantage is due to the
careful design of array codes which distributes the encoding operation over the
cluster (slave) nodes at the expense of increased master-slave communication.
An interesting trade-off between end-to-end latency and total communication
cost is precisely described. In addition, to be able to address an identified
problem of scaling stragglers, an asymptotic version of array BP-XOR codes
based on projection geometry is proposed at the expense of some computation
overhead. A thorough latency analysis is conducted for all schemes to
demonstrate that the proposed scheme achieves order-optimal computation in both
the sublinear as well as the linear regimes in the size of the computed product
from an end-to-end delay perspective.
|
[
{
"created": "Thu, 25 Apr 2019 19:59:47 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2019 16:28:32 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Dec 2021 12:33:11 GMT",
"version": "v3"
}
] |
2021-12-13
|
[
[
"Arslan",
"Suayb S.",
""
]
] |
A novel fault-tolerant computation technique based on array Belief Propagation (BP)-decodable XOR (BP-XOR) codes is proposed for distributed matrix-matrix multiplication. The proposed scheme is shown to be configurable and suited for modern hierarchical compute architectures such as Graphical Processing Units (GPUs) equipped with multiple nodes, where each node has many small independent processing units with increased core-to-core communications. The proposed scheme is shown to outperform a few of the well-known earlier strategies in terms of total end-to-end execution time while in the presence of slow nodes, called $stragglers$. This performance advantage is due to the careful design of array codes which distributes the encoding operation over the cluster (slave) nodes at the expense of increased master-slave communication. An interesting trade-off between end-to-end latency and total communication cost is precisely described. In addition, to be able to address an identified problem of scaling stragglers, an asymptotic version of array BP-XOR codes based on projection geometry is proposed at the expense of some computation overhead. A thorough latency analysis is conducted for all schemes to demonstrate that the proposed scheme achieves order-optimal computation in both the sublinear as well as the linear regimes in the size of the computed product from an end-to-end delay perspective.
|
1902.03534
|
Ali Vakilian
|
Piotr Indyk, Sepideh Mahabadi, Ronitt Rubinfeld, Ali Vakilian, Anak
Yodpinyanee
|
Set Cover in Sub-linear Time
| null | null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the classic set cover problem from the perspective of sub-linear
algorithms. Given access to a collection of $m$ sets over $n$ elements in the
query model, we show that sub-linear algorithms derived from existing
techniques have almost tight query complexities.
On one hand, first we show an adaptation of the streaming algorithm presented
in Har-Peled et al. [2016] to the sub-linear query model, that returns an
$\alpha$-approximate cover using $\tilde{O}(m(n/k)^{1/(\alpha-1)} + nk)$
queries to the input, where $k$ denotes the value of a minimum set cover. We
then complement this upper bound by proving that for lower values of $k$, the
required number of queries is $\tilde{\Omega}(m(n/k)^{1/(2\alpha)})$, even for
estimating the optimal cover size. Moreover, we prove that even checking
whether a given collection of sets covers all the elements would require
$\Omega(nk)$ queries. These two lower bounds provide strong evidence that the
upper bound is almost tight for certain values of the parameter $k$.
On the other hand, we show that this bound is not optimal for larger values
of the parameter $k$, as there exists a $(1+\varepsilon)$-approximation
algorithm with $\tilde{O}(mn/k\varepsilon^2)$ queries. We show that this bound
is essentially tight for sufficiently small constant $\varepsilon$, by
establishing a lower bound of $\tilde{\Omega}(mn/k)$ query complexity.
|
[
{
"created": "Sun, 10 Feb 2019 04:10:34 GMT",
"version": "v1"
}
] |
2019-02-12
|
[
[
"Indyk",
"Piotr",
""
],
[
"Mahabadi",
"Sepideh",
""
],
[
"Rubinfeld",
"Ronitt",
""
],
[
"Vakilian",
"Ali",
""
],
[
"Yodpinyanee",
"Anak",
""
]
] |
We study the classic set cover problem from the perspective of sub-linear algorithms. Given access to a collection of $m$ sets over $n$ elements in the query model, we show that sub-linear algorithms derived from existing techniques have almost tight query complexities. On one hand, first we show an adaptation of the streaming algorithm presented in Har-Peled et al. [2016] to the sub-linear query model, that returns an $\alpha$-approximate cover using $\tilde{O}(m(n/k)^{1/(\alpha-1)} + nk)$ queries to the input, where $k$ denotes the value of a minimum set cover. We then complement this upper bound by proving that for lower values of $k$, the required number of queries is $\tilde{\Omega}(m(n/k)^{1/(2\alpha)})$, even for estimating the optimal cover size. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require $\Omega(nk)$ queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter $k$. On the other hand, we show that this bound is not optimal for larger values of the parameter $k$, as there exists a $(1+\varepsilon)$-approximation algorithm with $\tilde{O}(mn/k\varepsilon^2)$ queries. We show that this bound is essentially tight for sufficiently small constant $\varepsilon$, by establishing a lower bound of $\tilde{\Omega}(mn/k)$ query complexity.
|
1709.10063
|
Sebastian Kuhnert
|
V. Arvind, Johannes K\"obler, Sebastian Kuhnert and Jacobo Toran
|
Finding Small Weight Isomorphisms with Additional Constraints is
Fixed-Parameter Tractable
|
An extended abstract of this article appears in the proceedings of
IPEC 2017
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lubiw showed that several variants of Graph Isomorphism are NP-complete,
where the solutions are required to satisfy certain additional constraints
[SICOMP 10, 1981]. One of these, called Isomorphism With Restrictions, is to
decide for two given graphs $X_1=(V,E_1)$ and $X_2=(V,E_2)$ and a subset
$R\subseteq V\times V$ of forbidden pairs whether there is an isomorphism $\pi$
from $X_1$ to $X_2$ such that $\pi(i)\neq j$ for all $(i,j)\in R$. We prove
that this problem and several of its generalizations are in fact in FPT:
- The problem of deciding whether there is an isomorphism between two graphs
that moves k vertices and satisfies Lubiw-style constraints is in FPT, with k
and the size of $R$ as parameters. The problem remains in FPT if a CNF of such
constraints is allowed. It follows that the problem to decide whether there is
an isomorphism that moves exactly k vertices is in FPT. This solves a question
left open in our article on exact weight automorphisms [STACS 2017].
- When the weight and complexity are unrestricted, finding isomorphisms that
satisfy a CNF of Lubiw-style constraints can be solved in FPT with access to a
GI oracle.
- Checking if there is an isomorphism $\pi$ between two graphs with
complexity t is also in FPT with t as parameter, where the complexity of a
permutation is the Cayley measure defined as the minimum number t such that
$\pi$ can be expressed as a product of t transpositions.
- We consider a more general problem in which the vertex set of a graph X is
partitioned into Red and Blue, and we are interested in an automorphism that
stabilizes Red and Blue and moves exactly k vertices in Blue, where k is the
parameter. This problem was introduced by [Downey and Fellows 1999], and we
showed [STACS 2017] that it is W[1]-hard even with color classes of size 4
inside Red. Now, for color classes of size at most 3 inside Red, we show the
problem is in FPT.
|
[
{
"created": "Thu, 28 Sep 2017 17:08:11 GMT",
"version": "v1"
}
] |
2017-09-29
|
[
[
"Arvind",
"V.",
""
],
[
"Köbler",
"Johannes",
""
],
[
"Kuhnert",
"Sebastian",
""
],
[
"Toran",
"Jacobo",
""
]
] |
Lubiw showed that several variants of Graph Isomorphism are NP-complete, where the solutions are required to satisfy certain additional constraints [SICOMP 10, 1981]. One of these, called Isomorphism With Restrictions, is to decide for two given graphs $X_1=(V,E_1)$ and $X_2=(V,E_2)$ and a subset $R\subseteq V\times V$ of forbidden pairs whether there is an isomorphism $\pi$ from $X_1$ to $X_2$ such that $\pi(i)\neq j$ for all $(i,j)\in R$. We prove that this problem and several of its generalizations are in fact in FPT: - The problem of deciding whether there is an isomorphism between two graphs that moves k vertices and satisfies Lubiw-style constraints is in FPT, with k and the size of $R$ as parameters. The problem remains in FPT if a CNF of such constraints is allowed. It follows that the problem to decide whether there is an isomorphism that moves exactly k vertices is in FPT. This solves a question left open in our article on exact weight automorphisms [STACS 2017]. - When the weight and complexity are unrestricted, finding isomorphisms that satisfy a CNF of Lubiw-style constraints can be solved in FPT with access to a GI oracle. - Checking if there is an isomorphism $\pi$ between two graphs with complexity t is also in FPT with t as parameter, where the complexity of a permutation is the Cayley measure defined as the minimum number t such that $\pi$ can be expressed as a product of t transpositions. - We consider a more general problem in which the vertex set of a graph X is partitioned into Red and Blue, and we are interested in an automorphism that stabilizes Red and Blue and moves exactly k vertices in Blue, where k is the parameter. This problem was introduced by [Downey and Fellows 1999], and we showed [STACS 2017] that it is W[1]-hard even with color classes of size 4 inside Red. Now, for color classes of size at most 3 inside Red, we show the problem is in FPT.
|
2407.19484
|
Hao Shi
|
Zhengyi Jiang, Hao Shi, Zhongyi Huang, Linqi Song, Bo Bai, Gong Zhang,
Hanxu Hou
|
Error Correction Decoding Algorithms of RS Codes Based on An Earlier
Termination Algorithm to Find The Error Locator Polynomial
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reed-Solomon (RS) codes are widely used to correct errors in storage systems.
Finding the error locator polynomial is one of the key steps in the error
correction procedure of RS codes. Modular Approach (MA) is an effective
algorithm for solving the Welch-Berlekamp (WB) key-equation problem to find the
error locator polynomial that needs $2t$ steps, where $t$ is the error
correction capability. In this paper, we first present a new MA algorithm that
only requires $2e$ steps and then propose two fast decoding algorithms for RS
codes based on our MA algorithm, where $e$ is the number of errors and $e\leq
t$. We propose Improved-Frequency Domain Modular Approach (I-FDMA) algorithm
that needs $2e$ steps to solve the error locator polynomial and present our
first decoding algorithm based on the I-FDMA algorithm. We show that, compared
with the existing methods based on MA algorithms, our I-FDMA algorithm can
effectively reduce the decoding complexity of RS codes when $e<t$. Furthermore,
we propose the $t_0$-Shortened I-FDMA ($t_0$-SI-FDMA) algorithm ($t_0$ is a
predetermined even number less than $2t-1$) based on the new termination
mechanism to solve the error number $e$ quickly. We propose our second decoding
algorithm based on the SI-FDMA algorithm for RS codes and show that the
multiplication complexity of our second decoding algorithm is lower than our
first decoding algorithm (the I-FDMA decoding algorithm) when $2e<t_0+1$.
|
[
{
"created": "Sun, 28 Jul 2024 12:32:07 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Jiang",
"Zhengyi",
""
],
[
"Shi",
"Hao",
""
],
[
"Huang",
"Zhongyi",
""
],
[
"Song",
"Linqi",
""
],
[
"Bai",
"Bo",
""
],
[
"Zhang",
"Gong",
""
],
[
"Hou",
"Hanxu",
""
]
] |
Reed-Solomon (RS) codes are widely used to correct errors in storage systems. Finding the error locator polynomial is one of the key steps in the error correction procedure of RS codes. Modular Approach (MA) is an effective algorithm for solving the Welch-Berlekamp (WB) key-equation problem to find the error locator polynomial that needs $2t$ steps, where $t$ is the error correction capability. In this paper, we first present a new MA algorithm that only requires $2e$ steps and then propose two fast decoding algorithms for RS codes based on our MA algorithm, where $e$ is the number of errors and $e\leq t$. We propose Improved-Frequency Domain Modular Approach (I-FDMA) algorithm that needs $2e$ steps to solve the error locator polynomial and present our first decoding algorithm based on the I-FDMA algorithm. We show that, compared with the existing methods based on MA algorithms, our I-FDMA algorithm can effectively reduce the decoding complexity of RS codes when $e<t$. Furthermore, we propose the $t_0$-Shortened I-FDMA ($t_0$-SI-FDMA) algorithm ($t_0$ is a predetermined even number less than $2t-1$) based on the new termination mechanism to solve the error number $e$ quickly. We propose our second decoding algorithm based on the SI-FDMA algorithm for RS codes and show that the multiplication complexity of our second decoding algorithm is lower than our first decoding algorithm (the I-FDMA decoding algorithm) when $2e<t_0+1$.
|
1904.06118
|
Elgin Akp{\i}nar
|
Elgin Akp{\i}nar, Yeliz Ye\c{s}ilada, Selim Temizer
|
Ability and Context Based Adaptive System: A Proposal for Machine
Learning Approach
|
Presented at the CHI'19 Workshop: Addressing the Challenges of
Situationally-Induced Impairments and Disabilities in Mobile Interaction,
2019 (arXiv:1904.05382)
| null | null |
SIID/2019/no02
|
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When we interact with small screen devices, sometimes we make errors, due to
our abilities/disabilities, contextual factors that distract our attention or
problems related to the interface. Recovering from these errors may be time
consuming or cause frustration. Predicting and learning these errors based on
the previous user interaction and contextual factors, and adapting user
interface to prevent from these errors can improve user performance and
satisfaction. In this paper, we propose a system that aims to monitor user
performance and contextual changes and do adaptations based on the user
performance by using machine learning techniques. Here, we briefly present our
systematic literature review findings and discuss our research questions
towards developing such an adaptive system.
|
[
{
"created": "Fri, 12 Apr 2019 09:24:34 GMT",
"version": "v1"
}
] |
2019-04-15
|
[
[
"Akpınar",
"Elgin",
""
],
[
"Yeşilada",
"Yeliz",
""
],
[
"Temizer",
"Selim",
""
]
] |
When we interact with small screen devices, sometimes we make errors, due to our abilities/disabilities, contextual factors that distract our attention or problems related to the interface. Recovering from these errors may be time consuming or cause frustration. Predicting and learning these errors based on the previous user interaction and contextual factors, and adapting user interface to prevent from these errors can improve user performance and satisfaction. In this paper, we propose a system that aims to monitor user performance and contextual changes and do adaptations based on the user performance by using machine learning techniques. Here, we briefly present our systematic literature review findings and discuss our research questions towards developing such an adaptive system.
|
2312.11559
|
Harris Papadopoulos
|
Harris Papadopoulos and Nestoras Georgiou and Charalambos Eliades and
Andreas Konstantinidis
|
Android Malware Detection with Unbiased Confidence Guarantees
| null |
Neurocomputing, Volume 280, Pages 3-12, 2018
|
10.1016/j.neucom.2017.08.072
| null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The impressive growth of smartphone devices in combination with the rising
ubiquity of using mobile platforms for sensitive applications such as Internet
banking, have triggered a rapid increase in mobile malware. In recent
literature, many studies examine Machine Learning techniques, as the most
promising approach for mobile malware detection, without however quantifying
the uncertainty involved in their detections. In this paper, we address this
problem by proposing a machine learning dynamic analysis approach that provides
provably valid confidence guarantees in each malware detection. Moreover the
particular guarantees hold for both the malicious and benign classes
independently and are unaffected by any bias in the data. The proposed approach
is based on a novel machine learning framework, called Conformal Prediction,
combined with a random forests classifier. We examine its performance on a
large-scale dataset collected by installing 1866 malicious and 4816 benign
applications on a real android device. We make this collection of dynamic
analysis data available to the research community. The obtained experimental
results demonstrate the empirical validity, usefulness and unbiased nature of
the outputs produced by the proposed approach.
|
[
{
"created": "Sun, 17 Dec 2023 11:07:31 GMT",
"version": "v1"
}
] |
2023-12-20
|
[
[
"Papadopoulos",
"Harris",
""
],
[
"Georgiou",
"Nestoras",
""
],
[
"Eliades",
"Charalambos",
""
],
[
"Konstantinidis",
"Andreas",
""
]
] |
The impressive growth of smartphone devices in combination with the rising ubiquity of using mobile platforms for sensitive applications such as Internet banking, have triggered a rapid increase in mobile malware. In recent literature, many studies examine Machine Learning techniques, as the most promising approach for mobile malware detection, without however quantifying the uncertainty involved in their detections. In this paper, we address this problem by proposing a machine learning dynamic analysis approach that provides provably valid confidence guarantees in each malware detection. Moreover the particular guarantees hold for both the malicious and benign classes independently and are unaffected by any bias in the data. The proposed approach is based on a novel machine learning framework, called Conformal Prediction, combined with a random forests classifier. We examine its performance on a large-scale dataset collected by installing 1866 malicious and 4816 benign applications on a real android device. We make this collection of dynamic analysis data available to the research community. The obtained experimental results demonstrate the empirical validity, usefulness and unbiased nature of the outputs produced by the proposed approach.
|
2203.03038
|
Ashkan Jasour
|
Weiqiao Han and Ashkan Jasour and Brian Williams
|
Non-Gaussian Risk Bounded Trajectory Optimization for Stochastic
Nonlinear Systems in Uncertain Environments
|
Accepted at the 39th IEEE Conference on Robotics and Automation
(ICRA), 2022
| null | null | null |
cs.RO cs.SY eess.SY math.OC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We address the risk bounded trajectory optimization problem of stochastic
nonlinear robotic systems. More precisely, we consider the motion planning
problem in which the robot has stochastic nonlinear dynamics and uncertain
initial locations, and the environment contains multiple dynamic uncertain
obstacles with arbitrary probabilistic distributions. The goal is to plan a
sequence of control inputs for the robot to navigate to the target while
bounding the probability of colliding with obstacles. Existing approaches to
address risk bounded trajectory optimization problems are limited to particular
classes of models and uncertainties such as Gaussian linear problems. In this
paper, we deal with stochastic nonlinear models, nonlinear safety constraints,
and arbitrary probabilistic uncertainties, the most general setting ever
considered. To address the risk bounded trajectory optimization problem, we
first formulate the problem as an optimization problem with stochastic dynamics
equations and chance constraints. We then convert probabilistic constraints and
stochastic dynamics constraints on random variables into a set of deterministic
constraints on the moments of state probability distributions. Finally, we
solve the resulting deterministic optimization problem using nonlinear
optimization solvers and get a sequence of control inputs. To the best of our
knowledge, it is the first time that the motion planning problem to such a
general extent is considered and solved. To illustrate the performance of the
proposed method, we provide several robotics examples.
|
[
{
"created": "Sun, 6 Mar 2022 19:48:08 GMT",
"version": "v1"
}
] |
2022-03-08
|
[
[
"Han",
"Weiqiao",
""
],
[
"Jasour",
"Ashkan",
""
],
[
"Williams",
"Brian",
""
]
] |
We address the risk bounded trajectory optimization problem of stochastic nonlinear robotic systems. More precisely, we consider the motion planning problem in which the robot has stochastic nonlinear dynamics and uncertain initial locations, and the environment contains multiple dynamic uncertain obstacles with arbitrary probabilistic distributions. The goal is to plan a sequence of control inputs for the robot to navigate to the target while bounding the probability of colliding with obstacles. Existing approaches to address risk bounded trajectory optimization problems are limited to particular classes of models and uncertainties such as Gaussian linear problems. In this paper, we deal with stochastic nonlinear models, nonlinear safety constraints, and arbitrary probabilistic uncertainties, the most general setting ever considered. To address the risk bounded trajectory optimization problem, we first formulate the problem as an optimization problem with stochastic dynamics equations and chance constraints. We then convert probabilistic constraints and stochastic dynamics constraints on random variables into a set of deterministic constraints on the moments of state probability distributions. Finally, we solve the resulting deterministic optimization problem using nonlinear optimization solvers and get a sequence of control inputs. To the best of our knowledge, it is the first time that the motion planning problem to such a general extent is considered and solved. To illustrate the performance of the proposed method, we provide several robotics examples.
|
2304.11042
|
Ali Momeni
|
Ali Momeni, Babak Rahmani, Matthieu Mallejac, Philipp Del Hougne, and
Romain Fleury
|
Backpropagation-free Training of Deep Physical Neural Networks
|
44 pages, 12 figures
| null | null | null |
cs.LG cs.NE physics.app-ph physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have witnessed the outstanding success of deep learning in
various fields such as vision and natural language processing. This success is
largely indebted to the massive size of deep learning models that is expected
to increase unceasingly. This growth of the deep learning models is accompanied
by issues related to their considerable energy consumption, both during the
training and inference phases, as well as their scalability. Although a number
of works based on unconventional physical systems have been proposed to
address the issue of energy efficiency in the inference phase, efficient
training of deep learning models has remained unaddressed. So far, training of
digital deep learning models mainly relies on backpropagation, which is not
suitable for physical implementation as it requires perfect knowledge of the
computation performed in the so-called forward pass of the neural network.
Here, we tackle this issue by proposing a simple deep neural network
architecture augmented by a biologically plausible learning algorithm, referred
to as "model-free forward-forward training". The proposed architecture enables
training deep physical neural networks consisting of layers of physical
nonlinear systems, without requiring detailed knowledge of the nonlinear
physical layers' properties. We show that our method outperforms
state-of-the-art hardware-aware training methods by improving training speed,
decreasing digital computations, and reducing power consumption in physical
systems. We demonstrate the adaptability of the proposed method, even in
systems exposed to dynamic or unpredictable external perturbations. To showcase
the universality of our approach, we train diverse wave-based physical neural
networks that vary in the underlying wave phenomenon and the type of
non-linearity they use, to perform vowel and image classification tasks
experimentally.
|
[
{
"created": "Thu, 20 Apr 2023 14:02:49 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2023 12:16:53 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jun 2023 18:24:02 GMT",
"version": "v3"
}
] |
2023-06-14
|
[
[
"Momeni",
"Ali",
""
],
[
"Rahmani",
"Babak",
""
],
[
"Mallejac",
"Matthieu",
""
],
[
"Del Hougne",
"Philipp",
""
],
[
"Fleury",
"Romain",
""
]
] |
Recent years have witnessed the outstanding success of deep learning in various fields such as vision and natural language processing. This success is largely indebted to the massive size of deep learning models that is expected to increase unceasingly. This growth of the deep learning models is accompanied by issues related to their considerable energy consumption, both during the training and inference phases, as well as their scalability. Although a number of works based on unconventional physical systems have been proposed to address the issue of energy efficiency in the inference phase, efficient training of deep learning models has remained unaddressed. So far, training of digital deep learning models mainly relies on backpropagation, which is not suitable for physical implementation as it requires perfect knowledge of the computation performed in the so-called forward pass of the neural network. Here, we tackle this issue by proposing a simple deep neural network architecture augmented by a biologically plausible learning algorithm, referred to as "model-free forward-forward training". The proposed architecture enables training deep physical neural networks consisting of layers of physical nonlinear systems, without requiring detailed knowledge of the nonlinear physical layers' properties. We show that our method outperforms state-of-the-art hardware-aware training methods by improving training speed, decreasing digital computations, and reducing power consumption in physical systems. We demonstrate the adaptability of the proposed method, even in systems exposed to dynamic or unpredictable external perturbations. To showcase the universality of our approach, we train diverse wave-based physical neural networks that vary in the underlying wave phenomenon and the type of non-linearity they use, to perform vowel and image classification tasks experimentally.
|
1201.1972
|
Abdelhakim Khlifi
|
Abdelhakim Khlifi and Ridha Bouallegue
|
Hybrid LS-LMMSE Channel Estimation Technique for LTE Downlink Systems
|
13 pages, 11 figures
|
International Journal of Next Generation Networks (IJNGN) Vol.3,
No.4, December 2011, 1-13
|
10.5121/ijngn.2011.3401
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose to improve the performance of the channel
estimation for LTE Downlink systems under the effect of the channel length. As
LTE Downlink system is a MIMO-OFDMA based system, a cyclic prefix (CP) is
inserted at the beginning of each transmitted OFDM symbol in order to mitigate
both inter-carrier interference (ICI) and inter-symbol interference (ISI). The
inserted CP is usually equal to or longer than the channel length. However, the
cyclic prefix can be shorter because of some unforeseen channel behaviour.
Previous works have shown that in the case where the cyclic prefix is equal to
or longer than the channel length, LMMSE performs better than LS but at the
cost of computational complexity. In the other case, LMMSE also performs
better than LS only for low SNR values. However, LS shows better performance for LTE
Downlink systems for high SNR values. Therefore, we propose a hybrid LS-LMMSE
channel estimation technique robust to the channel length effect. MATLAB
Monte-Carlo simulations are used to evaluate the performance of the proposed
estimator in terms of Mean Square Error (MSE) and Bit Error Rate (BER) for 2x2
LTE Downlink systems.
|
[
{
"created": "Tue, 10 Jan 2012 06:14:26 GMT",
"version": "v1"
}
] |
2012-01-11
|
[
[
"Khlifi",
"Abdelhakim",
""
],
[
"Bouallegue",
"Ridha",
""
]
] |
In this paper, we propose to improve the performance of the channel estimation for LTE Downlink systems under the effect of the channel length. As LTE Downlink system is a MIMO-OFDMA based system, a cyclic prefix (CP) is inserted at the beginning of each transmitted OFDM symbol in order to mitigate both inter-carrier interference (ICI) and inter-symbol interference (ISI). The inserted CP is usually equal to or longer than the channel length. However, the cyclic prefix can be shorter because of some unforeseen channel behaviour. Previous works have shown that in the case where the cyclic prefix is equal to or longer than the channel length, LMMSE performs better than LS but at the cost of computational complexity. In the other case, LMMSE also performs better than LS only for low SNR values. However, LS shows better performance for LTE Downlink systems for high SNR values. Therefore, we propose a hybrid LS-LMMSE channel estimation technique robust to the channel length effect. MATLAB Monte-Carlo simulations are used to evaluate the performance of the proposed estimator in terms of Mean Square Error (MSE) and Bit Error Rate (BER) for 2x2 LTE Downlink systems.
|
2203.15251
|
Yueming Jin
|
Yueming Jin, Yang Yu, Cheng Chen, Zixu Zhao, Pheng-Ann Heng, Danail
Stoyanov
|
Exploring Intra- and Inter-Video Relation for Surgical Semantic Scene
Segmentation
|
Accepted at IEEE TMI
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic surgical scene segmentation is fundamental for facilitating
cognitive intelligence in the modern operating theatre. Previous works rely on
conventional aggregation modules (e.g., dilated convolution, convolutional
LSTM), which only make use of the local context. In this paper, we propose a
novel framework STswinCL that explores the complementary intra- and inter-video
relations to boost segmentation performance, by progressively capturing the
global context. We firstly develop a hierarchy Transformer to capture
intra-video relation that includes richer spatial and temporal cues from
neighbor pixels and previous frames. A joint space-time window shift scheme is
proposed to efficiently aggregate these two cues into each pixel embedding.
Then, we explore inter-video relation via pixel-to-pixel contrastive learning,
which well structures the global embedding space. A multi-source contrast
training objective is developed to group the pixel embeddings across videos
with the ground-truth guidance, which is crucial for learning the global
property of the whole data. We extensively validate our approach on two public
surgical video benchmarks, including EndoVis18 Challenge and CaDIS dataset.
Experimental results demonstrate the promising performance of our method, which
consistently exceeds previous state-of-the-art approaches. Code is available at
https://github.com/YuemingJin/STswinCL.
|
[
{
"created": "Tue, 29 Mar 2022 05:52:23 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Jun 2022 16:48:23 GMT",
"version": "v2"
}
] |
2022-06-27
|
[
[
"Jin",
"Yueming",
""
],
[
"Yu",
"Yang",
""
],
[
"Chen",
"Cheng",
""
],
[
"Zhao",
"Zixu",
""
],
[
"Heng",
"Pheng-Ann",
""
],
[
"Stoyanov",
"Danail",
""
]
] |
Automatic surgical scene segmentation is fundamental for facilitating cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which only make use of the local context. In this paper, we propose a novel framework STswinCL that explores the complementary intra- and inter-video relations to boost segmentation performance, by progressively capturing the global context. We firstly develop a hierarchy Transformer to capture intra-video relation that includes richer spatial and temporal cues from neighbor pixels and previous frames. A joint space-time window shift scheme is proposed to efficiently aggregate these two cues into each pixel embedding. Then, we explore inter-video relation via pixel-to-pixel contrastive learning, which well structures the global embedding space. A multi-source contrast training objective is developed to group the pixel embeddings across videos with the ground-truth guidance, which is crucial for learning the global property of the whole data. We extensively validate our approach on two public surgical video benchmarks, including EndoVis18 Challenge and CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/YuemingJin/STswinCL.
|
2407.15720
|
Zhenmei Shi
|
Zhuoyan Xu, Zhenmei Shi, Yingyu Liang
|
Do Large Language Models Have Compositional Ability? An Investigation
into Limitations and Scalability
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have emerged as powerful tools for many AI
problems and exhibit remarkable in-context learning (ICL) capabilities.
Compositional ability, solving unseen complex tasks that combine two or more
simple tasks, is an essential reasoning ability for Artificial General
Intelligence. Despite the tremendous success of LLMs, how they approach
composite tasks, especially those not encountered during the pretraining phase,
remains an open and largely underexplored question. In this study, we delve
into the ICL capabilities of LLMs on composite tasks, with only simple tasks as
in-context examples. We develop a test suite of composite tasks including
linguistic and logical challenges and perform empirical studies across
different LLM families. We observe that models exhibit divergent behaviors: (1)
For simpler composite tasks that apply distinct mapping mechanisms to different
input segments, the models demonstrate decent compositional ability, while
scaling up the model enhances this ability; (2) for more complex composite
tasks involving reasoning multiple steps, where each step represents one task,
models typically underperform, and scaling up generally provides no
improvements. We offer theoretical analysis in a simplified setting, explaining
that models exhibit compositional capability when the task handles different
input parts separately. We believe our work sheds new light on the capabilities
of LLMs in solving composite tasks regarding the nature of the tasks and model
scale. Our dataset and code are available at
{\url{https://github.com/OliverXUZY/LLM_Compose}}.
|
[
{
"created": "Mon, 22 Jul 2024 15:22:34 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Aug 2024 04:39:16 GMT",
"version": "v2"
}
] |
2024-08-13
|
[
[
"Xu",
"Zhuoyan",
""
],
[
"Shi",
"Zhenmei",
""
],
[
"Liang",
"Yingyu",
""
]
] |
Large language models (LLMs) have emerged as powerful tools for many AI problems and exhibit remarkable in-context learning (ICL) capabilities. Compositional ability, solving unseen complex tasks that combine two or more simple tasks, is an essential reasoning ability for Artificial General Intelligence. Despite the tremendous success of LLMs, how they approach composite tasks, especially those not encountered during the pretraining phase, remains an open and largely underexplored question. In this study, we delve into the ICL capabilities of LLMs on composite tasks, with only simple tasks as in-context examples. We develop a test suite of composite tasks including linguistic and logical challenges and perform empirical studies across different LLM families. We observe that models exhibit divergent behaviors: (1) For simpler composite tasks that apply distinct mapping mechanisms to different input segments, the models demonstrate decent compositional ability, while scaling up the model enhances this ability; (2) for more complex composite tasks involving reasoning multiple steps, where each step represents one task, models typically underperform, and scaling up generally provides no improvements. We offer theoretical analysis in a simplified setting, explaining that models exhibit compositional capability when the task handles different input parts separately. We believe our work sheds new light on the capabilities of LLMs in solving composite tasks regarding the nature of the tasks and model scale. Our dataset and code are available at {\url{https://github.com/OliverXUZY/LLM_Compose}}.
|
2211.17100
|
Tommy Nilsson
|
Anna Vock, Tommy Nilsson
|
Holistic Outpost Design for Lunar Lava Tubes
|
73rd International Astronautical Congress (IAC)
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As the space industry continues its rapid development, humanity is poised to
expand beyond Low Earth Orbit (LEO), seeking to establish permanent presence on
the Moon and beyond. While space travel has traditionally been the domain of a
small number of highly specialized professionals, a new era of human
exploration, involving non-space actors and stakeholders, is now becoming a
reality. In spite of this development, most space habitats are still designed
for a narrow target group. This paper seeks to address this deficit by
rethinking the established design approaches, typically limited to tackling
engineering challenges of human space exploration (such as radiation or
hypogravity), by instead adopting an interdisciplinary "big picture"
perspective encompassing social, psychological and cultural aspects of future
space habitats. By elaborating and reflecting on our concept, this paper seeks
to demonstrate the importance of a trans-disciplinary approach to designing
thriving sustainable colonies beyond LEO. We demonstrate the potentially key
role of design as mediator in advancing macro-strategies promoting thriving
existence and sustainable growth. With this approach we tackle big-picture
questions about humanity's future and prospects amongst the stars.
|
[
{
"created": "Wed, 19 Oct 2022 09:30:02 GMT",
"version": "v1"
}
] |
2022-12-01
|
[
[
"Vock",
"Anna",
""
],
[
"Nilsson",
"Tommy",
""
]
] |
As the space industry continues its rapid development, humanity is poised to expand beyond Low Earth Orbit (LEO), seeking to establish permanent presence on the Moon and beyond. While space travel has traditionally been the domain of a small number of highly specialized professionals, a new era of human exploration, involving non-space actors and stakeholders, is now becoming a reality. In spite of this development, most space habitats are still designed for a narrow target group. This paper seeks to address this deficit by rethinking the established design approaches, typically limited to tackling engineering challenges of human space exploration (such as radiation or hypogravity), by instead adopting an interdisciplinary "big picture" perspective encompassing social, psychological and cultural aspects of future space habitats. By elaborating and reflecting on our concept, this paper seeks to demonstrate the importance of a trans-disciplinary approach to designing thriving sustainable colonies beyond LEO. We demonstrate the potentially key role of design as mediator in advancing macro-strategies promoting thriving existence and sustainable growth. With this approach we tackle big-picture questions about humanity's future and prospects amongst the stars.
|
2305.00730
|
Sanath Kumar Vengaldas
|
Sanath Kumar Vengaldas and Adarsh Reddy Muthyala and Bharath Chaitanya
Konkati and P. Venkata Subba Reddy
|
Integer Linear Programming Formulations for Triple and Quadruple Roman
Domination Problems
| null | null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Roman domination is a well-researched topic in graph theory. Recently, two
new variants of Roman domination, namely the triple Roman domination and
quadruple Roman domination problems, have been introduced to provide better
defense strategies. However, the triple Roman domination and quadruple Roman
domination problems are NP-hard. In this paper, we have provided a genetic
algorithm for solving the triple and quadruple Roman domination problems, and
Integer Linear Programming (ILP) formulations for the triple Roman domination
and quadruple Roman domination problems have been proposed. The proposed
models are implemented using the IBM CPLEX 22.1 optimization solver, and
results are obtained for random graphs generated using the NetworkX
Erdos-Renyi model.
|
[
{
"created": "Mon, 1 May 2023 09:13:24 GMT",
"version": "v1"
}
] |
2023-05-02
|
[
[
"Vengaldas",
"Sanath Kumar",
""
],
[
"Muthyala",
"Adarsh Reddy",
""
],
[
"Konkati",
"Bharath Chaitanya",
""
],
[
"Reddy",
"P. Venkata Subba",
""
]
] |
Roman domination is a well-researched topic in graph theory. Recently, two new variants of Roman domination, namely the triple Roman domination and quadruple Roman domination problems, have been introduced to provide better defense strategies. However, the triple Roman domination and quadruple Roman domination problems are NP-hard. In this paper, we have provided a genetic algorithm for solving the triple and quadruple Roman domination problems, and Integer Linear Programming (ILP) formulations for the triple Roman domination and quadruple Roman domination problems have been proposed. The proposed models are implemented using the IBM CPLEX 22.1 optimization solver, and results are obtained for random graphs generated using the NetworkX Erdos-Renyi model.
|
1901.00716
|
Marmar Orooji
|
Marmar Orooji, Gerald M. Knapp
|
Improving Suppression to Reduce Disclosure Risk and Enhance Data Utility
|
6 pages, conference
|
Institute of Industrial and Systems Engineers (2018)
| null | null |
cs.DB cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Privacy Preserving Data Publishing, various privacy models have been
developed for employing anonymization operations on sensitive individual level
datasets, in order to publish the data for public access while preserving the
privacy of individuals in the dataset. However, there is always a trade-off
between preserving privacy and data utility; the more changes we make on the
confidential dataset to reduce disclosure risk, the more information the data
loses and the less data utility it preserves. The optimum privacy technique is
the one that results in a dataset with minimum disclosure risk and maximum data
utility. In this paper, we propose an improved suppression method, which
reduces the disclosure risk and enhances the data utility by targeting the
highest risk records and keeping other records intact. We have shown the
effectiveness of our approach through an experiment on a real-world
confidential dataset.
|
[
{
"created": "Wed, 2 Jan 2019 18:48:34 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Jan 2019 01:36:58 GMT",
"version": "v2"
}
] |
2019-01-09
|
[
[
"Orooji",
"Marmar",
""
],
[
"Knapp",
"Gerald M.",
""
]
] |
In Privacy Preserving Data Publishing, various privacy models have been developed for employing anonymization operations on sensitive individual level datasets, in order to publish the data for public access while preserving the privacy of individuals in the dataset. However, there is always a trade-off between preserving privacy and data utility; the more changes we make on the confidential dataset to reduce disclosure risk, the more information the data loses and the less data utility it preserves. The optimum privacy technique is the one that results in a dataset with minimum disclosure risk and maximum data utility. In this paper, we propose an improved suppression method, which reduces the disclosure risk and enhances the data utility by targeting the highest risk records and keeping other records intact. We have shown the effectiveness of our approach through an experiment on a real-world confidential dataset.
|
1210.7138
|
Lse Lse
|
Nicolas Anquetil (INRIA Lille - Nord Europe), Jannik Laval (INRIA
Lille - Nord Europe)
|
Legacy Software Restructuring: Analyzing a Concrete Case
| null |
Proceedings of the 15th European Conference on Software
Maintenance and Reengineering (CSMR'11) (2011) 279--286
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software re-modularization is an old preoccupation of reverse engineering
research. The advantages of a well structured or modularized system are well
known. Yet after so much time and effort, the field seems unable to come up
with solutions that make a clear difference in practice. Recently, some
researchers started to question whether some basic assumptions of the field
were not overrated. The main one consists in evaluating the
high-cohesion/low-coupling dogma with metrics of unknown relevance. In this
paper, we study a real structuring case (on the Eclipse platform) to try to
better understand if (some) existing metrics would have helped the software
engineers in the task. Results show that the cohesion and coupling metrics used
in the experiment did not behave as expected and would probably not have helped
the maintainers reach their goal. We also measured another possible
restructuring which is to decrease the number of cyclic dependencies between
modules. Again, the results did not meet expectations.
|
[
{
"created": "Fri, 26 Oct 2012 13:20:00 GMT",
"version": "v1"
}
] |
2012-10-29
|
[
[
"Anquetil",
"Nicolas",
"",
"INRIA Lille - Nord Europe"
],
[
"Laval",
"Jannik",
"",
"INRIA\n Lille - Nord Europe"
]
] |
Software re-modularization is an old preoccupation of reverse engineering research. The advantages of a well structured or modularized system are well known. Yet after so much time and effort, the field seems unable to come up with solutions that make a clear difference in practice. Recently, some researchers started to question whether some basic assumptions of the field were not overrated. The main one consists in evaluating the high-cohesion/low-coupling dogma with metrics of unknown relevance. In this paper, we study a real structuring case (on the Eclipse platform) to try to better understand if (some) existing metrics would have helped the software engineers in the task. Results show that the cohesion and coupling metrics used in the experiment did not behave as expected and would probably not have helped the maintainers reach their goal. We also measured another possible restructuring which is to decrease the number of cyclic dependencies between modules. Again, the results did not meet expectations.
|
2208.04246
|
Colorado J Reed
|
Malachy Moran and Kayla Woputz and Derrick Hee and Manuela Girotto and
Paolo D'Odorico and Ritwik Gupta and Daniel Feldman and Puya Vahabi and
Alberto Todeschini and Colorado J Reed
|
Snowpack Estimation in Key Mountainous Water Basins from
Openly-Available, Multimodal Data Sources
|
Accepted Oral Presentation at CVPR 2022 MultiEarth
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Accurately estimating the snowpack in key mountainous basins is critical for
water resource managers to make decisions that impact local and global
economies, wildlife, and public policy. Currently, this estimation requires
multiple LiDAR-equipped plane flights or in situ measurements, both of which
are expensive, sparse, and biased towards accessible regions. In this paper, we
demonstrate that fusing spatial and temporal information from multiple,
openly-available satellite and weather data sources enables estimation of
snowpack in key mountainous regions. Our multisource model outperforms
single-source estimation by 5.0 inches RMSE, as well as outperforms sparse in
situ measurements by 1.2 inches RMSE.
|
[
{
"created": "Mon, 8 Aug 2022 16:17:36 GMT",
"version": "v1"
}
] |
2022-08-09
|
[
[
"Moran",
"Malachy",
""
],
[
"Woputz",
"Kayla",
""
],
[
"Hee",
"Derrick",
""
],
[
"Girotto",
"Manuela",
""
],
[
"D'Odorico",
"Paolo",
""
],
[
"Gupta",
"Ritwik",
""
],
[
"Feldman",
"Daniel",
""
],
[
"Vahabi",
"Puya",
""
],
[
"Todeschini",
"Alberto",
""
],
[
"Reed",
"Colorado J",
""
]
] |
Accurately estimating the snowpack in key mountainous basins is critical for water resource managers to make decisions that impact local and global economies, wildlife, and public policy. Currently, this estimation requires multiple LiDAR-equipped plane flights or in situ measurements, both of which are expensive, sparse, and biased towards accessible regions. In this paper, we demonstrate that fusing spatial and temporal information from multiple, openly-available satellite and weather data sources enables estimation of snowpack in key mountainous regions. Our multisource model outperforms single-source estimation by 5.0 inches RMSE, as well as outperforms sparse in situ measurements by 1.2 inches RMSE.
|
2307.08086
|
Murad Tukan
|
Murad Tukan, Alaa Maalouf, Margarita Osadchy
|
Dataset Distillation Meets Provable Subset Selection
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has grown tremendously over recent years, yielding
state-of-the-art results in various fields. However, training such models
requires huge amounts of data, increasing the computational time and cost. To
address this, dataset distillation was proposed to compress a large training
dataset into a smaller synthetic one that retains its performance -- this is
usually done by (1) uniformly initializing a synthetic set and (2) iteratively
updating/learning this set according to a predefined loss by uniformly sampling
instances from the full data. In this paper, we improve both phases of dataset
distillation: (1) we present a provable, sampling-based approach for
initializing the distilled set by identifying important and removing redundant
points in the data, and (2) we further merge the idea of data subset selection
with dataset distillation, by training the distilled set on ``important''
sampled points during the training procedure instead of randomly sampling the
next batch. To do so, we define the notion of importance based on the relative
contribution of instances with respect to two different loss functions, i.e.,
one for the initialization phase (a kernel fitting function for kernel ridge
regression and $K$-means based loss function for any other distillation
method), and the relative cross-entropy loss (or any other predefined loss)
function for the training phase. Finally, we provide experimental results
showing how our method can latch on to existing dataset distillation techniques
and improve their performance.
|
[
{
"created": "Sun, 16 Jul 2023 15:58:19 GMT",
"version": "v1"
}
] |
2023-07-18
|
[
[
"Tukan",
"Murad",
""
],
[
"Maalouf",
"Alaa",
""
],
[
"Osadchy",
"Margarita",
""
]
] |
Deep learning has grown tremendously over recent years, yielding state-of-the-art results in various fields. However, training such models requires huge amounts of data, increasing the computational time and cost. To address this, dataset distillation was proposed to compress a large training dataset into a smaller synthetic one that retains its performance -- this is usually done by (1) uniformly initializing a synthetic set and (2) iteratively updating/learning this set according to a predefined loss by uniformly sampling instances from the full data. In this paper, we improve both phases of dataset distillation: (1) we present a provable, sampling-based approach for initializing the distilled set by identifying important and removing redundant points in the data, and (2) we further merge the idea of data subset selection with dataset distillation, by training the distilled set on ``important'' sampled points during the training procedure instead of randomly sampling the next batch. To do so, we define the notion of importance based on the relative contribution of instances with respect to two different loss functions, i.e., one for the initialization phase (a kernel fitting function for kernel ridge regression and $K$-means based loss function for any other distillation method), and the relative cross-entropy loss (or any other predefined loss) function for the training phase. Finally, we provide experimental results showing how our method can latch on to existing dataset distillation techniques and improve their performance.
|
1705.04300
|
Nan Yang
|
Nan Yang, Rui Wang, Xiang Gao, Daniel Cremers
|
Challenges in Monocular Visual Odometry: Photometric Calibration, Motion
Bias and Rolling Shutter Effect
|
Accepted by IEEE Robotics and Automation Letters (RA-L), 2018. The
first two authors contributed equally to this paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular visual odometry (VO) and simultaneous localization and mapping
(SLAM) have seen tremendous improvements in accuracy, robustness and
efficiency, and have gained increasing popularity over recent years.
Nevertheless, not so many discussions have been carried out to reveal the
influences of three very influential yet easily overlooked aspects: photometric
calibration, motion bias and rolling shutter effect. In this work, we evaluate
these three aspects quantitatively on the state of the art of direct,
feature-based and semi-direct methods, providing the community with useful
practical knowledge both for better applying existing methods and developing
new algorithms of VO and SLAM. Conclusions (some of which are
counter-intuitive) are drawn with both technical and empirical analyses to all
of our experiments. Possible improvements on existing methods are directed or
proposed, such as a sub-pixel accuracy refinement of ORB-SLAM which boosts its
performance.
|
[
{
"created": "Thu, 11 May 2017 17:36:43 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jul 2017 11:25:45 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Sep 2017 13:21:30 GMT",
"version": "v3"
},
{
"created": "Thu, 7 Jun 2018 11:46:59 GMT",
"version": "v4"
}
] |
2018-06-08
|
[
[
"Yang",
"Nan",
""
],
[
"Wang",
"Rui",
""
],
[
"Gao",
"Xiang",
""
],
[
"Cremers",
"Daniel",
""
]
] |
Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects: photometric calibration, motion bias and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counter-intuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a sub-pixel accuracy refinement of ORB-SLAM which boosts its performance.
|
1608.00684
|
Pauline Chou
|
David Savage, Xiuzhen Zhang, Xinghuo Yu, Pauline Chou, Qingmai Wang
|
Detection of opinion spam based on anomalous rating deviation
| null |
Expert Systems with Applications 42 (2015) 8650-8657
| null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The publication of fake reviews by parties with vested interests has become a
severe problem for consumers who use online product reviews in their decision
making. To counter this problem a number of methods for detecting these fake
reviews, termed opinion spam, have been proposed. However, to date, many of
these methods focus on analysis of review text, making them unsuitable for many
review systems where accompanying text is optional, or not possible. Moreover,
these approaches are often computationally expensive, requiring extensive
resources to handle text analysis over the scale of data typically involved.
In this paper, we consider opinion spammers' manipulation of average ratings
for products, focusing on differences between spammer ratings and the majority
opinion of honest reviewers. We propose a lightweight, effective method for
detecting opinion spammers based on these differences. This method uses
binomial regression to identify reviewers having an anomalous proportion of
ratings that deviate from the majority opinion. Experiments on real-world and
synthetic data show that our approach is able to successfully identify opinion
spammers. Comparison with the current state-of-the-art approach, also based
only on ratings, shows that our method is able to achieve similar detection
accuracy while removing the need for assumptions regarding probabilities of
spam and non-spam reviews and reducing the heavy computation required for
learning.
|
[
{
"created": "Tue, 2 Aug 2016 02:52:18 GMT",
"version": "v1"
}
] |
2016-08-03
|
[
[
"Savage",
"David",
""
],
[
"Zhang",
"Xiuzhen",
""
],
[
"Yu",
"Xinghuo",
""
],
[
"Chou",
"Pauline",
""
],
[
"Wang",
"Qingmai",
""
]
] |
The publication of fake reviews by parties with vested interests has become a severe problem for consumers who use online product reviews in their decision making. To counter this problem a number of methods for detecting these fake reviews, termed opinion spam, have been proposed. However, to date, many of these methods focus on analysis of review text, making them unsuitable for many review systems where accompanying text is optional, or not possible. Moreover, these approaches are often computationally expensive, requiring extensive resources to handle text analysis over the scale of data typically involved. In this paper, we consider opinion spammers' manipulation of average ratings for products, focusing on differences between spammer ratings and the majority opinion of honest reviewers. We propose a lightweight, effective method for detecting opinion spammers based on these differences. This method uses binomial regression to identify reviewers having an anomalous proportion of ratings that deviate from the majority opinion. Experiments on real-world and synthetic data show that our approach is able to successfully identify opinion spammers. Comparison with the current state-of-the-art approach, also based only on ratings, shows that our method is able to achieve similar detection accuracy while removing the need for assumptions regarding probabilities of spam and non-spam reviews and reducing the heavy computation required for learning.
|
1909.12535
|
Duc Bui
|
Duc Bui, Kshitiz Malik, Jack Goetz, Honglei Liu, Seungwhan Moon, Anuj
Kumar, Kang G. Shin
|
Federated User Representation Learning
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Collaborative personalization, such as through learned user representations
(embeddings), can improve the prediction accuracy of neural-network-based
models significantly. We propose Federated User Representation Learning (FURL),
a simple, scalable, privacy-preserving and resource-efficient way to utilize
existing neural personalization techniques in the Federated Learning (FL)
setting. FURL divides model parameters into federated and private parameters.
Private parameters, such as private user embeddings, are trained locally, but
unlike federated parameters, they are not transferred to or averaged on the
server. We show theoretically that this parameter split does not affect
training for most model personalization approaches. Storing user embeddings
locally not only preserves user privacy, but also improves memory locality of
personalization compared to on-server training. We evaluate FURL on two
datasets, demonstrating a significant improvement in model quality with 8% and
51% performance increases, and approximately the same level of performance as
centralized training with only 0% and 4% reductions. Furthermore, we show that
user embeddings learned in FL and the centralized setting have a very similar
structure, indicating that FURL can learn collaboratively through the shared
parameters while preserving user privacy.
|
[
{
"created": "Fri, 27 Sep 2019 07:40:08 GMT",
"version": "v1"
}
] |
2019-09-30
|
[
[
"Bui",
"Duc",
""
],
[
"Malik",
"Kshitiz",
""
],
[
"Goetz",
"Jack",
""
],
[
"Liu",
"Honglei",
""
],
[
"Moon",
"Seungwhan",
""
],
[
"Kumar",
"Anuj",
""
],
[
"Shin",
"Kang G.",
""
]
] |
Collaborative personalization, such as through learned user representations (embeddings), can improve the prediction accuracy of neural-network-based models significantly. We propose Federated User Representation Learning (FURL), a simple, scalable, privacy-preserving and resource-efficient way to utilize existing neural personalization techniques in the Federated Learning (FL) setting. FURL divides model parameters into federated and private parameters. Private parameters, such as private user embeddings, are trained locally, but unlike federated parameters, they are not transferred to or averaged on the server. We show theoretically that this parameter split does not affect training for most model personalization approaches. Storing user embeddings locally not only preserves user privacy, but also improves memory locality of personalization compared to on-server training. We evaluate FURL on two datasets, demonstrating a significant improvement in model quality with 8% and 51% performance increases, and approximately the same level of performance as centralized training with only 0% and 4% reductions. Furthermore, we show that user embeddings learned in FL and the centralized setting have a very similar structure, indicating that FURL can learn collaboratively through the shared parameters while preserving user privacy.
|
2402.11061
|
Bin Han
|
Anthony Kiggundu, Bin Han, Dennis Krummacker, and Hans D. Schotten
|
Chronicles of jockeying in queuing systems
|
Submitted to ACM Computing Surveys
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The relevance of studies in queuing theory in social systems has inspired its
adoption in other mainstream technologies with its application in distributed
and communication systems becoming an intense research domain. Considerable
work has been done regarding the application of the impatient queuing
phenomenon in distributed computing to achieve optimal resource sharing and
allocation for performance improvement. Generally, there are two types of
common impatient queuing behaviour that have been well studied, namely balking
and reneging, respectively. In this survey, we are interested in the third type
of impatience: jockeying, a phenomenon that draws origins from impatient
customers switching from one queue to another.
This survey chronicles classical and latest efforts that labor to model and
exploit the jockeying behaviour in queuing systems, with a special focus on
those related to information and communication systems, especially in the
context of Multi-Access Edge Computing. We comparatively summarize the reviewed
literature regarding their methodologies, invoked models, and use cases.
|
[
{
"created": "Fri, 16 Feb 2024 20:24:02 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jun 2024 19:07:13 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Jun 2024 11:53:32 GMT",
"version": "v3"
}
] |
2024-07-01
|
[
[
"Kiggundu",
"Anthony",
""
],
[
"Han",
"Bin",
""
],
[
"Krummacker",
"Dennis",
""
],
[
"Schotten",
"Hans D.",
""
]
] |
The relevance of studies in queuing theory in social systems has inspired its adoption in other mainstream technologies with its application in distributed and communication systems becoming an intense research domain. Considerable work has been done regarding the application of the impatient queuing phenomenon in distributed computing to achieve optimal resource sharing and allocation for performance improvement. Generally, there are two types of common impatient queuing behaviour that have been well studied, namely balking and reneging, respectively. In this survey, we are interested in the third type of impatience: jockeying, a phenomenon that draws origins from impatient customers switching from one queue to another. This survey chronicles classical and latest efforts that labor to model and exploit the jockeying behaviour in queuing systems, with a special focus on those related to information and communication systems, especially in the context of Multi-Access Edge Computing. We comparatively summarize the reviewed literature regarding their methodologies, invoked models, and use cases.
|
2402.18465
|
Lukas Brand
|
Lukas Brand, Yan Wang, Maurizio Magarini, Robert Schober, and
Sebastian Lotter
|
Semantic Information in MC: Chemotaxis Beyond Shannon
|
7 pages, 5 figures, This work has been submitted in part for possible
publication to the IEEE Global Communications Conference (GLOBECOM) 2024
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recently emerged molecular communication (MC) paradigm intends to
leverage communication engineering tools for the design of synthetic chemical
communication systems. These systems are envisioned to operate at nanoscale and
in biological environments, such as the human body, and catalyze the emergence
of revolutionary applications in the context of early disease monitoring and
drug targeting. Despite the abundance of theoretical (and recently also
experimental) MC system designs proposed over the past years, some fundamental
questions remain unresolved, hindering the breakthrough of MC in real-world
applications. One of these questions is: What can be a useful measure of
information in the context of MC applications? While most existing works on MC
build upon the concept of syntactic information as introduced by Shannon, in
this paper, we explore the framework of semantic information as introduced by
Kolchinsky and Wolpert for the information-theoretic analysis of a natural MC
system, namely bacterial chemotaxis. Exploiting computational agent-based
modeling (ABM), we are able to quantify, for the first time, the amount of
information that the considered chemotactic bacterium (CB) utilizes to adapt to
and survive in a dynamic environment. In other words, we show how the flow of
information between the environment and the CB is related to the effectiveness
of communication. Effectiveness here refers to the adaptation of the CB to the
dynamic environment in order to ensure survival. Our analysis reveals that how
much information the CB can effectively utilize to improve its survival chances
depends strongly on the environmental conditions. Encouraged by our
results, we envision that the proposed semantic information framework can open
new avenues for the development of theoretical and experimental MC system
designs for future nanoscale applications.
|
[
{
"created": "Wed, 28 Feb 2024 16:41:52 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Apr 2024 09:02:31 GMT",
"version": "v2"
}
] |
2024-04-05
|
[
[
"Brand",
"Lukas",
""
],
[
"Wang",
"Yan",
""
],
[
"Magarini",
"Maurizio",
""
],
[
"Schober",
"Robert",
""
],
[
"Lotter",
"Sebastian",
""
]
] |
The recently emerged molecular communication (MC) paradigm intends to leverage communication engineering tools for the design of synthetic chemical communication systems. These systems are envisioned to operate at nanoscale and in biological environments, such as the human body, and catalyze the emergence of revolutionary applications in the context of early disease monitoring and drug targeting. Despite the abundance of theoretical (and recently also experimental) MC system designs proposed over the past years, some fundamental questions remain unresolved, hindering the breakthrough of MC in real-world applications. One of these questions is: What can be a useful measure of information in the context of MC applications? While most existing works on MC build upon the concept of syntactic information as introduced by Shannon, in this paper, we explore the framework of semantic information as introduced by Kolchinsky and Wolpert for the information-theoretic analysis of a natural MC system, namely bacterial chemotaxis. Exploiting computational agent-based modeling (ABM), we are able to quantify, for the first time, the amount of information that the considered chemotactic bacterium (CB) utilizes to adapt to and survive in a dynamic environment. In other words, we show how the flow of information between the environment and the CB is related to the effectiveness of communication. Effectiveness here refers to the adaptation of the CB to the dynamic environment in order to ensure survival. Our analysis reveals that how much information the CB can effectively utilize to improve its survival chances depends strongly on the environmental conditions. Encouraged by our results, we envision that the proposed semantic information framework can open new avenues for the development of theoretical and experimental MC system designs for future nanoscale applications.
|
1409.1467
|
Erik Leitinger
|
Erik Leitinger and Paul Meissner and Christoph R\"udisser and Gregor
Dumphart and Klaus Witrisal
|
Evaluation of Position-related Information in Multipath Components for
Indoor Positioning
|
14 pages, 10 figures, submitted to the IEEE Journal on Selected Areas
in Communications: Localization-Awareness for Radios and Networks
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Location awareness is a key factor for a wealth of wireless indoor
applications. Its provision requires the careful fusion of diverse information
sources. For agents that use radio signals for localization, this information
may either come from signal transmissions with respect to fixed anchors, from
cooperative transmissions between agents, or from radar-like monostatic
transmissions. Using a priori knowledge of a floor plan of the environment,
specular multipath components can be exploited, based on a geometric-stochastic
channel model. In this paper, a unified framework is presented for the
quantification of this type of position-related information, using the concept
of equivalent Fisher information. We derive analytical results for the
Cram\'er-Rao lower bound of multipath-assisted positioning, considering
bistatic transmissions between agents and fixed anchors, monostatic
transmissions from agents, cooperative measurements between agents, and
combinations thereof, including the effect of clock offsets. Awareness of this
information enables highly accurate and robust indoor positioning.
Computational results show the applicability of the framework for the
characterization of the localization capabilities of a given environment,
quantifying the influence of different system setups, signal parameters, and
the impact of path overlap.
|
[
{
"created": "Thu, 4 Sep 2014 15:30:59 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Dec 2018 12:50:39 GMT",
"version": "v2"
}
] |
2018-12-11
|
[
[
"Leitinger",
"Erik",
""
],
[
"Meissner",
"Paul",
""
],
[
"Rüdisser",
"Christoph",
""
],
[
"Dumphart",
"Gregor",
""
],
[
"Witrisal",
"Klaus",
""
]
] |
Location awareness is a key factor for a wealth of wireless indoor applications. Its provision requires the careful fusion of diverse information sources. For agents that use radio signals for localization, this information may either come from signal transmissions with respect to fixed anchors, from cooperative transmissions between agents, or from radar-like monostatic transmissions. Using a priori knowledge of a floor plan of the environment, specular multipath components can be exploited, based on a geometric-stochastic channel model. In this paper, a unified framework is presented for the quantification of this type of position-related information, using the concept of equivalent Fisher information. We derive analytical results for the Cram\'er-Rao lower bound of multipath-assisted positioning, considering bistatic transmissions between agents and fixed anchors, monostatic transmissions from agents, cooperative measurements between agents, and combinations thereof, including the effect of clock offsets. Awareness of this information enables highly accurate and robust indoor positioning. Computational results show the applicability of the framework for the characterization of the localization capabilities of a given environment, quantifying the influence of different system setups, signal parameters, and the impact of path overlap.
|
2012.10860
|
Guangming Wang
|
Guangming Wang, Muyao Chen, Hanwen Liu, Yehui Yang, Zhe Liu, Hesheng
Wang
|
Anchor-Based Spatio-Temporal Attention 3D Convolutional Networks for
Dynamic 3D Point Cloud Sequences
|
10 pages, 6 figures, under review
|
IEEE Transactions on Instrumentation and Measurement, 2021
|
10.1109/TIM.2021.3106101
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of measurement technology, LiDAR and depth cameras
are widely used in the perception of the 3D environment. Recent learning based
methods for robot perception most focus on the image or video, but deep
learning methods for dynamic 3D point cloud sequences are underexplored.
Therefore, developing efficient and accurate perception method compatible with
these advanced instruments is pivotal to autonomous driving and service robots.
An Anchor-based Spatio-Temporal Attention 3D Convolution operation (ASTA3DConv)
is proposed in this paper to process dynamic 3D point cloud sequences. The
proposed convolution operation builds a regular receptive field around each
point by setting several virtual anchors around each point. The features of
neighborhood points are first aggregated to each anchor based on the
spatio-temporal attention mechanism. Then, anchor-based 3D convolution is
adopted to aggregate these anchors' features to the core points. The proposed
method makes better use of the structured information within the local region
and learns spatio-temporal embedding features from dynamic 3D point cloud
sequences. Anchor-based Spatio-Temporal Attention 3D Convolutional Neural
Networks (ASTA3DCNNs) are built for classification and segmentation tasks based
on the proposed ASTA3DConv and evaluated on action recognition and semantic
segmentation tasks. The experiments and ablation studies on MSRAction3D and
Synthia datasets demonstrate the superior performance and effectiveness of our
method for dynamic 3D point cloud sequences. Our method achieves the
state-of-the-art performance among the methods with dynamic 3D point cloud
sequences as input on MSRAction3D and Synthia datasets.
|
[
{
"created": "Sun, 20 Dec 2020 07:35:37 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jul 2021 13:55:33 GMT",
"version": "v2"
}
] |
2021-11-04
|
[
[
"Wang",
"Guangming",
""
],
[
"Chen",
"Muyao",
""
],
[
"Liu",
"Hanwen",
""
],
[
"Yang",
"Yehui",
""
],
[
"Liu",
"Zhe",
""
],
[
"Wang",
"Hesheng",
""
]
] |
With the rapid development of measurement technology, LiDAR and depth cameras are widely used in the perception of the 3D environment. Recent learning-based methods for robot perception mostly focus on images or videos, but deep learning methods for dynamic 3D point cloud sequences are underexplored. Therefore, developing efficient and accurate perception methods compatible with these advanced instruments is pivotal to autonomous driving and service robots. An Anchor-based Spatio-Temporal Attention 3D Convolution operation (ASTA3DConv) is proposed in this paper to process dynamic 3D point cloud sequences. The proposed convolution operation builds a regular receptive field around each point by setting several virtual anchors around each point. The features of neighborhood points are first aggregated to each anchor based on the spatio-temporal attention mechanism. Then, anchor-based 3D convolution is adopted to aggregate these anchors' features to the core points. The proposed method makes better use of the structured information within the local region and learns spatio-temporal embedding features from dynamic 3D point cloud sequences. Anchor-based Spatio-Temporal Attention 3D Convolutional Neural Networks (ASTA3DCNNs) are built for classification and segmentation tasks based on the proposed ASTA3DConv and evaluated on action recognition and semantic segmentation tasks. The experiments and ablation studies on MSRAction3D and Synthia datasets demonstrate the superior performance and effectiveness of our method for dynamic 3D point cloud sequences. Our method achieves the state-of-the-art performance among the methods with dynamic 3D point cloud sequences as input on MSRAction3D and Synthia datasets.
|
1102.5046
|
Tamara Kolda
|
C. Seshadhri, Ali Pinar, Tamara G. Kolda
|
An In-Depth Analysis of Stochastic Kronecker Graphs
| null |
Journal of the ACM 60(2):13 (32 pages), April 2013
|
10.1145/2450142.2450149
| null |
cs.SI cs.DM physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph analysis is playing an increasingly important role in science and
industry. Due to numerous limitations in sharing real-world graphs, models for
generating massive graphs are critical for developing better algorithms. In
this paper, we analyze the stochastic Kronecker graph model (SKG), which is the
foundation of the Graph500 supercomputer benchmark due to its favorable
properties and easy parallelization. Our goal is to provide a deeper
understanding of the parameters and properties of this model so that its
functionality as a benchmark is increased. We develop a rigorous mathematical
analysis that shows this model cannot generate a power-law distribution or even
a lognormal distribution. However, we formalize an enhanced version of the SKG
model that uses random noise for smoothing. We prove both in theory and in
practice that this enhancement leads to a lognormal distribution. Additionally,
we provide a precise analysis of isolated vertices, showing that the graphs
that are produced by SKG might be quite different than intended. For example,
between 50% and 75% of the vertices in the Graph500 benchmarks will be
isolated. Finally, we show that this model tends to produce extremely small
core numbers (compared to most social networks and other real graphs) for
common parameter choices.
|
[
{
"created": "Thu, 24 Feb 2011 17:36:57 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Sep 2011 18:34:32 GMT",
"version": "v2"
},
{
"created": "Wed, 2 Jan 2013 23:59:15 GMT",
"version": "v3"
}
] |
2013-09-16
|
[
[
"Seshadhri",
"C.",
""
],
[
"Pinar",
"Ali",
""
],
[
"Kolda",
"Tamara G.",
""
]
] |
Graph analysis is playing an increasingly important role in science and industry. Due to numerous limitations in sharing real-world graphs, models for generating massive graphs are critical for developing better algorithms. In this paper, we analyze the stochastic Kronecker graph model (SKG), which is the foundation of the Graph500 supercomputer benchmark due to its favorable properties and easy parallelization. Our goal is to provide a deeper understanding of the parameters and properties of this model so that its functionality as a benchmark is increased. We develop a rigorous mathematical analysis that shows this model cannot generate a power-law distribution or even a lognormal distribution. However, we formalize an enhanced version of the SKG model that uses random noise for smoothing. We prove both in theory and in practice that this enhancement leads to a lognormal distribution. Additionally, we provide a precise analysis of isolated vertices, showing that the graphs that are produced by SKG might be quite different than intended. For example, between 50% and 75% of the vertices in the Graph500 benchmarks will be isolated. Finally, we show that this model tends to produce extremely small core numbers (compared to most social networks and other real graphs) for common parameter choices.
|
2212.14296
|
Yan Jia
|
Rongkuan Ma, Qiang Wei, Jingyi Wang, Shunkai Zhu, Shouling Ji, Peng
Cheng, Yan Jia, Qingxian Wang
|
Towards Comprehensively Understanding the Run-time Security of
Programmable Logic Controllers: A 3-year Empirical Study
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programmable Logic Controllers (PLCs) are the core control devices in
Industrial Control Systems (ICSs), which control and monitor the underlying
physical plants such as power grids. PLCs were initially designed to work in a
trusted industrial network, which however can be brittle once deployed in an
Internet-facing (or penetrated) network. Yet, there is a lack of systematic
empirical analysis of the run-time security of modern real-world PLCs. To close
this gap, we present the first large-scale measurement on 23 off-the-shelf PLCs
across 13 leading vendors. We find many common security issues and unexplored
implications that should be more carefully addressed in the design and
implementation. To sum up, 1) the unsupervised logic applications can cause
system resource/privilege abuse, which gives adversaries new means to hijack the
control flow of a runtime system remotely (without exploiting memory
vulnerabilities); 2) the improper access control mechanisms bring many
unauthorized access implications; 3) the proprietary or semi-proprietary
protocols are fragile regarding confidentiality and integrity protection of
run-time data. We empirically evaluated the corresponding attack vectors on
multiple PLCs, which demonstrates that the security implications are severe and
broad. Our findings were reported to the related parties responsibly, and 20
bugs have been confirmed with 7 assigned CVEs.
|
[
{
"created": "Thu, 29 Dec 2022 13:18:11 GMT",
"version": "v1"
}
] |
2023-01-02
|
[
[
"Ma",
"Rongkuan",
""
],
[
"Wei",
"Qiang",
""
],
[
"Wang",
"Jingyi",
""
],
[
"Zhu",
"Shunkai",
""
],
[
"Ji",
"Shouling",
""
],
[
"Cheng",
"Peng",
""
],
[
"Jia",
"Yan",
""
],
[
"Wang",
"Qingxian",
""
]
] |
Programmable Logic Controllers (PLCs) are the core control devices in Industrial Control Systems (ICSs), which control and monitor the underlying physical plants such as power grids. PLCs were initially designed to work in a trusted industrial network, which however can be brittle once deployed in an Internet-facing (or penetrated) network. Yet, there is a lack of systematic empirical analysis of the run-time security of modern real-world PLCs. To close this gap, we present the first large-scale measurement on 23 off-the-shelf PLCs across 13 leading vendors. We find many common security issues and unexplored implications that should be more carefully addressed in the design and implementation. To sum up, 1) the unsupervised logic applications can cause system resource/privilege abuse, which gives adversaries new means to hijack the control flow of a runtime system remotely (without exploiting memory vulnerabilities); 2) the improper access control mechanisms bring many unauthorized access implications; 3) the proprietary or semi-proprietary protocols are fragile regarding confidentiality and integrity protection of run-time data. We empirically evaluated the corresponding attack vectors on multiple PLCs, which demonstrates that the security implications are severe and broad. Our findings were reported to the related parties responsibly, and 20 bugs have been confirmed with 7 assigned CVEs.
|
1106.1820
|
R. Barzilay
|
R. Barzilay, N. Elhadad
|
Inferring Strategies for Sentence Ordering in Multidocument News
Summarization
| null |
Journal Of Artificial Intelligence Research, Volume 17, pages
35-55, 2002
|
10.1613/jair.991
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of organizing information for multidocument summarization so that
the generated summary is coherent has received relatively little attention.
While sentence ordering for single document summarization can be determined
from the ordering of sentences in the input article, this is not the case for
multidocument summarization where summary sentences may be drawn from different
input articles. In this paper, we propose a methodology for studying the
properties of ordering information in the news genre and describe experiments
done on a corpus of multiple acceptable orderings we developed for the task.
Based on these experiments, we implemented a strategy for ordering information
that combines constraints from chronological order of events and topical
relatedness. Evaluation of our augmented algorithm shows a significant
improvement of the ordering over two baseline strategies.
|
[
{
"created": "Thu, 9 Jun 2011 13:57:02 GMT",
"version": "v1"
}
] |
2011-06-10
|
[
[
"Barzilay",
"R.",
""
],
[
"Elhadad",
"N.",
""
]
] |
The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies.
|
2306.04252
|
Skander Karkar
|
Skander Karkar and Patrick Gallinari and Alain Rakotomamonjy
|
Adversarial Sample Detection Through Neural Network Transport Dynamics
|
ECML PKDD 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a detector of adversarial samples that is based on the view of
neural networks as discrete dynamic systems. The detector tells clean inputs
from abnormal ones by comparing the discrete vector fields they follow through
the layers. We also show that regularizing this vector field during training
makes the network more regular on the data distribution's support, thus making
the activations of clean inputs more distinguishable from those of abnormal
ones. Experimentally, we compare our detector favorably to other detectors on
seen and unseen attacks, and show that the regularization of the network's
dynamics improves the performance of adversarial detectors that use the
internal embeddings as inputs, while also improving test accuracy.
|
[
{
"created": "Wed, 7 Jun 2023 08:47:41 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jun 2023 08:43:40 GMT",
"version": "v2"
}
] |
2023-06-09
|
[
[
"Karkar",
"Skander",
""
],
[
"Gallinari",
"Patrick",
""
],
[
"Rakotomamonjy",
"Alain",
""
]
] |
We propose a detector of adversarial samples that is based on the view of neural networks as discrete dynamic systems. The detector tells clean inputs from abnormal ones by comparing the discrete vector fields they follow through the layers. We also show that regularizing this vector field during training makes the network more regular on the data distribution's support, thus making the activations of clean inputs more distinguishable from those of abnormal ones. Experimentally, we compare our detector favorably to other detectors on seen and unseen attacks, and show that the regularization of the network's dynamics improves the performance of adversarial detectors that use the internal embeddings as inputs, while also improving test accuracy.
|
0807.1543
|
Xiaohu Shang
|
Xiaohu Shang, Biao Chen, Gerhard Kramer, H. Vincent Poor
|
On the Capacity of MIMO Interference Channels
|
8 pages, 2 figures, submitted to Allerton 2008
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The capacity region of a multiple-input-multiple-output interference channel
(MIMO IC) where the channel matrices are square and invertible is studied. The
capacity region for strong interference is established where the definition of
strong interference parallels that of scalar channels. Moreover, the sum-rate
capacity for Z interference, noisy interference, and mixed interference is
established. These results generalize known results for the scalar Gaussian IC.
|
[
{
"created": "Thu, 10 Jul 2008 00:39:06 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Sep 2008 19:40:13 GMT",
"version": "v2"
}
] |
2008-09-25
|
[
[
"Shang",
"Xiaohu",
""
],
[
"Chen",
"Biao",
""
],
[
"Kramer",
"Gerhard",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
The capacity region of a multiple-input-multiple-output interference channel (MIMO IC) where the channel matrices are square and invertible is studied. The capacity region for strong interference is established where the definition of strong interference parallels that of scalar channels. Moreover, the sum-rate capacity for Z interference, noisy interference, and mixed interference is established. These results generalize known results for the scalar Gaussian IC.
|
cs/0608025
|
Dinesh Kumar
|
Dinesh Kumar (INRIA Sophia Antipolis), Eitan Altman (INRIA Sophia
Antipolis), Jean-Marc Kelif (INRIA Sophia Antipolis)
|
User-Network Association in a WLAN-UMTS Hybrid Cell: Global & Individual
Optimality
| null | null | null | null |
cs.NI
| null |
We study optimal user-network association in an integrated 802.11 WLAN and
3G-UMTS hybrid cell. Assuming saturated resource allocation on the downlink of
WLAN and UMTS networks and a single QoS class of mobiles arriving at an average
location in the hybrid cell, we formulate the problem with two different
approaches: Global and Individual optimality. The Globally optimal association
is formulated as an SMDP (Semi Markov Decision Process) connection routing
decision problem where rewards comprise a financial gain component and an
aggregate network throughput component. The corresponding Dynamic Programming
equations are solved using the Value Iteration method and a stationary optimal
policy with neither convex nor concave type switching curve structure is
obtained. Threshold type and symmetric switching curves are observed for the
analogous homogeneous network cases. The Individual optimality is studied under
a non-cooperative dynamic game framework with expected service time of a mobile
as the decision cost criteria. It is shown that individual optimality in a
WLAN-UMTS hybrid cell, results in a threshold policy curve of descending
staircase form with increasing Poisson arrival rate of mobiles.
|
[
{
"created": "Fri, 4 Aug 2006 11:44:31 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Kumar",
"Dinesh",
"",
"INRIA Sophia Antipolis"
],
[
"Altman",
"Eitan",
"",
"INRIA Sophia\n Antipolis"
],
[
"Kelif",
"Jean-Marc",
"",
"INRIA Sophia Antipolis"
]
] |
We study optimal user-network association in an integrated 802.11 WLAN and 3G-UMTS hybrid cell. Assuming saturated resource allocation on the downlink of WLAN and UMTS networks and a single QoS class of mobiles arriving at an average location in the hybrid cell, we formulate the problem with two different approaches: Global and Individual optimality. The Globally optimal association is formulated as an SMDP (Semi Markov Decision Process) connection routing decision problem where rewards comprise a financial gain component and an aggregate network throughput component. The corresponding Dynamic Programming equations are solved using the Value Iteration method and a stationary optimal policy with neither convex nor concave type switching curve structure is obtained. Threshold type and symmetric switching curves are observed for the analogous homogeneous network cases. The Individual optimality is studied under a non-cooperative dynamic game framework with expected service time of a mobile as the decision cost criterion. It is shown that individual optimality in a WLAN-UMTS hybrid cell results in a threshold policy curve of descending staircase form with increasing Poisson arrival rate of mobiles.
|
2303.11910
|
Jiaming Zhang
|
Zhifeng Teng, Jiaming Zhang, Kailun Yang, Kunyu Peng, Hao Shi, Simon
Rei{\ss}, Ke Cao, Rainer Stiefelhagen
|
360BEV: Panoramic Semantic Mapping for Indoor Bird's-Eye View
|
Code and datasets are available at the project page:
https://jamycheung.github.io/360BEV.html. Accepted to WACV 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Seeing only a tiny part of the whole is not knowing the full circumstance.
Bird's-eye-view (BEV) perception, a process of obtaining allocentric maps from
egocentric views, is restricted when using a narrow Field of View (FoV) alone.
In this work, mapping from 360{\deg} panoramas to BEV semantics, the 360BEV
task, is established for the first time to achieve holistic representations of
indoor scenes in a top-down view. Instead of relying on narrow-FoV image
sequences, a panoramic image with depth information is sufficient to generate a
holistic BEV semantic map. To benchmark 360BEV, we present two indoor datasets,
360BEV-Matterport and 360BEV-Stanford, both of which include egocentric
panoramic images and semantic segmentation labels, as well as allocentric
semantic maps. Besides delving deep into different mapping paradigms, we
propose a dedicated solution for panoramic semantic mapping, namely 360Mapper.
Through extensive experiments, our methods achieve 44.32% and 45.78% in mIoU on
both datasets respectively, surpassing previous counterparts with gains of
+7.60% and +9.70% in mIoU. Code and datasets are available at the project page:
https://jamycheung.github.io/360BEV.html.
|
[
{
"created": "Tue, 21 Mar 2023 15:01:02 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Mar 2023 08:23:28 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Aug 2023 15:59:04 GMT",
"version": "v3"
},
{
"created": "Mon, 4 Sep 2023 18:17:27 GMT",
"version": "v4"
}
] |
2023-09-06
|
[
[
"Teng",
"Zhifeng",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Yang",
"Kailun",
""
],
[
"Peng",
"Kunyu",
""
],
[
"Shi",
"Hao",
""
],
[
"Reiß",
"Simon",
""
],
[
"Cao",
"Ke",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
Seeing only a tiny part of the whole is not knowing the full circumstance. Bird's-eye-view (BEV) perception, a process of obtaining allocentric maps from egocentric views, is restricted when using a narrow Field of View (FoV) alone. In this work, mapping from 360{\deg} panoramas to BEV semantics, the 360BEV task, is established for the first time to achieve holistic representations of indoor scenes in a top-down view. Instead of relying on narrow-FoV image sequences, a panoramic image with depth information is sufficient to generate a holistic BEV semantic map. To benchmark 360BEV, we present two indoor datasets, 360BEV-Matterport and 360BEV-Stanford, both of which include egocentric panoramic images and semantic segmentation labels, as well as allocentric semantic maps. Besides delving deep into different mapping paradigms, we propose a dedicated solution for panoramic semantic mapping, namely 360Mapper. Through extensive experiments, our methods achieve 44.32% and 45.78% in mIoU on both datasets respectively, surpassing previous counterparts with gains of +7.60% and +9.70% in mIoU. Code and datasets are available at the project page: https://jamycheung.github.io/360BEV.html.
|
1701.05804
|
Daniele Tantari
|
Paolo Barucca, Fabrizio Lillo, Piero Mazzarisi, Daniele Tantari
|
Disentangling group and link persistence in Dynamic Stochastic Block
models
|
13 pages, 8 figures; Final Section added; figures updated
| null | null | null |
cs.SI cs.LG physics.soc-ph stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the inference of a model of dynamic networks in which both
communities and links keep memory of previous network states. By considering
maximum likelihood inference from single snapshot observations of the network,
we show that link persistence makes the inference of communities harder,
decreasing the detectability threshold, while community persistence tends to
make it easier. We analytically show that communities inferred from a single
network snapshot can share a maximum overlap with the underlying communities of
a specific previous instant in time. This leads to time-lagged inference: the
identification of past communities rather than present ones. Finally we compute
the time lag and propose a corrected algorithm, the Lagged Snapshot Dynamic
(LSD) algorithm, for community detection in dynamic networks. We analytically
and numerically characterize the detectability transitions of such algorithm as
a function of the memory parameters of the model and we make a comparison with
a full dynamic inference.
|
[
{
"created": "Fri, 20 Jan 2017 14:33:45 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jul 2017 11:40:17 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Nov 2017 17:52:52 GMT",
"version": "v3"
},
{
"created": "Wed, 19 Dec 2018 17:53:42 GMT",
"version": "v4"
}
] |
2018-12-20
|
[
[
"Barucca",
"Paolo",
""
],
[
"Lillo",
"Fabrizio",
""
],
[
"Mazzarisi",
"Piero",
""
],
[
"Tantari",
"Daniele",
""
]
] |
We study the inference of a model of dynamic networks in which both communities and links keep memory of previous network states. By considering maximum likelihood inference from single snapshot observations of the network, we show that link persistence makes the inference of communities harder, decreasing the detectability threshold, while community persistence tends to make it easier. We analytically show that communities inferred from a single network snapshot can share a maximum overlap with the underlying communities of a specific previous instant in time. This leads to time-lagged inference: the identification of past communities rather than present ones. Finally we compute the time lag and propose a corrected algorithm, the Lagged Snapshot Dynamic (LSD) algorithm, for community detection in dynamic networks. We analytically and numerically characterize the detectability transitions of this algorithm as a function of the memory parameters of the model and we make a comparison with a full dynamic inference.
|
1407.4640
|
Valerii Sopin
|
Valerii Sopin
|
A new algorithm for solving the rSUM problem
| null | null | null | null |
cs.DS cs.CC cs.CG math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A deterministic algorithm is presented for solving the rSUM problem for any
natural r, with a sub-quadratic time complexity in some cases. In terms of the
amount of memory used, the obtained algorithm is of the order n log^3(n).
The idea of the obtained algorithm is to consider not the integer numbers
themselves, but rather k (a natural number) successive bits of these numbers in
the binary numeral system. It is shown that if a sum of integer numbers is
equal to zero, then the sum of the numbers represented by any k successive bits
of these numbers must be sufficiently "close" to zero. This makes it possible
to discard the numbers which, a fortiori, do not establish the solution.
|
[
{
"created": "Thu, 17 Jul 2014 11:27:30 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Aug 2014 19:02:34 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Aug 2014 00:24:38 GMT",
"version": "v3"
},
{
"created": "Mon, 18 Aug 2014 22:18:07 GMT",
"version": "v4"
},
{
"created": "Mon, 9 Feb 2015 09:59:04 GMT",
"version": "v5"
}
] |
2015-02-10
|
[
[
"Sopin",
"Valerii",
""
]
] |
A deterministic algorithm is presented for solving the rSUM problem for any natural r, with a sub-quadratic time complexity in some cases. In terms of the amount of memory used, the obtained algorithm is of the order n log^3(n). The idea of the obtained algorithm is to consider not the integer numbers themselves, but rather k (a natural number) successive bits of these numbers in the binary numeral system. It is shown that if a sum of integer numbers is equal to zero, then the sum of the numbers represented by any k successive bits of these numbers must be sufficiently "close" to zero. This makes it possible to discard the numbers which, a fortiori, do not establish the solution.
|
1301.4337
|
Hiren Joshi
|
Mahimn Pandya, Hiren Joshi, Ashish Jani
|
A Novel Digital Watermarking Algorithm using Random Matrix Image
|
4 pages, 8 figures
|
International Journal of Computer Applications, Volume 61, Number
2, pp. 18-12, 2013
|
10.5120/9900-4481
| null |
cs.MM cs.CR
|
http://creativecommons.org/licenses/by/3.0/
|
The availability of bandwidth for internet access is sufficient to communicate
digital assets. These digital assets are subject to various types of threats
[19]. As a result, protection mechanisms for digital assets are a priority in
research. The threat of current focus is unauthorized copying of digital
assets, which gives a boost to piracy. This is illegal under the copyright act,
and a robust mechanism is required to curb this kind of unauthorized copying.
To safeguard copyrighted digital assets, a robust digital watermarking
technique is needed. Existing digital watermarking techniques protect digital
assets by embedding a digital watermark into a host digital image. This
embedding does induce slight distortion in the host image, but the distortion
is usually too small to be noticed. At the same time, the embedded watermark
must be robust enough to withstand deliberate attacks. There are various
techniques of digital watermarking, but researchers are making constant efforts
to increase the robustness of the watermark image. The layered approach of
watermarking based on Huffman coding [5] can increase the robustness of the
digital watermark [11], ultimately increasing the security of copyright
protection. The proposed work is in a similar direction, wherein an RMI (Random
Matrix Image) is used in place of Huffman coding. This innovative algorithm has
considerably increased the robustness of the digital watermark while also
enhancing the security of protection.
|
[
{
"created": "Fri, 18 Jan 2013 10:16:21 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jan 2013 13:24:23 GMT",
"version": "v2"
}
] |
2013-01-23
|
[
[
"Pandya",
"Mahimn",
""
],
[
"Joshi",
"Hiren",
""
],
[
"Jani",
"Ashish",
""
]
] |
The availability of bandwidth for internet access is sufficient to communicate digital assets. These digital assets are subject to various types of threats [19]. As a result, protection mechanisms for digital assets are a priority in research. The threat of current focus is unauthorized copying of digital assets, which gives a boost to piracy. This is illegal under the copyright act, and a robust mechanism is required to curb this kind of unauthorized copying. To safeguard copyrighted digital assets, a robust digital watermarking technique is needed. Existing digital watermarking techniques protect digital assets by embedding a digital watermark into a host digital image. This embedding does induce slight distortion in the host image, but the distortion is usually too small to be noticed. At the same time, the embedded watermark must be robust enough to withstand deliberate attacks. There are various techniques of digital watermarking, but researchers are making constant efforts to increase the robustness of the watermark image. The layered approach of watermarking based on Huffman coding [5] can increase the robustness of the digital watermark [11], ultimately increasing the security of copyright protection. The proposed work is in a similar direction, wherein an RMI (Random Matrix Image) is used in place of Huffman coding. This innovative algorithm has considerably increased the robustness of the digital watermark while also enhancing the security of protection.
|
2405.02173
|
Adish Singla
|
Chao Wen, Ahana Ghosh, Jacqueline Staub, Adish Singla
|
Task Synthesis for Elementary Visual Programming in XLogoOnline
Environment
|
Accepted as a paper at the AIED'24 conference in the late-breaking
results track
| null | null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the XLogoOnline programming platform has gained popularity
among novice learners. It integrates the Logo programming language with visual
programming, providing a visual interface for learning computing concepts.
However, XLogoOnline offers only a limited set of tasks, which are inadequate
for learners to master the computing concepts, which require sufficient practice.
To address this, we introduce XLogoSyn, a novel technique for synthesizing
high-quality tasks for varying difficulty levels. Given a reference task,
XLogoSyn can generate practice tasks at varying difficulty levels that cater to
the varied needs and abilities of different learners. XLogoSyn achieves this by
combining symbolic execution and constraint satisfaction techniques. Our expert
study demonstrates the effectiveness of XLogoSyn. We have also deployed
synthesized practice tasks into XLogoOnline, highlighting the educational
benefits of these synthesized practice tasks.
|
[
{
"created": "Fri, 3 May 2024 15:22:46 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Wen",
"Chao",
""
],
[
"Ghosh",
"Ahana",
""
],
[
"Staub",
"Jacqueline",
""
],
[
"Singla",
"Adish",
""
]
] |
In recent years, the XLogoOnline programming platform has gained popularity among novice learners. It integrates the Logo programming language with visual programming, providing a visual interface for learning computing concepts. However, XLogoOnline offers only a limited set of tasks, which are inadequate for learners to master the computing concepts, which require sufficient practice. To address this, we introduce XLogoSyn, a novel technique for synthesizing high-quality tasks for varying difficulty levels. Given a reference task, XLogoSyn can generate practice tasks at varying difficulty levels that cater to the varied needs and abilities of different learners. XLogoSyn achieves this by combining symbolic execution and constraint satisfaction techniques. Our expert study demonstrates the effectiveness of XLogoSyn. We have also deployed synthesized practice tasks into XLogoOnline, highlighting the educational benefits of these synthesized practice tasks.
|
2311.07445
|
Junkai Zhou
|
Junkai Zhou, Liang Pang, Huawei Shen, Xueqi Cheng
|
Think Before You Speak: Cultivating Communication Skills of Large
Language Models via Inner Monologue
|
Accepted by NAACL 2024 Findings
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of large language models (LLMs) further improves the
capabilities of open-domain dialogue systems and can generate fluent, coherent,
and diverse responses. However, LLMs still lack a crucial ability:
communication skills. This limitation renders them more like
information-seeking tools than anthropomorphic chatbots. Communication skills
such as topic transition, proactively asking questions, concept guidance,
empathy, and summarising should often be taken into consideration to make LLMs more
anthropomorphic and proactive during the conversation, thereby increasing the
interest of users and attracting them to chat for longer. However, enabling
these communication skills in black-box LLMs remains a key challenge because
they do not have the same utterance formation mode as real people: think before
speaking. Inspired by linguistics and cognitive science, we empower LLMs with
communication skills through inner monologues. To evaluate various
communication skills, we construct a benchmark named Cskills, which can also
more comprehensively evaluate the dialogue generation ability of the model.
Experimental results show that the proposed CSIM strategy improves the backbone
models and outperforms the baselines.
|
[
{
"created": "Mon, 13 Nov 2023 16:19:42 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 08:30:30 GMT",
"version": "v2"
}
] |
2024-03-18
|
[
[
"Zhou",
"Junkai",
""
],
[
"Pang",
"Liang",
""
],
[
"Shen",
"Huawei",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses. However, LLMs still lack a crucial ability: communication skills. This limitation renders them more like information-seeking tools than anthropomorphic chatbots. Communication skills such as topic transition, proactively asking questions, concept guidance, empathy, and summarising should often be taken into consideration to make LLMs more anthropomorphic and proactive during the conversation, thereby increasing the interest of users and attracting them to chat for longer. However, enabling these communication skills in black-box LLMs remains a key challenge because they do not have the same utterance formation mode as real people: think before speaking. Inspired by linguistics and cognitive science, we empower LLMs with communication skills through inner monologues. To evaluate various communication skills, we construct a benchmark named Cskills, which can also more comprehensively evaluate the dialogue generation ability of the model. Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines.
|
cs/9901010
|
Tao Jiang
|
Tao Jiang (McMaster U.), Ming Li (U of Waterloo), Paul Vitanyi (CWI
and U of Amsterdam)
|
Average-Case Complexity of Shellsort
|
11 pages. Submitted to ICALP'99
| null | null | null |
cs.DS cs.CC
| null |
We prove a general lower bound on the average-case complexity of Shellsort:
the average number of data-movements (and comparisons) made by a $p$-pass
Shellsort for any incremental sequence is $\Omega (pn^{1 + 1/p})$ for all $p
\leq \log n$. Using similar arguments, we analyze the average-case complexity
of several other sorting algorithms.
|
[
{
"created": "Wed, 20 Jan 1999 16:32:01 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Jiang",
"Tao",
"",
"McMaster U."
],
[
"Li",
"Ming",
"",
"U of Waterloo"
],
[
"Vitanyi",
"Paul",
"",
"CWI\n and U of Amsterdam"
]
] |
We prove a general lower bound on the average-case complexity of Shellsort: the average number of data-movements (and comparisons) made by a $p$-pass Shellsort for any incremental sequence is $\Omega (pn^{1 + 1/p})$ for all $p \leq \log n$. Using similar arguments, we analyze the average-case complexity of several other sorting algorithms.
|
2301.11099
|
Runze Lei
|
Runze Lei, Pinghui Wang, Junzhou Zhao, Lin Lan, Jing Tao, Chao Deng,
Junlan Feng, Xidian Wang, Xiaohong Guan
|
Federated Learning over Coupled Graphs
|
Accepted by IEEE Transactions on Parallel and Distributed Systems
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graphs are widely used to represent the relations among entities. When one
owns the complete data, an entire graph can be easily built, and therefore
performing analysis on the graph is straightforward. However, in many
scenarios, it is impractical to centralize the data due to data privacy
concerns. An organization or party only keeps a part of the whole graph data,
i.e., graph data is isolated from different parties. Recently, Federated
Learning (FL) has been proposed to solve the data isolation issue, mainly for
Euclidean data. It is still a challenge to apply FL on graph data because
graphs contain topological information which is notorious for its non-IID
nature and is hard to partition. In this work, we propose a novel FL framework
for graph data, FedCog, to efficiently handle coupled graphs that are a kind of
distributed graph data, but widely exist in a variety of real-world
applications such as mobile carriers' communication networks and banks'
transaction networks. We theoretically prove the correctness and security of
FedCog. Experimental results demonstrate that our method FedCog significantly
outperforms traditional FL methods on graphs. Remarkably, our FedCog improves
the accuracy of node classification tasks by up to 14.7%.
|
[
{
"created": "Thu, 26 Jan 2023 13:43:26 GMT",
"version": "v1"
}
] |
2023-01-27
|
[
[
"Lei",
"Runze",
""
],
[
"Wang",
"Pinghui",
""
],
[
"Zhao",
"Junzhou",
""
],
[
"Lan",
"Lin",
""
],
[
"Tao",
"Jing",
""
],
[
"Deng",
"Chao",
""
],
[
"Feng",
"Junlan",
""
],
[
"Wang",
"Xidian",
""
],
[
"Guan",
"Xiaohong",
""
]
] |
Graphs are widely used to represent the relations among entities. When one owns the complete data, an entire graph can be easily built, and therefore performing analysis on the graph is straightforward. However, in many scenarios, it is impractical to centralize the data due to data privacy concerns. An organization or party only keeps a part of the whole graph data, i.e., graph data is isolated from different parties. Recently, Federated Learning (FL) has been proposed to solve the data isolation issue, mainly for Euclidean data. It is still a challenge to apply FL on graph data because graphs contain topological information which is notorious for its non-IID nature and is hard to partition. In this work, we propose a novel FL framework for graph data, FedCog, to efficiently handle coupled graphs that are a kind of distributed graph data, but widely exist in a variety of real-world applications such as mobile carriers' communication networks and banks' transaction networks. We theoretically prove the correctness and security of FedCog. Experimental results demonstrate that our method FedCog significantly outperforms traditional FL methods on graphs. Remarkably, our FedCog improves the accuracy of node classification tasks by up to 14.7%.
|
cs/0610174
|
Marko Samer
|
Marko Samer, Stefan Szeider
|
A Fixed-Parameter Algorithm for #SAT with Parameter Incidence Treewidth
|
9 pages, 1 figure
| null | null | null |
cs.DS cs.CC cs.LO
| null |
We present an efficient fixed-parameter algorithm for #SAT parameterized by
the incidence treewidth, i.e., the treewidth of the bipartite graph whose
vertices are the variables and clauses of the given CNF formula; a variable and
a clause are joined by an edge if and only if the variable occurs in the
clause. Our algorithm runs in time O(4^k k l N), where k denotes the incidence
treewidth, l denotes the size of a largest clause, and N denotes the number of
nodes of the tree-decomposition.
|
[
{
"created": "Tue, 31 Oct 2006 12:58:36 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Feb 2007 20:56:15 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Samer",
"Marko",
""
],
[
"Szeider",
"Stefan",
""
]
] |
We present an efficient fixed-parameter algorithm for #SAT parameterized by the incidence treewidth, i.e., the treewidth of the bipartite graph whose vertices are the variables and clauses of the given CNF formula; a variable and a clause are joined by an edge if and only if the variable occurs in the clause. Our algorithm runs in time O(4^k k l N), where k denotes the incidence treewidth, l denotes the size of a largest clause, and N denotes the number of nodes of the tree-decomposition.
|
1701.07802
|
Gleb Pogudin
|
Manuel Kauers, Gleb Pogudin
|
Bounds for Substituting Algebraic Functions into D-finite Functions
| null | null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well known that the composition of a D-finite function with an
algebraic function is again D-finite. We give the first estimates for the
orders and the degrees of annihilating operators for the compositions. We find
that the analysis of removable singularities leads to an order-degree curve
which is much more accurate than the order-degree curve obtained from the usual
linear algebra reasoning.
|
[
{
"created": "Thu, 26 Jan 2017 18:12:52 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Jan 2017 09:04:38 GMT",
"version": "v2"
},
{
"created": "Fri, 26 May 2017 09:53:44 GMT",
"version": "v3"
}
] |
2017-05-29
|
[
[
"Kauers",
"Manuel",
""
],
[
"Pogudin",
"Gleb",
""
]
] |
It is well known that the composition of a D-finite function with an algebraic function is again D-finite. We give the first estimates for the orders and the degrees of annihilating operators for the compositions. We find that the analysis of removable singularities leads to an order-degree curve which is much more accurate than the order-degree curve obtained from the usual linear algebra reasoning.
|
1811.07555
|
Yuxin Zhang
|
Yuxin Zhang, Huan Wang, Yang Luo, Lu Yu, Haoji Hu, Hangguan Shan, Tony
Q. S. Quek
|
Three Dimensional Convolutional Neural Network Pruning with
Regularization-Based Method
|
ICIP 2019
|
ICIP 2019
| null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite enjoying extensive applications in video analysis, three-dimensional
convolutional neural networks (3D CNNs) are restricted by their massive
computation and storage consumption. To solve this problem, we propose a
three-dimensional regularization-based neural network pruning method to assign
different regularization parameters to different weight groups based on their
importance to the network. Further, we analyze the redundancy and computation
cost for each layer to determine the different pruning ratios. Experiments show
that pruning based on our method can lead to 2x theoretical speedup with only
0.41% accuracy loss for 3DResNet18 and 3.28% accuracy loss for C3D. The
proposed method performs favorably against other popular methods for model
compression and acceleration.
|
[
{
"created": "Mon, 19 Nov 2018 08:40:00 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2019 03:48:09 GMT",
"version": "v2"
}
] |
2019-05-21
|
[
[
"Zhang",
"Yuxin",
""
],
[
"Wang",
"Huan",
""
],
[
"Luo",
"Yang",
""
],
[
"Yu",
"Lu",
""
],
[
"Hu",
"Haoji",
""
],
[
"Shan",
"Hangguan",
""
],
[
"Quek",
"Tony Q. S.",
""
]
] |
Despite enjoying extensive applications in video analysis, three-dimensional convolutional neural networks (3D CNNs) are restricted by their massive computation and storage consumption. To solve this problem, we propose a three-dimensional regularization-based neural network pruning method to assign different regularization parameters to different weight groups based on their importance to the network. Further, we analyze the redundancy and computation cost for each layer to determine the different pruning ratios. Experiments show that pruning based on our method can lead to 2x theoretical speedup with only 0.41% accuracy loss for 3DResNet18 and 3.28% accuracy loss for C3D. The proposed method performs favorably against other popular methods for model compression and acceleration.
|
2009.10808
|
Anuj Tiwari Dr
|
Anuj Tiwari, Arya V. Dadhania, Vijay Avin Balaji Ragunathrao, Edson R.
A. Oliveira
|
Using Machine Learning to Develop a Novel COVID-19 Vulnerability Index
(C19VI)
| null | null | null | null |
cs.LG stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
COVID19 is now one of the leading causes of death in the United States.
Systemic health, social and economic disparities have put minorities and
economically poor communities at a higher risk than others. There is an
immediate requirement to develop a reliable measure of county-level
vulnerabilities that can capture the heterogeneity of both vulnerable
communities and the COVID19 pandemic. This study reports a COVID19
Vulnerability Index (C19VI) for identification and mapping of vulnerable
counties in the United States. We proposed a Random Forest machine learning
based COVID19 vulnerability model using CDC sociodemographic and
COVID19-specific themes. An innovative COVID19 Impact Assessment algorithm was
also developed using a homogeneity and trend assessment technique to evaluate
the severity of the pandemic in all counties and to train the RF model. The
developed C19VI was statistically validated and compared with the CDC COVID19 Community
Vulnerability Index (CCVI). Finally, using C19VI along with census data, we
explored racial inequalities and economic disparities in COVID19 health
outcomes amongst different regions in the United States. Our C19VI index
indicates that 18.30% of the counties fall into the very high vulnerability class,
24.34% in high, 23.32% in moderate, 22.34% in low, and 11.68% in very low.
Furthermore, C19VI reveals that 75.57% of racial minorities and 82.84% of
economically poor communities live in very high or high COVID19 vulnerability
regions.
The proposed approach of vulnerability modeling takes advantage of both the
well-established field of statistical analysis and the fast-evolving domain of
machine learning. C19VI provides an accurate and more reliable way to measure
county level vulnerability in the United States. This index aims at helping
emergency planners to develop more effective mitigation strategies especially
for the disproportionately impacted communities.
|
[
{
"created": "Tue, 22 Sep 2020 20:48:19 GMT",
"version": "v1"
}
] |
2020-09-24
|
[
[
"Tiwari",
"Anuj",
""
],
[
"Dadhania",
"Arya V.",
""
],
[
"Ragunathrao",
"Vijay Avin Balaji",
""
],
[
"Oliveira",
"Edson R. A.",
""
]
] |
COVID19 is now one of the leading causes of death in the United States. Systemic health, social and economic disparities have put minorities and economically poor communities at a higher risk than others. There is an immediate requirement to develop a reliable measure of county-level vulnerabilities that can capture the heterogeneity of both vulnerable communities and the COVID19 pandemic. This study reports a COVID19 Vulnerability Index (C19VI) for identification and mapping of vulnerable counties in the United States. We proposed a Random Forest machine learning based COVID19 vulnerability model using CDC sociodemographic and COVID19-specific themes. An innovative COVID19 Impact Assessment algorithm was also developed using a homogeneity and trend assessment technique to evaluate the severity of the pandemic in all counties and to train the RF model. The developed C19VI was statistically validated and compared with the CDC COVID19 Community Vulnerability Index (CCVI). Finally, using C19VI along with census data, we explored racial inequalities and economic disparities in COVID19 health outcomes amongst different regions in the United States. Our C19VI index indicates that 18.30% of the counties fall into the very high vulnerability class, 24.34% in high, 23.32% in moderate, 22.34% in low, and 11.68% in very low. Furthermore, C19VI reveals that 75.57% of racial minorities and 82.84% of economically poor communities live in very high or high COVID19 vulnerability regions. The proposed approach of vulnerability modeling takes advantage of both the well-established field of statistical analysis and the fast-evolving domain of machine learning. C19VI provides an accurate and more reliable way to measure county-level vulnerability in the United States. This index aims at helping emergency planners to develop more effective mitigation strategies especially for the disproportionately impacted communities.
|
2009.00774
|
Yanchao Sun
|
Yanchao Sun, Da Huo and Furong Huang
|
Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown
Dynamics
| null |
The Ninth International Conference on Learning Representations
(ICLR 2021)
| null | null |
cs.LG cs.CR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Poisoning attacks on Reinforcement Learning (RL) systems could take advantage
of RL algorithms' vulnerabilities and cause the learning to fail. However,
prior works on poisoning RL usually either unrealistically assume the attacker
knows the underlying Markov Decision Process (MDP), or directly apply the
poisoning methods in supervised learning to RL. In this work, we build a
generic poisoning framework for online RL via a comprehensive investigation of
heterogeneous poisoning models in RL. Without any prior knowledge of the MDP,
we propose a strategic poisoning algorithm called Vulnerability-Aware
Adversarial Critic Poison (VA2C-P), which works for most policy-based deep RL
agents, closing the gap that no poisoning method exists for policy-based RL
agents. VA2C-P uses a novel metric, stability radius in RL, that measures the
vulnerability of RL algorithms. Experiments on multiple deep RL agents and
multiple environments show that our poisoning algorithm successfully prevents
agents from learning a good policy or teaches the agents to converge to a
target policy, with a limited attacking budget.
|
[
{
"created": "Wed, 2 Sep 2020 01:43:30 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Nov 2020 22:24:47 GMT",
"version": "v2"
},
{
"created": "Tue, 4 May 2021 16:09:16 GMT",
"version": "v3"
},
{
"created": "Sat, 12 Feb 2022 16:46:31 GMT",
"version": "v4"
},
{
"created": "Tue, 15 Feb 2022 22:18:13 GMT",
"version": "v5"
}
] |
2022-02-17
|
[
[
"Sun",
"Yanchao",
""
],
[
"Huo",
"Da",
""
],
[
"Huang",
"Furong",
""
]
] |
Poisoning attacks on Reinforcement Learning (RL) systems could take advantage of RL algorithms' vulnerabilities and cause the learning to fail. However, prior works on poisoning RL usually either unrealistically assume the attacker knows the underlying Markov Decision Process (MDP), or directly apply the poisoning methods in supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous poisoning models in RL. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for most policy-based deep RL agents, closing the gap that no poisoning method exists for policy-based RL agents. VA2C-P uses a novel metric, stability radius in RL, that measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy or teaches the agents to converge to a target policy, with a limited attacking budget.
|
2403.06185
|
Zheyu Wu
|
Zheyu Wu, Ya-Feng Liu, Wei-Kun Chen, Christos Masouros
|
Quantized Constant-Envelope Waveform Design for Massive MIMO DFRC
Systems
|
17 pages, 11 figures, submitted for possible publication
| null | null | null |
cs.IT eess.SP math.IT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Both dual-functional radar-communication (DFRC) and massive multiple-input
multiple-output (MIMO) have been recognized as enabling technologies for 6G
wireless networks. This paper considers the advanced waveform design for
hardware-efficient massive MIMO DFRC systems. Specifically, the transmit
waveform is subject to the quantized constant-envelope (QCE) constraint,
which facilitates the employment of low-resolution digital-to-analog converters
(DACs) and power-efficient amplifiers. The waveform design problem is
formulated as the minimization of the mean square error (MSE) between the
designed and desired beampatterns subject to the constructive interference
(CI)-based communication quality of service (QoS) constraints and the QCE
constraint. To solve the formulated problem, we first utilize the penalty
technique to transform the discrete problem into an equivalent continuous
penalty model. Then, we propose an inexact augmented Lagrangian method (ALM)
algorithm for solving the penalty model. In particular, the ALM subproblem at
each iteration is solved by a custom-built block successive upper-bound
minimization (BSUM) algorithm, which admits closed-form updates, making the
proposed inexact ALM algorithm computationally efficient. Simulation results
demonstrate the superiority of the proposed approach over existing
state-of-the-art ones. In addition, extensive simulations are conducted to
examine the impact of various system parameters on the trade-off between
communication and radar performance.
|
[
{
"created": "Sun, 10 Mar 2024 12:05:50 GMT",
"version": "v1"
}
] |
2024-03-12
|
[
[
"Wu",
"Zheyu",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Chen",
"Wei-Kun",
""
],
[
"Masouros",
"Christos",
""
]
] |
Both dual-functional radar-communication (DFRC) and massive multiple-input multiple-output (MIMO) have been recognized as enabling technologies for 6G wireless networks. This paper considers the advanced waveform design for hardware-efficient massive MIMO DFRC systems. Specifically, the transmit waveform is subject to the quantized constant-envelope (QCE) constraint, which facilitates the employment of low-resolution digital-to-analog converters (DACs) and power-efficient amplifiers. The waveform design problem is formulated as the minimization of the mean square error (MSE) between the designed and desired beampatterns subject to the constructive interference (CI)-based communication quality of service (QoS) constraints and the QCE constraint. To solve the formulated problem, we first utilize the penalty technique to transform the discrete problem into an equivalent continuous penalty model. Then, we propose an inexact augmented Lagrangian method (ALM) algorithm for solving the penalty model. In particular, the ALM subproblem at each iteration is solved by a custom-built block successive upper-bound minimization (BSUM) algorithm, which admits closed-form updates, making the proposed inexact ALM algorithm computationally efficient. Simulation results demonstrate the superiority of the proposed approach over existing state-of-the-art ones. In addition, extensive simulations are conducted to examine the impact of various system parameters on the trade-off between communication and radar performance.
|
2403.17259
|
Trung-Kien Nguyen
|
Trung-Kien Nguyen, Yuan Fang
|
Diffusion-based Negative Sampling on Graphs for Link Prediction
|
Accepted in the TheWebConf 2024
| null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Link prediction is a fundamental task for graph analysis with important
applications on the Web, such as social network analysis and recommendation
systems. Modern graph link prediction methods often employ a contrastive
approach to learn robust node representations, where negative sampling is
pivotal. Typical negative sampling methods aim to retrieve hard examples based
on either predefined heuristics or automatic adversarial approaches, which
might be inflexible or difficult to control. Furthermore, in the context of
link prediction, most previous methods sample negative nodes from existing
substructures of the graph, missing out on potentially more optimal samples in
the latent space. To address these issues, we investigate a novel strategy of
multi-level negative sampling that enables negative node generation with
flexible and controllable ``hardness'' levels from the latent space. Our
method, called Conditional Diffusion-based Multi-level Negative Sampling
(DMNS), leverages the Markov chain property of diffusion models to generate
negative nodes in multiple levels of variable hardness and reconcile them for
effective graph link prediction. We further demonstrate that DMNS follows the
sub-linear positivity principle for robust negative sampling. Extensive
experiments on several benchmark datasets demonstrate the effectiveness of
DMNS.
|
[
{
"created": "Mon, 25 Mar 2024 23:07:31 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Nguyen",
"Trung-Kien",
""
],
[
"Fang",
"Yuan",
""
]
] |
Link prediction is a fundamental task for graph analysis with important applications on the Web, such as social network analysis and recommendation systems. Modern graph link prediction methods often employ a contrastive approach to learn robust node representations, where negative sampling is pivotal. Typical negative sampling methods aim to retrieve hard examples based on either predefined heuristics or automatic adversarial approaches, which might be inflexible or difficult to control. Furthermore, in the context of link prediction, most previous methods sample negative nodes from existing substructures of the graph, missing out on potentially more optimal samples in the latent space. To address these issues, we investigate a novel strategy of multi-level negative sampling that enables negative node generation with flexible and controllable ``hardness'' levels from the latent space. Our method, called Conditional Diffusion-based Multi-level Negative Sampling (DMNS), leverages the Markov chain property of diffusion models to generate negative nodes in multiple levels of variable hardness and reconcile them for effective graph link prediction. We further demonstrate that DMNS follows the sub-linear positivity principle for robust negative sampling. Extensive experiments on several benchmark datasets demonstrate the effectiveness of DMNS.
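The "multiple hardness levels via the diffusion Markov chain" idea can be sketched with standard DDPM forward-noising machinery: samples drawn from intermediate steps $q(x_t \mid x_0)$ of a node embedding $x_0$ sit at controllable distances from the positive, so a small $t$ yields a hard negative and a large $t$ an easy one. Everything below (`betas`, the closed-form noising shortcut) is generic diffusion boilerplate assumed for illustration, not the authors' conditional model.

```python
import numpy as np

def multilevel_negatives(x0, timesteps, betas, rng):
    """Toy multi-level negative generation: draw x_t ~ q(x_t | x_0) at several
    diffusion timesteps t. Small t -> close to x0 (hard negative); large t ->
    near pure noise (easy negative). Sketch only, not DMNS itself."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)  # cumulative signal-retention schedule
    negatives = []
    for t in timesteps:
        eps = rng.standard_normal(x0.shape)
        # closed-form forward noising: sqrt(a_bar_t) x0 + sqrt(1 - a_bar_t) eps
        x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
        negatives.append(x_t)
    return negatives

rng = np.random.default_rng(0)
x0 = np.ones(8)                          # hypothetical positive-node embedding
betas = np.linspace(1e-4, 0.2, 100)      # assumed linear noise schedule
negs = multilevel_negatives(x0, [5, 50, 95], betas, rng)
```

Each entry of `negs` is one negative at a different hardness level; a contrastive link-prediction loss would then reconcile all levels, as the abstract describes.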
|
2103.07371
|
Huizi Mao
|
Huizi Mao, Sibo Zhu, Song Han, William J. Dally
|
PatchNet -- Short-range Template Matching for Efficient Video Processing
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Object recognition is a fundamental problem in many video processing tasks;
accurately locating seen objects at low computation cost paves the way for
on-device video recognition. We propose PatchNet, an efficient convolutional
neural network to match objects in adjacent video frames. It learns the
patchwise correlation features instead of pixel features. PatchNet is very
compact, running at just 58MFLOPs, $5\times$ simpler than MobileNetV2. We
demonstrate its application on two tasks, video object detection and visual
object tracking. On ImageNet VID, PatchNet reduces the flops of R-FCN
ResNet-101 by 5x and EfficientDet-D0 by 3.4x with less than 1% mAP loss. On
OTB2015, PatchNet reduces SiamFC and SiamRPN by 2.5x with no accuracy loss.
Experiments on Jetson Nano further demonstrate 2.8x to 4.3x speed-ups
associated with flops reduction. Code is open sourced at
https://github.com/RalphMao/PatchNet.
|
[
{
"created": "Wed, 10 Mar 2021 20:56:07 GMT",
"version": "v1"
}
] |
2021-03-15
|
[
[
"Mao",
"Huizi",
""
],
[
"Zhu",
"Sibo",
""
],
[
"Han",
"Song",
""
],
[
"Dally",
"William J.",
""
]
] |
Object recognition is a fundamental problem in many video processing tasks; accurately locating seen objects at low computation cost paves the way for on-device video recognition. We propose PatchNet, an efficient convolutional neural network to match objects in adjacent video frames. It learns the patchwise correlation features instead of pixel features. PatchNet is very compact, running at just 58MFLOPs, $5\times$ simpler than MobileNetV2. We demonstrate its application on two tasks, video object detection and visual object tracking. On ImageNet VID, PatchNet reduces the flops of R-FCN ResNet-101 by 5x and EfficientDet-D0 by 3.4x with less than 1% mAP loss. On OTB2015, PatchNet reduces SiamFC and SiamRPN by 2.5x with no accuracy loss. Experiments on Jetson Nano further demonstrate 2.8x to 4.3x speed-ups associated with flops reduction. Code is open sourced at https://github.com/RalphMao/PatchNet.
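The short-range template-matching idea behind PatchNet can be illustrated with a classic (non-learned) baseline: slide a patch from the previous frame over the next frame and score each location by normalized cross-correlation. PatchNet replaces this with learned patchwise correlation features in a compact CNN; the brute-force matcher below only conveys the underlying matching problem.

```python
import numpy as np

def match_patch(prev_patch, frame, stride=1):
    """Brute-force normalized cross-correlation matching: find where
    `prev_patch` (cut from the previous frame) best matches in `frame`.
    Returns the top-left (row, col) of the best window."""
    ph, pw = prev_patch.shape
    fh, fw = frame.shape
    p = (prev_patch - prev_patch.mean()) / (prev_patch.std() + 1e-8)
    best, best_pos = -np.inf, (0, 0)
    for i in range(0, fh - ph + 1, stride):
        for j in range(0, fw - pw + 1, stride):
            w = frame[i:i + ph, j:j + pw]
            wn = (w - w.mean()) / (w.std() + 1e-8)
            score = float((p * wn).mean())  # 1.0 only for an exact match
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

frame = np.zeros((16, 16))
frame[5:9, 7:11] = np.arange(16).reshape(4, 4)  # synthetic object
patch = frame[5:9, 7:11].copy()
pos = match_patch(patch, frame)  # recovers (5, 7)
```

In a tracking loop one would restrict the search to a small window around the previous location, which is exactly the short-range regime where PatchNet's 58 MFLOPs budget pays off.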
|
2110.06875
|
Ildik\'o Schlotter
|
Ildik\'o Schlotter, P\'eter Bir\'o, Tam\'as Fleiner
|
The core of housing markets from an agent's perspective: Is it worth
sprucing up your home?
|
33 pages
| null | null | null |
cs.GT cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We study housing markets as introduced by Shapley and Scarf (1974). We
investigate the computational complexity of various questions regarding the
situation of an agent $a$ in a housing market $H$: we show that it is
$\mathsf{NP}$-hard to find an allocation in the core of $H$ where (i) $a$
receives a certain house, (ii) $a$ does not receive a certain house, or (iii)
$a$ receives a house other than her own. We prove that the core of housing
markets respects improvement in the following sense: given an allocation in the
core of $H$ where agent $a$ receives a house $h$, if the value of the house
owned by $a$ increases, then the resulting housing market admits an allocation
in its core in which $a$ receives either $h$, or a house that $a$ prefers to
$h$; moreover, such an allocation can be found efficiently. We further show an
analogous result in the Stable Roommates setting by proving that stable
matchings in a one-sided market also respect improvement.
|
[
{
"created": "Wed, 13 Oct 2021 17:11:06 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Apr 2023 08:29:11 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jan 2024 07:45:52 GMT",
"version": "v3"
}
] |
2024-01-11
|
[
[
"Schlotter",
"Ildikó",
""
],
[
"Biró",
"Péter",
""
],
[
"Fleiner",
"Tamás",
""
]
] |
We study housing markets as introduced by Shapley and Scarf (1974). We investigate the computational complexity of various questions regarding the situation of an agent $a$ in a housing market $H$: we show that it is $\mathsf{NP}$-hard to find an allocation in the core of $H$ where (i) $a$ receives a certain house, (ii) $a$ does not receive a certain house, or (iii) $a$ receives a house other than her own. We prove that the core of housing markets respects improvement in the following sense: given an allocation in the core of $H$ where agent $a$ receives a house $h$, if the value of the house owned by $a$ increases, then the resulting housing market admits an allocation in its core in which $a$ receives either $h$, or a house that $a$ prefers to $h$; moreover, such an allocation can be found efficiently. We further show an analogous result in the Stable Roommates setting by proving that stable matchings in a one-sided market also respect improvement.
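For context on the object the abstract studies: in a Shapley–Scarf housing market with strict preferences, the core allocation is unique and computable by Gale's Top Trading Cycles (TTC). The sketch below implements textbook TTC (agent $i$ initially owns house $i$); it illustrates what "an allocation in the core" means, not the paper's NP-hardness constructions or the respects-improvement proof.

```python
def top_trading_cycles(prefs):
    """Gale's Top Trading Cycles for a Shapley-Scarf housing market.
    prefs[i] lists houses in agent i's strict preference order; agent i
    initially owns house i. Returns the unique core allocation."""
    n = len(prefs)
    owner = {h: h for h in range(n)}  # house -> current owner
    alloc = {}
    remaining = set(range(n))
    while remaining:
        # each remaining agent points at the owner of their best remaining house
        point = {}
        for a in remaining:
            best = next(h for h in prefs[a] if h in remaining)
            point[a] = (best, owner[best])
        # walk pointers until a node repeats: that node lies on a cycle
        a = next(iter(remaining))
        seen = set()
        while a not in seen:
            seen.add(a)
            a = point[a][1]
        # collect the cycle and let its agents trade
        cycle = [a]
        cur = point[a][1]
        while cur != a:
            cycle.append(cur)
            cur = point[cur][1]
        for ag in cycle:
            alloc[ag] = point[ag][0]
            remaining.discard(ag)
    return alloc

# agents 0 and 1 each prefer the other's house, so they trade; agent 2 keeps hers
alloc = top_trading_cycles([[1, 0, 2], [0, 1, 2], [2, 0, 1]])
```

The paper's hardness results say that, in contrast to computing *some* core allocation (easy, as above), pinning down which house a given agent receives in the core is NP-hard.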
|
2005.04987
|
Daniele Silvestro
|
Daniele Silvestro and Tobias Andermann
|
Prior choice affects ability of Bayesian neural networks to identify
unknowns
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Bayesian neural networks (BNNs) are a powerful tool, though
computationally demanding, to perform parameter estimation while jointly
estimating uncertainty around predictions. BNNs are typically implemented using
arbitrary normally distributed priors on the model parameters. Here,
we explore the effects of different prior distributions on classification tasks
in BNNs and evaluate the evidence supporting the predictions based on posterior
probabilities approximated by Markov Chain Monte Carlo sampling and by
computing Bayes factors. We show that the choice of priors has a substantial
impact on the ability of the model to confidently assign data to the correct
class (true positive rates). Prior choice also significantly affects the
ability of a BNN to identify out-of-distribution instances as unknown (false
positive rates). When comparing our results against neural networks (NN) with
Monte Carlo dropout we found that BNNs generally outperform NNs. Finally, in
our tests we did not find a single best choice as prior distribution. Instead,
each dataset yielded the best results under a different prior, indicating that
testing alternative options can improve the performance of BNNs.
|
[
{
"created": "Mon, 11 May 2020 10:32:47 GMT",
"version": "v1"
}
] |
2020-05-12
|
[
[
"Silvestro",
"Daniele",
""
],
[
"Andermann",
"Tobias",
""
]
] |
Deep Bayesian neural networks (BNNs) are a powerful tool, though computationally demanding, to perform parameter estimation while jointly estimating uncertainty around predictions. BNNs are typically implemented using arbitrary normally distributed priors on the model parameters. Here, we explore the effects of different prior distributions on classification tasks in BNNs and evaluate the evidence supporting the predictions based on posterior probabilities approximated by Markov Chain Monte Carlo sampling and by computing Bayes factors. We show that the choice of priors has a substantial impact on the ability of the model to confidently assign data to the correct class (true positive rates). Prior choice also significantly affects the ability of a BNN to identify out-of-distribution instances as unknown (false positive rates). When comparing our results against neural networks (NN) with Monte Carlo dropout we found that BNNs generally outperform NNs. Finally, in our tests we did not find a single best choice as prior distribution. Instead, each dataset yielded the best results under a different prior, indicating that testing alternative options can improve the performance of BNNs.
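The core point — that the scale of a normal prior materially changes the inferred parameters — already shows up in the simplest Bayesian model. The sketch below uses MAP estimation in Bayesian linear regression (equivalent to ridge with $\lambda = \sigma^2/\text{prior\_std}^2$) as a minimal stand-in for the paper's deep BNNs with MCMC; the data, noise variance, and prior scales are all assumed for illustration.

```python
import numpy as np

def map_weights(X, y, prior_std):
    """MAP estimate for Bayesian linear regression with a N(0, prior_std^2)
    prior on each weight. Tighter prior -> stronger shrinkage toward zero.
    Minimal illustration, not the paper's BNN + MCMC setup."""
    noise_var = 1.0  # assumed observation-noise variance
    lam = noise_var / prior_std**2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(50)

w_wide = map_weights(X, y, prior_std=10.0)  # weak prior: near least squares
w_tight = map_weights(X, y, prior_std=0.1)  # strong prior: shrunk toward zero
```

The same mechanism, propagated through many layers and a full posterior, is what drives the true/false-positive-rate differences the abstract reports, and why no single prior scale wins on every dataset.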
|