Dataset columns: id string (9-10 chars); submitter string (1-64 chars, nullable); authors string (4-20.7k chars); title string (4-246 chars); comments string (1-523 chars, nullable); journal-ref string (4-404 chars, nullable); doi string (11-153 chars, nullable); report-no string (2-254 chars, nullable); categories string (5-98 chars); license string (9 classes); orig_abstract string (14-3.35k chars); versions list (1-60 entries); update_date string (10 chars); authors_parsed list (1-1.35k entries); abstract string (11-3.34k chars).

| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.06276
|
Xiaodong Cai
|
Anchen Sun, Elizabeth J. Franzmann, Zhibin Chen, Xiaodong Cai
|
Contrastive Learning for Predicting Cancer Prognosis Using Gene
Expression Values
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent advancements in image classification have demonstrated that
contrastive learning (CL) can aid in further learning tasks by acquiring good
feature representation from a limited number of data samples. In this paper, we
applied CL to tumor transcriptomes and clinical data to learn feature
representations in a low-dimensional space. We then utilized these learned
features to train a classifier to categorize tumors into a high- or low-risk
group of recurrence. Using data from The Cancer Genome Atlas (TCGA), we
demonstrated that CL can significantly improve classification accuracy.
Specifically, our CL-based classifiers achieved an area under the receiver
operating characteristic curve (AUC) greater than 0.8 for 14 types of cancer,
and an AUC greater than 0.9 for 2 types of cancer. We also developed CL-based
Cox (CLCox) models for predicting cancer prognosis. Our CLCox models trained
with the TCGA data outperformed existing methods significantly in predicting
the prognosis of 19 types of cancer under consideration. The performance of
CLCox models and CL-based classifiers trained with TCGA lung and prostate
cancer data was validated using data from two independent cohorts. We also
show that the CLCox model trained with the whole transcriptome significantly
outperforms the Cox model trained with the 21 genes of Oncotype DX, a test in
clinical use for breast cancer patients. CL-based classifiers and CLCox models
for 19 types of cancer are publicly available and can be used to predict cancer
prognosis using the RNA-seq transcriptome of an individual tumor. Python code
for model training and testing is also publicly accessible and can be applied
to train new CL-based models using gene expression data of tumors.
|
[
{
"created": "Fri, 9 Jun 2023 22:03:18 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Sep 2023 03:52:55 GMT",
"version": "v2"
},
{
"created": "Thu, 9 May 2024 04:38:57 GMT",
"version": "v3"
},
{
"created": "Thu, 16 May 2024 22:31:37 GMT",
"version": "v4"
}
] |
2024-05-20
|
[
[
"Sun",
"Anchen",
""
],
[
"Franzmann",
"Elizabeth J.",
""
],
[
"Chen",
"Zhibin",
""
],
[
"Cai",
"Xiaodong",
""
]
] |
Recent advancements in image classification have demonstrated that contrastive learning (CL) can aid in further learning tasks by acquiring good feature representation from a limited number of data samples. In this paper, we applied CL to tumor transcriptomes and clinical data to learn feature representations in a low-dimensional space. We then utilized these learned features to train a classifier to categorize tumors into a high- or low-risk group of recurrence. Using data from The Cancer Genome Atlas (TCGA), we demonstrated that CL can significantly improve classification accuracy. Specifically, our CL-based classifiers achieved an area under the receiver operating characteristic curve (AUC) greater than 0.8 for 14 types of cancer, and an AUC greater than 0.9 for 2 types of cancer. We also developed CL-based Cox (CLCox) models for predicting cancer prognosis. Our CLCox models trained with the TCGA data outperformed existing methods significantly in predicting the prognosis of 19 types of cancer under consideration. The performance of CLCox models and CL-based classifiers trained with TCGA lung and prostate cancer data was validated using data from two independent cohorts. We also show that the CLCox model trained with the whole transcriptome significantly outperforms the Cox model trained with the 21 genes of Oncotype DX, a test in clinical use for breast cancer patients. CL-based classifiers and CLCox models for 19 types of cancer are publicly available and can be used to predict cancer prognosis using the RNA-seq transcriptome of an individual tumor. Python code for model training and testing is also publicly accessible and can be applied to train new CL-based models using gene expression data of tumors.
|
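The abstract above does not specify the exact contrastive objective used; a minimal numpy sketch of a SimCLR-style NT-Xent loss (a common choice for learning such feature representations, assumed here for illustration) shows the core idea — embeddings of two views of the same sample are pulled together relative to all other pairs:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.
    z1, z2: (n, d) embeddings of two views of the same n samples."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # positive index per row
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
loss_aligned = nt_xent(a, a + 0.01 * rng.normal(size=a.shape))  # matched views
loss_random = nt_xent(a, rng.normal(size=a.shape))              # unrelated views
assert loss_aligned < loss_random
```

The loss is lower when paired views truly correspond, which is what drives the encoder toward useful low-dimensional features for the downstream classifier or Cox model.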
2304.14778
|
Arvid Becker
|
Arvid Becker, Pedro Cabalar, Mart\'in Di\'eguez, Torsten Schaub, Anna
Schuhmann
|
Metric Temporal Equilibrium Logic over Timed Traces
|
Under consideration in Theory and Practice of Logic Programming
(TPLP)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In temporal extensions of Answer Set Programming (ASP) based on linear-time,
the behavior of dynamic systems is captured by sequences of states. While this
representation reflects their relative order, it abstracts away the specific
times associated with each state. However, timing constraints are important in
many applications like, for instance, when planning and scheduling go hand in
hand. We address this by developing a metric extension of linear-time temporal
equilibrium logic, in which temporal operators are constrained by intervals
over natural numbers. The resulting Metric Equilibrium Logic provides the
foundation of an ASP-based approach for specifying qualitative and quantitative
dynamic constraints. To this end, we define a translation of metric formulas
into monadic first-order formulas and give a correspondence between their
models in Metric Equilibrium Logic and Monadic Quantified Equilibrium Logic,
respectively. Interestingly, our translation provides a blueprint for an
implementation in terms of ASP modulo difference constraints.
|
[
{
"created": "Fri, 28 Apr 2023 11:39:49 GMT",
"version": "v1"
},
{
"created": "Fri, 3 May 2024 12:40:35 GMT",
"version": "v2"
}
] |
2024-05-06
|
[
[
"Becker",
"Arvid",
""
],
[
"Cabalar",
"Pedro",
""
],
[
"Diéguez",
"Martín",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Schuhmann",
"Anna",
""
]
] |
In temporal extensions of Answer Set Programming (ASP) based on linear-time, the behavior of dynamic systems is captured by sequences of states. While this representation reflects their relative order, it abstracts away the specific times associated with each state. However, timing constraints are important in many applications like, for instance, when planning and scheduling go hand in hand. We address this by developing a metric extension of linear-time temporal equilibrium logic, in which temporal operators are constrained by intervals over natural numbers. The resulting Metric Equilibrium Logic provides the foundation of an ASP-based approach for specifying qualitative and quantitative dynamic constraints. To this end, we define a translation of metric formulas into monadic first-order formulas and give a correspondence between their models in Metric Equilibrium Logic and Monadic Quantified Equilibrium Logic, respectively. Interestingly, our translation provides a blueprint for an implementation in terms of ASP modulo difference constraints.
|
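To make "temporal operators constrained by intervals over natural numbers" concrete, here is a small illustrative evaluator for a metric "eventually within [lo, hi]" operator over a timed trace; this sketches only the classical-semantics intuition, not the equilibrium-logic machinery of the paper:

```python
def eventually_within(trace, atom, lo, hi):
    """Check <>_[lo,hi] atom at the initial state of a timed trace:
    does `atom` hold at some state whose time t satisfies lo <= t - t0 <= hi?
    trace: list of (timestamp, set_of_atoms) pairs in increasing time order."""
    t0 = trace[0][0]
    return any(lo <= t - t0 <= hi and atom in atoms for t, atoms in trace)

trace = [(0, {"idle"}), (2, {"busy"}), (5, {"done"})]
assert eventually_within(trace, "done", 3, 6)      # "done" occurs at t=5, inside [3,6]
assert not eventually_within(trace, "done", 0, 4)  # no "done" state inside [0,4]
```

The interval annotation is what distinguishes metric operators from plain linear-time ones: the same trace satisfies or violates the formula depending only on the timestamps.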
2404.13421
|
Michael Duchesne
|
Michael Duchesne, Kaiwen Zhang, Chamseddine Talhi
|
MultiConfederated Learning: Inclusive Non-IID Data handling with
Decentralized Federated Learning
| null |
Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing,
SAC '24, 1587-1595, April 2024. ACM
|
10.1145/3605098.3636000
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning (FL) has emerged as a prominent privacy-preserving
technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by remote devices which own the
data. Thus, FL enables the training of powerful global models using
crowd-sourced data from a large number of learners, without compromising their
privacy. However, the aggregating server is a single point of failure when
generating the global model. Moreover, the performance of the model suffers
when the data is not independent and identically distributed (non-IID data) on
all remote devices. This leads to vastly different models being aggregated,
which can reduce the performance by as much as 50% in certain scenarios.
In this paper, we seek to address the aforementioned issues while retaining
the benefits of FL. We propose MultiConfederated Learning: a decentralized FL
framework which is designed to handle non-IID data. Unlike traditional FL,
MultiConfederated Learning will maintain multiple models in parallel (instead
of a single global model) to help with convergence when the data is non-IID.
With the help of transfer learning, learners can converge to fewer models. In
order to increase adaptability, learners are allowed to choose which updates to
aggregate from their peers.
|
[
{
"created": "Sat, 20 Apr 2024 16:38:26 GMT",
"version": "v1"
}
] |
2024-04-23
|
[
[
"Duchesne",
"Michael",
""
],
[
"Zhang",
"Kaiwen",
""
],
[
"Talhi",
"Chamseddine",
""
]
] |
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning. FL operates by aggregating models trained by remote devices which own the data. Thus, FL enables the training of powerful global models using crowd-sourced data from a large number of learners, without compromising their privacy. However, the aggregating server is a single point of failure when generating the global model. Moreover, the performance of the model suffers when the data is not independent and identically distributed (non-IID data) on all remote devices. This leads to vastly different models being aggregated, which can reduce the performance by as much as 50% in certain scenarios. In this paper, we seek to address the aforementioned issues while retaining the benefits of FL. We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data. Unlike traditional FL, MultiConfederated Learning will maintain multiple models in parallel (instead of a single global model) to help with convergence when the data is non-IID. With the help of transfer learning, learners can converge to fewer models. In order to increase adaptability, learners are allowed to choose which updates to aggregate from their peers.
|
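The key mechanism — each learner choosing which peer updates to aggregate — can be sketched in a few lines. The selection rule below (keep the k updates closest to the local model) is a hypothetical stand-in for whatever criterion the paper actually uses; the averaging step is standard FedAvg:

```python
import numpy as np

def fedavg(updates):
    """Plain FedAvg step: elementwise mean of model parameter vectors."""
    return np.mean(np.asarray(updates, dtype=float), axis=0)

def select_peers(my_model, peer_updates, k=2):
    """Hypothetical selection rule: aggregate only the k peer updates closest
    to the local model, so dissimilar (non-IID) updates are excluded."""
    dists = [np.linalg.norm(u - my_model) for u in peer_updates]
    keep = np.argsort(dists)[:k]
    return [peer_updates[i] for i in keep]

local = np.array([1.0, 1.0])
peers = [np.array([1.1, 0.9]), np.array([0.9, 1.1]), np.array([5.0, -4.0])]
chosen = select_peers(local, peers, k=2)       # outlier peer is dropped
new_model = fedavg([local] + chosen)
assert np.linalg.norm(new_model - local) < 0.2
```

Because each learner filters peers independently, several distinct models can coexist in the network, which is the "multiple models in parallel" behavior described above.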
2211.08234
|
Jun Jin
|
Jun Jin, Hongming Zhang, Jun Luo
|
Build generally reusable agent-environment interaction models
|
Accepted in Foundation Models for Decision Making Workshop at Neural
Information Processing Systems, 2022. Slides:
https://docs.google.com/presentation/d/1PMS2xwTcztP2pPk1bsjqkQscI39Wy5tpmoE5-_ZC7Fo/edit?usp=sharing
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper tackles the problem of how to pre-train a model and make it a
generally reusable backbone for downstream task learning. In pre-training, we
propose a method that builds an agent-environment interaction model by learning
domain-invariant successor features from the agent's vast experiences covering
various tasks, then discretizes them into behavior prototypes, which results in
an embodied set structure. To make the model generally reusable for downstream
task learning, we propose (1) embodied feature projection that retains previous
knowledge by projecting the new task's observation-action pair to the embodied
set structure and (2) projected Bellman updates which add learning plasticity
for the new task setting. We provide preliminary results that show downstream
task learning based on a pre-trained embodied set structure can handle unseen
changes in task objectives, environmental dynamics and sensor modalities.
|
[
{
"created": "Sun, 13 Nov 2022 07:33:14 GMT",
"version": "v1"
}
] |
2022-11-16
|
[
[
"Jin",
"Jun",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Luo",
"Jun",
""
]
] |
This paper tackles the problem of how to pre-train a model and make it a generally reusable backbone for downstream task learning. In pre-training, we propose a method that builds an agent-environment interaction model by learning domain-invariant successor features from the agent's vast experiences covering various tasks, then discretizes them into behavior prototypes, which results in an embodied set structure. To make the model generally reusable for downstream task learning, we propose (1) embodied feature projection that retains previous knowledge by projecting the new task's observation-action pair to the embodied set structure and (2) projected Bellman updates which add learning plasticity for the new task setting. We provide preliminary results that show downstream task learning based on a pre-trained embodied set structure can handle unseen changes in task objectives, environmental dynamics and sensor modalities.
|
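Successor features, the building block named above, satisfy a Bellman equation over feature vectors rather than scalar rewards. A minimal tabular sketch (the paper's domain-invariant, deep version is more involved) shows the TD update converging to its fixed point on a self-looping state:

```python
import numpy as np

def sf_td_update(psi, phi, s, a, s_next, a_next, gamma=0.9, lr=0.5):
    """One TD update of tabular successor features:
    psi(s,a) <- psi(s,a) + lr * (phi(s) + gamma * psi(s',a') - psi(s,a))."""
    target = phi[s] + gamma * psi[s_next, a_next]
    psi[s, a] += lr * (target - psi[s, a])

n_states, n_actions, d = 3, 2, 3
phi = np.eye(n_states)                    # one-hot state features
psi = np.zeros((n_states, n_actions, d))
for _ in range(200):                      # self-loop at state 0 under action 0
    sf_td_update(psi, phi, 0, 0, 0, 0)
# fixed point on the looping component: phi(0) / (1 - gamma) = 10
assert abs(psi[0, 0, 0] - 10.0) < 1e-3
```

Because psi accumulates expected discounted features instead of rewards, the same psi can be reused for any new task whose reward is linear in phi, which is what makes successor features a natural pre-training target.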
2407.01490
|
Lu\'isa Shimabucoro
|
Lu\'isa Shimabucoro, Sebastian Ruder, Julia Kreutzer, Marzieh Fadaee
and Sara Hooker
|
LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable
Objectives
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The widespread adoption of synthetic data raises new questions about how
models generating the data can influence other large language models (LLMs) via
distilled data. To start, our work exhaustively characterizes the impact of
passive inheritance of model properties by systematically studying the
consequences of synthetic data integration. We provide one of the most
comprehensive studies to-date of how the source of synthetic data shapes
models' internal biases, calibration and generations' textual attributes and
preferences. We find that models are surprisingly sensitive towards certain
attributes even when the synthetic data prompts appear "neutral", which invites
the question of whether this sensitivity can be exploited for good.
Our findings invite the question: can we explicitly steer the models towards
the properties we want at test time by exploiting the data generation process?
This would have historically been considered infeasible due to the cost of
collecting data with a specific characteristic or objective in mind. However,
improvement in the quality of synthetic data, as well as a shift towards
general-purpose models designed to follow a diverse range of instructions, means
this question is timely. We propose active inheritance as a term to describe
intentionally constraining synthetic data according to a non-differentiable
objective. We demonstrate how active inheritance can steer the generation
profiles of models towards desirable non-differentiable attributes, e.g. high
lexical diversity or low toxicity.
|
[
{
"created": "Mon, 1 Jul 2024 17:26:21 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2024 10:45:21 GMT",
"version": "v2"
}
] |
2024-07-22
|
[
[
"Shimabucoro",
"Luísa",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Kreutzer",
"Julia",
""
],
[
"Fadaee",
"Marzieh",
""
],
[
"Hooker",
"Sara",
""
]
] |
The widespread adoption of synthetic data raises new questions about how models generating the data can influence other large language models (LLMs) via distilled data. To start, our work exhaustively characterizes the impact of passive inheritance of model properties by systematically studying the consequences of synthetic data integration. We provide one of the most comprehensive studies to-date of how the source of synthetic data shapes models' internal biases, calibration and generations' textual attributes and preferences. We find that models are surprisingly sensitive towards certain attributes even when the synthetic data prompts appear "neutral", which invites the question of whether this sensitivity can be exploited for good. Our findings invite the question: can we explicitly steer the models towards the properties we want at test time by exploiting the data generation process? This would have historically been considered infeasible due to the cost of collecting data with a specific characteristic or objective in mind. However, improvement in the quality of synthetic data, as well as a shift towards general-purpose models designed to follow a diverse range of instructions, means this question is timely. We propose active inheritance as a term to describe intentionally constraining synthetic data according to a non-differentiable objective. We demonstrate how active inheritance can steer the generation profiles of models towards desirable non-differentiable attributes, e.g. high lexical diversity or low toxicity.
|
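Steering toward a non-differentiable objective reduces, at its simplest, to sample-and-filter: generate several candidates and keep those scoring highest under the target metric. The sketch below uses type-token ratio as a lexical-diversity proxy and hard-coded candidate strings in place of real LLM generations, purely for illustration:

```python
def type_token_ratio(text):
    """Non-differentiable lexical-diversity proxy: distinct words / total words."""
    words = text.lower().split()
    return len(set(words)) / len(words)

def active_inheritance(candidates, metric, keep=1):
    """Steer a synthetic dataset toward a target attribute by sampling several
    candidate generations per prompt and keeping the highest-scoring ones."""
    return sorted(candidates, key=metric, reverse=True)[:keep]

# Hypothetical candidate generations for one prompt:
cands = [
    "the cat sat on the mat the cat sat",
    "a curious tabby perched quietly on a woven mat",
]
best = active_inheritance(cands, type_token_ratio)
assert best == [cands[1]]   # the more lexically diverse sample survives
```

No gradients flow through the metric; only the selection step encodes the objective, which is why this works for attributes like toxicity or diversity that have no differentiable loss.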
1803.07519
|
Minhui Xue
|
Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li,
Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang
|
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
|
The 33rd IEEE/ACM International Conference on Automated Software
Engineering (ASE 2018)
|
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning
Systems. In Proceedings of the 33rd ACM/IEEE International Conference on
Automated Software Engineering (ASE 18), September 3-7, 2018, Montpellier,
France
|
10.1145/3238147.3238202
| null |
cs.SE cs.CR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning (DL) defines a new data-driven programming paradigm that
constructs the internal system logic of a crafted neural network through a set
of training data. We have seen wide adoption of DL in many safety-critical
scenarios. However, a plethora of studies have shown that the state-of-the-art
DL systems suffer from various vulnerabilities which can lead to severe
consequences when applied to real-world applications. Currently, the testing
adequacy of a DL system is usually measured by the accuracy of test data.
Considering the limitation of accessible high quality test data, good accuracy
performance on test data can hardly provide confidence to the testing adequacy
and generality of DL systems. Unlike traditional software systems that have
clear and controllable logic and functionality, the lack of interpretability in
a DL system makes system analysis and defect detection difficult, which could
potentially hinder its real-world deployment. In this paper, we propose
DeepGauge, a set of multi-granularity testing criteria for DL systems, which
aims at rendering a multi-faceted portrayal of the testbed. The in-depth
evaluation of our proposed testing criteria is demonstrated on two well-known
datasets, five DL systems, and with four state-of-the-art adversarial attack
techniques against DL. The potential usefulness of DeepGauge sheds light on the
construction of more generic and robust DL systems.
|
[
{
"created": "Tue, 20 Mar 2018 16:52:12 GMT",
"version": "v1"
},
{
"created": "Tue, 15 May 2018 05:02:54 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Jul 2018 07:47:27 GMT",
"version": "v3"
},
{
"created": "Tue, 14 Aug 2018 23:07:39 GMT",
"version": "v4"
}
] |
2018-08-16
|
[
[
"Ma",
"Lei",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Zhang",
"Fuyuan",
""
],
[
"Sun",
"Jiyuan",
""
],
[
"Xue",
"Minhui",
""
],
[
"Li",
"Bo",
""
],
[
"Chen",
"Chunyang",
""
],
[
"Su",
"Ting",
""
],
[
"Li",
"Li",
""
],
[
"Liu",
"Yang",
""
],
[
"Zhao",
"Jianjun",
""
],
[
"Wang",
"Yadong",
""
]
] |
Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.
|
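DeepGauge's multi-granularity criteria generalize the basic neuron-coverage idea: measure how much of a network's internal state a test suite exercises. The sketch below computes only that simplest baseline (fraction of neurons activated by at least one test input) on a random ReLU layer, not the paper's k-multisection or boundary variants:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Basic neuron coverage: fraction of neurons whose activation exceeds
    `threshold` for at least one test input. activations: (n_inputs, n_neurons)."""
    return (activations > threshold).any(axis=0).mean()

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))            # 4 test inputs, 8 input features
w = rng.normal(size=(8, 6))            # a single random linear layer
acts = np.maximum(x @ w, 0.0)          # ReLU activations, shape (4, 6)
cov = neuron_coverage(acts)
assert 0.0 <= cov <= 1.0
```

A test suite that leaves coverage low is exercising only a narrow slice of the network's behavior, which is exactly the testing-adequacy gap the abstract argues plain test accuracy cannot reveal.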
2205.00366
|
Venkat Margapuri
|
Venkat Margapuri, Trevor Rife, Chaney Courtney, Brandon Schlautman,
Kai Zhao, Mitchell Neilsen
|
Fractional Vegetation Cover Estimation using Hough Lines and Linear
Iterative Clustering
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A common requirement of plant breeding programs across the country is
companion planting -- growing different species of plants in close proximity so
they can mutually benefit each other. However, the determination of companion
plants requires meticulous monitoring of plant growth. The technique of ocular
monitoring is often laborious and error prone. The availability of image
processing techniques can be used to address the challenge of plant growth
monitoring and provide robust solutions that assist plant scientists to
identify companion plants. This paper presents a new image processing algorithm
to determine the amount of vegetation cover present in a given area, called
fractional vegetation cover. The proposed technique draws inspiration from the
trusted Daubenmire method for vegetation cover estimation and expands upon it.
Briefly, the idea is to estimate vegetation cover from images containing
multiple rows of plant species growing in close proximity separated by a
multi-segment PVC frame of known size. The proposed algorithm applies a Hough
Transform and Simple Linear Iterative Clustering (SLIC) to estimate the amount
of vegetation cover within each segment of the PVC frame. The analysis when
repeated over images captured at regular intervals of time provides crucial
insights into plant growth. As a means of comparison, the proposed algorithm is
compared with SamplePoint and Canopeo, two trusted applications used for
vegetation cover estimation. The comparison shows a 99% similarity with both
SamplePoint and Canopeo demonstrating the accuracy and feasibility of the
algorithm for fractional vegetation cover estimation.
|
[
{
"created": "Sat, 30 Apr 2022 23:33:31 GMT",
"version": "v1"
}
] |
2022-05-03
|
[
[
"Margapuri",
"Venkat",
""
],
[
"Rife",
"Trevor",
""
],
[
"Courtney",
"Chaney",
""
],
[
"Schlautman",
"Brandon",
""
],
[
"Zhao",
"Kai",
""
],
[
"Neilsen",
"Mitchell",
""
]
] |
A common requirement of plant breeding programs across the country is companion planting -- growing different species of plants in close proximity so they can mutually benefit each other. However, the determination of companion plants requires meticulous monitoring of plant growth. The technique of ocular monitoring is often laborious and error prone. The availability of image processing techniques can be used to address the challenge of plant growth monitoring and provide robust solutions that assist plant scientists to identify companion plants. This paper presents a new image processing algorithm to determine the amount of vegetation cover present in a given area, called fractional vegetation cover. The proposed technique draws inspiration from the trusted Daubenmire method for vegetation cover estimation and expands upon it. Briefly, the idea is to estimate vegetation cover from images containing multiple rows of plant species growing in close proximity separated by a multi-segment PVC frame of known size. The proposed algorithm applies a Hough Transform and Simple Linear Iterative Clustering (SLIC) to estimate the amount of vegetation cover within each segment of the PVC frame. The analysis when repeated over images captured at regular intervals of time provides crucial insights into plant growth. As a means of comparison, the proposed algorithm is compared with SamplePoint and Canopeo, two trusted applications used for vegetation cover estimation. The comparison shows a 99% similarity with both SamplePoint and Canopeo demonstrating the accuracy and feasibility of the algorithm for fractional vegetation cover estimation.
|
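The core measurement — what fraction of an image region is covered by vegetation — can be illustrated with a simple green-dominance threshold on a synthetic image. This is only the pixel-classification step; the paper's actual pipeline additionally uses a Hough transform and SLIC superpixels to locate the PVC frame segments:

```python
import numpy as np

def vegetation_fraction(rgb):
    """Crude fractional-vegetation-cover proxy: share of pixels whose green
    channel dominates both red and blue. rgb: (h, w, 3) uint8 image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((g > r) & (g > b)).mean()

img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :4] = (40, 180, 50)    # left 40% of columns: plant-like green pixels
img[:, 4:] = (120, 100, 90)   # right 60%: soil-like pixels
assert abs(vegetation_fraction(img) - 0.4) < 1e-9
```

Repeating this per frame segment over time-lapse images yields the growth curves the abstract describes.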
2210.11806
|
Li Chong
|
Li Chong, Denghao Ma, Yueguo Chen
|
Multi-view Semantic Matching of Question retrieval using Fine-grained
Semantic Representations
|
10 pages
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a key task of question answering, question retrieval has attracted much
attention from the communities of academia and industry. Previous solutions
mainly focus on the translation model, topic model, and deep learning
techniques. Distinct from the previous solutions, we propose to construct
fine-grained semantic representations of a question by a learned importance
score assigned to each keyword, so that we can achieve a fine-grained question
matching solution with these semantic representations of different lengths.
Accordingly, we propose a multi-view semantic matching model by reusing the
important keywords in multiple semantic representations.
As the key to constructing fine-grained semantic representations, we are the
first to use a cross-task weakly supervised extraction model that applies
question-question labelled signals to supervise the keyword extraction process
(i.e. to learn the keyword importance). The extraction model integrates the
deep semantic representation and lexical matching information with statistical
features to estimate the importance of keywords. We conduct extensive
experiments on three public datasets and the experimental results show that our
proposed model significantly outperforms the state-of-the-art solutions.
|
[
{
"created": "Fri, 21 Oct 2022 08:32:38 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Feb 2023 01:46:57 GMT",
"version": "v2"
}
] |
2023-02-17
|
[
[
"Chong",
"Li",
""
],
[
"Ma",
"Denghao",
""
],
[
"Chen",
"Yueguo",
""
]
] |
As a key task of question answering, question retrieval has attracted much attention from the communities of academia and industry. Previous solutions mainly focus on the translation model, topic model, and deep learning techniques. Distinct from the previous solutions, we propose to construct fine-grained semantic representations of a question by a learned importance score assigned to each keyword, so that we can achieve a fine-grained question matching solution with these semantic representations of different lengths. Accordingly, we propose a multi-view semantic matching model by reusing the important keywords in multiple semantic representations. As the key to constructing fine-grained semantic representations, we are the first to use a cross-task weakly supervised extraction model that applies question-question labelled signals to supervise the keyword extraction process (i.e. to learn the keyword importance). The extraction model integrates the deep semantic representation and lexical matching information with statistical features to estimate the importance of keywords. We conduct extensive experiments on three public datasets and the experimental results show that our proposed model significantly outperforms the state-of-the-art solutions.
|
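The role of a per-keyword importance score in question matching can be sketched with a weighted Jaccard overlap; the importance dict below is hand-set for illustration, whereas the paper learns it from question-question supervision:

```python
def weighted_match(q1, q2, importance, default=0.1):
    """Fine-grained question matching: keyword overlap weighted by an
    importance score per keyword, normalized by the total weight."""
    k1, k2 = set(q1.lower().split()), set(q2.lower().split())
    weight = lambda keys: sum(importance.get(t, default) for t in keys)
    union = weight(k1 | k2)
    return weight(k1 & k2) / union if union else 0.0

importance = {"python": 1.0, "install": 0.8, "how": 0.1, "to": 0.1}
s_close = weighted_match("how to install python", "install python how", importance)
s_far = weighted_match("how to install python", "how to bake bread", importance)
assert s_close > s_far
```

Stop-word-like terms ("how", "to") get low weight, so questions sharing only those score poorly even though their raw token overlap is nonzero; this is the benefit of fine-grained, importance-aware representations.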
1808.03322
|
Nicholas DeMarinis
|
Nicholas DeMarinis, Stefanie Tellex, Vasileios Kemerlis, George
Konidaris, Rodrigo Fonseca
|
Scanning the Internet for ROS: A View of Security in Robotics Research
|
10 pages
| null | null | null |
cs.CR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because robots can directly perceive and affect the physical world, security
issues take on particular importance. In this paper, we describe the results of
our work on scanning the entire IPv4 address space of the Internet for
instances of the Robot Operating System (ROS), a widely used robotics platform
for research. Our results identified that a number of hosts supporting ROS are
exposed to the public Internet, thereby allowing anyone to access robotic
sensors and actuators. As a proof of concept, and with consent, we were able to
read image sensor information and move the robot of a research group in a US
university. This paper gives an overview of our findings, including the
geographic distribution of publicly-accessible platforms, the sorts of sensor
and actuator data that are available, as well as the different kinds of robots
and sensors that our scan uncovered. Additionally, we offer recommendations on
best practices to mitigate these security issues in the future.
|
[
{
"created": "Mon, 23 Jul 2018 13:05:03 GMT",
"version": "v1"
}
] |
2018-08-13
|
[
[
"DeMarinis",
"Nicholas",
""
],
[
"Tellex",
"Stefanie",
""
],
[
"Kemerlis",
"Vasileios",
""
],
[
"Konidaris",
"George",
""
],
[
"Fonseca",
"Rodrigo",
""
]
] |
Because robots can directly perceive and affect the physical world, security issues take on particular importance. In this paper, we describe the results of our work on scanning the entire IPv4 address space of the Internet for instances of the Robot Operating System (ROS), a widely used robotics platform for research. Our results identified that a number of hosts supporting ROS are exposed to the public Internet, thereby allowing anyone to access robotic sensors and actuators. As a proof of concept, and with consent, we were able to read image sensor information and move the robot of a research group in a US university. This paper gives an overview of our findings, including the geographic distribution of publicly-accessible platforms, the sorts of sensor and actuator data that are available, as well as the different kinds of robots and sensors that our scan uncovered. Additionally, we offer recommendations on best practices to mitigate these security issues in the future.
|
2109.06481
|
Jongyoon Song
|
Jongyoon Song, Sungwon Kim, and Sungroh Yoon
|
AligNART: Non-autoregressive Neural Machine Translation by Jointly
Learning to Estimate Alignment and Translate
|
Accepted by EMNLP 2021
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-autoregressive neural machine translation (NART) models suffer from the
multi-modality problem which causes translation inconsistency such as token
repetition. Most recent approaches have attempted to solve this problem by
implicitly modeling dependencies between outputs. In this paper, we introduce
AligNART, which leverages full alignment information to explicitly reduce the
modality of the target distribution. AligNART divides the machine translation
task into $(i)$ alignment estimation and $(ii)$ translation with aligned
decoder inputs, guiding the decoder to focus on simplified one-to-one
translation. To alleviate the alignment estimation problem, we further propose
a novel alignment decomposition method. Our experiments show that AligNART
outperforms previous non-iterative NART models that focus on explicit modality
reduction on WMT14 En$\leftrightarrow$De and WMT16 Ro$\rightarrow$En.
Furthermore, AligNART achieves BLEU scores comparable to those of the
state-of-the-art connectionist temporal classification based models on WMT14
En$\leftrightarrow$De. We also observe that AligNART effectively addresses the
token repetition problem even without sequence-level knowledge distillation.
|
[
{
"created": "Tue, 14 Sep 2021 07:26:33 GMT",
"version": "v1"
}
] |
2021-09-15
|
[
[
"Song",
"Jongyoon",
""
],
[
"Kim",
"Sungwon",
""
],
[
"Yoon",
"Sungroh",
""
]
] |
Non-autoregressive neural machine translation (NART) models suffer from the multi-modality problem which causes translation inconsistency such as token repetition. Most recent approaches have attempted to solve this problem by implicitly modeling dependencies between outputs. In this paper, we introduce AligNART, which leverages full alignment information to explicitly reduce the modality of the target distribution. AligNART divides the machine translation task into $(i)$ alignment estimation and $(ii)$ translation with aligned decoder inputs, guiding the decoder to focus on simplified one-to-one translation. To alleviate the alignment estimation problem, we further propose a novel alignment decomposition method. Our experiments show that AligNART outperforms previous non-iterative NART models that focus on explicit modality reduction on WMT14 En$\leftrightarrow$De and WMT16 Ro$\rightarrow$En. Furthermore, AligNART achieves BLEU scores comparable to those of the state-of-the-art connectionist temporal classification based models on WMT14 En$\leftrightarrow$De. We also observe that AligNART effectively addresses the token repetition problem even without sequence-level knowledge distillation.
|
2310.05387
|
Da Long
|
Da Long, Wei W. Xing, Aditi S. Krishnapriyan, Robert M. Kirby,
Shandian Zhe, Michael W. Mahoney
|
Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient
Kernels
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Discovering governing equations from data is important to many scientific and
engineering applications. Despite promising successes, existing methods are
still challenged by data sparsity and noise issues, both of which are
ubiquitous in practice. Moreover, state-of-the-art methods lack uncertainty
quantification and/or are costly in training. To overcome these limitations, we
propose a novel equation discovery method based on Kernel learning and BAyesian
Spike-and-Slab priors (KBASS). We use kernel regression to estimate the target
function, which is flexible, expressive, and more robust to data sparsity and
noise. We combine it with a Bayesian spike-and-slab prior -- an ideal Bayesian
sparse distribution -- for effective operator selection and uncertainty
quantification. We develop an expectation-propagation expectation-maximization
(EP-EM) algorithm for efficient posterior inference and function estimation. To
overcome the computational challenge of kernel regression, we place the
function values on a mesh and induce a Kronecker product construction, and we
use tensor algebra to enable efficient computation and optimization. We show
the advantages of KBASS on a list of benchmark ODE and PDE discovery tasks.
|
[
{
"created": "Mon, 9 Oct 2023 03:55:09 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2024 02:07:18 GMT",
"version": "v2"
}
] |
2024-04-23
|
[
[
"Long",
"Da",
""
],
[
"Xing",
"Wei W.",
""
],
[
"Krishnapriyan",
"Aditi S.",
""
],
[
"Kirby",
"Robert M.",
""
],
[
"Zhe",
"Shandian",
""
],
[
"Mahoney",
"Michael W.",
""
]
] |
Discovering governing equations from data is important to many scientific and engineering applications. Despite promising successes, existing methods are still challenged by data sparsity and noise issues, both of which are ubiquitous in practice. Moreover, state-of-the-art methods lack uncertainty quantification and/or are costly in training. To overcome these limitations, we propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS). We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise. We combine it with a Bayesian spike-and-slab prior -- an ideal Bayesian sparse distribution -- for effective operator selection and uncertainty quantification. We develop an expectation-propagation expectation-maximization (EP-EM) algorithm for efficient posterior inference and function estimation. To overcome the computational challenge of kernel regression, we place the function values on a mesh and induce a Kronecker product construction, and we use tensor algebra to enable efficient computation and optimization. We show the advantages of KBASS on a list of benchmark ODE and PDE discovery tasks.
|
2302.13290
|
Stefan Schoder
|
Stefan Schoder
|
Implementation of an aeroacoustic simulation pipeline using
openCFS-Acoustics and openCFS-Data applied to human phonation
|
9 pages, 4 figures
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The human phonation process can be modeled using the Finite Element Method
(FEM), which provides a detailed representation of the voice production
process. A software implementation in C++ using FEM (openCFS) has been used to
simulate the phonation process. The FEM model consists of a 3D mesh of the
upper human airways. The simVoice model provides an accurate representation of
the phonation process and has been validated in several publications. In this
article, we show how to set up the model using openCFS and openCFS-Data.
|
[
{
"created": "Sun, 26 Feb 2023 10:46:15 GMT",
"version": "v1"
}
] |
2023-02-28
|
[
[
"Schoder",
"Stefan",
""
]
] |
The human phonation process can be modeled using the Finite Element Method (FEM), which provides a detailed representation of the voice production process. A software implementation in C++ using FEM (openCFS) has been used to simulate the phonation process. The FEM model consists of a 3D mesh of the upper human airways. The simVoice model provides an accurate representation of the phonation process and has been validated in several publications. In this article, we show how to set up the model using openCFS and openCFS-Data.
|
2305.07717
|
Souad Taouti
|
Souad Taouti, Hadda Cherroun and Djelloul Ziadi
|
Parallel Tree Kernel Computation
|
9 pages, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Tree kernels are fundamental tools that have been leveraged in many
applications, particularly those based on machine learning for Natural Language
Processing tasks. In this paper, we devise a parallel implementation of the
sequential algorithm for the computation of some tree kernels of two finite
sets of trees (Ouali-Sebti, 2015). Our comparison focuses on a sequential
implementation of SubTree kernel computation, which is mainly reduced to an
intersection of weighted tree automata. Our approach exploits the data
parallelism inherent in this computation by deploying the MapReduce paradigm.
One of the key benefits of our approach is its versatility in being adaptable
to a wide range of substructure tree kernel-based learning methods. To
evaluate the efficacy of our parallel approach, we conducted a series of
experiments that compared it against the sequential version using a diverse
set of synthetic tree language datasets that were manually crafted for our
analysis. The results clearly demonstrate that the proposed parallel
algorithm outperforms the sequential one in terms of latency.
|
[
{
"created": "Fri, 12 May 2023 18:16:45 GMT",
"version": "v1"
}
] |
2023-05-16
|
[
[
"Taouti",
"Souad",
""
],
[
"Cherroun",
"Hadda",
""
],
[
"Ziadi",
"Djelloul",
""
]
] |
Tree kernels are fundamental tools that have been leveraged in many applications, particularly those based on machine learning for Natural Language Processing tasks. In this paper, we devise a parallel implementation of the sequential algorithm for the computation of some tree kernels of two finite sets of trees (Ouali-Sebti, 2015). Our comparison focuses on a sequential implementation of SubTree kernel computation, which is mainly reduced to an intersection of weighted tree automata. Our approach exploits the data parallelism inherent in this computation by deploying the MapReduce paradigm. One of the key benefits of our approach is its versatility in being adaptable to a wide range of substructure tree kernel-based learning methods. To evaluate the efficacy of our parallel approach, we conducted a series of experiments that compared it against the sequential version using a diverse set of synthetic tree language datasets that were manually crafted for our analysis. The results clearly demonstrate that the proposed parallel algorithm outperforms the sequential one in terms of latency.
|
2210.15504
|
Navid Kayhani
|
Navid Kayhani, Angela Schoellig, and Brenda McCabe
|
Perception-aware Tag Placement Planning for Robust Localization of UAVs
in Indoor Construction Environments
|
[Final draft] This material may be downloaded for personal use only.
Any other use requires prior permission of the American Society of Civil
Engineers and the Journal of Computing in Civil Engineering
| null |
10.1061/JCCEE5/CPENG-5068
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Tag-based visual-inertial localization is a lightweight method for enabling
autonomous data collection missions of low-cost unmanned aerial vehicles (UAVs)
in indoor construction environments. However, finding the optimal tag
configuration (i.e., number, size, and location) on dynamic construction sites
remains challenging. This paper proposes a perception-aware genetic
algorithm-based tag placement planner (PGA-TaPP) to determine the optimal tag
configuration using 4D-BIM, considering the project progress, safety
requirements, and UAV's localizability. The proposed method provides a 4D plan
for tag placement by maximizing the localizability in user-specified regions of
interest (ROIs) while limiting the installation costs. Localizability is
quantified using the Fisher information matrix (FIM) and encapsulated in
navigable grids. The experimental results show the effectiveness of our method
in finding an optimal 4D tag placement plan for the robust localization of UAVs
on under-construction indoor sites.
|
[
{
"created": "Thu, 27 Oct 2022 14:37:57 GMT",
"version": "v1"
}
] |
2022-10-28
|
[
[
"Kayhani",
"Navid",
""
],
[
"Schoellig",
"Angela",
""
],
[
"McCabe",
"Brenda",
""
]
] |
Tag-based visual-inertial localization is a lightweight method for enabling autonomous data collection missions of low-cost unmanned aerial vehicles (UAVs) in indoor construction environments. However, finding the optimal tag configuration (i.e., number, size, and location) on dynamic construction sites remains challenging. This paper proposes a perception-aware genetic algorithm-based tag placement planner (PGA-TaPP) to determine the optimal tag configuration using 4D-BIM, considering the project progress, safety requirements, and UAV's localizability. The proposed method provides a 4D plan for tag placement by maximizing the localizability in user-specified regions of interest (ROIs) while limiting the installation costs. Localizability is quantified using the Fisher information matrix (FIM) and encapsulated in navigable grids. The experimental results show the effectiveness of our method in finding an optimal 4D tag placement plan for the robust localization of UAVs on under-construction indoor sites.
|
2207.07901
|
Mahyuddin K. M. Nasution
|
Mahyuddin K. M. Nasution, Rahmat Hidayat, and Rahmad Syah
|
Computer Science
|
18 pages, 2 figures
|
International Journal on Advanced Science Engineering Information
Technology, 12(3), 2022
| null |
123
|
cs.CY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
It is possible for science itself, conceptually, to be understood
differently, let alone a science that is also seen as technology, such as
computer science. After all, views of science and technology vary across
individuals, communities, and societies, and generally depend on
socioeconomic capabilities. So it is that computer science has become a
phenomenon and a fashion: based on the stream of documents, various issues
arise in its theory and implementation, in its adaptation to different
communities, and in the design of curricula within education systems.
|
[
{
"created": "Sat, 16 Jul 2022 10:54:57 GMT",
"version": "v1"
}
] |
2022-07-19
|
[
[
"Nasution",
"Mahyuddin K. M.",
""
],
[
"Hidayat",
"Rahmat",
""
],
[
"Syah",
"Rahmad",
""
]
] |
It is possible for science itself, conceptually, to be understood differently, let alone a science that is also seen as technology, such as computer science. After all, views of science and technology vary across individuals, communities, and societies, and generally depend on socioeconomic capabilities. So it is that computer science has become a phenomenon and a fashion: based on the stream of documents, various issues arise in its theory and implementation, in its adaptation to different communities, and in the design of curricula within education systems.
|
1809.01721
|
Ismail Shahin
|
Ismail Shahin and Ali Bou Nassif
|
Three-Stage Speaker Verification Architecture in Emotional Talking
Environments
|
18 pages. arXiv admin note: substantial text overlap with
arXiv:1804.00155, arXiv:1707.00137
|
International Journal of Speech Technology, 2018
|
10.1007/s10772-018-9543-4
| null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speaker verification performance in a neutral talking environment is usually
high, while it decreases sharply in emotional talking environments. This
performance degradation is due to the mismatch between training in a neutral
environment and testing in emotional environments. In this work, a
three-stage speaker verification architecture is proposed to enhance speaker
verification performance in emotional environments. This architecture
comprises three cascaded stages: a gender identification stage, followed by
an emotion identification stage, followed by a speaker verification stage.
The proposed framework has been evaluated on two distinct and independent
emotional speech datasets: an in-house dataset and the Emotional Prosody
Speech and Transcripts dataset. Our results show that speaker verification
based on both gender and emotion information is superior to speaker
verification based on gender information only, emotion information only, or
neither. The average speaker verification performance attained with the
proposed framework is very similar to that attained in subjective assessment
by human listeners.
|
[
{
"created": "Mon, 3 Sep 2018 09:25:35 GMT",
"version": "v1"
}
] |
2018-09-07
|
[
[
"Shahin",
"Ismail",
""
],
[
"Nassif",
"Ali Bou",
""
]
] |
Speaker verification performance in a neutral talking environment is usually high, while it decreases sharply in emotional talking environments. This performance degradation is due to the mismatch between training in a neutral environment and testing in emotional environments. In this work, a three-stage speaker verification architecture is proposed to enhance speaker verification performance in emotional environments. This architecture comprises three cascaded stages: a gender identification stage, followed by an emotion identification stage, followed by a speaker verification stage. The proposed framework has been evaluated on two distinct and independent emotional speech datasets: an in-house dataset and the Emotional Prosody Speech and Transcripts dataset. Our results show that speaker verification based on both gender and emotion information is superior to speaker verification based on gender information only, emotion information only, or neither. The average speaker verification performance attained with the proposed framework is very similar to that attained in subjective assessment by human listeners.
|
1606.03194
|
Virendra Sule
|
Mayuresh Bakshi, Virendra Sule, Maryam Shoejai Baghini
|
Stabilization Theory for Active Multi Port Networks
|
9 pages, 6 figures
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a theory for designing stable interconnections of linear
active multi-port networks at the ports. Such interconnections can lead to
unstable networks even if the original networks are stable with respect to
bounded port excitations. Hence such a theory is necessary for realising
interconnections of active multiport networks. The stabilization theory of
linear feedback systems using stable coprime factorizations of transfer
functions is well known. This theory witnessed remarkable developments in the
recent past, culminating in the $H_{\infty}$ approach to the design of
feedback systems. However, these important developments have seldom been
utilized for network interconnections due to the difficulty of realizing a
feedback signal flow graph for multi-port networks with port sources as
inputs and port responses as outputs. This paper resolves this problem by
developing the stabilization theory directly in terms of the port connection
description, without formulating the signal flow graph of the implicit
feedback connection. The stable port interconnection results in an
affine-parametrized network function in which the free parameter is itself a
stable network function and describes all stabilizing port compensations of
a given network.
|
[
{
"created": "Fri, 10 Jun 2016 05:58:23 GMT",
"version": "v1"
}
] |
2016-06-13
|
[
[
"Bakshi",
"Mayuresh",
""
],
[
"Sule",
"Virendra",
""
],
[
"Baghini",
"Maryam Shoejai",
""
]
] |
This paper proposes a theory for designing stable interconnections of linear active multi-port networks at the ports. Such interconnections can lead to unstable networks even if the original networks are stable with respect to bounded port excitations. Hence such a theory is necessary for realising interconnections of active multiport networks. The stabilization theory of linear feedback systems using stable coprime factorizations of transfer functions is well known. This theory witnessed remarkable developments in the recent past, culminating in the $H_{\infty}$ approach to the design of feedback systems. However, these important developments have seldom been utilized for network interconnections due to the difficulty of realizing a feedback signal flow graph for multi-port networks with port sources as inputs and port responses as outputs. This paper resolves this problem by developing the stabilization theory directly in terms of the port connection description, without formulating the signal flow graph of the implicit feedback connection. The stable port interconnection results in an affine-parametrized network function in which the free parameter is itself a stable network function and describes all stabilizing port compensations of a given network.
|
2011.07743
|
Yu Gu
|
Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng
Yan, Yu Su
|
Beyond I.I.D.: Three Levels of Generalization for Question Answering on
Knowledge Bases
|
Accepted to TheWebConf 2021 (previously WWW)
| null |
10.1145/3442381.3449992
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Existing studies on question answering over knowledge bases (KBQA) mainly
operate under the standard i.i.d. assumption, i.e., the training distribution
over questions is the same as the test distribution. However, i.i.d. may be
neither reasonably achievable nor desirable on large-scale KBs because 1) the
true user distribution is hard to capture and 2) randomly sampling training
examples from the enormous space would be highly data-inefficient. Instead,
we suggest that KBQA models should have three levels of built-in
generalization: i.i.d., compositional, and zero-shot. To facilitate the
development of KBQA models with
stronger generalization, we construct and release a new large-scale,
high-quality dataset with 64,331 questions, GrailQA, and provide evaluation
settings for all three levels of generalization. In addition, we propose a
novel BERT-based KBQA model. The combination of our dataset and model enables
us to thoroughly examine and demonstrate, for the first time, the key role of
pre-trained contextual embeddings like BERT in the generalization of KBQA.
|
[
{
"created": "Mon, 16 Nov 2020 06:36:26 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 03:36:38 GMT",
"version": "v2"
},
{
"created": "Sun, 13 Dec 2020 03:13:37 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Feb 2021 18:48:38 GMT",
"version": "v4"
},
{
"created": "Fri, 19 Feb 2021 04:11:23 GMT",
"version": "v5"
},
{
"created": "Mon, 22 Feb 2021 19:04:45 GMT",
"version": "v6"
}
] |
2021-02-24
|
[
[
"Gu",
"Yu",
""
],
[
"Kase",
"Sue",
""
],
[
"Vanni",
"Michelle",
""
],
[
"Sadler",
"Brian",
""
],
[
"Liang",
"Percy",
""
],
[
"Yan",
"Xifeng",
""
],
[
"Su",
"Yu",
""
]
] |
Existing studies on question answering over knowledge bases (KBQA) mainly operate under the standard i.i.d. assumption, i.e., the training distribution over questions is the same as the test distribution. However, i.i.d. may be neither reasonably achievable nor desirable on large-scale KBs because 1) the true user distribution is hard to capture and 2) randomly sampling training examples from the enormous space would be highly data-inefficient. Instead, we suggest that KBQA models should have three levels of built-in generalization: i.i.d., compositional, and zero-shot. To facilitate the development of KBQA models with stronger generalization, we construct and release a new large-scale, high-quality dataset with 64,331 questions, GrailQA, and provide evaluation settings for all three levels of generalization. In addition, we propose a novel BERT-based KBQA model. The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
|
2006.11419
|
Chuangchuang Sun
|
Chuangchuang Sun, Dong-Ki Kim, Jonathan P. How
|
FISAR: Forward Invariant Safe Reinforcement Learning with a Deep Neural
Network-Based Optimizer
|
Accepted to ICML 2020 Workshop Theoretical Foundations of RL;
Accepted to ICRA 2021
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates reinforcement learning with constraints, which are
indispensable in safety-critical environments. To drive the constraint
violation to decrease monotonically, we take the constraints as Lyapunov functions
and impose new linear constraints on the policy parameters' updating dynamics.
As a result, the original safety set can be forward-invariant. However, because
the new guaranteed-feasible constraints are imposed on the updating dynamics
instead of the original policy parameters, classic optimization algorithms are
no longer applicable. To address this, we propose to learn a generic deep
neural network (DNN)-based optimizer to optimize the objective while satisfying
the linear constraints. The constraint-satisfaction is achieved via projection
onto a polytope formulated by multiple linear inequality constraints, which can
be solved analytically with our newly designed metric. To the best of our
knowledge, this is the \textit{first} DNN-based optimizer for constrained
optimization with the forward invariance guarantee. We show that our optimizer
trains a policy to decrease the constraint violation and maximize the
cumulative reward monotonically. Results on numerical constrained optimization
and obstacle-avoidance navigation validate the theoretical findings.
|
[
{
"created": "Fri, 19 Jun 2020 21:58:42 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jul 2020 03:11:59 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Nov 2020 16:16:15 GMT",
"version": "v3"
},
{
"created": "Wed, 5 May 2021 23:42:55 GMT",
"version": "v4"
}
] |
2021-05-07
|
[
[
"Sun",
"Chuangchuang",
""
],
[
"Kim",
"Dong-Ki",
""
],
[
"How",
"Jonathan P.",
""
]
] |
This paper investigates reinforcement learning with constraints, which are indispensable in safety-critical environments. To drive the constraint violation to decrease monotonically, we take the constraints as Lyapunov functions and impose new linear constraints on the policy parameters' updating dynamics. As a result, the original safety set can be forward-invariant. However, because the new guaranteed-feasible constraints are imposed on the updating dynamics instead of the original policy parameters, classic optimization algorithms are no longer applicable. To address this, we propose to learn a generic deep neural network (DNN)-based optimizer to optimize the objective while satisfying the linear constraints. The constraint-satisfaction is achieved via projection onto a polytope formulated by multiple linear inequality constraints, which can be solved analytically with our newly designed metric. To the best of our knowledge, this is the \textit{first} DNN-based optimizer for constrained optimization with the forward invariance guarantee. We show that our optimizer trains a policy to decrease the constraint violation and maximize the cumulative reward monotonically. Results on numerical constrained optimization and obstacle-avoidance navigation validate the theoretical findings.
|
2110.14363
|
Mucong Ding
|
Mucong Ding, Kezhi Kong, Jingling Li, Chen Zhu, John P Dickerson,
Furong Huang, Tom Goldstein
|
VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using
Vector Quantization
|
NeurIPS 2021
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form
of graph convolution which can be realized by message passing between direct
neighbors or beyond. To scale such GNNs to large graphs, various neighbor-,
layer-, or subgraph-sampling techniques are proposed to alleviate the "neighbor
explosion" problem by considering only a small subset of messages passed to the
nodes in a mini-batch. However, sampling-based methods are difficult to apply
to GNNs that utilize many-hops-away or global context each layer, show unstable
performance for different tasks and datasets, and do not speed up model
inference. We propose a principled and fundamentally different approach,
VQ-GNN, a universal framework to scale up any convolution-based GNNs using
Vector Quantization (VQ) without compromising the performance. In contrast to
sampling-based techniques, our approach can effectively preserve all the
messages passed to a mini-batch of nodes by learning and updating a small
number of quantized reference vectors of global node representations, using VQ
within each GNN layer. Our framework avoids the "neighbor explosion" problem of
GNNs using quantized representations combined with a low-rank version of the
graph convolution matrix. We show that such a compact low-rank version of the
gigantic convolution matrix is sufficient both theoretically and
experimentally. In company with VQ, we design a novel approximated message
passing algorithm and a nontrivial back-propagation rule for our framework.
Experiments on various types of GNN backbones demonstrate the scalability and
competitive performance of our framework on large-graph node classification and
link prediction benchmarks.
|
[
{
"created": "Wed, 27 Oct 2021 11:48:50 GMT",
"version": "v1"
}
] |
2021-10-28
|
[
[
"Ding",
"Mucong",
""
],
[
"Kong",
"Kezhi",
""
],
[
"Li",
"Jingling",
""
],
[
"Zhu",
"Chen",
""
],
[
"Dickerson",
"John P",
""
],
[
"Huang",
"Furong",
""
],
[
"Goldstein",
"Tom",
""
]
] |
Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques are proposed to alleviate the "neighbor explosion" problem by considering only a small subset of messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context each layer, show unstable performance for different tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix. We show that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally. In company with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.
|
1905.07856
|
Shivashankar Subramanian
|
Shivashankar Subramanian and Trevor Cohn and Timothy Baldwin
|
Target Based Speech Act Classification in Political Campaign Text
|
Eighth Joint Conference on Lexical and Computational Semantics, *SEM
2019, Camera Ready
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study pragmatics in political campaign text, through analysis of speech
acts and the target of each utterance. We propose a new annotation schema
incorporating domain-specific speech acts, such as commissive-action, and
present a novel annotated corpus of media releases and speech transcripts from
the 2016 Australian election cycle. We show how speech acts and target
referents can be modeled as sequential classification, and evaluate several
techniques, exploiting contextualized word representations, semi-supervised
learning, task dependencies and speaker meta-data.
|
[
{
"created": "Mon, 20 May 2019 03:14:11 GMT",
"version": "v1"
}
] |
2019-05-21
|
[
[
"Subramanian",
"Shivashankar",
""
],
[
"Cohn",
"Trevor",
""
],
[
"Baldwin",
"Timothy",
""
]
] |
We study pragmatics in political campaign text, through analysis of speech acts and the target of each utterance. We propose a new annotation schema incorporating domain-specific speech acts, such as commissive-action, and present a novel annotated corpus of media releases and speech transcripts from the 2016 Australian election cycle. We show how speech acts and target referents can be modeled as sequential classification, and evaluate several techniques, exploiting contextualized word representations, semi-supervised learning, task dependencies and speaker meta-data.
|
2403.17014
|
Mohamad Dhaini
|
Mohamad Dhaini, Maxime Berar, Paul Honeine, Antonin Van Exem
|
Contrastive Learning for Regression on Hyperspectral Data
|
Accepted in IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP) 2024
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Contrastive learning has demonstrated great effectiveness in representation
learning, especially for image classification tasks. However, there is still
a shortage of studies targeting regression tasks, and more specifically
applications to hyperspectral data. In this paper, we propose a contrastive
learning framework for regression tasks on hyperspectral data. To this
end, we provide a collection of transformations relevant for augmenting
hyperspectral data, and investigate contrastive learning for regression.
Experiments on synthetic and real hyperspectral datasets show that the proposed
framework and transformations significantly improve the performance of
regression models, achieving better scores than other state-of-the-art
transformations.
|
[
{
"created": "Mon, 12 Feb 2024 21:33:46 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Dhaini",
"Mohamad",
""
],
[
"Berar",
"Maxime",
""
],
[
"Honeine",
"Paul",
""
],
[
"Van Exem",
"Antonin",
""
]
] |
Contrastive learning has demonstrated great effectiveness in representation learning, especially for image classification tasks. However, there is still a shortage of studies targeting regression tasks, and more specifically applications to hyperspectral data. In this paper, we propose a contrastive learning framework for regression tasks on hyperspectral data. To this end, we provide a collection of transformations relevant for augmenting hyperspectral data, and investigate contrastive learning for regression. Experiments on synthetic and real hyperspectral datasets show that the proposed framework and transformations significantly improve the performance of regression models, achieving better scores than other state-of-the-art transformations.
|
2401.14654
|
Alice Kwak
|
Alice Saebom Kwak, Cheonkam Jeong, Ji Weon Lim, and Byeongcheol Min
|
A Korean Legal Judgment Prediction Dataset for Insurance Disputes
|
5 pages, 1 figure
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces a Korean legal judgment prediction (LJP) dataset for
insurance disputes. Successful LJP models on insurance disputes can benefit
insurance companies and their customers. They can save both sides time and
money by allowing them to predict how the result would come out if they
proceeded to the dispute mediation process. As is often the case with
low-resource languages, there is a limitation on the amount of data available
for this specific task. To mitigate this issue, we investigate how one can
achieve good performance despite the limited data. In our experiments, we
demonstrate that Sentence Transformer Fine-tuning (SetFit; Tunstall et al.,
2022) is a good alternative to standard fine-tuning when training data are
limited. The models fine-tuned with the SetFit approach on our data show
performance similar to that of the Korean LJP benchmark models (Hwang et al.,
2022) despite the much smaller data size.
|
[
{
"created": "Fri, 26 Jan 2024 05:26:27 GMT",
"version": "v1"
}
] |
2024-01-29
|
[
[
"Kwak",
"Alice Saebom",
""
],
[
"Jeong",
"Cheonkam",
""
],
[
"Lim",
"Ji Weon",
""
],
[
"Min",
"Byeongcheol",
""
]
] |
This paper introduces a Korean legal judgment prediction (LJP) dataset for insurance disputes. Successful LJP models on insurance disputes can benefit insurance companies and their customers. They can save both sides time and money by allowing them to predict how the result would come out if they proceeded to the dispute mediation process. As is often the case with low-resource languages, there is a limitation on the amount of data available for this specific task. To mitigate this issue, we investigate how one can achieve good performance despite the limited data. In our experiments, we demonstrate that Sentence Transformer Fine-tuning (SetFit; Tunstall et al., 2022) is a good alternative to standard fine-tuning when training data are limited. The models fine-tuned with the SetFit approach on our data show performance similar to that of the Korean LJP benchmark models (Hwang et al., 2022) despite the much smaller data size.
|
2403.10579
|
Adrian Kliks
|
Adrian Kliks
|
Metamaterialy, konfigurowalne matryce antenowe i komunikacja
holograficzna. Wstepna analiza nowej koncepcji bezprzewodowej transmisji
danych
|
19 pages, in Polish language, 7 figures
|
PRZEGLAD TELEKOMUNIKACYJNY I WIADOMOSCI TELEKOMUNIKACYJNE, no.4,
vol. 2023, pp. 29-39
|
10.15199/59.2023.4.4
| null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
In the last few years, the highly original concept of holographic communication
has attracted considerable interest among scientists worldwide. On the one
hand, this approach is very different from known and currently used solutions;
on the other hand, it creates great development opportunities in the field of
wireless communication. The article provides an overview of the two
technological solutions that gave rise to the idea of holographic
communication: first, the possibility of using so-called metamaterials for
wireless data transmission, and second, the use of reconfigurable antenna
surfaces. The last part presents the assumptions behind the idea of holographic
communication, in which the principles of creating images known from optical
holography have been transferred to the radio band and, to some extent,
generalized.
|
[
{
"created": "Fri, 15 Mar 2024 10:58:17 GMT",
"version": "v1"
}
] |
2024-03-19
|
[
[
"Kliks",
"Adrian",
""
]
] |
In the last few years, the highly original concept of holographic communication has attracted considerable interest among scientists worldwide. On the one hand, this approach is very different from known and currently used solutions; on the other hand, it creates great development opportunities in the field of wireless communication. The article provides an overview of the two technological solutions that gave rise to the idea of holographic communication: first, the possibility of using so-called metamaterials for wireless data transmission, and second, the use of reconfigurable antenna surfaces. The last part presents the assumptions behind the idea of holographic communication, in which the principles of creating images known from optical holography have been transferred to the radio band and, to some extent, generalized.
|
1811.10986
|
Somayeh Asadifar
|
Somayeh Asadifar, Mohsen Kahani and Saeedeh Shekarpour
|
HCqa: Hybrid and Complex Question Answering on Textual Corpus and
Knowledge Graph
| null | null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Question Answering (QA) systems provide easy access to a vast amount of
knowledge without requiring users to know its underlying complex structure.
The research community has provided ad hoc solutions to the key QA tasks,
including named entity recognition and disambiguation, relation extraction and
query building. Furthermore, some have integrated and composed these
components to implement many tasks automatically and efficiently. However, in
general, the existing solutions are limited to simple and short questions and
still do not address complex questions composed of several sub-questions.
Answering complex questions is further challenged when it requires integrating
knowledge from unstructured data sources, i.e., textual corpora, as well as
structured data sources, i.e., knowledge graphs. In this paper, an approach
(HCqa) is introduced for dealing with complex questions that require
federating knowledge from a hybrid of heterogeneous data sources (structured
and unstructured). We contribute by developing (i) a decomposition mechanism
which extracts sub-questions from potentially long and complex input
questions, (ii) a novel comprehensive schema, the first of its kind, for
extracting and annotating relations, and (iii) an approach for executing and
aggregating the answers to sub-questions. The evaluation of HCqa showed
superior accuracy in the fundamental tasks, such as relation extraction, as
well as in the federation task.
|
[
{
"created": "Sat, 24 Nov 2018 07:03:53 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jan 2019 08:19:45 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Jan 2019 09:42:23 GMT",
"version": "v3"
},
{
"created": "Thu, 31 Jan 2019 06:39:48 GMT",
"version": "v4"
},
{
"created": "Sun, 9 Jun 2019 04:56:51 GMT",
"version": "v5"
}
] |
2019-06-11
|
[
[
"Asadifar",
"Somayeh",
""
],
[
"Kahani",
"Mohsen",
""
],
[
"Shekarpour",
"Saeedeh",
""
]
] |
Question Answering (QA) systems provide easy access to a vast amount of knowledge without requiring users to know its underlying complex structure. The research community has provided ad hoc solutions to the key QA tasks, including named entity recognition and disambiguation, relation extraction and query building. Furthermore, some have integrated and composed these components to implement many tasks automatically and efficiently. However, in general, the existing solutions are limited to simple and short questions and still do not address complex questions composed of several sub-questions. Answering complex questions is further challenged when it requires integrating knowledge from unstructured data sources, i.e., textual corpora, as well as structured data sources, i.e., knowledge graphs. In this paper, an approach (HCqa) is introduced for dealing with complex questions that require federating knowledge from a hybrid of heterogeneous data sources (structured and unstructured). We contribute by developing (i) a decomposition mechanism which extracts sub-questions from potentially long and complex input questions, (ii) a novel comprehensive schema, the first of its kind, for extracting and annotating relations, and (iii) an approach for executing and aggregating the answers to sub-questions. The evaluation of HCqa showed superior accuracy in the fundamental tasks, such as relation extraction, as well as in the federation task.
|
2303.02328
|
Sangrok Lee
|
Sangrok Lee, Jongseong Bae, Ha Young Kim
|
Decompose, Adjust, Compose: Effective Normalization by Playing with
Frequency for Domain Generalization
|
10 pages,6 figures, Conference on Computer Vision and Pattern
Recognition 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Domain generalization (DG) is a principal task for evaluating the robustness
of computer vision models. Many previous studies have used normalization for
DG. In normalization, statistics and normalized features are regarded as style
and content, respectively. However, removing style this way causes a content
variation problem, because the boundary between content and style is unclear.
This study addresses the problem from the frequency-domain perspective, where
amplitude and phase are considered as style and content, respectively. First,
we verify the quantitative phase variation of normalization through a
mathematical derivation based on the Fourier transform formula. Then, based on
this, we propose a novel normalization method, PCNorm, which eliminates style
only, while preserving content, through spectral decomposition. Furthermore,
we propose advanced PCNorm variants, CCNorm and SCNorm, which adjust the
degree of variation in content and style, respectively. Thus, they can learn
domain-agnostic representations for DG. With these normalization methods, we
propose ResNet-variant models, DAC-P and DAC-SC, which are robust to the
domain gap. The proposed models outperform other recent DG methods. DAC-SC
achieves an average state-of-the-art performance of 65.6% on five datasets:
PACS, VLCS, Office-Home, DomainNet, and TerraIncognita.
|
[
{
"created": "Sat, 4 Mar 2023 05:23:11 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Mar 2023 16:04:17 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Mar 2023 12:39:19 GMT",
"version": "v3"
}
] |
2023-03-16
|
[
[
"Lee",
"Sangrok",
""
],
[
"Bae",
"Jongseong",
""
],
[
"Kim",
"Ha Young",
""
]
] |
Domain generalization (DG) is a principal task for evaluating the robustness of computer vision models. Many previous studies have used normalization for DG. In normalization, statistics and normalized features are regarded as style and content, respectively. However, removing style this way causes a content variation problem, because the boundary between content and style is unclear. This study addresses the problem from the frequency-domain perspective, where amplitude and phase are considered as style and content, respectively. First, we verify the quantitative phase variation of normalization through a mathematical derivation based on the Fourier transform formula. Then, based on this, we propose a novel normalization method, PCNorm, which eliminates style only, while preserving content, through spectral decomposition. Furthermore, we propose advanced PCNorm variants, CCNorm and SCNorm, which adjust the degree of variation in content and style, respectively. Thus, they can learn domain-agnostic representations for DG. With these normalization methods, we propose ResNet-variant models, DAC-P and DAC-SC, which are robust to the domain gap. The proposed models outperform other recent DG methods. DAC-SC achieves an average state-of-the-art performance of 65.6% on five datasets: PACS, VLCS, Office-Home, DomainNet, and TerraIncognita.
|
2402.02592
|
Gerald Woo
|
Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio
Savarese, Doyen Sahoo
|
Unified Training of Universal Time Series Forecasting Transformers
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning for time series forecasting has traditionally operated within a
one-model-per-dataset framework, limiting its potential to leverage the
game-changing impact of large pre-trained models. The concept of universal
forecasting, emerging from pre-training on a vast collection of time series
datasets, envisions a single Large Time Series Model capable of addressing
diverse downstream forecasting tasks. However, constructing such a model poses
unique challenges specific to time series data: i) cross-frequency learning,
ii) accommodating an arbitrary number of variates for multivariate time series,
and iii) addressing the varying distributional properties inherent in
large-scale data. To address these challenges, we present novel enhancements to
the conventional time series Transformer architecture, resulting in our
proposed Masked Encoder-based Universal Time Series Forecasting Transformer
(Moirai). Trained on our newly introduced Large-scale Open Time Series Archive
(LOTSA) featuring over 27B observations across nine domains, Moirai achieves
competitive or superior performance as a zero-shot forecaster when compared to
full-shot models. Code, data, and model weights can be found at
https://github.com/SalesforceAIResearch/uni2ts.
|
[
{
"created": "Sun, 4 Feb 2024 20:00:45 GMT",
"version": "v1"
},
{
"created": "Wed, 22 May 2024 11:49:59 GMT",
"version": "v2"
}
] |
2024-05-24
|
[
[
"Woo",
"Gerald",
""
],
[
"Liu",
"Chenghao",
""
],
[
"Kumar",
"Akshat",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Sahoo",
"Doyen",
""
]
] |
Deep learning for time series forecasting has traditionally operated within a one-model-per-dataset framework, limiting its potential to leverage the game-changing impact of large pre-trained models. The concept of universal forecasting, emerging from pre-training on a vast collection of time series datasets, envisions a single Large Time Series Model capable of addressing diverse downstream forecasting tasks. However, constructing such a model poses unique challenges specific to time series data: i) cross-frequency learning, ii) accommodating an arbitrary number of variates for multivariate time series, and iii) addressing the varying distributional properties inherent in large-scale data. To address these challenges, we present novel enhancements to the conventional time series Transformer architecture, resulting in our proposed Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai). Trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains, Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models. Code, data, and model weights can be found at https://github.com/SalesforceAIResearch/uni2ts.
|
1904.08854
|
Fernando Garcia
|
Fernando Garcia, Amit Kumar Pandey and Charles Fattal
|
Wait for me! Towards socially assistive walk companions
|
2nd Workshop on Social Robots in Therapy and Care. 14th ACM/IEEE
International Conference on Human-Robot Interaction (HRI 2019)
| null | null |
SREC/2019/01
|
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The present study aims to design a humanoid robot guide as a walking trainer
for elderly and rehabilitation patients. The system is based on the humanoid
robot Pepper, with a compliance approach that matches the user's motion
intention to the robot's pace. This feasibility study is backed by an
experimental evaluation conducted in a rehabilitation centre. We hypothesize
that the Pepper robot, used as an assistive partner, can also benefit elderly
users by motivating them to perform physical activity.
|
[
{
"created": "Thu, 18 Apr 2019 15:56:14 GMT",
"version": "v1"
}
] |
2019-09-11
|
[
[
"Garcia",
"Fernando",
""
],
[
"Pandey",
"Amit Kumar",
""
],
[
"Fattal",
"Charles",
""
]
] |
The present study aims to design a humanoid robot guide as a walking trainer for elderly and rehabilitation patients. The system is based on the humanoid robot Pepper, with a compliance approach that matches the user's motion intention to the robot's pace. This feasibility study is backed by an experimental evaluation conducted in a rehabilitation centre. We hypothesize that the Pepper robot, used as an assistive partner, can also benefit elderly users by motivating them to perform physical activity.
|
2008.00086
|
Songyang Zhang
|
Songyang Zhang
|
LearningCC: An online learning approach for congestion control
|
5 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, much effort has been devoted by researchers from both academia and
industry to developing novel congestion control methods. LearningCC is
presented in this letter, in which the congestion control problem is solved by
a reinforcement learning approach. Instead of adjusting the congestion window
with a fixed policy, there are several options for an endpoint to choose from.
Predicting the best option is a hard task. Each option is mapped to an arm of
a bandit machine. The endpoint can learn to determine the optimal choice
through trial and error. Experiments are performed on the ns-3 platform to
verify the effectiveness of LearningCC by comparing it with other benchmark
algorithms. Results indicate it can achieve lower transmission delay than
loss-based algorithms. In particular, we found that LearningCC brings a
significant improvement on links suffering from random loss.
|
[
{
"created": "Mon, 3 Aug 2020 06:51:34 GMT",
"version": "v1"
}
] |
2020-08-04
|
[
[
"Zhang",
"Songyang",
""
]
] |
Recently, much effort has been devoted by researchers from both academia and industry to developing novel congestion control methods. LearningCC is presented in this letter, in which the congestion control problem is solved by a reinforcement learning approach. Instead of adjusting the congestion window with a fixed policy, there are several options for an endpoint to choose from. Predicting the best option is a hard task. Each option is mapped to an arm of a bandit machine. The endpoint can learn to determine the optimal choice through trial and error. Experiments are performed on the ns-3 platform to verify the effectiveness of LearningCC by comparing it with other benchmark algorithms. Results indicate it can achieve lower transmission delay than loss-based algorithms. In particular, we found that LearningCC brings a significant improvement on links suffering from random loss.
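The arm-per-option idea in this abstract can be sketched as an epsilon-greedy multi-armed bandit. Everything below (three arms, the epsilon value, the reward signal) is assumed for illustration; the letter does not specify these details:

```python
import random

class BanditCC:
    """Epsilon-greedy bandit over congestion-window adjustment options.

    Sketch only: each window-adjustment policy is one bandit arm; the
    reward (e.g. throughput minus a delay penalty) is supplied by the
    caller, since the letter's exact reward is not given here.
    """

    def __init__(self, n_arms, epsilon=0.1, seed=0):
        self.eps = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.counts))        # explore
        return max(range(len(self.counts)), key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean of the rewards observed for this arm
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = BanditCC(n_arms=3)
for _ in range(500):
    arm = bandit.choose()
    # hypothetical environment in which arm 2 is best on average
    reward = [0.2, 0.5, 0.8][arm] + bandit.rng.gauss(0.0, 0.05)
    bandit.update(arm, reward)
best = max(range(3), key=bandit.values.__getitem__)
```

After a few hundred trial-and-error steps the endpoint's value estimates identify the best-rewarded option, which is the learning behaviour the letter relies on.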
|
1409.1055
|
Uwe Aickelin
|
Diman Hassan, Uwe Aickelin and Christian Wagner
|
Comparison of Distance Metrics for Hierarchical Data in Medical
Databases
|
Proceedings of the 2014 World Congress on Computational Intelligence
(WCCI 2014), pp. 3636-3643, 2014
| null | null | null |
cs.DB cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distance metrics are broadly used in different research areas and
applications, such as bioinformatics, data mining and many other fields.
However, some metrics, such as pq-gram and edit distance, are used
specifically for data with a hierarchical structure, while other metrics, such
as the geometric and Hamming metrics, are used for non-hierarchical data. We
have applied these metrics to The Health Improvement Network (THIN) database,
which contains some hierarchical data. The THIN data have to be converted into
a tree-like structure for the first group of metrics; for the second group,
the data are converted into a frequency table or matrix. Then, for all
metrics, all distances are found and normalised. Based on this particular data
set, our research question is: which of these metrics is useful for THIN data?
This paper compares the metrics, particularly the pq-gram metric, on finding
similarities in patients' data. It also investigates the similar patients who
have the same close distances, as well as the metrics' suitability for
clustering the whole patient population. Our results show that the two groups
of metrics perform differently, as they represent different structures of the
data. Nevertheless, all the metrics could represent some similar patients'
data, as well as discriminate sufficiently well when clustering the patient
population using the $k$-means clustering algorithm.
|
[
{
"created": "Wed, 3 Sep 2014 12:19:19 GMT",
"version": "v1"
}
] |
2014-09-04
|
[
[
"Hassan",
"Diman",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Wagner",
"Christian",
""
]
] |
Distance metrics are broadly used in different research areas and applications, such as bioinformatics, data mining and many other fields. However, some metrics, such as pq-gram and edit distance, are used specifically for data with a hierarchical structure, while other metrics, such as the geometric and Hamming metrics, are used for non-hierarchical data. We have applied these metrics to The Health Improvement Network (THIN) database, which contains some hierarchical data. The THIN data have to be converted into a tree-like structure for the first group of metrics; for the second group, the data are converted into a frequency table or matrix. Then, for all metrics, all distances are found and normalised. Based on this particular data set, our research question is: which of these metrics is useful for THIN data? This paper compares the metrics, particularly the pq-gram metric, on finding similarities in patients' data. It also investigates the similar patients who have the same close distances, as well as the metrics' suitability for clustering the whole patient population. Our results show that the two groups of metrics perform differently, as they represent different structures of the data. Nevertheless, all the metrics could represent some similar patients' data, as well as discriminate sufficiently well when clustering the patient population using the $k$-means clustering algorithm.
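A minimal sketch of the second group of metrics, assuming patients are represented as frequency-table rows (the data below are hypothetical, not THIN records): Hamming and geometric (Euclidean) distances are computed and then normalised, as the abstract describes:

```python
import math

def hamming(a, b):
    """Hamming distance: number of positions at which two codes differ."""
    return sum(x != y for x, y in zip(a, b))

def euclidean(a, b):
    """Geometric (Euclidean) distance between two frequency-table rows."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def normalise(distances):
    """Scale all distances into [0, 1] so different metrics are comparable."""
    top = max(distances)
    return [d / top for d in distances] if top else list(distances)

# toy frequency-table rows for three hypothetical patients
p1, p2, p3 = [1, 0, 2, 0], [1, 1, 2, 0], [0, 3, 0, 1]
dists = [euclidean(p1, p2), euclidean(p1, p3), euclidean(p2, p3)]
norm = normalise(dists)
```

The normalised distance matrix built this way is what a clustering algorithm such as $k$-means would consume.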
|
1601.03533
|
Bernd Zwattendorfer
|
Bernd Zwattendorfer and Daniel Slamanig
|
The Austrian eID Ecosystem in the Public Cloud: How to Obtain Privacy
While Preserving Practicality
|
47 pages, 5 figures, Journal of Information Security and
Applications, 2015
| null |
10.1016/j.jisa.2015.11.004
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Austrian eID system constitutes a main pillar of the Austrian e-Government
strategy. The eID system ensures unique identification and secure
authentication for citizens, protecting access to applications where sensitive
and personal data are involved. In particular, the Austrian eID system
supports three main use cases: identification and authentication of Austrian
citizens, electronic representation, and foreign citizen authentication at
Austrian public sector applications. To support all these use cases, several
components -- either locally deployed in the applications' domain or centrally
deployed -- need to communicate with each other. While local deployments have
some advantages in terms of scalability, a central deployment of all involved
components would still be advantageous, e.g. due to lower maintenance effort.
However, a central deployment can easily lead to load bottlenecks, because
theoretically the whole Austrian population as well as -- for foreign
citizens -- the whole EU population could use the provided services. To
mitigate the scalability issue, in this paper we propose migrating the main
components of the ecosystem into a public cloud. However, moving trusted
services into a public cloud brings up new obstacles, particularly with
respect to privacy. To address the privacy issue, we propose an approach by
which the complete Austrian eID ecosystem can be moved into a public cloud in
a privacy-preserving manner by applying selected cryptographic technologies
(in particular, proxy re-encryption and redactable signatures). With this
approach, no sensitive data are disclosed to the public cloud provider, while
all three main eID system use cases are still supported. We finally discuss
our approach based on selected criteria.
|
[
{
"created": "Thu, 14 Jan 2016 10:08:58 GMT",
"version": "v1"
}
] |
2016-01-15
|
[
[
"Zwattendorfer",
"Bernd",
""
],
[
"Slamanig",
"Daniel",
""
]
] |
The Austrian eID system constitutes a main pillar of the Austrian e-Government strategy. The eID system ensures unique identification and secure authentication for citizens, protecting access to applications where sensitive and personal data are involved. In particular, the Austrian eID system supports three main use cases: identification and authentication of Austrian citizens, electronic representation, and foreign citizen authentication at Austrian public sector applications. To support all these use cases, several components -- either locally deployed in the applications' domain or centrally deployed -- need to communicate with each other. While local deployments have some advantages in terms of scalability, a central deployment of all involved components would still be advantageous, e.g. due to lower maintenance effort. However, a central deployment can easily lead to load bottlenecks, because theoretically the whole Austrian population as well as -- for foreign citizens -- the whole EU population could use the provided services. To mitigate the scalability issue, in this paper we propose migrating the main components of the ecosystem into a public cloud. However, moving trusted services into a public cloud brings up new obstacles, particularly with respect to privacy. To address the privacy issue, we propose an approach by which the complete Austrian eID ecosystem can be moved into a public cloud in a privacy-preserving manner by applying selected cryptographic technologies (in particular, proxy re-encryption and redactable signatures). With this approach, no sensitive data are disclosed to the public cloud provider, while all three main eID system use cases are still supported. We finally discuss our approach based on selected criteria.
|
2002.05822
|
Yangchen Pan
|
Yangchen Pan, Jincheng Mei, Amir-massoud Farahmand
|
Frequency-based Search-control in Dyna
|
Accepted to ICLR 2020
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Model-based reinforcement learning has been empirically demonstrated to be a
successful strategy for improving sample efficiency. In particular, Dyna is an
elegant model-based architecture integrating learning and planning that
provides great flexibility in using a model. One of the most important
components in Dyna is search-control, which refers to the process of
generating the states or state-action pairs from which we query the model to
acquire simulated experiences. Search-control is critical for improving
learning efficiency. In this work, we propose a simple and novel
search-control strategy based on searching the high-frequency regions of the
value function. Our main intuition is built on the Shannon sampling theorem
from signal processing, which indicates that a high-frequency signal requires
more samples to reconstruct. We empirically show that a high-frequency
function is more difficult to approximate. This suggests a search-control
strategy: we should use states from the high-frequency regions of the value
function to query the model for more samples. We develop a simple strategy to
locally measure the frequency of a function by gradient and Hessian norms, and
provide theoretical justification for this approach. We then apply our
strategy to search-control in Dyna, and conduct experiments to show its
properties and effectiveness on benchmark domains.
|
[
{
"created": "Fri, 14 Feb 2020 00:27:58 GMT",
"version": "v1"
}
] |
2020-02-17
|
[
[
"Pan",
"Yangchen",
""
],
[
"Mei",
"Jincheng",
""
],
[
"Farahmand",
"Amir-massoud",
""
]
] |
Model-based reinforcement learning has been empirically demonstrated to be a successful strategy for improving sample efficiency. In particular, Dyna is an elegant model-based architecture integrating learning and planning that provides great flexibility in using a model. One of the most important components in Dyna is search-control, which refers to the process of generating the states or state-action pairs from which we query the model to acquire simulated experiences. Search-control is critical for improving learning efficiency. In this work, we propose a simple and novel search-control strategy based on searching the high-frequency regions of the value function. Our main intuition is built on the Shannon sampling theorem from signal processing, which indicates that a high-frequency signal requires more samples to reconstruct. We empirically show that a high-frequency function is more difficult to approximate. This suggests a search-control strategy: we should use states from the high-frequency regions of the value function to query the model for more samples. We develop a simple strategy to locally measure the frequency of a function by gradient and Hessian norms, and provide theoretical justification for this approach. We then apply our strategy to search-control in Dyna, and conduct experiments to show its properties and effectiveness on benchmark domains.
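The gradient-and-Hessian-norm frequency measure can be sketched in one dimension with finite differences. The toy value function and the equal weighting of the two norms below are assumptions for illustration, not the paper's actual formulation:

```python
import math

def frequency_score(v, x, h=1e-3):
    """Local 'frequency' proxy for a scalar function v at point x.

    Combines the finite-difference gradient and second-derivative
    (Hessian, in 1-D) magnitudes; equal weighting is illustrative only.
    """
    grad = (v(x + h) - v(x - h)) / (2.0 * h)
    hess = (v(x + h) - 2.0 * v(x) + v(x - h)) / (h * h)
    return abs(grad) + abs(hess)

# toy value function: flat far away, high-frequency wiggle near x = 0
v = lambda x: math.sin(20.0 * x) * math.exp(-x * x) + 0.1 * x

smooth = frequency_score(v, 3.0)   # far from the wiggle: low score
sharp = frequency_score(v, 0.1)    # inside the wiggle: high score
```

States with high scores would be prioritized in the search-control queue, so the model is queried more often where the value function varies fastest.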
|
1208.3839
|
Jing-Yan Wang
|
Jing-Yan Wang
|
Discriminative Sparse Coding on Multi-Manifold for Data Representation
and Classification
|
This paper has been withdrawn by the author due to the terrible
writing
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sparse coding has been widely used as an effective data representation method
in various applications, such as computer vision, medical imaging and
bioinformatics. However, conventional sparse coding algorithms and their
manifold-regularized variants (graph sparse coding and Laplacian sparse
coding) learn the codebook and codes in an unsupervised manner and neglect the
class information available in the training set. To address this problem, in
this paper we propose a novel discriminative sparse coding method based on
multiple manifolds, which learns discriminative class-conditional codebooks
and sparse codes from both the data feature space and the class labels. First,
the entire training set is partitioned into multiple manifolds according to
the class labels. Then, we formulate sparse coding as a manifold-manifold
matching problem and learn class-conditional codebooks and codes that maximize
the manifold margins between different classes. Lastly, we present a strategy
based on the data point-manifold matching error to classify unlabeled data
points. Experimental results on somatic mutation identification and breast
tumor classification in ultrasonic images demonstrate the efficacy of the
proposed data representation and classification approach.
|
[
{
"created": "Sun, 19 Aug 2012 14:49:27 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Apr 2013 14:21:40 GMT",
"version": "v2"
}
] |
2013-04-04
|
[
[
"Wang",
"Jing-Yan",
""
]
] |
Sparse coding has been widely used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding) learn the codebook and codes in an unsupervised manner and neglect the class information available in the training set. To address this problem, in this paper we propose a novel discriminative sparse coding method based on multiple manifolds, which learns discriminative class-conditional codebooks and sparse codes from both the data feature space and the class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate sparse coding as a manifold-manifold matching problem and learn class-conditional codebooks and codes that maximize the manifold margins between different classes. Lastly, we present a strategy based on the data point-manifold matching error to classify unlabeled data points. Experimental results on somatic mutation identification and breast tumor classification in ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach.
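As background for the method above, the base sparse-coding step can be sketched with ISTA on a random codebook. All data below are synthetic, and the paper's class-conditional, multi-manifold extensions are not shown:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.05, n_iter=200):
    """Generic sparse coding of x over codebook D via ISTA.

    Solves min_z 0.5 * ||x - D z||^2 + lam * ||z||_1. This is only the
    unsupervised base step that discriminative variants build on.
    """
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ z - x)           # gradient of the smooth term
        u = z - g / L
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm codebook atoms
z_true = np.zeros(50)
z_true[[3, 17]] = [1.0, -0.5]
x = D @ z_true                          # signal built from two atoms
z = ista_sparse_code(D, x)
```

A discriminative variant would additionally tie the codebook and codes to class labels, as the abstract describes.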
|
2407.10112
|
Quanming Yao
|
Yaqing Wang and Hongming Piao and Daxiang Dong and Quanming Yao and
Jingbo Zhou
|
Warming Up Cold-Start CTR Prediction by Learning Item-Specific Feature
Interactions
|
KDD 2024
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In recommendation systems, new items are continuously introduced, initially
lacking interaction records but gradually accumulating them over time.
Accurately predicting the click-through rate (CTR) for these items is crucial
for enhancing both revenue and user experience. While existing methods focus on
enhancing item ID embeddings for new items within general CTR models, they tend
to adopt a global feature interaction approach, in which new items with
sparse data are often overshadowed by items with abundant interactions. To
address this, our work
introduces EmerG, a novel approach that warms up cold-start CTR prediction by
learning item-specific feature interaction patterns. EmerG utilizes
hypernetworks to generate an item-specific feature graph based on item
characteristics, which is then processed by a Graph Neural Network (GNN). This
GNN is specially tailored to provably capture feature interactions at any order
through a customized message passing mechanism. We further design a meta
learning strategy that optimizes parameters of hypernetworks and GNN across
various item CTR prediction tasks, while only adjusting a minimal set of
item-specific parameters within each task. This strategy effectively reduces
the risk of overfitting when dealing with limited data. Extensive experiments
on benchmark datasets validate that EmerG consistently performs best given
no, few, or sufficient instances of new items.
|
[
{
"created": "Sun, 14 Jul 2024 07:58:13 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Wang",
"Yaqing",
""
],
[
"Piao",
"Hongming",
""
],
[
"Dong",
"Daxiang",
""
],
[
"Yao",
"Quanming",
""
],
[
"Zhou",
"Jingbo",
""
]
] |
In recommendation systems, new items are continuously introduced, initially lacking interaction records but gradually accumulating them over time. Accurately predicting the click-through rate (CTR) for these items is crucial for enhancing both revenue and user experience. While existing methods focus on enhancing item ID embeddings for new items within general CTR models, they tend to adopt a global feature interaction approach, in which new items with sparse data are often overshadowed by items with abundant interactions. To address this, our work introduces EmerG, a novel approach that warms up cold-start CTR prediction by learning item-specific feature interaction patterns. EmerG utilizes hypernetworks to generate an item-specific feature graph based on item characteristics, which is then processed by a Graph Neural Network (GNN). This GNN is specially tailored to provably capture feature interactions at any order through a customized message passing mechanism. We further design a meta learning strategy that optimizes parameters of hypernetworks and GNN across various item CTR prediction tasks, while only adjusting a minimal set of item-specific parameters within each task. This strategy effectively reduces the risk of overfitting when dealing with limited data. Extensive experiments on benchmark datasets validate that EmerG consistently performs best given no, few, or sufficient instances of new items.
|
2303.11502
|
Subhadeep Koley
|
Ayan Kumar Bhunia, Subhadeep Koley, Amandeep Kumar, Aneeshan Sain,
Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
|
Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings
|
CVPR 2023. Project page available at
https://ayankumarbhunia.github.io/Sketch2Saliency/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human sketch has already proved its worth in various visual understanding
tasks (e.g., retrieval, segmentation, image captioning). In this paper, we
reveal a new trait of sketches - that they are also salient. This is intuitive
as sketching is a natural attentive process at its core. More specifically, we
aim to study how sketches can be used as a weak label to detect salient objects
present in an image. To this end, we propose a novel method that emphasises
how a "salient object" can be explained by hand-drawn sketches. To accomplish
this, we introduce a photo-to-sketch generation model that aims to generate
sequential sketch coordinates corresponding to a given visual photo through a
2D attention mechanism. Attention maps accumulated across the time steps give
rise to salient regions in the process. Extensive quantitative and qualitative
experiments support our hypothesis and show that our sketch-based saliency
detection model delivers competitive performance compared to the state of the
art.
|
[
{
"created": "Mon, 20 Mar 2023 23:46:46 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Mar 2023 22:14:11 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Mar 2023 15:08:36 GMT",
"version": "v3"
}
] |
2023-03-31
|
[
[
"Bhunia",
"Ayan Kumar",
""
],
[
"Koley",
"Subhadeep",
""
],
[
"Kumar",
"Amandeep",
""
],
[
"Sain",
"Aneeshan",
""
],
[
"Chowdhury",
"Pinaki Nath",
""
],
[
"Xiang",
"Tao",
""
],
[
"Song",
"Yi-Zhe",
""
]
] |
Human sketch has already proved its worth in various visual understanding tasks (e.g., retrieval, segmentation, image captioning). In this paper, we reveal a new trait of sketches - that they are also salient. This is intuitive as sketching is a natural attentive process at its core. More specifically, we aim to study how sketches can be used as a weak label to detect salient objects present in an image. To this end, we propose a novel method that emphasises how a "salient object" can be explained by hand-drawn sketches. To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo through a 2D attention mechanism. Attention maps accumulated across the time steps give rise to salient regions in the process. Extensive quantitative and qualitative experiments support our hypothesis and show that our sketch-based saliency detection model delivers competitive performance compared to the state of the art.
|
2212.10292
|
Monika Wysoczańska
|
Monika Wysoczańska, Tom Monnier, Tomasz Trzciński, David Picard
|
Towards Unsupervised Visual Reasoning: Do Off-The-Shelf Features Know
How to Reason?
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in visual representation learning have produced an
abundance of powerful off-the-shelf features that are ready to use for numerous
downstream tasks. This work aims to assess how well these features preserve
information about the objects, such as their spatial location, their visual
properties and their relative relationships. We propose to do so by evaluating
them in the context of visual reasoning, where multiple objects with complex
relationships and different attributes are at play. More specifically, we
introduce a protocol to evaluate visual representations for the task of Visual
Question Answering. In order to decouple visual feature extraction from
reasoning, we design a specific attention-based reasoning module which is
trained on the frozen visual representations to be evaluated, in a spirit
similar to standard feature evaluations relying on shallow networks. We compare
two types of visual representations, densely extracted local features and
object-centric ones, against the performances of a perfect image representation
using ground truth. Our main findings are twofold. First, despite excellent
performance on classical proxy tasks, such representations fall short when
solving complex reasoning problems. Second, object-centric features better
preserve the critical information necessary to perform visual reasoning. Our
proposed framework shows how to approach this evaluation methodologically.
|
[
{
"created": "Tue, 20 Dec 2022 14:36:45 GMT",
"version": "v1"
}
] |
2022-12-21
|
[
[
"Wysoczańska",
"Monika",
""
],
[
"Monnier",
"Tom",
""
],
[
"Trzciński",
"Tomasz",
""
],
[
"Picard",
"David",
""
]
] |
Recent advances in visual representation learning have produced an abundance of powerful off-the-shelf features that are ready to use for numerous downstream tasks. This work aims to assess how well these features preserve information about the objects, such as their spatial location, their visual properties and their relative relationships. We propose to do so by evaluating them in the context of visual reasoning, where multiple objects with complex relationships and different attributes are at play. More specifically, we introduce a protocol to evaluate visual representations for the task of Visual Question Answering. In order to decouple visual feature extraction from reasoning, we design a specific attention-based reasoning module which is trained on the frozen visual representations to be evaluated, in a spirit similar to standard feature evaluations relying on shallow networks. We compare two types of visual representations, densely extracted local features and object-centric ones, against the performance of a perfect image representation using ground truth. Our main findings are twofold. First, despite excellent performance on classical proxy tasks, such representations fall short when solving complex reasoning problems. Second, object-centric features better preserve the critical information necessary to perform visual reasoning. Our proposed framework shows how to approach this evaluation methodologically.
|
1904.00368
|
Soheil Mehrabkhani
|
Soheil Mehrabkhani
|
Fourier Transform Approach to Machine Learning I: Fourier Regression
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a supervised learning algorithm for machine learning applications.
Contrary to model development in classical methods, which treats training,
validation, and testing as separate steps, the presented approach uses a
unified training and evaluation procedure based on iterative band filtering
with a fast Fourier transform. The presented approach does not apply the
method of least squares, so the ill-conditioned matrices typical of classical
methods do not occur at all. The optimal model results from the convergence of
the performance metric, which automatically prevents the usual underfitting and
overfitting problems. The algorithm's capability is investigated for noisy
data, and the obtained results demonstrate a reliable and powerful machine
learning approach beyond the typical limits of classical methods.
|
[
{
"created": "Sun, 31 Mar 2019 09:41:28 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2019 09:24:18 GMT",
"version": "v2"
},
{
"created": "Sun, 22 Sep 2019 09:22:34 GMT",
"version": "v3"
}
] |
2019-09-24
|
[
[
"Mehrabkhani",
"Soheil",
""
]
] |
We propose a supervised learning algorithm for machine learning applications. Contrary to model development in classical methods, which treats training, validation, and testing as separate steps, the presented approach uses a unified training and evaluation procedure based on iterative band filtering with a fast Fourier transform. The presented approach does not apply the method of least squares, so the ill-conditioned matrices typical of classical methods do not occur at all. The optimal model results from the convergence of the performance metric, which automatically prevents the usual underfitting and overfitting problems. The algorithm's capability is investigated for noisy data, and the obtained results demonstrate a reliable and powerful machine learning approach beyond the typical limits of classical methods.
|
2105.09660
|
Felix Hamborg
|
Felix Hamborg and Karsten Donnay and Bela Gipp
|
Towards Target-dependent Sentiment Classification in News Articles
| null | null |
10.1007/978-3-030-71305-8_12
| null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Extensive research on target-dependent sentiment classification (TSC) has led
to strong classification performances in domains where authors tend to
explicitly express sentiment about specific entities or topics, such as in
reviews or on social media. We investigate TSC in news articles, a much less
researched domain, despite the importance of news as an essential information
source in individual and societal decision making. This article introduces
NewsTSC, a manually annotated dataset to explore TSC on news articles.
Investigating characteristics of sentiment in news and contrasting them to
popular TSC domains, we find that sentiment in the news is expressed less
explicitly, is more dependent on context and readership, and requires a greater
degree of interpretation. In an extensive evaluation, we find that the state of
the art in TSC performs worse on news articles than on other domains (average
recall AvgRec = 69.8 on NewsTSC compared to AvgRec = [75.6, 82.2] on
established TSC datasets). Reasons include incorrectly resolved relations
between targets and sentiment-bearing phrases and off-context dependence. As a major
improvement over previous news TSC, we find that BERT's natural language
understanding capabilities capture the less explicit sentiment used in news
articles.
|
[
{
"created": "Thu, 20 May 2021 10:48:03 GMT",
"version": "v1"
}
] |
2021-05-21
|
[
[
"Hamborg",
"Felix",
""
],
[
"Donnay",
"Karsten",
""
],
[
"Gipp",
"Bela",
""
]
] |
Extensive research on target-dependent sentiment classification (TSC) has led to strong classification performances in domains where authors tend to explicitly express sentiment about specific entities or topics, such as in reviews or on social media. We investigate TSC in news articles, a much less researched domain, despite the importance of news as an essential information source in individual and societal decision making. This article introduces NewsTSC, a manually annotated dataset to explore TSC on news articles. Investigating characteristics of sentiment in news and contrasting them to popular TSC domains, we find that sentiment in the news is expressed less explicitly, is more dependent on context and readership, and requires a greater degree of interpretation. In an extensive evaluation, we find that the state of the art in TSC performs worse on news articles than on other domains (average recall AvgRec = 69.8 on NewsTSC compared to AvgRec = [75.6, 82.2] on established TSC datasets). Reasons include incorrectly resolved relations between targets and sentiment-bearing phrases and off-context dependence. As a major improvement over previous news TSC, we find that BERT's natural language understanding capabilities capture the less explicit sentiment used in news articles.
|
1904.09561
|
EPTCS
|
Michele Pagani (IRIF, Université Paris Diderot, France), Sandra
Alves (Porto University)
|
Proceedings Twelfth Workshop on Developments in Computational Models and
Ninth Workshop on Intersection Types and Related Systems
| null |
EPTCS 293, 2019
|
10.4204/EPTCS.293
| null |
cs.LO cs.CC cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains a final and revised selection of papers presented at the
Twelfth Workshop on Developments in Computational Models (DCM 2018) and the
Ninth Workshop on Intersection Types and Related Systems (ITRS 2018), held on
July 8, 2018 in Oxford, in affiliation with FLOC 2018.
|
[
{
"created": "Sun, 21 Apr 2019 07:53:20 GMT",
"version": "v1"
}
] |
2019-04-23
|
[
[
"Pagani",
"Michele",
"",
"IRIF, Université Paris Diderot, France"
],
[
"Alves",
"Sandra",
"",
"Porto University"
]
] |
This volume contains a final and revised selection of papers presented at the Twelfth Workshop on Developments in Computational Models (DCM 2018) and the Ninth Workshop on Intersection Types and Related Systems (ITRS 2018), held on July 8, 2018 in Oxford, in affiliation with FLOC 2018.
|
2105.03215
|
Yida Wang
|
Zhi Chen, Cody Hao Yu, Trevor Morris, Jorn Tuyls, Yi-Hsiang Lai, Jared
Roesch, Elliott Delaye, Vin Sharma, Yida Wang
|
Bring Your Own Codegen to Deep Learning Compiler
| null | null | null | null |
cs.LG cs.PF cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks (DNNs) are ubiquitously applied in many applications,
and accelerators have emerged as an enabler to support the fast and efficient
inference tasks of these applications. However, to achieve high
model coverage with high performance, each accelerator vendor has to develop a
full compiler stack to ingest, optimize, and execute the DNNs. This poses
significant challenges in the development and maintenance of the software
stack. In addition, vendors have to continuously update their hardware
and/or software to cope with the rapid evolution of DNN model architectures
and operators. To address these issues, this paper proposes an open source
framework that enables users to only concentrate on the development of their
proprietary code generation tools by reusing as many components as possible
from existing deep learning compilers. Our framework provides users with flexible and
easy-to-use interfaces to partition their models into segments that can be
executed on "the best" processors to take advantage of the powerful computation
capability of accelerators. Our case study shows that our framework has been
deployed in multiple commercial vendors' compiler stacks with only a few
thousand lines of code.
|
[
{
"created": "Mon, 3 May 2021 17:22:25 GMT",
"version": "v1"
}
] |
2021-05-10
|
[
[
"Chen",
"Zhi",
""
],
[
"Yu",
"Cody Hao",
""
],
[
"Morris",
"Trevor",
""
],
[
"Tuyls",
"Jorn",
""
],
[
"Lai",
"Yi-Hsiang",
""
],
[
"Roesch",
"Jared",
""
],
[
"Delaye",
"Elliott",
""
],
[
"Sharma",
"Vin",
""
],
[
"Wang",
"Yida",
""
]
] |
Deep neural networks (DNNs) are ubiquitously applied in many applications, and accelerators have emerged as an enabler to support the fast and efficient inference tasks of these applications. However, to achieve high model coverage with high performance, each accelerator vendor has to develop a full compiler stack to ingest, optimize, and execute the DNNs. This poses significant challenges in the development and maintenance of the software stack. In addition, vendors have to continuously update their hardware and/or software to cope with the rapid evolution of DNN model architectures and operators. To address these issues, this paper proposes an open source framework that enables users to only concentrate on the development of their proprietary code generation tools by reusing as many components as possible from existing deep learning compilers. Our framework provides users with flexible and easy-to-use interfaces to partition their models into segments that can be executed on "the best" processors to take advantage of the powerful computation capability of accelerators. Our case study shows that our framework has been deployed in multiple commercial vendors' compiler stacks with only a few thousand lines of code.
|
2204.09717
|
Anwesh Reddy Paduri
|
Narayana Darapaneni, Selvakumar Raj, Raghul V, Venkatesh Sivaraman,
Sunil Mohan, and Anwesh Reddy Paduri
|
LSTM-RASA Based Agri Farm Assistant for Farmers
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The application of Deep Learning and Natural Language based ChatBots has
grown rapidly in recent years. They are used in many fields such as customer
support, reservation systems, and as personal assistants. Enterprises use such
ChatBots to serve their customers in a better and more efficient manner.
Even with such technological advancement, expert advice does not reach
farmers in a timely manner. Farmers are still largely dependent on their
peers' knowledge to solve the problems they face in their fields. These
technologies have not been effectively used to give the required information to
farmers in a timely manner. This project aims to implement a closed-domain
ChatBot for the field of agriculture, the Farmers Assistant. Farmers can
converse with the ChatBot and get expert advice for their field. The Farmers
Assistant is based on the RASA Open Source Framework. The ChatBot identifies
the intent and entities from user utterances, retrieves the remedy from the
database, and shares it with the user. We tested the bot with existing data and
it showed promising results.
|
[
{
"created": "Thu, 7 Apr 2022 11:01:54 GMT",
"version": "v1"
}
] |
2022-04-22
|
[
[
"Darapaneni",
"Narayana",
""
],
[
"Raj",
"Selvakumar",
""
],
[
"V",
"Raghul",
""
],
[
"Sivaraman",
"Venkatesh",
""
],
[
"Mohan",
"Sunil",
""
],
[
"Paduri",
"Anwesh Reddy",
""
]
] |
The application of Deep Learning and Natural Language based ChatBots has grown rapidly in recent years. They are used in many fields such as customer support, reservation systems, and as personal assistants. Enterprises use such ChatBots to serve their customers in a better and more efficient manner. Even with such technological advancement, expert advice does not reach farmers in a timely manner. Farmers are still largely dependent on their peers' knowledge to solve the problems they face in their fields. These technologies have not been effectively used to give the required information to farmers in a timely manner. This project aims to implement a closed-domain ChatBot for the field of agriculture, the Farmers Assistant. Farmers can converse with the ChatBot and get expert advice for their field. The Farmers Assistant is based on the RASA Open Source Framework. The ChatBot identifies the intent and entities from user utterances, retrieves the remedy from the database, and shares it with the user. We tested the bot with existing data and it showed promising results.
|
1810.02383
|
Alphan Sahin
|
Alphan Sahin, Rui Yang
|
A Generic Complementary Sequence Construction and Associated
Encoder/Decoder Design
|
15 pages, to appear in IEEE Transactions on Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we propose a flexible construction of complementary sequences
(CSs) that can contain zero-valued elements. To derive the construction, we use
Boolean functions to represent a polynomial generated with a recursion. By
applying this representation to recursive CS constructions, we show the impact
of construction parameters, such as the sign, amplitude, and phase rotation
used in the recursion, on the elements of the synthesized CS. As a result, we extend Davis
and Jedwab's CS construction by obtaining independent functions for the
amplitude and phase of each element of the CS, and the seed sequence positions
in the CS. The proposed construction shows that a set of distinct CSs
compatible with non-contiguous resource allocations for orthogonal
frequency-division multiplexing (OFDM) and various constellations can be
synthesized systematically. It also leads to a low peak-to-mean-envelope-power
ratio (PMEPR) multiple accessing scheme in the uplink and a low-complexity
recursive decoder. We demonstrate the performance of the proposed encoder and
decoder through comprehensive simulations.
|
[
{
"created": "Thu, 4 Oct 2018 18:13:16 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Jul 2019 20:20:54 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Aug 2021 14:54:38 GMT",
"version": "v3"
}
] |
2021-08-05
|
[
[
"Sahin",
"Alphan",
""
],
[
"Yang",
"Rui",
""
]
] |
In this study, we propose a flexible construction of complementary sequences (CSs) that can contain zero-valued elements. To derive the construction, we use Boolean functions to represent a polynomial generated with a recursion. By applying this representation to recursive CS constructions, we show the impact of construction parameters, such as the sign, amplitude, and phase rotation used in the recursion, on the elements of the synthesized CS. As a result, we extend Davis and Jedwab's CS construction by obtaining independent functions for the amplitude and phase of each element of the CS, and the seed sequence positions in the CS. The proposed construction shows that a set of distinct CSs compatible with non-contiguous resource allocations for orthogonal frequency-division multiplexing (OFDM) and various constellations can be synthesized systematically. It also leads to a low peak-to-mean-envelope-power ratio (PMEPR) multiple accessing scheme in the uplink and a low-complexity recursive decoder. We demonstrate the performance of the proposed encoder and decoder through comprehensive simulations.
|
1705.05301
|
Paschalis Panteleris
|
Paschalis Panteleris (1) and Antonis Argyros (1 and 2) ((1) Institute
of Computer Science, FORTH, (2) Computer Science Department, University of
Crete)
|
Back to RGB: 3D tracking of hands and hand-object interactions based on
short-baseline stereo
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel solution to the problem of 3D tracking of the articulated
motion of human hand(s), possibly in interaction with other objects. The vast
majority of contemporary relevant work capitalizes on depth information
provided by RGBD cameras. In this work, we show that accurate and efficient 3D
hand tracking is possible, even for the case of RGB stereo. A straightforward
approach for solving the problem based on such input would be to first recover
depth and then apply a state of the art depth-based 3D hand tracking method.
Unfortunately, this does not work well in practice because the stereo-based,
dense 3D reconstruction of hands is far less accurate than the one obtained by
RGBD cameras. Our approach bypasses 3D reconstruction and follows a completely
different route: 3D hand tracking is formulated as an optimization problem
whose solution is the hand configuration that maximizes the color consistency
between the two views of the hand. We demonstrate the applicability of our
method for real time tracking of a single hand, of a hand manipulating an
object and of two interacting hands. The method has been evaluated
quantitatively on standard datasets and in comparison to relevant, state of the
art RGBD-based approaches. The obtained results demonstrate that the proposed
stereo-based method performs equally well to its RGBD-based competitors, and in
some cases, it even outperforms them.
|
[
{
"created": "Mon, 15 May 2017 15:38:56 GMT",
"version": "v1"
}
] |
2017-05-16
|
[
[
"Panteleris",
"Paschalis",
"",
"1 and 2"
],
[
"Argyros",
"Antonis",
"",
"1 and 2"
]
] |
We present a novel solution to the problem of 3D tracking of the articulated motion of human hand(s), possibly in interaction with other objects. The vast majority of contemporary relevant work capitalizes on depth information provided by RGBD cameras. In this work, we show that accurate and efficient 3D hand tracking is possible, even for the case of RGB stereo. A straightforward approach for solving the problem based on such input would be to first recover depth and then apply a state of the art depth-based 3D hand tracking method. Unfortunately, this does not work well in practice because the stereo-based, dense 3D reconstruction of hands is far less accurate than the one obtained by RGBD cameras. Our approach bypasses 3D reconstruction and follows a completely different route: 3D hand tracking is formulated as an optimization problem whose solution is the hand configuration that maximizes the color consistency between the two views of the hand. We demonstrate the applicability of our method for real time tracking of a single hand, of a hand manipulating an object and of two interacting hands. The method has been evaluated quantitatively on standard datasets and in comparison to relevant, state of the art RGBD-based approaches. The obtained results demonstrate that the proposed stereo-based method performs equally well to its RGBD-based competitors, and in some cases, it even outperforms them.
|
2001.05572
|
Oliver Urbann
|
Oliver Urbann, Simon Camphausen, Arne Moos, Ingmar Schwarz, Sören
Kerner, Maximilian Otten
|
A C Code Generator for Fast Inference and Simple Deployment of
Convolutional Neural Networks on Resource Constrained Systems
| null | null | null | null |
cs.LG cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inference of Convolutional Neural Networks in time critical applications
usually requires a GPU. In robotics or embedded devices these are often not
available due to energy, space and cost constraints. Furthermore, installation
of a deep learning framework or even a native compiler on the target platform
is not possible. This paper presents a neural network code generator (NNCG)
that generates, from a trained CNN, a plain ANSI C code file that encapsulates
the inference in a single function. It can easily be included in existing
projects and, due to its lack of dependencies, cross compilation is usually
possible. Additionally, the code generation is optimized based on the known
trained CNN and target platform following four design principles. The system is
evaluated utilizing a small CNN designed for this application. Compared to
TensorFlow XLA and Glow, speed-ups of up to 11.81 are shown, and even GPUs are
outperformed regarding latency.
|
[
{
"created": "Tue, 14 Jan 2020 09:46:14 GMT",
"version": "v1"
}
] |
2020-01-17
|
[
[
"Urbann",
"Oliver",
""
],
[
"Camphausen",
"Simon",
""
],
[
"Moos",
"Arne",
""
],
[
"Schwarz",
"Ingmar",
""
],
[
"Kerner",
"Sören",
""
],
[
"Otten",
"Maximilian",
""
]
] |
Inference of Convolutional Neural Networks in time critical applications usually requires a GPU. In robotics or embedded devices these are often not available due to energy, space and cost constraints. Furthermore, installation of a deep learning framework or even a native compiler on the target platform is not possible. This paper presents a neural network code generator (NNCG) that generates, from a trained CNN, a plain ANSI C code file that encapsulates the inference in a single function. It can easily be included in existing projects and, due to its lack of dependencies, cross compilation is usually possible. Additionally, the code generation is optimized based on the known trained CNN and target platform following four design principles. The system is evaluated utilizing a small CNN designed for this application. Compared to TensorFlow XLA and Glow, speed-ups of up to 11.81 are shown, and even GPUs are outperformed regarding latency.
|
2106.02954
|
Avi Caciularu
|
Avi Caciularu, Ido Dagan, Jacob Goldberger
|
Denoising Word Embeddings by Averaging in a Shared Space
|
Accepted to *SEM 2021
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new approach for smoothing and improving the quality of word
embeddings. We consider a method of fusing word embeddings that were trained on
the same corpus but with different initializations. We project all the models
to a shared vector space using an efficient implementation of the Generalized
Procrustes Analysis (GPA) procedure, previously used in multilingual word
translation. Our word representation demonstrates consistent improvements over
the raw models as well as their simplistic average, on a range of tasks. As the
new representations are more stable and reliable, there is a noticeable
improvement in rare word evaluations.
|
[
{
"created": "Sat, 5 Jun 2021 19:49:02 GMT",
"version": "v1"
}
] |
2021-06-08
|
[
[
"Caciularu",
"Avi",
""
],
[
"Dagan",
"Ido",
""
],
[
"Goldberger",
"Jacob",
""
]
] |
We introduce a new approach for smoothing and improving the quality of word embeddings. We consider a method of fusing word embeddings that were trained on the same corpus but with different initializations. We project all the models to a shared vector space using an efficient implementation of the Generalized Procrustes Analysis (GPA) procedure, previously used in multilingual word translation. Our word representation demonstrates consistent improvements over the raw models as well as their simplistic average, on a range of tasks. As the new representations are more stable and reliable, there is a noticeable improvement in rare word evaluations.
|
2001.07698
|
Bernardo Huberman
|
Qi Zhou, Jingjie Zhu, Junwen Zhang, Zhensheng Jia, Bernardo Huberman
and Gee-Kung Chang
|
Intelligent Bandwidth Allocation for Latency Management in NG-EPON using
Reinforcement Learning Methods
| null | null | null | null |
cs.NI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel intelligent bandwidth allocation scheme in NG-EPON using
reinforcement learning is proposed and demonstrated for latency management. We
verify the capability of the proposed scheme under both fixed and dynamic
traffic load scenarios to achieve <1 ms average latency. The RL agent
demonstrates an efficient intelligent mechanism for managing latency, which
provides a promising IBA solution for the next-generation access network.
|
[
{
"created": "Tue, 21 Jan 2020 18:58:56 GMT",
"version": "v1"
}
] |
2020-01-22
|
[
[
"Zhou",
"Qi",
""
],
[
"Zhu",
"Jingjie",
""
],
[
"Zhang",
"Junwen",
""
],
[
"Jia",
"Zhensheng",
""
],
[
"Huberman",
"Bernardo",
""
],
[
"Chang",
"Gee-Kung",
""
]
] |
A novel intelligent bandwidth allocation scheme in NG-EPON using reinforcement learning is proposed and demonstrated for latency management. We verify the capability of the proposed scheme under both fixed and dynamic traffic load scenarios to achieve <1 ms average latency. The RL agent demonstrates an efficient intelligent mechanism for managing latency, which provides a promising IBA solution for the next-generation access network.
|
2302.09040
|
Karine Levonyan
|
Karine Levonyan, Jesse Harder, Fernando De Mesentier Silva
|
Automated Graph Genetic Algorithm based Puzzle Validation for Faster
Game Design
| null |
2022 IEEE Congress on Evolutionary Computation (CEC), Padua,
Italy, 2022, pp. 1-8
|
10.1109/CEC55065.2022.9870402
| null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Many games rely on constantly creating new and engaging content to
maintain the interest of their player-base. One such example is puzzle games,
in which it is common to have a recurrent need to create new puzzles. Creating
new puzzles requires guaranteeing that they are solvable and interesting to
players, both of which require significant time from the designers. Automatic
validation of puzzles provides designers with a significant time saving and a
potential boost in quality. Automation allows puzzle designers to estimate
different properties, increase the variety of constraints, and even personalize
puzzles to specific players. Puzzles often have a large design space, which
renders exhaustive search approaches infeasible, as they require significant
time. Specifically, these puzzles can be formulated as quadratic combinatorial
optimization problems. This paper presents an evolutionary algorithm, empowered
by expert-knowledge-informed heuristics, for efficiently solving logical
puzzles in video games, leading to a more efficient design process. We discuss
multiple variations of hybrid genetic approaches for constraint satisfaction
problems that allow us to find a diverse set of near-optimal solutions for
puzzles. We demonstrate our approach on a fantasy Party Building Puzzle game,
and discuss how it can be applied more broadly to other puzzles to guide
designers in their creative process.
|
[
{
"created": "Fri, 17 Feb 2023 18:15:33 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Feb 2023 19:23:15 GMT",
"version": "v2"
}
] |
2023-02-23
|
[
[
"Levonyan",
"Karine",
""
],
[
"Harder",
"Jesse",
""
],
[
"Silva",
"Fernando De Mesentier",
""
]
] |
Many games rely on constantly creating new and engaging content to maintain the interest of their player-base. One such example is puzzle games, in which it is common to have a recurrent need to create new puzzles. Creating new puzzles requires guaranteeing that they are solvable and interesting to players, both of which require significant time from the designers. Automatic validation of puzzles provides designers with a significant time saving and a potential boost in quality. Automation allows puzzle designers to estimate different properties, increase the variety of constraints, and even personalize puzzles to specific players. Puzzles often have a large design space, which renders exhaustive search approaches infeasible, as they require significant time. Specifically, these puzzles can be formulated as quadratic combinatorial optimization problems. This paper presents an evolutionary algorithm, empowered by expert-knowledge-informed heuristics, for efficiently solving logical puzzles in video games, leading to a more efficient design process. We discuss multiple variations of hybrid genetic approaches for constraint satisfaction problems that allow us to find a diverse set of near-optimal solutions for puzzles. We demonstrate our approach on a fantasy Party Building Puzzle game, and discuss how it can be applied more broadly to other puzzles to guide designers in their creative process.
|
1806.10061
|
Kamil Senel
|
Kamil Senel and Erik G. Larsson
|
Grant-Free Massive MTC-Enabled Massive MIMO: A Compressive Sensing
Approach
|
Submitted to IEEE Transactions on Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key challenge of massive MTC (mMTC) is the joint detection of device
activity and decoding of data. The sparse characteristics of mMTC make
compressed sensing (CS) approaches a promising solution to the device detection
problem. However, utilizing CS-based approaches for device detection along with
channel estimation, and using the acquired estimates for coherent data
transmission is suboptimal, especially when the goal is to convey only a few
bits of data.
First, we focus on the coherent transmission and demonstrate that it is
possible to obtain more accurate channel state information by combining
conventional estimators with CS-based techniques. Moreover, we illustrate that
even simple power control techniques can enhance the device detection
performance in mMTC setups.
Second, we devise a new non-coherent transmission scheme for mMTC and
specifically for grant-free random access. We design an algorithm that jointly
detects device activity along with embedded information bits. The approach
leverages elements from the approximate message passing (AMP) algorithm, and
exploits the structured sparsity introduced by the non-coherent transmission
scheme. Our analysis reveals that the proposed approach has superior
performance compared to application of the original AMP approach.
|
[
{
"created": "Tue, 26 Jun 2018 15:25:45 GMT",
"version": "v1"
}
] |
2018-06-27
|
[
[
"Senel",
"Kamil",
""
],
[
"Larsson",
"Erik G.",
""
]
] |
A key challenge of massive MTC (mMTC) is the joint detection of device activity and decoding of data. The sparse characteristics of mMTC make compressed sensing (CS) approaches a promising solution to the device detection problem. However, utilizing CS-based approaches for device detection along with channel estimation, and using the acquired estimates for coherent data transmission is suboptimal, especially when the goal is to convey only a few bits of data. First, we focus on the coherent transmission and demonstrate that it is possible to obtain more accurate channel state information by combining conventional estimators with CS-based techniques. Moreover, we illustrate that even simple power control techniques can enhance the device detection performance in mMTC setups. Second, we devise a new non-coherent transmission scheme for mMTC and specifically for grant-free random access. We design an algorithm that jointly detects device activity along with embedded information bits. The approach leverages elements from the approximate message passing (AMP) algorithm, and exploits the structured sparsity introduced by the non-coherent transmission scheme. Our analysis reveals that the proposed approach has superior performance compared to application of the original AMP approach.
|
1610.01245
|
Ali Parsai
|
Ali Parsai, Alessandro Murgia, Serge Demeyer
|
A Model to Estimate First-Order Mutation Coverage from Higher-Order
Mutation Coverage
|
2016 IEEE International Conference on Software Quality, Reliability,
and Security. 9 pages
|
2016 IEEE International Conference on Software Quality,
Reliability and Security (QRS), Vienna, Austria, 2016, pp. 365-373
|
10.1109/QRS.2016.48
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The test suite is essential for fault detection during software development.
First-order mutation coverage is an accurate metric to quantify the quality of
the test suite. However, it is computationally expensive. Hence, the adoption
of this metric is limited. In this study, we address this issue by proposing a
realistic model able to estimate first-order mutation coverage using only
higher-order mutation coverage. Our study shows how the estimation evolves
along with the order of mutation. We validate the model with an empirical study
based on 17 open-source projects.
|
[
{
"created": "Wed, 5 Oct 2016 01:15:42 GMT",
"version": "v1"
}
] |
2016-10-18
|
[
[
"Parsai",
"Ali",
""
],
[
"Murgia",
"Alessandro",
""
],
[
"Demeyer",
"Serge",
""
]
] |
The test suite is essential for fault detection during software development. First-order mutation coverage is an accurate metric to quantify the quality of the test suite. However, it is computationally expensive. Hence, the adoption of this metric is limited. In this study, we address this issue by proposing a realistic model able to estimate first-order mutation coverage using only higher-order mutation coverage. Our study shows how the estimation evolves along with the order of mutation. We validate the model with an empirical study based on 17 open-source projects.
|
2012.02344
|
Vassillen Chizhov
|
Vassillen Chizhov, Iliyan Georgiev, Karol Myszkowski, Gurprit Singh
|
Perceptual error optimization for Monte Carlo rendering
| null |
ACM Transactions on Graphics, Volume 41, Issue 3, June 2022,
Article No.: 26, pp 1-17
|
10.1145/3504002
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthesizing realistic images involves computing high-dimensional
light-transport integrals. In practice, these integrals are numerically
estimated via Monte Carlo integration. The error of this estimation manifests
itself as conspicuous aliasing or noise. To ameliorate such artifacts and
improve image fidelity, we propose a perception-oriented framework to optimize
the error of Monte Carlo rendering. We leverage models based on human
perception from the halftoning literature. The result is an optimization
problem whose solution distributes the error as visually pleasing blue noise in
image space. To find solutions, we present a set of algorithms that provide
varying trade-offs between quality and speed, showing substantial improvements
over prior state of the art. We perform evaluations using quantitative and
error metrics, and provide extensive supplemental material to demonstrate the
perceptual improvements achieved by our methods.
|
[
{
"created": "Fri, 4 Dec 2020 00:30:45 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Dec 2020 16:04:18 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Jul 2021 12:16:31 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Jul 2021 17:52:16 GMT",
"version": "v4"
},
{
"created": "Mon, 4 Apr 2022 11:46:03 GMT",
"version": "v5"
},
{
"created": "Tue, 5 Apr 2022 10:43:45 GMT",
"version": "v6"
}
] |
2022-04-06
|
[
[
"Chizhov",
"Vassillen",
""
],
[
"Georgiev",
"Iliyan",
""
],
[
"Myszkowski",
"Karol",
""
],
[
"Singh",
"Gurprit",
""
]
] |
Synthesizing realistic images involves computing high-dimensional light-transport integrals. In practice, these integrals are numerically estimated via Monte Carlo integration. The error of this estimation manifests itself as conspicuous aliasing or noise. To ameliorate such artifacts and improve image fidelity, we propose a perception-oriented framework to optimize the error of Monte Carlo rendering. We leverage models based on human perception from the halftoning literature. The result is an optimization problem whose solution distributes the error as visually pleasing blue noise in image space. To find solutions, we present a set of algorithms that provide varying trade-offs between quality and speed, showing substantial improvements over prior state of the art. We perform evaluations using quantitative and error metrics, and provide extensive supplemental material to demonstrate the perceptual improvements achieved by our methods.
|
2208.00771
|
Vladan Majerech Dr.
|
Vladan Majerech
|
100 prisoners and a lightbulb -- looking back
|
12 pages, 1 table, 1 graph
| null | null | null |
cs.DM math.HO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
100 prisoners and a light bulb is a long-standing mathematical puzzle. The
problem was studied mostly in 2002 [5], 2003 [1], and 2004 [3]. Solutions in
published articles had an average number of visits above 3850, while the best
solutions on forums had a (declared) average number of visits around 3500. I
spent some time in 2007-2009 optimizing the communication strategy and pushed
the average number of visits below 3390; no new ideas seem to have appeared
since. Recently I have met several people familiar with the published papers
from 2002-2003 but unaware of the newer results. Even after 2009, several
papers on the topic were published in which the new results were not mentioned
[4]. A whole book was written about the problem [2]. This is why I am writing
this summary.
|
[
{
"created": "Wed, 6 Jul 2022 22:22:53 GMT",
"version": "v1"
}
] |
2022-08-02
|
[
[
"Majerech",
"Vladan",
""
]
] |
100 prisoners and a light bulb is a long-standing mathematical puzzle. The problem was studied mostly in 2002 [5], 2003 [1], and 2004 [3]. Solutions in published articles had an average number of visits above 3850, while the best solutions on forums had a (declared) average number of visits around 3500. I spent some time in 2007-2009 optimizing the communication strategy and pushed the average number of visits below 3390; no new ideas seem to have appeared since. Recently I have met several people familiar with the published papers from 2002-2003 but unaware of the newer results. Even after 2009, several papers on the topic were published in which the new results were not mentioned [4]. A whole book was written about the problem [2]. This is why I am writing this summary.
|
2102.05224
|
Hyeongtaek Lee
|
Hyeongtaek Lee, Hyuckjin Choi, Hwanjin Kim, Sucheol Kim, Chulhee Jang,
Yongyun Choi, and Junil Choi
|
Downlink Channel Reconstruction for Spatial Multiplexing in Massive MIMO
Systems
|
Submitted to IEEE Transactions on Wireless Communications
|
IEEE Transactions on Wireless Communications, vol. 20, no. 9, pp.
6154-6166, Sept. 2021
|
10.1109/TWC.2021.3072158
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To obtain channel state information (CSI) at a base station (BS), most
research on massive multiple-input multiple-output (MIMO) systems considers
time division duplexing (TDD) to benefit from the uplink and downlink
channel reciprocity. Even in TDD, however, the BS still needs to transmit
downlink training signals, which are referred to as channel state information
reference signals (CSI-RSs) in the 3GPP standard, to support spatial
multiplexing in practice. This is because there are many cases in which the number
of transmit antennas is less than the number of receive antennas at a user
equipment (UE) due to power consumption and circuit complexity issues. Because
of this mismatch, uplink sounding reference signals (SRSs) from the UE are not
enough for the BS to obtain full downlink MIMO CSI. Therefore, after receiving
the downlink CSI-RSs, the UE needs to feed back quantized CSI to the BS using a
pre-defined codebook to support spatial multiplexing. In this paper, possible
approaches to reconstruct full downlink MIMO CSI at the BS are proposed by
exploiting both the SRS and quantized downlink CSI considering practical
antenna structures with reduced downlink CSI-RS overhead. Numerical results
show that the spectral efficiencies by spatial multiplexing based on the
proposed downlink MIMO CSI reconstruction techniques outperform the
conventional methods solely based on the quantized CSI.
|
[
{
"created": "Wed, 10 Feb 2021 02:26:01 GMT",
"version": "v1"
}
] |
2022-06-28
|
[
[
"Lee",
"Hyeongtaek",
""
],
[
"Choi",
"Hyuckjin",
""
],
[
"Kim",
"Hwanjin",
""
],
[
"Kim",
"Sucheol",
""
],
[
"Jang",
"Chulhee",
""
],
[
"Choi",
"Yongyun",
""
],
[
"Choi",
"Junil",
""
]
] |
To obtain channel state information (CSI) at a base station (BS), most research on massive multiple-input multiple-output (MIMO) systems considers time division duplexing (TDD) to benefit from the uplink and downlink channel reciprocity. Even in TDD, however, the BS still needs to transmit downlink training signals, which are referred to as channel state information reference signals (CSI-RSs) in the 3GPP standard, to support spatial multiplexing in practice. This is because there are many cases in which the number of transmit antennas is less than the number of receive antennas at a user equipment (UE) due to power consumption and circuit complexity issues. Because of this mismatch, uplink sounding reference signals (SRSs) from the UE are not enough for the BS to obtain full downlink MIMO CSI. Therefore, after receiving the downlink CSI-RSs, the UE needs to feed back quantized CSI to the BS using a pre-defined codebook to support spatial multiplexing. In this paper, possible approaches to reconstruct full downlink MIMO CSI at the BS are proposed by exploiting both the SRS and quantized downlink CSI considering practical antenna structures with reduced downlink CSI-RS overhead. Numerical results show that the spectral efficiencies by spatial multiplexing based on the proposed downlink MIMO CSI reconstruction techniques outperform the conventional methods solely based on the quantized CSI.
|
1507.00772
|
Roman Kecher
|
Yehuda Afek, Roman Kecher, Moshe Sulamy
|
Optimal and Resilient Pheromone Utilization in Ant Foraging
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pheromones are a chemical substance produced and released by ants as a means of
communication. In this work we present the minimum amount of pheromones
necessary and sufficient for a colony of ants (identical mobile agents) to
deterministically find a food source (treasure), assuming that each ant has the
computational capabilities of either a Finite State Machine (FSM) or a Turing
Machine (TM). In addition, we provide pheromone-based foraging algorithms
capable of handling fail-stop faults.
In more detail, we consider the case where $k$ identical ants, initially
located at the center (nest) of an infinite two-dimensional grid and
communicating only through pheromones, perform a collaborative search for an
adversarially hidden treasure placed at an unknown distance $D$. We begin by
proving a tight lower bound of $\Omega(D)$ on the amount of pheromones required
by any number of FSM-based ants to complete the search, and continue to reduce
the lower bound to $\Omega(k)$ for the stronger ants modeled as TM. We provide
algorithms which match the aforementioned lower bounds, and still terminate in
optimal $\mathcal{O}(D + D^2 / k)$ time, under both the synchronous and
asynchronous models. Furthermore, we consider a more realistic setting, where
an unknown number $f < k$ of ants may fail-stop at any time; we provide
fault-tolerant FSM algorithms (synchronous and asynchronous), that terminate in
$\mathcal{O}(D + D^2/(k-f) + Df)$ rounds and emit no more than the same
asymptotic minimum number of $\mathcal{O}(D)$ pheromones overall.
|
[
{
"created": "Thu, 2 Jul 2015 21:34:58 GMT",
"version": "v1"
}
] |
2015-07-06
|
[
[
"Afek",
"Yehuda",
""
],
[
"Kecher",
"Roman",
""
],
[
"Sulamy",
"Moshe",
""
]
] |
Pheromones are a chemical substance produced and released by ants as a means of communication. In this work we present the minimum amount of pheromones necessary and sufficient for a colony of ants (identical mobile agents) to deterministically find a food source (treasure), assuming that each ant has the computational capabilities of either a Finite State Machine (FSM) or a Turing Machine (TM). In addition, we provide pheromone-based foraging algorithms capable of handling fail-stop faults. In more detail, we consider the case where $k$ identical ants, initially located at the center (nest) of an infinite two-dimensional grid and communicating only through pheromones, perform a collaborative search for an adversarially hidden treasure placed at an unknown distance $D$. We begin by proving a tight lower bound of $\Omega(D)$ on the amount of pheromones required by any number of FSM-based ants to complete the search, and continue to reduce the lower bound to $\Omega(k)$ for the stronger ants modeled as TM. We provide algorithms which match the aforementioned lower bounds, and still terminate in optimal $\mathcal{O}(D + D^2 / k)$ time, under both the synchronous and asynchronous models. Furthermore, we consider a more realistic setting, where an unknown number $f < k$ of ants may fail-stop at any time; we provide fault-tolerant FSM algorithms (synchronous and asynchronous), that terminate in $\mathcal{O}(D + D^2/(k-f) + Df)$ rounds and emit no more than the same asymptotic minimum number of $\mathcal{O}(D)$ pheromones overall.
|
2303.18008
|
Bashar Huleihel
|
Bashar Huleihel, Oron Sabag, Haim H. Permuter, Victoria Kostina
|
Capacity of Finite-State Channels with Delayed Feedback
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we investigate the capacity of finite-state channels (FSCs) in
the presence of delayed feedback. We show that the capacity of an FSC with delayed
feedback can be computed as that of a new FSC with instantaneous feedback and
an extended state. Consequently, graph-based methods to obtain computable upper
and lower bounds on the delayed feedback capacity of unifilar FSCs are
proposed. Based on these methods, we establish that the capacity of the
trapdoor channel with delayed feedback of two time instances is given by
$\log_2(3/2)$. In addition, we derive an analytical upper bound on the delayed
feedback capacity of the binary symmetric channel with a no consecutive ones
input constraint. This bound also serves as a novel upper bound on its
non-feedback capacity, which outperforms all previously known bounds. Lastly,
we demonstrate that feedback does improve the capacity of the dicode erasure
channel.
|
[
{
"created": "Fri, 31 Mar 2023 12:28:18 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Jun 2024 11:14:27 GMT",
"version": "v2"
}
] |
2024-06-25
|
[
[
"Huleihel",
"Bashar",
""
],
[
"Sabag",
"Oron",
""
],
[
"Permuter",
"Haim H.",
""
],
[
"Kostina",
"Victoria",
""
]
] |
In this paper, we investigate the capacity of finite-state channels (FSCs) in the presence of delayed feedback. We show that the capacity of an FSC with delayed feedback can be computed as that of a new FSC with instantaneous feedback and an extended state. Consequently, graph-based methods to obtain computable upper and lower bounds on the delayed feedback capacity of unifilar FSCs are proposed. Based on these methods, we establish that the capacity of the trapdoor channel with delayed feedback of two time instances is given by $\log_2(3/2)$. In addition, we derive an analytical upper bound on the delayed feedback capacity of the binary symmetric channel with a no consecutive ones input constraint. This bound also serves as a novel upper bound on its non-feedback capacity, which outperforms all previously known bounds. Lastly, we demonstrate that feedback does improve the capacity of the dicode erasure channel.
|
2007.13657
|
Behnam Neyshabur
|
Behnam Neyshabur
|
Towards Learning Convolutions from Scratch
|
18 pages, 9 figures, 4 tables
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolution is one of the most essential components of architectures used in
computer vision. As machine learning moves towards reducing the expert bias and
learning it from data, a natural next step seems to be learning
convolution-like structures from scratch. This, however, has proven elusive.
For example, current state-of-the-art architecture search algorithms use
convolution as one of the existing modules rather than learning it from data.
In an attempt to understand the inductive bias that gives rise to convolutions,
we investigate minimum description length as a guiding principle and show that
in some settings, it can indeed be indicative of the performance of
architectures. To find architectures with small description length, we propose
$\beta$-LASSO, a simple variant of the LASSO algorithm that, when applied to
fully-connected networks for image classification tasks, learns architectures
with local connections and achieves state-of-the-art accuracies for training
fully-connected nets on CIFAR-10 (85.19%), CIFAR-100 (59.56%) and SVHN (94.07%),
bridging the gap between fully-connected and convolutional nets.
|
[
{
"created": "Mon, 27 Jul 2020 16:13:13 GMT",
"version": "v1"
}
] |
2020-07-28
|
[
[
"Neyshabur",
"Behnam",
""
]
] |
Convolution is one of the most essential components of architectures used in computer vision. As machine learning moves towards reducing the expert bias and learning it from data, a natural next step seems to be learning convolution-like structures from scratch. This, however, has proven elusive. For example, current state-of-the-art architecture search algorithms use convolution as one of the existing modules rather than learning it from data. In an attempt to understand the inductive bias that gives rise to convolutions, we investigate minimum description length as a guiding principle and show that in some settings, it can indeed be indicative of the performance of architectures. To find architectures with small description length, we propose $\beta$-LASSO, a simple variant of the LASSO algorithm that, when applied to fully-connected networks for image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected nets on CIFAR-10 (85.19%), CIFAR-100 (59.56%) and SVHN (94.07%), bridging the gap between fully-connected and convolutional nets.
|
2407.21298
|
An Wu
|
An Wu and Yu Pan and Fuqi Zhou and Jinghui Yan and Chuanlu Liu
|
A Vectorization Method Induced By Maximal Margin Classification For
Persistent Diagrams
| null | null | null | null |
cs.LG cs.AI q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Persistent homology is an effective method for extracting topological
information, represented as persistent diagrams, of spatial structure data.
Hence it is well-suited for the study of protein structures. Attempts to
incorporate persistent homology in machine learning methods of protein function
prediction have resulted in several techniques for vectorizing persistent
diagrams. However, current vectorization methods are excessively artificial and
cannot ensure the effective utilization of information or the rationality of
the methods. To address this problem, we propose a more geometrical
vectorization method of persistent diagrams based on maximal margin
classification for Banach spaces, and additionally propose a framework that
utilizes topological data analysis to identify proteins with specific
functions. We evaluated our vectorization method using a binary classification
task on proteins and compared it with the statistical methods that exhibit the
best performance among thirteen commonly used vectorization methods. The
experimental results indicate that our approach surpasses the statistical
methods in both robustness and precision.
|
[
{
"created": "Wed, 31 Jul 2024 02:55:01 GMT",
"version": "v1"
}
] |
2024-08-01
|
[
[
"Wu",
"An",
""
],
[
"Pan",
"Yu",
""
],
[
"Zhou",
"Fuqi",
""
],
[
"Yan",
"Jinghui",
""
],
[
"Liu",
"Chuanlu",
""
]
] |
Persistent homology is an effective method for extracting topological information, represented as persistent diagrams, of spatial structure data. Hence it is well-suited for the study of protein structures. Attempts to incorporate persistent homology in machine learning methods of protein function prediction have resulted in several techniques for vectorizing persistent diagrams. However, current vectorization methods are excessively artificial and cannot ensure the effective utilization of information or the rationality of the methods. To address this problem, we propose a more geometrical vectorization method of persistent diagrams based on maximal margin classification for Banach spaces, and additionally propose a framework that utilizes topological data analysis to identify proteins with specific functions. We evaluated our vectorization method using a binary classification task on proteins and compared it with the statistical methods that exhibit the best performance among thirteen commonly used vectorization methods. The experimental results indicate that our approach surpasses the statistical methods in both robustness and precision.
|
2002.04830
|
Kevin Tian
|
Arun Jambulapati, Yin Tat Lee, Jerry Li, Swati Padmanabhan, Kevin Tian
|
Positive Semidefinite Programming: Mixed, Parallel, and
Width-Independent
|
There is an error in this manuscript. This version notes the source
of the error on the first page
| null | null | null |
cs.DS math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give the first approximation algorithm for mixed packing and covering
semidefinite programs (SDPs) with polylogarithmic dependence on width. Mixed
packing and covering SDPs constitute a fundamental algorithmic primitive with
recent applications in combinatorial optimization, robust learning, and quantum
complexity. The current approximate solvers for positive semidefinite
programming can handle only pure packing instances, and technical hurdles
prevent their generalization to a wider class of positive instances. For a
given multiplicative accuracy of $\epsilon$, our algorithm takes
$O(\log^3(nd\rho) \cdot \epsilon^{-3})$ parallelizable iterations, where $n$,
$d$ are dimensions of the problem and $\rho$ is a width parameter of the
instance, generalizing or improving all previous parallel algorithms in the
positive linear and semidefinite programming literature. When specialized to
pure packing SDPs, our algorithm's iteration complexity is $O(\log^2 (nd) \cdot
\epsilon^{-2})$, a slight improvement and derandomization of the
state-of-the-art (Allen-Zhu et al. '16, Peng et al. '16, Wang et al. '15).
For a wide variety of structured instances commonly found in applications, the
iterations of our algorithm run in nearly-linear time.
In doing so, we give matrix analytic techniques for overcoming obstacles that
have stymied prior approaches to this open problem, as stated in past works
(Peng et al. '16, Mahoney et al. '16). Crucial to our analysis are a
simplification of existing algorithms for mixed positive linear programs,
achieved by removing an asymmetry caused by modifying covering constraints, and
a suite of matrix inequalities whose proofs are based on analyzing the Schur
complements of matrices in a higher dimension. We hope that both our algorithm
and techniques open the door to improved solvers for positive semidefinite
programming, as well as its applications.
|
[
{
"created": "Wed, 12 Feb 2020 07:47:50 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jun 2020 22:48:35 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jul 2021 04:07:02 GMT",
"version": "v3"
}
] |
2021-07-13
|
[
[
"Jambulapati",
"Arun",
""
],
[
"Lee",
"Yin Tat",
""
],
[
"Li",
"Jerry",
""
],
[
"Padmanabhan",
"Swati",
""
],
[
"Tian",
"Kevin",
""
]
] |
We give the first approximation algorithm for mixed packing and covering semidefinite programs (SDPs) with polylogarithmic dependence on width. Mixed packing and covering SDPs constitute a fundamental algorithmic primitive with recent applications in combinatorial optimization, robust learning, and quantum complexity. The current approximate solvers for positive semidefinite programming can handle only pure packing instances, and technical hurdles prevent their generalization to a wider class of positive instances. For a given multiplicative accuracy of $\epsilon$, our algorithm takes $O(\log^3(nd\rho) \cdot \epsilon^{-3})$ parallelizable iterations, where $n$, $d$ are dimensions of the problem and $\rho$ is a width parameter of the instance, generalizing or improving all previous parallel algorithms in the positive linear and semidefinite programming literature. When specialized to pure packing SDPs, our algorithm's iteration complexity is $O(\log^2 (nd) \cdot \epsilon^{-2})$, a slight improvement and derandomization of the state-of-the-art (Allen-Zhu et al. '16, Peng et al. '16, Wang et al. '15). For a wide variety of structured instances commonly found in applications, the iterations of our algorithm run in nearly-linear time. In doing so, we give matrix analytic techniques for overcoming obstacles that have stymied prior approaches to this open problem, as stated in past works (Peng et al. '16, Mahoney et al. '16). Crucial to our analysis are a simplification of existing algorithms for mixed positive linear programs, achieved by removing an asymmetry caused by modifying covering constraints, and a suite of matrix inequalities whose proofs are based on analyzing the Schur complements of matrices in a higher dimension. We hope that both our algorithm and techniques open the door to improved solvers for positive semidefinite programming, as well as its applications.
|
2401.07055
|
Alessandro Di Giorgio
|
Filippo Bonchi, Alessandro Di Giorgio, Nathan Haydon, Pawel Sobocinski
|
Diagrammatic Algebra of First Order Logic
| null | null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the calculus of neo-Peircean relations, a string diagrammatic
extension of the calculus of binary relations that has the same expressivity as
first order logic and comes with a complete axiomatisation. The axioms are
obtained by combining two well known categorical structures: cartesian and
linear bicategories.
|
[
{
"created": "Sat, 13 Jan 2024 12:09:02 GMT",
"version": "v1"
}
] |
2024-01-17
|
[
[
"Bonchi",
"Filippo",
""
],
[
"Di Giorgio",
"Alessandro",
""
],
[
"Haydon",
"Nathan",
""
],
[
"Sobocinski",
"Pawel",
""
]
] |
We introduce the calculus of neo-Peircean relations, a string diagrammatic extension of the calculus of binary relations that has the same expressivity as first order logic and comes with a complete axiomatisation. The axioms are obtained by combining two well known categorical structures: cartesian and linear bicategories.
|
2312.01326
|
Xinyi Wang
|
Xinyi Wang, Yulong Ding, Yizhou Chen, Ruihua Han, Lele Xi, and Ben M.
Chen
|
OA-ECBVC: A Cooperative Collision-free Encirclement and Capture Approach
in Cluttered Environments
|
7 pages, 7 figures, conference
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article investigates the practical scenario of chasing an adversarial
evader in an unbounded environment with cluttered obstacles. We propose a
Voronoi-based decentralized algorithm for multiple pursuers to encircle and
capture the evader by reacting to collisions. An efficient approach is
presented for constructing an obstacle-aware evader-centered bounded Voronoi
cell (OA-ECBVC), which strictly ensures collision avoidance in various obstacle
scenarios when pursuing the evader. The evader can be efficiently enclosed in a
convex hull given random initial configurations. Furthermore, to cooperatively
capture the evader, each pursuer continually compresses the boundary of its
OA-ECBVC to quickly reduce the movement space of the evader while maintaining
encirclement. Our OA-ECBVC algorithm is validated in various simulated
environments with robots of different dynamic systems. Its real-time
performance in resisting uncertainties demonstrates the reliability of our
method for deployment on multiple robot platforms.
|
[
{
"created": "Sun, 3 Dec 2023 09:11:43 GMT",
"version": "v1"
}
] |
2023-12-05
|
[
[
"Wang",
"Xinyi",
""
],
[
"Ding",
"Yulong",
""
],
[
"Chen",
"Yizhou",
""
],
[
"Han",
"Ruihua",
""
],
[
"Xi",
"Lele",
""
],
[
"Chen",
"Ben M.",
""
]
] |
This article investigates the practical scenario of chasing an adversarial evader in an unbounded environment with cluttered obstacles. We propose a Voronoi-based decentralized algorithm for multiple pursuers to encircle and capture the evader by reacting to collisions. An efficient approach is presented for constructing an obstacle-aware evader-centered bounded Voronoi cell (OA-ECBVC), which strictly ensures collision avoidance in various obstacle scenarios when pursuing the evader. The evader can be efficiently enclosed in a convex hull given random initial configurations. Furthermore, to cooperatively capture the evader, each pursuer continually compresses the boundary of its OA-ECBVC to quickly reduce the movement space of the evader while maintaining encirclement. Our OA-ECBVC algorithm is validated in various simulated environments with robots of different dynamic systems. Its real-time performance in resisting uncertainties demonstrates the reliability of our method for deployment on multiple robot platforms.
|
2403.10780
|
Mariia Khan
|
Mariia Khan, Yue Qiu, Yuren Cong, Jumana Abu-Khalaf, David Suter, Bodo
Rosenhahn
|
Segment Any Object Model (SAOM): Real-to-Simulation Fine-Tuning Strategy
for Multi-Class Multi-Instance Segmentation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-class multi-instance segmentation is the task of identifying masks for
multiple object classes and multiple instances of the same class within an
image. The foundational Segment Anything Model (SAM) is designed for promptable
multi-class multi-instance segmentation but tends to output part or sub-part
masks in the "everything" mode for various real-world applications. Whole-object
segmentation masks play a crucial role in indoor scene understanding,
especially in robotics applications. We propose a new domain-invariant
Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object
images and ground-truth data collected from the Ai2Thor simulator during
fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work
in the "everything" mode, we propose a novel nearest-neighbour assignment
method, updating point embeddings for each ground-truth mask. SAOM is evaluated
on our own dataset collected from the Ai2Thor simulator. SAOM significantly
improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54
frequently seen indoor object classes. Moreover, our Real-to-Simulation
fine-tuning strategy demonstrates promising generalization performance in real
environments without being trained on real-world data (sim-to-real). The
dataset and the code will be released after publication.
|
[
{
"created": "Sat, 16 Mar 2024 02:54:49 GMT",
"version": "v1"
}
] |
2024-03-19
|
[
[
"Khan",
"Mariia",
""
],
[
"Qiu",
"Yue",
""
],
[
"Cong",
"Yuren",
""
],
[
"Abu-Khalaf",
"Jumana",
""
],
[
"Suter",
"David",
""
],
[
"Rosenhahn",
"Bodo",
""
]
] |
Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the "everything" mode for various real-world applications. Whole-object segmentation masks play a crucial role in indoor scene understanding, especially in robotics applications. We propose a new domain-invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground-truth data collected from the Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the "everything" mode, we propose a novel nearest-neighbour assignment method, updating point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from the Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on real-world data (sim-to-real). The dataset and the code will be released after publication.
|
2303.01746
|
Kusum Sangwan
|
Michael A. Henning, Kusum, Arti Pandey, Kaustav Paul
|
Complexity of total dominator coloring in graphs
|
V1, 18 pages, 1 figure
| null | null | null |
cs.DM cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $G=(V,E)$ be a graph with no isolated vertices. A vertex $v$ totally
dominates a vertex $w$ ($w \ne v$) if $v$ is adjacent to $w$. A set $D
\subseteq V$ is called a total dominating set of $G$ if every vertex $v\in V$
is totally dominated by some vertex in $D$. The minimum cardinality of a total
dominating set is the total domination number of $G$ and is denoted by
$\gamma_t(G)$. A total dominator coloring of a graph $G$ is a proper coloring
of the vertices of $G$ such that each vertex totally dominates some color
class. The total dominator chromatic number $\chi_{td}(G)$ of $G$ is the least
number of colors required for a total dominator coloring of $G$. The Total
Dominator Coloring problem is to find a total dominator coloring of $G$ using
the minimum number of colors. It is known that the decision version of this
problem is NP-complete for general graphs. We show that it remains NP-complete
even when restricted to bipartite, planar and split graphs. We further study
the Total Dominator Coloring problem for various graph classes, including
trees, cographs and chain graphs. First, we characterize the trees having
$\chi_{td}(T)=\gamma_t(T)+1$, which completes the characterization of trees
achieving all possible values of $\chi_{td}(T)$. Also, we show that for a
cograph $G$, $\chi_{td}(G)$ can be computed in linear time. Moreover, we show
that $2 \le \chi_{td}(G) \le 4$ for a chain graph $G$ and give a
characterization of chain graphs for every possible value of $\chi_{td}(G)$ in
linear time.
|
[
{
"created": "Fri, 3 Mar 2023 07:17:22 GMT",
"version": "v1"
}
] |
2023-03-06
|
[
[
"Henning",
"Michael A.",
""
],
[
"Kusum",
"",
""
],
[
"Pandey",
"Arti",
""
],
[
"Paul",
"Kaustav",
""
]
] |
Let $G=(V,E)$ be a graph with no isolated vertices. A vertex $v$ totally dominates a vertex $w$ ($w \ne v$) if $v$ is adjacent to $w$. A set $D \subseteq V$ is called a total dominating set of $G$ if every vertex $v\in V$ is totally dominated by some vertex in $D$. The minimum cardinality of a total dominating set is the total domination number of $G$ and is denoted by $\gamma_t(G)$. A total dominator coloring of a graph $G$ is a proper coloring of the vertices of $G$ such that each vertex totally dominates some color class. The total dominator chromatic number $\chi_{td}(G)$ of $G$ is the least number of colors required for a total dominator coloring of $G$. The Total Dominator Coloring problem is to find a total dominator coloring of $G$ using the minimum number of colors. It is known that the decision version of this problem is NP-complete for general graphs. We show that it remains NP-complete even when restricted to bipartite, planar and split graphs. We further study the Total Dominator Coloring problem for various graph classes, including trees, cographs and chain graphs. First, we characterize the trees having $\chi_{td}(T)=\gamma_t(T)+1$, which completes the characterization of trees achieving all possible values of $\chi_{td}(T)$. Also, we show that for a cograph $G$, $\chi_{td}(G)$ can be computed in linear time. Moreover, we show that $2 \le \chi_{td}(G) \le 4$ for a chain graph $G$ and give a characterization of chain graphs for every possible value of $\chi_{td}(G)$ in linear time.
|
2008.05180
|
Christian Schulz
|
Alexander Gellner, Sebastian Lamm, Christian Schulz, Darren Strash,
Bogd\'an Zav\'alnij
|
Boosting Data Reduction for the Maximum Weight Independent Set Problem
Using Increasing Transformations
| null | null | null | null |
cs.DS cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a vertex-weighted graph, the maximum weight independent set problem
asks for a pair-wise non-adjacent set of vertices such that the sum of their
weights is maximum. The branch-and-reduce paradigm is the de facto standard
approach to solve the problem to optimality in practice. In this paradigm, data
reduction rules are applied to decrease the problem size. These data reduction
rules ensure that given an optimum solution on the new (smaller) input, one can
quickly construct an optimum solution on the original input.
We introduce new generalized data reduction and transformation rules for the
problem. A key feature of our work is that some transformation rules can
increase the size of the input. Surprisingly, these so-called increasing
transformations can simplify the problem and also open up the reduction space
to yield even smaller irreducible graphs later throughout the algorithm. In
experiments, our algorithm computes significantly smaller irreducible graphs on
all except one instance, solves more instances to optimality than previously
possible, is up to two orders of magnitude faster than the best
state-of-the-art solver, and finds higher-quality solutions than heuristic
solvers DynWVC and HILS on many instances. While the increasing transformations
are only efficient enough for preprocessing at this time, we see this as a
critical initial step towards a new branch-and-transform paradigm.
|
[
{
"created": "Wed, 12 Aug 2020 08:52:50 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Aug 2020 05:45:23 GMT",
"version": "v2"
}
] |
2020-08-14
|
[
[
"Gellner",
"Alexander",
""
],
[
"Lamm",
"Sebastian",
""
],
[
"Schulz",
"Christian",
""
],
[
"Strash",
"Darren",
""
],
[
"Zaválnij",
"Bogdán",
""
]
] |
Given a vertex-weighted graph, the maximum weight independent set problem asks for a pair-wise non-adjacent set of vertices such that the sum of their weights is maximum. The branch-and-reduce paradigm is the de facto standard approach to solve the problem to optimality in practice. In this paradigm, data reduction rules are applied to decrease the problem size. These data reduction rules ensure that given an optimum solution on the new (smaller) input, one can quickly construct an optimum solution on the original input. We introduce new generalized data reduction and transformation rules for the problem. A key feature of our work is that some transformation rules can increase the size of the input. Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later throughout the algorithm. In experiments, our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than heuristic solvers DynWVC and HILS on many instances. While the increasing transformations are only efficient enough for preprocessing at this time, we see this as a critical initial step towards a new branch-and-transform paradigm.
|
2307.08847
|
Gamze Gursoy
|
Ahmed Elhussein and Gamze Gursoy
|
Privacy-preserving patient clustering for personalized federated
learning
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Federated Learning (FL) is a machine learning framework that enables multiple
organizations to train a model without sharing their data with a central
server. However, it experiences significant performance degradation if the data
is not independently and identically distributed (non-IID). This is a problem
in medical settings, where variations in the patient population contribute
significantly to distribution differences across hospitals. Personalized FL
addresses this issue by accounting for site-specific distribution differences.
Clustered FL, a Personalized FL variant, was used to address this problem by
clustering patients into groups across hospitals and training separate models
on each group. However, privacy concerns remained a challenge, as the
clustering process requires the exchange of patient-level information. This was
previously solved by forming clusters using aggregated data, which led to
inaccurate groups and performance degradation. In this study, we propose
Privacy-preserving Community-Based Federated machine Learning (PCBFL), a novel
Clustered FL framework that can cluster patients using patient-level data while
protecting privacy. PCBFL uses Secure Multiparty Computation, a cryptographic
technique, to securely calculate patient-level similarity scores across
hospitals. We then evaluate PCBFL by training a federated mortality prediction
model using 20 sites from the eICU dataset. We compare the performance gain
from PCBFL against traditional and existing Clustered FL frameworks. Our
results show that PCBFL successfully forms clinically meaningful cohorts of
low-, medium-, and high-risk patients. PCBFL outperforms traditional and
existing Clustered FL frameworks, with an average AUC improvement of 4.3% and
an AUPRC improvement of 7.8%.
|
[
{
"created": "Mon, 17 Jul 2023 21:19:08 GMT",
"version": "v1"
}
] |
2023-07-19
|
[
[
"Elhussein",
"Ahmed",
""
],
[
"Gursoy",
"Gamze",
""
]
] |
Federated Learning (FL) is a machine learning framework that enables multiple organizations to train a model without sharing their data with a central server. However, it experiences significant performance degradation if the data is not independently and identically distributed (non-IID). This is a problem in medical settings, where variations in the patient population contribute significantly to distribution differences across hospitals. Personalized FL addresses this issue by accounting for site-specific distribution differences. Clustered FL, a Personalized FL variant, was used to address this problem by clustering patients into groups across hospitals and training separate models on each group. However, privacy concerns remained a challenge, as the clustering process requires the exchange of patient-level information. This was previously solved by forming clusters using aggregated data, which led to inaccurate groups and performance degradation. In this study, we propose Privacy-preserving Community-Based Federated machine Learning (PCBFL), a novel Clustered FL framework that can cluster patients using patient-level data while protecting privacy. PCBFL uses Secure Multiparty Computation, a cryptographic technique, to securely calculate patient-level similarity scores across hospitals. We then evaluate PCBFL by training a federated mortality prediction model using 20 sites from the eICU dataset. We compare the performance gain from PCBFL against traditional and existing Clustered FL frameworks. Our results show that PCBFL successfully forms clinically meaningful cohorts of low-, medium-, and high-risk patients. PCBFL outperforms traditional and existing Clustered FL frameworks, with an average AUC improvement of 4.3% and an AUPRC improvement of 7.8%.
|
2107.01464
|
Ibrahim Sabek
|
Ibrahim Sabek, Kapil Vaidya, Dominik Horn, Andreas Kipf, Tim Kraska
|
When Are Learned Models Better Than Hash Functions?
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we aim to study when learned models are better hash functions,
particularly for hash-maps. We use lightweight piece-wise linear models to
replace the hash functions, as they have small inference times and are
sufficiently general to capture complex distributions. We analyze the learned
models in terms of the model inference time and the number of collisions.
Surprisingly, we found that learned models are not much slower to compute than
hash functions if optimized correctly. However, it turns out that learned
models can only reduce the number of collisions (i.e., the number of times
different keys have the same hash value) if the model is able to over-fit to
the data; otherwise, it cannot be better than an ordinary hash function.
Hence, how much better a learned model is at avoiding collisions highly depends
on the data and the ability of the model to over-fit. To evaluate the
effectiveness of learned models, we used them as hash functions in bucket
chaining and Cuckoo hash tables. For the bucket chaining hash table, we found
that learned models can achieve 30% smaller sizes and 10% lower probe latency.
For Cuckoo hash tables, on some datasets, learned models can increase the ratio
of keys stored in their primary locations by around 10%. In summary, we found
that learned models can indeed outperform hash functions, but only for certain
data distributions and with a limited margin.
|
[
{
"created": "Sat, 3 Jul 2021 16:50:52 GMT",
"version": "v1"
}
] |
2021-07-06
|
[
[
"Sabek",
"Ibrahim",
""
],
[
"Vaidya",
"Kapil",
""
],
[
"Horn",
"Dominik",
""
],
[
"Kipf",
"Andreas",
""
],
[
"Kraska",
"Tim",
""
]
] |
In this work, we aim to study when learned models are better hash functions, particularly for hash-maps. We use lightweight piece-wise linear models to replace the hash functions, as they have small inference times and are sufficiently general to capture complex distributions. We analyze the learned models in terms of the model inference time and the number of collisions. Surprisingly, we found that learned models are not much slower to compute than hash functions if optimized correctly. However, it turns out that learned models can only reduce the number of collisions (i.e., the number of times different keys have the same hash value) if the model is able to over-fit to the data; otherwise, it cannot be better than an ordinary hash function. Hence, how much better a learned model is at avoiding collisions highly depends on the data and the ability of the model to over-fit. To evaluate the effectiveness of learned models, we used them as hash functions in bucket chaining and Cuckoo hash tables. For the bucket chaining hash table, we found that learned models can achieve 30% smaller sizes and 10% lower probe latency. For Cuckoo hash tables, on some datasets, learned models can increase the ratio of keys stored in their primary locations by around 10%. In summary, we found that learned models can indeed outperform hash functions, but only for certain data distributions and with a limited margin.
|
1804.02773
|
Attila Varga
|
Attila Varga
|
Novelty and Foreseeing Research Trends; The Case of Astrophysics and
Astronomy
| null | null |
10.3847/1538-4365/aab765
| null |
cs.DL stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metrics based on reference lists of research articles or on keywords have
been used to predict citation impact. The concept behind such metrics is that
original ideas stem from the reconfiguration of the structure of past
knowledge, and therefore atypical combinations in the reference lists,
keywords, or classification codes indicate future high-impact research. The
current paper serves as an introduction to this line of research for
astronomers and also addresses some methodological questions of this field of
innovation studies. It is still not clear if the choice of particular indexes,
such as references to journals, articles, or specific bibliometric
classification codes, would affect the relationship between atypical
combinations and citation impact. To understand more aspects of the innovation
process, a new metric has been devised to measure to what extent researchers
are able to anticipate the changing combinatorial trends of the future. Results
show that the variant of the latter anticipation scores that is based on paper
combinations is a good predictor of the future citation impact of scholarly
works. The study also shows that the effect of the tested indexes varies with
the aggregation level that was used to construct them. A detailed analysis of
combinatorial novelty in the field reveals that certain sub-fields of astronomy
and astrophysics have different roles in the reconfiguration of past knowledge.
|
[
{
"created": "Sun, 8 Apr 2018 23:03:01 GMT",
"version": "v1"
}
] |
2018-05-23
|
[
[
"Varga",
"Attila",
""
]
] |
Metrics based on reference lists of research articles or on keywords have been used to predict citation impact. The concept behind such metrics is that original ideas stem from the reconfiguration of the structure of past knowledge, and therefore atypical combinations in the reference lists, keywords, or classification codes indicate future high-impact research. The current paper serves as an introduction to this line of research for astronomers and also addresses some methodological questions of this field of innovation studies. It is still not clear if the choice of particular indexes, such as references to journals, articles, or specific bibliometric classification codes, would affect the relationship between atypical combinations and citation impact. To understand more aspects of the innovation process, a new metric has been devised to measure to what extent researchers are able to anticipate the changing combinatorial trends of the future. Results show that the variant of the latter anticipation scores that is based on paper combinations is a good predictor of the future citation impact of scholarly works. The study also shows that the effect of the tested indexes varies with the aggregation level that was used to construct them. A detailed analysis of combinatorial novelty in the field reveals that certain sub-fields of astronomy and astrophysics have different roles in the reconfiguration of past knowledge.
|
1009.1512
|
Christine Bachoc
|
Christine Bachoc (IMB)
|
Applications of semidefinite programming to coding theory
|
5 pages; ITW 2010, Dublin, Ireland (2010)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We survey recent generalizations and improvements of the linear programming
method that involve semidefinite programming. A general framework using group
representations and tools from graph theory is provided.
|
[
{
"created": "Wed, 8 Sep 2010 12:10:50 GMT",
"version": "v1"
}
] |
2010-09-09
|
[
[
"Bachoc",
"Christine",
"",
"IMB"
]
] |
We survey recent generalizations and improvements of the linear programming method that involve semidefinite programming. A general framework using group representations and tools from graph theory is provided.
|
2405.02070
|
Andreas A{\ss}muth
|
George R. S. Weir and Andreas A{\ss}muth
|
Strategies for Intrusion Monitoring in Cloud Services
|
5 pages
|
Proc of the 8th International Conference on Cloud Computing,
GRIDs, and Virtualization (Cloud Computing 2017), Athens, Greece, February
2017, pp. 49-53, ISSN 2308-4294
| null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective activity and event monitoring is an essential aspect of digital
forensic readiness. Techniques for capturing log and other event data are
familiar from conventional networked hosts and transfer directly to the Cloud
context. In both contexts, a major concern is the risk that monitoring systems
may be targeted and impaired by intruders seeking to conceal their illicit
presence and activities. We outline an approach to intrusion monitoring that
aims (i)~to ensure the credibility of log data and (ii)~to provide a means of
data sharing that supports log reconstruction in the event that one or more
logging systems is maliciously impaired.
|
[
{
"created": "Fri, 3 May 2024 13:00:36 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Weir",
"George R. S.",
""
],
[
"Aßmuth",
"Andreas",
""
]
] |
Effective activity and event monitoring is an essential aspect of digital forensic readiness. Techniques for capturing log and other event data are familiar from conventional networked hosts and transfer directly to the Cloud context. In both contexts, a major concern is the risk that monitoring systems may be targeted and impaired by intruders seeking to conceal their illicit presence and activities. We outline an approach to intrusion monitoring that aims (i)~to ensure the credibility of log data and (ii)~to provide a means of data sharing that supports log reconstruction in the event that one or more logging systems is maliciously impaired.
|
1706.02815
|
Hao He
|
Hao He, Bo Xin, David Wipf
|
From Bayesian Sparsity to Gated Recurrent Nets
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The iterations of many first-order algorithms, when applied to minimizing
common regularized regression functions, often resemble neural network layers
with pre-specified weights. This observation has prompted the development of
learning-based approaches that purport to replace these iterations with
enhanced surrogates forged as DNN models from available training data. For
example, important NP-hard sparse estimation problems have recently benefitted
from this genre of upgrade, with simple feedforward or recurrent networks
ousting proximal gradient-based iterations. Analogously, this paper
demonstrates that more powerful Bayesian algorithms for promoting sparsity,
which rely on complex multi-loop majorization-minimization techniques, mirror
the structure of more sophisticated long short-term memory (LSTM) networks, or
alternative gated feedback networks previously designed for sequence
prediction. As part of this development, we examine the parallels between
latent variable trajectories operating across multiple time-scales during
optimization, and the activations within deep network structures designed to
adaptively model such characteristic sequences. The resulting insights lead to
a novel sparse estimation system that, when granted training data, can estimate
optimal solutions efficiently in regimes where other algorithms fail, including
practical direction-of-arrival (DOA) and 3D geometry recovery problems. The
underlying principles we expose are also suggestive of a learning process for a
richer class of multi-loop algorithms in other domains.
|
[
{
"created": "Fri, 9 Jun 2017 02:56:54 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Aug 2017 17:03:43 GMT",
"version": "v2"
}
] |
2017-08-03
|
[
[
"He",
"Hao",
""
],
[
"Xin",
"Bo",
""
],
[
"Wipf",
"David",
""
]
] |
The iterations of many first-order algorithms, when applied to minimizing common regularized regression functions, often resemble neural network layers with pre-specified weights. This observation has prompted the development of learning-based approaches that purport to replace these iterations with enhanced surrogates forged as DNN models from available training data. For example, important NP-hard sparse estimation problems have recently benefitted from this genre of upgrade, with simple feedforward or recurrent networks ousting proximal gradient-based iterations. Analogously, this paper demonstrates that more powerful Bayesian algorithms for promoting sparsity, which rely on complex multi-loop majorization-minimization techniques, mirror the structure of more sophisticated long short-term memory (LSTM) networks, or alternative gated feedback networks previously designed for sequence prediction. As part of this development, we examine the parallels between latent variable trajectories operating across multiple time-scales during optimization, and the activations within deep network structures designed to adaptively model such characteristic sequences. The resulting insights lead to a novel sparse estimation system that, when granted training data, can estimate optimal solutions efficiently in regimes where other algorithms fail, including practical direction-of-arrival (DOA) and 3D geometry recovery problems. The underlying principles we expose are also suggestive of a learning process for a richer class of multi-loop algorithms in other domains.
|
2402.10478
|
Ishan Rajendrakumar Dave
|
Ishan Rajendrakumar Dave, Tristan de Blegiers, Chen Chen, Mubarak Shah
|
CodaMal: Contrastive Domain Adaptation for Malaria Detection in Low-Cost
Microscopes
|
Under Review. Project Page:
https://daveishan.github.io/codamal-webpage/
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Malaria is a major health issue worldwide, and its diagnosis requires
scalable solutions that can work effectively with low-cost microscopes (LCM).
Deep learning-based methods have shown success in computer-aided diagnosis from
microscopic images. However, these methods need annotated images that show
cells affected by malaria parasites and their life stages. Annotating images
from LCM significantly increases the burden on medical experts compared to
annotating images from high-cost microscopes (HCM). For this reason, a
practical solution would be trained on HCM images which should generalize well
on LCM images during testing. While earlier methods adopted a multi-stage
learning process, they did not offer an end-to-end approach. In this work, we
present an end-to-end learning framework, named CodaMal (Contrastive Domain
Adpation for Malaria). In order to bridge the gap between HCM (training) and
LCM (testing), we propose a domain adaptive contrastive loss. It reduces the
domain shift by promoting similarity between the representations of HCM and its
corresponding LCM image, without imposing an additional annotation burden. In
addition, the training objective includes object detection objectives with
carefully designed augmentations, ensuring the accurate detection of malaria
parasites. On the publicly available large-scale M5-dataset, our proposed
method shows a significant improvement of 16% over the state-of-the-art methods
in terms of the mean average precision metric (mAP), provides a 21x speed-up
during inference, and requires only half the learnable parameters of prior
methods. Our code is publicly available.
|
[
{
"created": "Fri, 16 Feb 2024 06:57:03 GMT",
"version": "v1"
}
] |
2024-02-19
|
[
[
"Dave",
"Ishan Rajendrakumar",
""
],
[
"de Blegiers",
"Tristan",
""
],
[
"Chen",
"Chen",
""
],
[
"Shah",
"Mubarak",
""
]
] |
Malaria is a major health issue worldwide, and its diagnosis requires scalable solutions that can work effectively with low-cost microscopes (LCM). Deep learning-based methods have shown success in computer-aided diagnosis from microscopic images. However, these methods need annotated images that show cells affected by malaria parasites and their life stages. Annotating images from LCM significantly increases the burden on medical experts compared to annotating images from high-cost microscopes (HCM). For this reason, a practical solution would be a model trained on HCM images that generalizes well to LCM images during testing. While earlier methods adopted a multi-stage learning process, they did not offer an end-to-end approach. In this work, we present an end-to-end learning framework, named CodaMal (Contrastive Domain Adaptation for Malaria). In order to bridge the gap between HCM (training) and LCM (testing), we propose a domain adaptive contrastive loss. It reduces the domain shift by promoting similarity between the representations of HCM and its corresponding LCM image, without imposing an additional annotation burden. In addition, the training objective includes object detection objectives with carefully designed augmentations, ensuring the accurate detection of malaria parasites. On the publicly available large-scale M5-dataset, our proposed method shows a significant improvement of 16% over the state-of-the-art methods in terms of the mean average precision metric (mAP), provides a 21x speed-up during inference, and requires only half the learnable parameters of prior methods. Our code is publicly available.
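The domain-adaptive contrastive idea, pulling together the representation of an HCM image and that of its corresponding LCM image, can be sketched with a standard InfoNCE-style loss. This is a generic illustration under our own assumptions (a batch of paired embeddings, cosine similarity, in-batch negatives), not the paper's exact loss:

```python
import numpy as np

def domain_contrastive_loss(hcm_emb, lcm_emb, temperature=0.1):
    """InfoNCE-style loss over a batch of paired HCM/LCM embeddings.
    Row i of each matrix is assumed to come from the same field of view,
    so the diagonal of the similarity matrix holds the positive pairs."""
    h = hcm_emb / np.linalg.norm(hcm_emb, axis=1, keepdims=True)
    l = lcm_emb / np.linalg.norm(lcm_emb, axis=1, keepdims=True)
    logits = h @ l.T / temperature               # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # pull matched pairs together
```

Minimizing this loss increases the similarity of matched HCM/LCM pairs relative to mismatched ones, which is the "reduce the domain shift without extra annotation" effect the abstract describes.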
|
2009.06799
|
Jacob Buckman
|
Jacob Buckman, Carles Gelada, Marc G. Bellemare
|
The Importance of Pessimism in Fixed-Dataset Policy Optimization
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study worst-case guarantees on the expected return of fixed-dataset policy
optimization algorithms. Our core contribution is a unified conceptual and
mathematical framework for the study of algorithms in this regime. This
analysis reveals that for naive approaches, the possibility of erroneous value
overestimation leads to a difficult-to-satisfy requirement: in order to
guarantee that we select a policy which is near-optimal, we may need the
dataset to be informative of the value of every policy. To avoid this,
algorithms can follow the pessimism principle, which states that we should
choose the policy which acts optimally in the worst possible world. We show why
pessimistic algorithms can achieve good performance even when the dataset is
not informative of every policy, and derive families of algorithms which follow
this principle. These theoretical findings are validated by experiments on a
tabular gridworld, and deep learning experiments on four MinAtar environments.
|
[
{
"created": "Tue, 15 Sep 2020 00:18:34 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Oct 2020 23:51:31 GMT",
"version": "v2"
},
{
"created": "Sun, 29 Nov 2020 05:58:30 GMT",
"version": "v3"
}
] |
2020-12-01
|
[
[
"Buckman",
"Jacob",
""
],
[
"Gelada",
"Carles",
""
],
[
"Bellemare",
"Marc G.",
""
]
] |
We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. Our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. This analysis reveals that for naive approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. To avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. These theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four MinAtar environments.
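A minimal tabular illustration of the pessimism principle is to act greedily with respect to a lower confidence bound on the value estimate. The penalty form below (`c / sqrt(N)`) is an assumption made for illustration, not a construction taken from the paper:

```python
import numpy as np

def pessimistic_policy(q_hat, counts, c=1.0):
    """Act greedily w.r.t. a lower confidence bound on a tabular value
    estimate: subtract an uncertainty penalty that shrinks as each
    (state, action) pair appears more often in the fixed dataset."""
    penalty = c / np.sqrt(np.maximum(counts, 1))   # large where data is scarce
    q_pess = q_hat - penalty                       # value in the 'worst possible world'
    return q_pess.argmax(axis=1)                   # one action per state
```

A naive (non-pessimistic) policy would pick the action with the highest raw estimate even when that estimate rests on a single sample; the penalty makes rarely observed actions unattractive unless their estimated advantage is large.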
|
1806.01224
|
Nils M\"uller
|
Nils M\"uller and Tobias Glasmachers
|
Challenges in High-dimensional Reinforcement Learning with Evolution
Strategies
|
12 pages, 5 figures
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolution Strategies (ESs) have recently become popular for training deep
neural networks, in particular on reinforcement learning tasks, a special form
of controller design. Compared to classic problems in continuous direct search,
deep networks pose extremely high-dimensional optimization problems, with many
thousands or even millions of variables. In addition, many control problems
give rise to a stochastic fitness function. Considering the relevance of the
application, we study the suitability of evolution strategies for
high-dimensional, stochastic problems. Our results give insights into which
algorithmic mechanisms of modern ES are of value for the class of problems at
hand, and they reveal principled limitations of the approach. They are in line
with our theoretical understanding of ESs. We show that combining ESs that
offer reduced internal algorithm cost with uncertainty handling techniques
yields promising methods for this class of problems.
|
[
{
"created": "Mon, 4 Jun 2018 17:08:23 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Jul 2018 18:30:36 GMT",
"version": "v2"
}
] |
2018-07-03
|
[
[
"Müller",
"Nils",
""
],
[
"Glasmachers",
"Tobias",
""
]
] |
Evolution Strategies (ESs) have recently become popular for training deep neural networks, in particular on reinforcement learning tasks, a special form of controller design. Compared to classic problems in continuous direct search, deep networks pose extremely high-dimensional optimization problems, with many thousands or even millions of variables. In addition, many control problems give rise to a stochastic fitness function. Considering the relevance of the application, we study the suitability of evolution strategies for high-dimensional, stochastic problems. Our results give insights into which algorithmic mechanisms of modern ES are of value for the class of problems at hand, and they reveal principled limitations of the approach. They are in line with our theoretical understanding of ESs. We show that combining ESs that offer reduced internal algorithm cost with uncertainty handling techniques yields promising methods for this class of problems.
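A minimal sketch of the kind of algorithm under study: an isotropic evolution strategy with antithetic sampling, estimating a search gradient from perturbed fitness evaluations. The hyperparameters and estimator form are illustrative choices of ours, not the specific ES variants analyzed in the paper:

```python
import numpy as np

def es_step(theta, fitness, sigma=0.1, lr=0.05, pop=50, rng=None):
    """One iteration of a basic isotropic evolution strategy with
    antithetic sampling: perturb the parameters, evaluate fitness, and
    move along a finite-difference estimate of the search gradient."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal((pop, theta.size))
    f_plus = np.array([fitness(theta + sigma * e) for e in eps])
    f_minus = np.array([fitness(theta - sigma * e) for e in eps])
    grad = ((f_plus - f_minus)[:, None] * eps).mean(axis=0) / (2 * sigma)
    return theta + lr * grad
```

In high dimensions the cost per step is dominated by the `2 * pop` fitness evaluations, and a noisy fitness function forces either larger populations or explicit uncertainty handling, which is the trade-off the abstract discusses.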
|
2306.09998
|
Juliette Marrie
|
Juliette Marrie, Michael Arbel, Diane Larlus, Julien Mairal
|
SLACK: Stable Learning of Augmentations with Cold-start and KL
regularization
|
Accepted to CVPR 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data augmentation is known to improve the generalization capabilities of
neural networks, provided that the set of transformations is chosen with care,
a selection often performed manually. Automatic data augmentation aims at
automating this process. However, most recent approaches still rely on some
prior information; they start from a small pool of manually-selected default
transformations that are either used to pretrain the network or forced to be
part of the policy learned by the automatic data augmentation algorithm. In
this paper, we propose to directly learn the augmentation policy without
leveraging such prior knowledge. The resulting bilevel optimization problem
becomes more challenging due to the larger search space and the inherent
instability of bilevel optimization algorithms. To mitigate these issues (i) we
follow a successive cold-start strategy with a Kullback-Leibler regularization,
and (ii) we parameterize magnitudes as continuous distributions. Our approach
leads to competitive results on standard benchmarks despite a more challenging
setting, and generalizes beyond natural images.
|
[
{
"created": "Fri, 16 Jun 2023 17:51:07 GMT",
"version": "v1"
}
] |
2023-06-19
|
[
[
"Marrie",
"Juliette",
""
],
[
"Arbel",
"Michael",
""
],
[
"Larlus",
"Diane",
""
],
[
"Mairal",
"Julien",
""
]
] |
Data augmentation is known to improve the generalization capabilities of neural networks, provided that the set of transformations is chosen with care, a selection often performed manually. Automatic data augmentation aims at automating this process. However, most recent approaches still rely on some prior information; they start from a small pool of manually-selected default transformations that are either used to pretrain the network or forced to be part of the policy learned by the automatic data augmentation algorithm. In this paper, we propose to directly learn the augmentation policy without leveraging such prior knowledge. The resulting bilevel optimization problem becomes more challenging due to the larger search space and the inherent instability of bilevel optimization algorithms. To mitigate these issues (i) we follow a successive cold-start strategy with a Kullback-Leibler regularization, and (ii) we parameterize magnitudes as continuous distributions. Our approach leads to competitive results on standard benchmarks despite a more challenging setting, and generalizes beyond natural images.
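The KL-regularization idea can be sketched for a categorical policy over candidate transformations: maximize expected reward while staying close to a reference (cold-start) distribution. This is a hypothetical simplification of ours; the paper's bilevel formulation and magnitude parameterization are more involved:

```python
import numpy as np

def kl_regularized_objective(logits, reward, ref_probs, beta=0.1):
    """Expected reward of a softmax policy over candidate augmentations,
    minus beta * KL(policy || reference). The reference distribution
    plays the role of the cold-start policy that anchors the search."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over transformations
    kl = float(np.sum(probs * np.log(probs / ref_probs)))
    return float(np.dot(probs, reward)) - beta * kl
```

The penalty keeps the learned policy from collapsing onto whichever transformation happens to look best early in the unstable bilevel optimization.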
|
1311.4088
|
Adel Alinezhad kolaei
|
Adel Alinezhad Kolaei and Marzieh Ahmadzadeh
|
The Optimization of Running Queries in Relational Databases Using
ANT-Colony Algorithm
| null | null | null | null |
cs.DB cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optimizing queries is a cost-sensitive process, and the number of
permutations of the tables associated with a query grows exponentially. On
one hand, in comparison with the other operators in a relational database,
the join operator is the most difficult and complicated one to optimize in
order to reduce its runtime. Accordingly, various algorithms have so far
been proposed to solve this problem. On the other hand, the success of any
database management system (DBMS) depends on exploiting the query model. In
the current paper, a heuristic ant-colony algorithm is proposed to solve
this problem and improve the runtime of the join operation. Experiments and
observed results reveal the efficiency of this algorithm compared to
similar algorithms.
|
[
{
"created": "Sat, 16 Nov 2013 18:43:19 GMT",
"version": "v1"
}
] |
2013-11-19
|
[
[
"Kolaei",
"Adel Alinezhad",
""
],
[
"Ahmadzadeh",
"Marzieh",
""
]
] |
Optimizing queries is a cost-sensitive process, and the number of permutations of the tables associated with a query grows exponentially. On one hand, in comparison with the other operators in a relational database, the join operator is the most difficult and complicated one to optimize in order to reduce its runtime. Accordingly, various algorithms have so far been proposed to solve this problem. On the other hand, the success of any database management system (DBMS) depends on exploiting the query model. In the current paper, a heuristic ant-colony algorithm is proposed to solve this problem and improve the runtime of the join operation. Experiments and observed results reveal the efficiency of this algorithm compared to similar algorithms.
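The ant-colony idea applied to join ordering can be sketched as pheromone-biased sampling of table permutations. The pheromone model (one value per position/table pair) and the update rule below are simplified assumptions of ours, not the paper's algorithm:

```python
import numpy as np

def aco_join_order(n, cost, n_ants=20, n_iters=40, evap=0.5, rng=None):
    """Toy ant-colony search over join orders of n tables. pher[p, t] is
    the desirability of placing table t at position p; cost(order) should
    return the estimated runtime of executing joins in that order."""
    if rng is None:
        rng = np.random.default_rng(0)
    pher = np.ones((n, n))
    best_order, best_cost = None, np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            remaining, order = list(range(n)), []
            for pos in range(n):            # each ant builds one permutation
                w = pher[pos, remaining]
                t = rng.choice(remaining, p=w / w.sum())
                order.append(t)
                remaining.remove(t)
            c = cost(order)
            if c < best_cost:
                best_order, best_cost = order, c
        pher *= evap                        # pheromone evaporation
        for pos, t in enumerate(best_order):
            pher[pos, t] += 1.0 / (1.0 + best_cost)   # reinforce the best order
    return best_order, best_cost
```

In a real optimizer, `cost` would come from the DBMS cost model (cardinality estimates, access paths); here it is left abstract.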
|
1410.2834
|
Ubiratam de Paula Junior
|
Ubiratam de Paula Junior, L\'ucia M. A. Drummond, Daniel de Oliveira,
Yuri Frota, Valmir C. Barbosa
|
Handling Flash-Crowd Events to Improve the Performance of Web
Applications
|
Submitted to the 30th Symposium On Applied Computing (2015)
|
Proceedings of the 30th ACM/SIGAPP Symposium on Applied Computing,
769-774, 2015
|
10.1145/2695664.2695839
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud computing can offer a set of computing resources according to users'
demand. It is suitable to be used to handle flash-crowd events in Web
applications due to its elasticity and on-demand characteristics. Thus, when
Web applications need more computing or storage capacity, they just instantiate
new resources. However, providers have to estimate the amount of resources to
instantiate to handle the flash-crowd event. This estimation is far from
trivial since each cloud environment provides several kinds of heterogeneous
resources, each one with its own characteristics such as bandwidth, CPU, memory
and financial cost. In this paper, the Flash Crowd Handling Problem (FCHP) is
precisely defined and formulated as an integer programming problem. A new
algorithm for handling a flash crowd, named FCHP-ILS, is also proposed. With
FCHP-ILS the Web applications can replicate contents in the already
instantiated resources and define the types and amount of resources to
instantiate in the cloud during a flash crowd. Our approach is evaluated
considering real flash crowd traces obtained from the related literature. We
also present a case study, based on a synthetic dataset representing
flash-crowd events in small scenarios aiming at the comparison of the proposed
approach against Amazon's Auto-Scale mechanism.
|
[
{
"created": "Fri, 10 Oct 2014 16:36:09 GMT",
"version": "v1"
}
] |
2015-05-04
|
[
[
"Junior",
"Ubiratam de Paula",
""
],
[
"Drummond",
"Lúcia M. A.",
""
],
[
"de Oliveira",
"Daniel",
""
],
[
"Frota",
"Yuri",
""
],
[
"Barbosa",
"Valmir C.",
""
]
] |
Cloud computing can offer a set of computing resources according to users' demand. It is suitable to be used to handle flash-crowd events in Web applications due to its elasticity and on-demand characteristics. Thus, when Web applications need more computing or storage capacity, they just instantiate new resources. However, providers have to estimate the amount of resources to instantiate to handle the flash-crowd event. This estimation is far from trivial since each cloud environment provides several kinds of heterogeneous resources, each one with its own characteristics such as bandwidth, CPU, memory and financial cost. In this paper, the Flash Crowd Handling Problem (FCHP) is precisely defined and formulated as an integer programming problem. A new algorithm for handling a flash crowd, named FCHP-ILS, is also proposed. With FCHP-ILS the Web applications can replicate contents in the already instantiated resources and define the types and amount of resources to instantiate in the cloud during a flash crowd. Our approach is evaluated considering real flash crowd traces obtained from the related literature. We also present a case study, based on a synthetic dataset representing flash-crowd events in small scenarios aiming at the comparison of the proposed approach against Amazon's Auto-Scale mechanism.
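FCHP itself is an integer program; as a toy illustration of the underlying decision (which resource type, and how many instances, to cover an expected demand), here is a capacity-per-cost greedy heuristic. The input format and the single-type simplification are assumptions of ours, not the FCHP-ILS algorithm:

```python
import math

def provision(demand_rps, resources):
    """Pick the resource type with the best capacity-per-cost ratio and
    instantiate enough copies to cover the expected flash-crowd demand.
    resources: list of (name, capacity_rps, cost) tuples (assumed inputs)."""
    name, cap, cost = min(resources, key=lambda r: r[2] / r[1])
    count = math.ceil(demand_rps / cap)        # integral number of instances
    return {name: count}, count * cost
```

The real problem is harder because types can be mixed, content replication interacts with placement, and demand varies over time, which is why an integer-programming formulation with a heuristic solver (ILS) is used in the paper.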
|
1910.08071
|
Shadrokh Samavi
|
Mahdi Ahmadi, Nader Karimi, Shadrokh Samavi
|
Context-Aware Saliency Detection for Image Retargeting Using
Convolutional Neural Networks
|
20 pages, 19 figures
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image retargeting is the task of making images capable of being displayed on
screens with different sizes. This work should be done so that high-level
visual information and low-level features such as texture remain as intact as
possible to the human visual system, while the output image may have different
dimensions. Thus, simple methods such as scaling and cropping are not adequate
for this purpose. In recent years, researchers have tried to improve the
existing retargeting methods and introduce new ones. However, a specific method
cannot be utilized to retarget all types of images. In other words, different
images require different retargeting methods. Image retargeting has a close
relationship to image saliency detection, which is a relatively new image
processing task. Earlier saliency detection methods were based on local and
global but low-level image information. These methods are called bottom-up
methods. On the other hand, newer approaches are top-down and mixed methods
that consider the high level and semantic information of the image too. In this
paper, we introduce the proposed methods in both saliency detection and
retargeting. For the saliency detection, the use of image context and semantic
segmentation are examined, and a novel mixed bottom-up and top-down saliency
detection method is introduced. After saliency detection, a modified version of
an existing retargeting method is utilized for retargeting the images. The
results suggest that the proposed image retargeting pipeline has excellent
performance compared to other tested methods. Also, the subjective evaluations
on the Pascal dataset can be used as a retargeting quality assessment dataset
for further research.
|
[
{
"created": "Thu, 17 Oct 2019 17:59:46 GMT",
"version": "v1"
}
] |
2019-10-18
|
[
[
"Ahmadi",
"Mahdi",
""
],
[
"Karimi",
"Nader",
""
],
[
"Samavi",
"Shadrokh",
""
]
] |
Image retargeting is the task of making images capable of being displayed on screens with different sizes. This work should be done so that high-level visual information and low-level features such as texture remain as intact as possible to the human visual system, while the output image may have different dimensions. Thus, simple methods such as scaling and cropping are not adequate for this purpose. In recent years, researchers have tried to improve the existing retargeting methods and introduce new ones. However, a specific method cannot be utilized to retarget all types of images. In other words, different images require different retargeting methods. Image retargeting has a close relationship to image saliency detection, which is a relatively new image processing task. Earlier saliency detection methods were based on local and global but low-level image information. These methods are called bottom-up methods. On the other hand, newer approaches are top-down and mixed methods that consider the high level and semantic information of the image too. In this paper, we introduce the proposed methods in both saliency detection and retargeting. For the saliency detection, the use of image context and semantic segmentation are examined, and a novel mixed bottom-up and top-down saliency detection method is introduced. After saliency detection, a modified version of an existing retargeting method is utilized for retargeting the images. The results suggest that the proposed image retargeting pipeline has excellent performance compared to other tested methods. Also, the subjective evaluations on the Pascal dataset can be used as a retargeting quality assessment dataset for further research.
|
1711.11469
|
Aaron Potechin
|
Aaron Potechin
|
Sum of squares lower bounds from symmetry and a good story
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop machinery which makes it much easier to prove sum
of squares lower bounds when the problem is symmetric under permutations of
$[1,n]$ and the unsatisfiability of our problem comes from integrality
arguments, i.e. arguments that an expression must be an integer. Roughly
speaking, to prove SOS lower bounds with our machinery it is sufficient to
verify that the answer to the following three questions is yes:
1. Are there natural pseudo-expectation values for the problem?
2. Are these pseudo-expectation values rational functions of the problem
parameters?
3. Are there sufficiently many values of the parameters for which these
pseudo-expectation values correspond to the actual expected values over a
distribution of solutions which is the uniform distribution over permutations
of a single solution?
We demonstrate our machinery on three problems, the knapsack problem analyzed
by Grigoriev, the MOD 2 principle (which says that the complete graph $K_n$ has
no perfect matching when $n$ is odd), and the following Turan type problem:
Minimize the number of triangles in a graph $G$ with a given edge density. For
knapsack, we recover Grigoriev's lower bound exactly. For the MOD 2 principle,
we tighten Grigoriev's linear degree sum of squares lower bound, making it
exact. Finally, for the triangle problem, we prove a sum of squares lower bound
for finding the minimum triangle density. This lower bound is completely new
and gives a simple example where constant degree sum of squares methods have a
constant factor error in estimating graph densities.
|
[
{
"created": "Thu, 30 Nov 2017 15:47:13 GMT",
"version": "v1"
},
{
"created": "Thu, 17 May 2018 14:20:45 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Dec 2018 00:11:25 GMT",
"version": "v3"
}
] |
2018-12-17
|
[
[
"Potechin",
"Aaron",
""
]
] |
In this paper, we develop machinery which makes it much easier to prove sum of squares lower bounds when the problem is symmetric under permutations of $[1,n]$ and the unsatisfiability of our problem comes from integrality arguments, i.e. arguments that an expression must be an integer. Roughly speaking, to prove SOS lower bounds with our machinery it is sufficient to verify that the answer to the following three questions is yes: 1. Are there natural pseudo-expectation values for the problem? 2. Are these pseudo-expectation values rational functions of the problem parameters? 3. Are there sufficiently many values of the parameters for which these pseudo-expectation values correspond to the actual expected values over a distribution of solutions which is the uniform distribution over permutations of a single solution? We demonstrate our machinery on three problems, the knapsack problem analyzed by Grigoriev, the MOD 2 principle (which says that the complete graph $K_n$ has no perfect matching when $n$ is odd), and the following Turan type problem: Minimize the number of triangles in a graph $G$ with a given edge density. For knapsack, we recover Grigoriev's lower bound exactly. For the MOD 2 principle, we tighten Grigoriev's linear degree sum of squares lower bound, making it exact. Finally, for the triangle problem, we prove a sum of squares lower bound for finding the minimum triangle density. This lower bound is completely new and gives a simple example where constant degree sum of squares methods have a constant factor error in estimating graph densities.
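For readers unfamiliar with the object behind the three questions, one common formulation is the following: a degree-$d$ pseudo-expectation for a system of polynomial equality constraints $\{g_i = 0\}$ is a linear functional $\tilde{\mathbb{E}}$ on polynomials of degree at most $d$ satisfying

```latex
\tilde{\mathbb{E}}[1] = 1, \qquad
\tilde{\mathbb{E}}[p^2] \;\ge\; 0 \quad \text{whenever } \deg(p) \le d/2, \qquad
\tilde{\mathbb{E}}[p \, g_i] = 0 \quad \text{whenever } \deg(p \, g_i) \le d .
```

A degree-$d$ SOS lower bound amounts to exhibiting such an $\tilde{\mathbb{E}}$; the three questions then ask whether natural candidate values exist, whether they vary rationally in the problem parameters, and whether they agree with genuine expectations for sufficiently many parameter values.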
|
2204.06718
|
Hengyue Pan
|
Hengyue Pan and Yixin Chen and Xin Niu and Wenbo Zhou and Dongsheng Li
|
Learning Convolutional Neural Networks in the Frequency Domain
|
Submitted to NeurIPS 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Convolutional neural networks (CNNs) have achieved impressive success in
computer vision during the past few decades. The image convolution operation
helps CNNs to get good performance on image-related tasks. However, image
convolution has high computational complexity and is hard to implement. This
paper proposes CEMNet, which can be trained in the frequency domain. The
most important motivation of this research is that we can use the
straightforward element-wise multiplication operation to replace the image
convolution in the frequency domain based on the Cross-Correlation Theorem,
which clearly reduces the computational complexity. We further introduce a
Weight Fixation mechanism to alleviate the problem of over-fitting, and
analyze the working behavior of Batch Normalization, Leaky ReLU, and Dropout
in the frequency domain to design their counterparts for CEMNet. Also, to
deal with the complex-valued inputs produced by the Discrete Fourier
Transform, we design a two-branch network structure for CEMNet. Experimental
results imply that CEMNet achieves good performance on the MNIST and
CIFAR-10 databases.
|
[
{
"created": "Thu, 14 Apr 2022 03:08:40 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jul 2022 08:16:09 GMT",
"version": "v10"
},
{
"created": "Fri, 15 Apr 2022 10:10:30 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Apr 2022 13:17:24 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Apr 2022 01:28:16 GMT",
"version": "v4"
},
{
"created": "Wed, 27 Apr 2022 02:16:16 GMT",
"version": "v5"
},
{
"created": "Thu, 28 Apr 2022 09:12:35 GMT",
"version": "v6"
},
{
"created": "Tue, 3 May 2022 07:09:47 GMT",
"version": "v7"
},
{
"created": "Fri, 20 May 2022 01:21:46 GMT",
"version": "v8"
},
{
"created": "Mon, 18 Jul 2022 09:06:09 GMT",
"version": "v9"
}
] |
2022-07-21
|
[
[
"Pan",
"Hengyue",
""
],
[
"Chen",
"Yixin",
""
],
[
"Niu",
"Xin",
""
],
[
"Zhou",
"Wenbo",
""
],
[
"Li",
"Dongsheng",
""
]
] |
Convolutional neural networks (CNNs) have achieved impressive success in computer vision during the past few decades. The image convolution operation helps CNNs to get good performance on image-related tasks. However, image convolution has high computational complexity and is hard to implement. This paper proposes CEMNet, which can be trained in the frequency domain. The most important motivation of this research is that we can use the straightforward element-wise multiplication operation to replace the image convolution in the frequency domain based on the Cross-Correlation Theorem, which clearly reduces the computational complexity. We further introduce a Weight Fixation mechanism to alleviate the problem of over-fitting, and analyze the working behavior of Batch Normalization, Leaky ReLU, and Dropout in the frequency domain to design their counterparts for CEMNet. Also, to deal with the complex-valued inputs produced by the Discrete Fourier Transform, we design a two-branch network structure for CEMNet. Experimental results imply that CEMNet achieves good performance on the MNIST and CIFAR-10 databases.
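The frequency-domain identity that motivates this line of work can be checked directly: circular 2-D convolution (or cross-correlation, the operation CNNs actually use) in the spatial domain equals an element-wise product of spectra. This standalone check is not CEMNet's code:

```python
import numpy as np

def freq_domain_conv(x, k):
    """Circular 2-D convolution as an element-wise product of spectra
    (Convolution Theorem). Intermediates are complex-valued, which is
    why a frequency-domain network needs a two-branch design."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, s=x.shape)))

def freq_domain_xcorr(x, k):
    """Cross-Correlation Theorem: conjugate one spectrum instead."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(k, s=x.shape))))
```

Because the element-wise product costs O(N) per channel versus O(N·K) for a direct K-tap convolution, moving the operation to the frequency domain reduces the arithmetic cost (at the price of FFTs, or of keeping the whole network in the frequency domain, as CEMNet does).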
|
1401.6576
|
Luc Dartois
|
Luc Dartois (ULB), Charles Paperman
|
Adding modular predicates to first-order fragments
| null | null | null | null |
cs.LO cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the decidability of the definability problem for fragments of
first order logic over finite words enriched with modular predicates. Our
approach aims toward the most generic statements that we could achieve, which
successfully covers the quantifier alternation hierarchy of first order logic
and some of its fragments. We obtain that deciding this problem for each level
of the alternation hierarchy of both first order logic and its two-variable
fragment when equipped with all regular numerical predicates is not harder than
deciding it for the corresponding level equipped with only the linear order and
the successor. For two-variable fragments we also treat the case of the
signature containing only the order and modular predicates. Relying on some
recent results, this proves the decidability for each level of the alternation
hierarchy of the two-variable first order fragment, while in the case of
first order logic the question remains open for levels greater than two. The
main ingredients of the proofs are syntactic transformations of first order
formulas as well as the algebraic framework of finite categories.
|
[
{
"created": "Sat, 25 Jan 2014 19:57:46 GMT",
"version": "v1"
},
{
"created": "Mon, 19 May 2014 18:58:57 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Nov 2015 14:43:48 GMT",
"version": "v3"
}
] |
2015-11-16
|
[
[
"Dartois",
"Luc",
"",
"ULB"
],
[
"Paperman",
"Charles",
""
]
] |
We investigate the decidability of the definability problem for fragments of first order logic over finite words enriched with modular predicates. Our approach aims toward the most generic statements that we could achieve, which successfully covers the quantifier alternation hierarchy of first order logic and some of its fragments. We obtain that deciding this problem for each level of the alternation hierarchy of both first order logic and its two-variable fragment when equipped with all regular numerical predicates is not harder than deciding it for the corresponding level equipped with only the linear order and the successor. For two-variable fragments we also treat the case of the signature containing only the order and modular predicates. Relying on some recent results, this proves the decidability for each level of the alternation hierarchy of the two-variable first order fragment, while in the case of first order logic the question remains open for levels greater than two. The main ingredients of the proofs are syntactic transformations of first order formulas as well as the algebraic framework of finite categories.
|
1706.05924
|
Manoel Horta Ribeiro
|
Manoel Horta Ribeiro, Pedro H. Calais, Virg\'ilio A. F. Almeida,
Wagner Meira Jr
|
"Everything I Disagree With is #FakeNews": Correlating Political
Polarization and Spread of Misinformation
|
8 pages, 10 figures, to be presented at DS+J Workshop @ KDD'17
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An important challenge in the process of tracking and detecting the
dissemination of misinformation is to understand the political gap between
people that engage with the so called "fake news". A possible factor
responsible for this gap is opinion polarization, which may prompt the general
public to classify content that they disagree with or want to discredit as fake. In
this work, we study the relationship between political polarization and content
reported by Twitter users as related to "fake news". We investigate how
polarization may create distinct narratives on what misinformation actually is.
We perform our study based on two datasets collected from Twitter. The first
dataset contains tweets about US politics in general, from which we compute the
degree of polarization of each user towards the Republican and Democratic
Party. In the second dataset, we collect tweets and URLs that co-occurred with
"fake news" related keywords and hashtags, such as #FakeNews and
#AlternativeFact, as well as reactions towards such tweets and URLs. We then
analyze the relationship between polarization and what is perceived as
misinformation, and whether users are designating information that they
disagree with as fake. Our results show an increase in the polarization of users and
URLs associated with fake-news keywords and hashtags, when compared to
information not labeled as "fake news". We discuss the impact of our findings
on the challenges of tracking "fake news" in the ongoing battle against
misinformation.
|
[
{
"created": "Mon, 19 Jun 2017 13:26:41 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jul 2017 17:17:37 GMT",
"version": "v2"
}
] |
2017-07-18
|
[
[
"Ribeiro",
"Manoel Horta",
""
],
[
"Calais",
"Pedro H.",
""
],
[
"Almeida",
"Virgílio A. F.",
""
],
[
"Meira",
"Wagner",
"Jr"
]
] |
An important challenge in the process of tracking and detecting the dissemination of misinformation is to understand the political gap between people that engage with the so called "fake news". A possible factor responsible for this gap is opinion polarization, which may prompt the general public to classify content that they disagree with or want to discredit as fake. In this work, we study the relationship between political polarization and content reported by Twitter users as related to "fake news". We investigate how polarization may create distinct narratives on what misinformation actually is. We perform our study based on two datasets collected from Twitter. The first dataset contains tweets about US politics in general, from which we compute the degree of polarization of each user towards the Republican and Democratic Party. In the second dataset, we collect tweets and URLs that co-occurred with "fake news" related keywords and hashtags, such as #FakeNews and #AlternativeFact, as well as reactions towards such tweets and URLs. We then analyze the relationship between polarization and what is perceived as misinformation, and whether users are designating information that they disagree with as fake. Our results show an increase in the polarization of users and URLs associated with fake-news keywords and hashtags, when compared to information not labeled as "fake news". We discuss the impact of our findings on the challenges of tracking "fake news" in the ongoing battle against misinformation.
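As a hypothetical illustration of a per-user polarization score of the kind computed from the first dataset (the paper's exact definition may differ), a leaning on a [-1, 1] Republican-Democrat axis can be derived from engagement counts:

```python
def polarization(rep_count, dem_count):
    """Leaning score on a [-1, 1] axis from engagement counts: +1 means
    the user engaged only with Republican-aligned content, -1 only with
    Democratic-aligned content, 0 means balanced engagement.
    (Assumed formula for illustration only.)"""
    total = rep_count + dem_count
    return 0.0 if total == 0 else (rep_count - dem_count) / total
```

Comparing the distribution of such scores for users who interact with "fake news" hashtags against the general population is the kind of analysis that yields the polarization increase reported above.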
|
1812.03243
|
Md Kamruzzaman Sarker
|
Md Kamruzzaman Sarker, Pascal Hitzler
|
Efficient Concept Induction for Description Logics
|
Accepted at AAAI-19
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concept Induction refers to the problem of creating complex Description Logic
class descriptions (i.e., TBox axioms) from instance examples (i.e., ABox
data). In this paper we look particularly at the case where both a set of
positive and a set of negative instances are given, and complex class
expressions are sought under which the positive but not the negative examples
fall. Concept induction has found applications in ontology engineering, but
existing algorithms have fundamental performance issues in some scenarios,
mainly because a high number of invocations of an external Description Logic
reasoner is usually required. In this paper we present a new algorithm for this
problem which drastically reduces the number of reasoner invocations needed.
While this comes at the expense of a more limited traversal of the search
space, we show that our approach improves execution times by up to several
orders of magnitude, while output correctness, measured in the amount of
correct coverage of the input instances, remains reasonably high in many cases.
Our approach thus should provide a strong alternative to existing systems, in
particular in settings where other systems are prohibitively slow.
|
[
{
"created": "Sat, 8 Dec 2018 00:10:05 GMT",
"version": "v1"
}
] |
2018-12-11
|
[
[
"Sarker",
"Md Kamruzzaman",
""
],
[
"Hitzler",
"Pascal",
""
]
] |
Concept Induction refers to the problem of creating complex Description Logic class descriptions (i.e., TBox axioms) from instance examples (i.e., ABox data). In this paper we look particularly at the case where both a set of positive and a set of negative instances are given, and complex class expressions are sought under which the positive but not the negative examples fall. Concept induction has found applications in ontology engineering, but existing algorithms have fundamental performance issues in some scenarios, mainly because a high number of invocations of an external Description Logic reasoner is usually required. In this paper we present a new algorithm for this problem which drastically reduces the number of reasoner invocations needed. While this comes at the expense of a more limited traversal of the search space, we show that our approach improves execution times by up to several orders of magnitude, while output correctness, measured in the amount of correct coverage of the input instances, remains reasonably high in many cases. Our approach thus should provide a strong alternative to existing systems, in particular in settings where other systems are prohibitively slow.
|
2407.12487
|
Xin Wang
|
Mengxiao Zhu, Xin Wang, Xiantao Wang, Zihang Chen, Wei Huang
|
Application of Prompt Learning Models in Identifying the Collaborative
Problem Solving Skills in an Online Task
| null | null | null | null |
cs.HC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Collaborative problem solving (CPS) competence is considered one of the
essential 21st-century skills. To facilitate the assessment and learning of CPS
competence, researchers have proposed a series of frameworks to conceptualize
CPS and explored ways to make sense of the complex processes involved in
collaborative problem solving. However, encoding explicit behaviors into
subskills within the frameworks of CPS skills is still a challenging task.
Traditional studies have relied on manual coding to decipher behavioral data
for CPS, but such coding methods can be very time-consuming and cannot support
real-time analyses. Scholars have begun to explore approaches for constructing
automatic coding models. Nevertheless, the existing models built using machine
learning or deep learning techniques depend on a large amount of training data
and have relatively low accuracy. To address these problems, this paper
proposes a prompt-based learning pre-trained model. The model can achieve high
performance even with limited training data. In this study, three experiments
were conducted, and the results showed that our model not only produced the
highest accuracy, macro F1 score, and kappa values on large training sets, but
also performed the best on small training sets of the CPS behavioral data. The
application of the proposed prompt-based learning pre-trained model contributes
to the CPS skills coding task and can also be used for other CSCW coding tasks
to replace manual coding.
|
[
{
"created": "Wed, 17 Jul 2024 11:12:02 GMT",
"version": "v1"
}
] |
2024-07-18
|
[
[
"Zhu",
"Mengxiao",
""
],
[
"Wang",
"Xin",
""
],
[
"Wang",
"Xiantao",
""
],
[
"Chen",
"Zihang",
""
],
[
"Huang",
"Wei",
""
]
] |
Collaborative problem solving (CPS) competence is considered one of the essential 21st-century skills. To facilitate the assessment and learning of CPS competence, researchers have proposed a series of frameworks to conceptualize CPS and explored ways to make sense of the complex processes involved in collaborative problem solving. However, encoding explicit behaviors into subskills within the frameworks of CPS skills is still a challenging task. Traditional studies have relied on manual coding to decipher behavioral data for CPS, but such coding methods can be very time-consuming and cannot support real-time analyses. Scholars have begun to explore approaches for constructing automatic coding models. Nevertheless, the existing models built using machine learning or deep learning techniques depend on a large amount of training data and have relatively low accuracy. To address these problems, this paper proposes a prompt-based learning pre-trained model. The model can achieve high performance even with limited training data. In this study, three experiments were conducted, and the results showed that our model not only produced the highest accuracy, macro F1 score, and kappa values on large training sets, but also performed the best on small training sets of the CPS behavioral data. The application of the proposed prompt-based learning pre-trained model contributes to the CPS skills coding task and can also be used for other CSCW coding tasks to replace manual coding.
|
1801.09866
|
Kyungmin Lee
|
Kyungmin Lee, Chiyoun Park, Namhoon Kim, and Jaewon Lee
|
Accelerating recurrent neural network language model based online speech
recognition system
|
4 pages, 4 figures, 3 tables, ICASSP2018(Accepted)
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents methods to accelerate recurrent neural network based
language models (RNNLMs) for online speech recognition systems. Firstly, a
lossy compression of the past hidden layer outputs (history vector) with
caching is introduced in order to reduce the number of LM queries. Next, RNNLM
computations are deployed in a CPU-GPU hybrid manner, which computes each layer
of the model on a more advantageous platform. The added overhead by data
exchanges between CPU and GPU is compensated through a frame-wise batching
strategy. The performance of the proposed methods evaluated on LibriSpeech test
sets indicates that the reduction in history vector precision improves the
average recognition speed by 1.23 times with minimum degradation in accuracy.
On the other hand, the CPU-GPU hybrid parallelization enables RNNLM based
real-time recognition with a four times improvement in speed.
|
[
{
"created": "Tue, 30 Jan 2018 06:58:50 GMT",
"version": "v1"
}
] |
2018-01-31
|
[
[
"Lee",
"Kyungmin",
""
],
[
"Park",
"Chiyoun",
""
],
[
"Kim",
"Namhoon",
""
],
[
"Lee",
"Jaewon",
""
]
] |
This paper presents methods to accelerate recurrent neural network based language models (RNNLMs) for online speech recognition systems. Firstly, a lossy compression of the past hidden layer outputs (history vector) with caching is introduced in order to reduce the number of LM queries. Next, RNNLM computations are deployed in a CPU-GPU hybrid manner, which computes each layer of the model on a more advantageous platform. The added overhead by data exchanges between CPU and GPU is compensated through a frame-wise batching strategy. The performance of the proposed methods evaluated on LibriSpeech test sets indicates that the reduction in history vector precision improves the average recognition speed by 1.23 times with minimum degradation in accuracy. On the other hand, the CPU-GPU hybrid parallelization enables RNNLM based real-time recognition with a four times improvement in speed.
|
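The abstract above hinges on lossy compression of the history vector plus caching to cut the number of LM queries. A minimal sketch of that idea (the quantization range, the `score_fn` callback, and the `CachedLM` wrapper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def quantize(history, bits=8, lo=-1.0, hi=1.0):
    """Lossily compress a hidden-state (history) vector into bits-bit codes."""
    levels = (1 << bits) - 1
    clipped = np.clip(history, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * levels).astype(np.uint16)

def dequantize(codes, bits=8, lo=-1.0, hi=1.0):
    """Invert quantize() up to the quantization step size."""
    levels = (1 << bits) - 1
    return codes.astype(np.float32) / levels * (hi - lo) + lo

class CachedLM:
    """Key LM scores on the quantized history: near-identical histories
    collapse to the same key, so one real query serves all of them."""
    def __init__(self, score_fn, bits=4):
        self.score_fn = score_fn  # stands in for the expensive RNNLM call
        self.bits = bits
        self.cache = {}
        self.queries = 0  # number of real LM invocations

    def score(self, history, word):
        codes = quantize(history, self.bits)
        key = (codes.tobytes(), word)
        if key not in self.cache:
            self.queries += 1
            self.cache[key] = self.score_fn(dequantize(codes, self.bits), word)
        return self.cache[key]
```

With 4-bit codes, two histories that differ by less than one quantization step hit the same cache entry, trading a small accuracy loss for fewer LM queries, which is the mechanism behind the speedup the abstract reports.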
2311.01793
|
Tomasz Kociumaka
|
Daniel Gibney, Ce Jin, Tomasz Kociumaka, Sharma V. Thankachan
|
Near-Optimal Quantum Algorithms for Bounded Edit Distance and Lempel-Ziv
Factorization
|
Accepted to SODA 2024. arXiv admin note: substantial text overlap
with arXiv:2302.07235
| null | null | null |
cs.DS quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classically, the edit distance of two length-$n$ strings can be computed in
$O(n^2)$ time, whereas an $O(n^{2-\epsilon})$-time procedure would falsify the
Orthogonal Vectors Hypothesis. If the edit distance does not exceed $k$, the
running time can be improved to $O(n+k^2)$, which is near-optimal (conditioned
on OVH) as a function of $n$ and $k$. Our first main contribution is a quantum
$\tilde{O}(\sqrt{nk}+k^2)$-time algorithm that uses $\tilde{O}(\sqrt{nk})$
queries, where $\tilde{O}(\cdot)$ hides polylogarithmic factors. This query
complexity is unconditionally optimal, and any significant improvement in the
time complexity would resolve a long-standing open question of whether edit
distance admits an $O(n^{2-\epsilon})$-time quantum algorithm. Our
divide-and-conquer quantum algorithm reduces the edit distance problem to a
case where the strings have small Lempel-Ziv factorizations. Then, it combines
a quantum LZ compression algorithm with a classical edit-distance subroutine
for compressed strings.
The LZ factorization problem can be classically solved in $O(n)$ time, which
is unconditionally optimal in the quantum setting. We can, however, hope for a
quantum speedup if we parameterize the complexity in terms of the factorization
size $z$. Already a generic oracle identification algorithm yields the optimal
query complexity of $\tilde{O}(\sqrt{nz})$ at the price of exponential running
time. Our second main contribution is a quantum algorithm that achieves the
optimal time complexity of $\tilde{O}(\sqrt{nz})$. The key tool is a novel
LZ-like factorization of size $O(z\log^2n)$ whose subsequent factors can be
efficiently computed through a combination of classical and quantum techniques.
We can then obtain the string's run-length encoded Burrows-Wheeler Transform
(BWT), construct the $r$-index, and solve many fundamental string processing
problems in time $\tilde{O}(\sqrt{nz})$.
|
[
{
"created": "Fri, 3 Nov 2023 09:09:23 GMT",
"version": "v1"
}
] |
2023-11-06
|
[
[
"Gibney",
"Daniel",
""
],
[
"Jin",
"Ce",
""
],
[
"Kociumaka",
"Tomasz",
""
],
[
"Thankachan",
"Sharma V.",
""
]
] |
Classically, the edit distance of two length-$n$ strings can be computed in $O(n^2)$ time, whereas an $O(n^{2-\epsilon})$-time procedure would falsify the Orthogonal Vectors Hypothesis. If the edit distance does not exceed $k$, the running time can be improved to $O(n+k^2)$, which is near-optimal (conditioned on OVH) as a function of $n$ and $k$. Our first main contribution is a quantum $\tilde{O}(\sqrt{nk}+k^2)$-time algorithm that uses $\tilde{O}(\sqrt{nk})$ queries, where $\tilde{O}(\cdot)$ hides polylogarithmic factors. This query complexity is unconditionally optimal, and any significant improvement in the time complexity would resolve a long-standing open question of whether edit distance admits an $O(n^{2-\epsilon})$-time quantum algorithm. Our divide-and-conquer quantum algorithm reduces the edit distance problem to a case where the strings have small Lempel-Ziv factorizations. Then, it combines a quantum LZ compression algorithm with a classical edit-distance subroutine for compressed strings. The LZ factorization problem can be classically solved in $O(n)$ time, which is unconditionally optimal in the quantum setting. We can, however, hope for a quantum speedup if we parameterize the complexity in terms of the factorization size $z$. Already a generic oracle identification algorithm yields the optimal query complexity of $\tilde{O}(\sqrt{nz})$ at the price of exponential running time. Our second main contribution is a quantum algorithm that achieves the optimal time complexity of $\tilde{O}(\sqrt{nz})$. The key tool is a novel LZ-like factorization of size $O(z\log^2n)$ whose subsequent factors can be efficiently computed through a combination of classical and quantum techniques. We can then obtain the string's run-length encoded Burrows-Wheeler Transform (BWT), construct the $r$-index, and solve many fundamental string processing problems in time $\tilde{O}(\sqrt{nz})$.
|
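For context on the classical O(n + k^2)-style baseline the abstract compares against, a banded dynamic program (a standard textbook technique, sketched here in Python; not code from the paper) answers the bounded edit distance question:

```python
def bounded_edit_distance(a, b, k):
    """Banded Levenshtein DP: only cells with |i - j| <= k are filled,
    so the cost is O(min(n, m) * k) instead of O(n * m). Returns the
    edit distance if it is at most k, otherwise None."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None  # the length gap alone already forces > k edits
    INF = k + 1
    prev = {j: j for j in range(min(m, k) + 1)}  # row i = 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                cur[j] = i
                continue
            best = prev.get(j - 1, INF) + (a[i - 1] != b[j - 1])
            best = min(best, prev.get(j, INF) + 1, cur.get(j - 1, INF) + 1)
            cur[j] = best
        prev = cur
    d = prev.get(m, INF)
    return d if d <= k else None
```

The quantum algorithm in the abstract keeps the k^2 term in the running time but brings the query complexity down to about the square root of nk.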
1912.09697
|
Xingkui Wei
|
Xingkui Wei, Yinda Zhang, Zhuwen Li, Yanwei Fu and Xiangyang Xue
|
DeepSFM: Structure From Motion Via Deep Bundle Adjustment
|
Accepted by ECCV2020(Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Structure from motion (SfM) is an essential computer vision problem which has
not been well handled by deep learning. One of the promising trends is to apply
explicit structural constraint, e.g. 3D cost volume, into the network. However,
existing methods usually assume accurate camera poses either from GT or other
methods, which is unrealistic in practice. In this work, we design a
physics-driven architecture, namely DeepSFM, inspired by traditional Bundle
Adjustment
(BA), which consists of two cost volume based architectures for depth and pose
estimation respectively, iteratively running to improve both. The explicit
constraints on both depth (structure) and pose (motion), when combined with the
learning components, bring the merit from both traditional BA and emerging deep
learning technology. Extensive experiments on various datasets show that our
model achieves the state-of-the-art performance on both depth and pose
estimation, with superior robustness against a smaller number of inputs and
noise in the initialization.
|
[
{
"created": "Fri, 20 Dec 2019 08:47:41 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Aug 2020 07:31:28 GMT",
"version": "v2"
}
] |
2020-08-11
|
[
[
"Wei",
"Xingkui",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Li",
"Zhuwen",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Xue",
"Xiangyang",
""
]
] |
Structure from motion (SfM) is an essential computer vision problem which has not been well handled by deep learning. One of the promising trends is to apply explicit structural constraint, e.g. 3D cost volume, into the network. However, existing methods usually assume accurate camera poses either from GT or other methods, which is unrealistic in practice. In this work, we design a physics-driven architecture, namely DeepSFM, inspired by traditional Bundle Adjustment (BA), which consists of two cost volume based architectures for depth and pose estimation respectively, iteratively running to improve both. The explicit constraints on both depth (structure) and pose (motion), when combined with the learning components, bring the merit from both traditional BA and emerging deep learning technology. Extensive experiments on various datasets show that our model achieves the state-of-the-art performance on both depth and pose estimation, with superior robustness against a smaller number of inputs and noise in the initialization.
|
2109.11406
|
Maud Ehrmann
|
Maud Ehrmann, Ahmed Hamdi, Elvys Linhares Pontes, Matteo Romanello,
Antoine Doucet
|
Named Entity Recognition and Classification on Historical Documents: A
Survey
|
39 pages
|
ACM Computing Surveys 56-2 (2023) 1-47
|
10.1145/3604931
| null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
After decades of massive digitisation, an unprecedented amount of historical
documents is available in digital format, along with their machine-readable
texts. While this represents a major step forward with respect to preservation
and accessibility, it also opens up new opportunities in terms of content
mining and the next fundamental challenge is to develop appropriate
technologies to efficiently search, retrieve and explore information from this
'big data of the past'. Among semantic indexing opportunities, the recognition
and classification of named entities are in great demand among humanities
scholars. Yet, named entity recognition (NER) systems are heavily challenged
with diverse, historical and noisy inputs. In this survey, we present the array
of challenges posed by historical documents to NER, inventory existing
resources, describe the main approaches deployed so far, and identify key
priorities for future developments.
|
[
{
"created": "Thu, 23 Sep 2021 14:37:40 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Ehrmann",
"Maud",
""
],
[
"Hamdi",
"Ahmed",
""
],
[
"Pontes",
"Elvys Linhares",
""
],
[
"Romanello",
"Matteo",
""
],
[
"Doucet",
"Antoine",
""
]
] |
After decades of massive digitisation, an unprecedented amount of historical documents is available in digital format, along with their machine-readable texts. While this represents a major step forward with respect to preservation and accessibility, it also opens up new opportunities in terms of content mining and the next fundamental challenge is to develop appropriate technologies to efficiently search, retrieve and explore information from this 'big data of the past'. Among semantic indexing opportunities, the recognition and classification of named entities are in great demand among humanities scholars. Yet, named entity recognition (NER) systems are heavily challenged with diverse, historical and noisy inputs. In this survey, we present the array of challenges posed by historical documents to NER, inventory existing resources, describe the main approaches deployed so far, and identify key priorities for future developments.
|
1312.7179
|
Patoomsiri Songsiri Ms.
|
Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise and Boonserm
Kijsirikul
|
Sub-Classifier Construction for Error Correcting Output Code Using
Minimum Weight Perfect Matching
|
7 pages, 3 figures
| null | null | null |
cs.LG cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-class classification is mandatory for real-world problems, and one of the
promising techniques for multi-class classification is Error Correcting Output
Code. We propose a method for constructing the Error Correcting Output Code to
obtain the suitable combination of positive and negative classes encoded to
represent binary classifiers. The minimum weight perfect matching algorithm is
applied to find the optimal pairs of subsets of classes by using the
generalization performance as a weighting criterion. Based on our method, each
subset of classes with positive and negative labels is appropriately combined
for learning the binary classifiers. Experimental results show that our
technique gives significantly higher performance compared to traditional
methods including the dense random code and the sparse random code both in
terms of accuracy and classification times. Moreover, our method requires a
significantly smaller number of binary classifiers while maintaining accuracy
compared to One-Versus-One.
|
[
{
"created": "Fri, 27 Dec 2013 03:21:34 GMT",
"version": "v1"
}
] |
2013-12-30
|
[
[
"Songsiri",
"Patoomsiri",
""
],
[
"Phetkaew",
"Thimaporn",
""
],
[
"Ichise",
"Ryutaro",
""
],
[
"Kijsirikul",
"Boonserm",
""
]
] |
Multi-class classification is mandatory for real-world problems, and one of the promising techniques for multi-class classification is Error Correcting Output Code. We propose a method for constructing the Error Correcting Output Code to obtain the suitable combination of positive and negative classes encoded to represent binary classifiers. The minimum weight perfect matching algorithm is applied to find the optimal pairs of subsets of classes by using the generalization performance as a weighting criterion. Based on our method, each subset of classes with positive and negative labels is appropriately combined for learning the binary classifiers. Experimental results show that our technique gives significantly higher performance compared to traditional methods including the dense random code and the sparse random code both in terms of accuracy and classification times. Moreover, our method requires a significantly smaller number of binary classifiers while maintaining accuracy compared to One-Versus-One.
|
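The minimum weight perfect matching step can be illustrated with a brute-force sketch (adequate for the small class counts typical of ECOC; the weight function standing in for the paper's generalization-performance criterion is hypothetical):

```python
def min_weight_perfect_matching(classes, weight):
    """Exhaustively pair up an even-sized list of classes so that the sum
    of weight(a, b) over all pairs is minimal. Exponential-time, but exact
    and dependency-free; real systems would use Blossom-style algorithms."""
    classes = list(classes)
    if not classes:
        return 0.0, []
    first, rest = classes[0], classes[1:]
    best_total, best_pairs = float("inf"), None
    for mate in rest:
        others = [c for c in rest if c != mate]
        sub_total, sub_pairs = min_weight_perfect_matching(others, weight)
        total = sub_total + weight(first, mate)
        if total < best_total:
            best_total, best_pairs = total, [(first, mate)] + sub_pairs
    return best_total, best_pairs
```

Each returned pair would then define one positive/negative class split for a binary classifier in the code matrix.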
1802.09623
|
Biao Zhao
|
Biao Zhao, Shigang Yue
|
A Resilient Image Matching Method with an Affine Invariant Feature
Detector and Descriptor
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image feature matching seeks to localize and identify similarities across
images. The matched local features between different images can indicate the
similarities of their content. Resilience of image feature matching to large
viewpoint changes is challenging for many applications, such as 3D object
reconstruction, object recognition and navigation, which need accurate and
robust feature matching from quite different viewpoints. In this paper we
propose a novel image feature matching algorithm, integrating our previously
proposed Affine Invariant Feature Detector (AIFD) and a newly proposed Affine
Invariant Feature Descriptor (AIFDd). Both stages of this proposed algorithm
provide sufficient resilience to viewpoint changes. With systematic
experiments, we show that the proposed feature detector and descriptor
outperform other state-of-the-art feature matching algorithms, especially in
robustness to viewpoint changes. The method also performs well under other
conditions such as changes of illumination, rotation and compression.
|
[
{
"created": "Fri, 29 Dec 2017 11:36:58 GMT",
"version": "v1"
}
] |
2018-02-28
|
[
[
"Zhao",
"Biao",
""
],
[
"Yue",
"Shigang",
""
]
] |
Image feature matching seeks to localize and identify similarities across images. The matched local features between different images can indicate the similarities of their content. Resilience of image feature matching to large viewpoint changes is challenging for many applications, such as 3D object reconstruction, object recognition and navigation, which need accurate and robust feature matching from quite different viewpoints. In this paper we propose a novel image feature matching algorithm, integrating our previously proposed Affine Invariant Feature Detector (AIFD) and a newly proposed Affine Invariant Feature Descriptor (AIFDd). Both stages of this proposed algorithm provide sufficient resilience to viewpoint changes. With systematic experiments, we show that the proposed feature detector and descriptor outperform other state-of-the-art feature matching algorithms, especially in robustness to viewpoint changes. The method also performs well under other conditions such as changes of illumination, rotation and compression.
|
2406.05412
|
Hao Zhang
|
Hao Zhang, Shuaijie Zhang, Renbin Zou
|
Select-Mosaic: Data Augmentation Method for Dense Small Object Scenes
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Data augmentation refers to the process of applying a series of
transformations or expansions to original data to generate new samples, thereby
increasing the diversity and quantity of the data, effectively improving the
performance and robustness of models. As a common data augmentation method,
the Mosaic technique stitches multiple images together to
increase the diversity and complexity of training data, thereby reducing the
risk of overfitting. Although Mosaic data augmentation achieves excellent
results in general detection tasks by stitching images together, it still has
certain limitations for specific detection tasks. This paper addresses the
challenge of detecting a large number of densely distributed small objects in
aerial images by proposing the Select-Mosaic data augmentation method, which is
improved with a fine-grained region selection strategy. The improved
Select-Mosaic method demonstrates superior performance in handling dense small
object detection tasks, significantly enhancing the accuracy and stability of
detection models. Code is available at
https://github.com/malagoutou/Select-Mosaic.
|
[
{
"created": "Sat, 8 Jun 2024 09:22:08 GMT",
"version": "v1"
}
] |
2024-06-11
|
[
[
"Zhang",
"Hao",
""
],
[
"Zhang",
"Shuaijie",
""
],
[
"Zou",
"Renbin",
""
]
] |
Data augmentation refers to the process of applying a series of transformations or expansions to original data to generate new samples, thereby increasing the diversity and quantity of the data, effectively improving the performance and robustness of models. As a common data augmentation method, the Mosaic technique stitches multiple images together to increase the diversity and complexity of training data, thereby reducing the risk of overfitting. Although Mosaic data augmentation achieves excellent results in general detection tasks by stitching images together, it still has certain limitations for specific detection tasks. This paper addresses the challenge of detecting a large number of densely distributed small objects in aerial images by proposing the Select-Mosaic data augmentation method, which is improved with a fine-grained region selection strategy. The improved Select-Mosaic method demonstrates superior performance in handling dense small object detection tasks, significantly enhancing the accuracy and stability of detection models. Code is available at https://github.com/malagoutou/Select-Mosaic.
|
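The core Mosaic stitching operation can be sketched with NumPy (illustrative only: real Mosaic samples random crops and remaps bounding boxes, and Select-Mosaic would pick regions dense in small objects rather than the fixed top-left crops assumed here; each input image is assumed at least as large as its region):

```python
import numpy as np

def mosaic4(imgs, out_h=64, out_w=64, cx=None, cy=None):
    """Stitch four HxWxC images onto one canvas split at (cx, cy).
    Defaults pick a random split point, as Mosaic augmentation does."""
    rng = np.random.default_rng()
    if cx is None:
        cx = int(rng.integers(out_w // 4, 3 * out_w // 4))
    if cy is None:
        cy = int(rng.integers(out_h // 4, 3 * out_h // 4))
    canvas = np.zeros((out_h, out_w, imgs[0].shape[2]), dtype=imgs[0].dtype)
    regions = [(0, cy, 0, cx), (0, cy, cx, out_w),
               (cy, out_h, 0, cx), (cy, out_h, cx, out_w)]
    for img, (y0, y1, x0, x1) in zip(imgs, regions):
        # simplistic top-left crop; real Mosaic samples a random sub-region
        canvas[y0:y1, x0:x1] = img[:y1 - y0, :x1 - x0]
    return canvas
```

The fine-grained region selection proposed in the abstract would replace the fixed crop with a choice of sub-regions that preserve clusters of small objects.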
2104.11645
|
Di Wu
|
Di Wu, Xiaofeng Xie, Xiang Ni, Bin Fu, Hanhui Deng, Haibo Zeng, and
Zhijin Qin
|
Software-Defined Edge Computing: A New Architecture Paradigm to Support
IoT Data Analysis
| null | null | null | null |
cs.NI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid deployment of Internet of Things (IoT) applications leads to
massive data that need to be processed. These IoT applications have specific
communication requirements on latency and bandwidth, and present new features
on their generated data such as time-dependency. Therefore, it is desirable to
reshape the current IoT architectures by exploring their inherent nature of
communication and computing to support smart IoT data processing and analysis. We
introduce in this paper features of IoT data, trends of IoT network
architectures, some problems in IoT data analysis, and their solutions.
Specifically, we view that software-defined edge computing is a promising
architecture to support the unique needs of IoT data analysis. We further
present an experiment on data anomaly detection in this architecture, and the
comparison between two architectures for ECG diagnosis. Results show that our
method is effective and feasible.
|
[
{
"created": "Thu, 22 Apr 2021 11:19:20 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2021 02:39:57 GMT",
"version": "v2"
}
] |
2021-04-27
|
[
[
"Wu",
"Di",
""
],
[
"Xie",
"Xiaofeng",
""
],
[
"Ni",
"Xiang",
""
],
[
"Fu",
"Bin",
""
],
[
"Deng",
"Hanhui",
""
],
[
"Zeng",
"Haibo",
""
],
[
"Qin",
"Zhijin",
""
]
] |
The rapid deployment of Internet of Things (IoT) applications leads to massive data that need to be processed. These IoT applications have specific communication requirements on latency and bandwidth, and present new features on their generated data such as time-dependency. Therefore, it is desirable to reshape the current IoT architectures by exploring their inherent nature of communication and computing to support smart IoT data processing and analysis. We introduce in this paper features of IoT data, trends of IoT network architectures, some problems in IoT data analysis, and their solutions. Specifically, we view that software-defined edge computing is a promising architecture to support the unique needs of IoT data analysis. We further present an experiment on data anomaly detection in this architecture, and the comparison between two architectures for ECG diagnosis. Results show that our method is effective and feasible.
|
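As a flavor of the lightweight on-node analysis such an edge architecture targets, here is a sliding-window z-score anomaly detector (purely illustrative; the paper's anomaly detection and ECG experiments use their own models):

```python
import numpy as np

def zscore_anomalies(signal, window=32, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from a sliding-window mean: the kind of cheap filter an edge node can
    run before shipping data upstream to the cloud."""
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        win = signal[i - window:i]
        mu, sigma = win.mean(), win.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags
```

An edge node could forward only the flagged samples, cutting the bandwidth demands the abstract highlights.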
1710.07761
|
Cheng-Jun Wang Frank
|
Cheng-Jun Wang, Zhi-Cong Chen, Qiang Qin, Naipeng Chao
|
Leveraging the Flow of Collective Attention for Computational
Communication Research
|
16 pages, 5 figures
| null | null | null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human attention becomes an increasingly important resource for our
understanding of collective human behaviors in the age of information
explosion. To better understand the flow of collective attention, we construct
the attention flow network using anonymous smartphone data of 100,000 users in
a major city of China. In the constructed network, nodes are websites visited
by users, and links denote the switch of users between two websites. We
quantify the flow of collective attention by computing flow network
statistics, such as flow impact, flow dissipation, and flow distance. The
findings reveal a strong concentration and fragmentation of collective
attention among smartphone users, while the duplication of attention across
websites proves to be unfounded in mobile use. We further confirm the law of
dissipation and the allometric scaling of flow impact. Surprisingly, there is
a centralized flow structure, suggesting that websites with large traffic can
easily control the circulated collective attention. Additionally, we find
that flow network analysis can effectively explain the page views and sales
volume of products. Finally, we discuss the benefits and limitations of using
flow network analysis for computational communication research.
|
[
{
"created": "Sat, 21 Oct 2017 06:17:12 GMT",
"version": "v1"
}
] |
2017-10-24
|
[
[
"Wang",
"Cheng-Jun",
""
],
[
"Chen",
"Zhi-Cong",
""
],
[
"Qin",
"Qiang",
""
],
[
"Chao",
"Naipeng",
""
]
] |
Human attention becomes an increasingly important resource for our understanding of collective human behaviors in the age of information explosion. To better understand the flow of collective attention, we construct the attention flow network using anonymous smartphone data of 100,000 users in a major city of China. In the constructed network, nodes are websites visited by users, and links denote the switch of users between two websites. We quantify the flow of collective attention by computing flow network statistics, such as flow impact, flow dissipation, and flow distance. The findings reveal a strong concentration and fragmentation of collective attention among smartphone users, while the duplication of attention across websites proves to be unfounded in mobile use. We further confirm the law of dissipation and the allometric scaling of flow impact. Surprisingly, there is a centralized flow structure, suggesting that websites with large traffic can easily control the circulated collective attention. Additionally, we find that flow network analysis can effectively explain the page views and sales volume of products. Finally, we discuss the benefits and limitations of using flow network analysis for computational communication research.
|
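The network construction described above reduces to counting user switches between websites; a small sketch (with `flow_impact` as a simplified stand-in for the paper's metric, not its exact definition):

```python
from collections import Counter

def attention_flow(sessions):
    """Nodes are websites; a directed edge (a, b) counts how often users
    switched from website a to website b within a browsing session."""
    edges = Counter()
    for visits in sessions:
        for a, b in zip(visits, visits[1:]):
            edges[(a, b)] += 1
    return edges

def flow_impact(edges, node):
    """Simplified stand-in for the paper's flow impact: total switch
    traffic entering or leaving a node."""
    return sum(c for (a, b), c in edges.items() if node in (a, b))
```

Statistics like flow dissipation and flow distance would be computed over the same edge counts.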
1109.5635
|
Krzysztof Onak
|
Alexandr Andoni and Krzysztof Onak
|
Approximating Edit Distance in Near-Linear Time
|
Preliminary version appeared in STOC 2009
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how to compute the edit distance between two strings of length n up
to a factor of 2^{Õ(sqrt(log n))} in n^(1+o(1)) time. This is the first
sub-polynomial approximation algorithm for this problem that runs in
near-linear time, improving on the state-of-the-art n^(1/3+o(1)) approximation.
Previously, approximation of 2^{Õ(sqrt(log n))} was known only for embedding
edit distance into l_1, and it is not known if that embedding can be computed
in less than quadratic time.
|
[
{
"created": "Mon, 26 Sep 2011 16:48:20 GMT",
"version": "v1"
}
] |
2011-09-27
|
[
[
"Andoni",
"Alexandr",
""
],
[
"Onak",
"Krzysztof",
""
]
] |
We show how to compute the edit distance between two strings of length n up to a factor of 2^{\~O(sqrt(log n))} in n^(1+o(1)) time. This is the first sub-polynomial approximation algorithm for this problem that runs in near-linear time, improving on the state-of-the-art n^(1/3+o(1)) approximation. Previously, approximation of 2^{\~O(sqrt(log n))} was known only for embedding edit distance into l_1, and it is not known if that embedding can be computed in less than quadratic time.
|
2106.05438
|
Alexander H. Liu
|
Alexander H. Liu, SouYoung Jin, Cheng-I Jeff Lai, Andrew Rouditchenko,
Aude Oliva, James Glass
|
Cross-Modal Discrete Representation Learning
|
Preprint
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in representation learning have demonstrated an ability to
represent information from different modalities such as video, text, and audio
in a single high-level embedding vector. In this work we present a
self-supervised learning framework that is able to learn a representation that
captures finer levels of granularity across different modalities such as
concepts or events represented by visual objects or spoken words. Our framework
relies on a discretized embedding space created via vector quantization that is
shared across different modalities. Beyond the shared embedding space, we
propose a Cross-Modal Code Matching objective that forces the representations
from different views (modalities) to have a similar distribution over the
discrete embedding space such that cross-modal objects/actions localization can
be performed without direct supervision. In our experiments we show that the
proposed discretized multi-modal fine-grained representation (e.g.,
pixel/word/frame) can complement high-level summary representations (e.g.,
video/sentence/waveform) for improved performance on cross-modal retrieval
tasks. We also observe that the discretized representation uses individual
clusters to represent the same semantic concept across modalities.
|
[
{
"created": "Thu, 10 Jun 2021 00:23:33 GMT",
"version": "v1"
}
] |
2021-06-11
|
[
[
"Liu",
"Alexander H.",
""
],
[
"Jin",
"SouYoung",
""
],
[
"Lai",
"Cheng-I Jeff",
""
],
[
"Rouditchenko",
"Andrew",
""
],
[
"Oliva",
"Aude",
""
],
[
"Glass",
"James",
""
]
] |
Recent advances in representation learning have demonstrated an ability to represent information from different modalities such as video, text, and audio in a single high-level embedding vector. In this work we present a self-supervised learning framework that is able to learn a representation that captures finer levels of granularity across different modalities such as concepts or events represented by visual objects or spoken words. Our framework relies on a discretized embedding space created via vector quantization that is shared across different modalities. Beyond the shared embedding space, we propose a Cross-Modal Code Matching objective that forces the representations from different views (modalities) to have a similar distribution over the discrete embedding space such that cross-modal objects/actions localization can be performed without direct supervision. In our experiments we show that the proposed discretized multi-modal fine-grained representation (e.g., pixel/word/frame) can complement high-level summary representations (e.g., video/sentence/waveform) for improved performance on cross-modal retrieval tasks. We also observe that the discretized representation uses individual clusters to represent the same semantic concept across modalities.
|
2006.13477
|
Yiwen Sun
|
Yiwen Sun, Kun Fu, Zheng Wang, Changshui Zhang and Jieping Ye
|
Road Network Metric Learning for Estimated Time of Arrival
|
Accepted by 25th International Conference on Pattern Recognition
(ICPR 2020)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, deep learning has achieved promising results in Estimated Time of
Arrival (ETA) prediction, the task of predicting the travel time from the
origin to the destination along a given path. One of the key techniques is to
use embedding vectors to represent the elements of the road network, such as
the links (road segments). However, the embedding suffers from a data sparsity
problem: many links in the road network are traversed by too few floating
cars, even on large ride-hailing platforms like Uber and DiDi. Insufficient
data leaves the embedding vectors under-fitted, which undermines the accuracy
of ETA prediction. To address the data sparsity problem, we propose the Road
Network Metric Learning framework for ETA (RNML-ETA). It consists of two
components: (1) a main regression task to predict the travel time, and (2) an
auxiliary metric learning task to improve the quality of link embedding
vectors. We further propose the triangle loss, a novel loss function to
improve the efficiency of metric learning. We validated the effectiveness of
RNML-ETA on large-scale real-world datasets, showing that our method
outperforms the state-of-the-art model and that the improvement concentrates
on the cold links with few data.
|
[
{
"created": "Wed, 24 Jun 2020 04:45:14 GMT",
"version": "v1"
}
] |
2020-06-25
|
[
[
"Sun",
"Yiwen",
""
],
[
"Fu",
"Kun",
""
],
[
"Wang",
"Zheng",
""
],
[
"Zhang",
"Changshui",
""
],
[
"Ye",
"Jieping",
""
]
] |
Recently, deep learning has achieved promising results in Estimated Time of Arrival (ETA) prediction, the task of predicting the travel time from the origin to the destination along a given path. One of the key techniques is to use embedding vectors to represent the elements of the road network, such as the links (road segments). However, the embedding suffers from a data sparsity problem: many links in the road network are traversed by too few floating cars, even on large ride-hailing platforms like Uber and DiDi. Insufficient data leaves the embedding vectors under-fitted, which undermines the accuracy of ETA prediction. To address the data sparsity problem, we propose the Road Network Metric Learning framework for ETA (RNML-ETA). It consists of two components: (1) a main regression task to predict the travel time, and (2) an auxiliary metric learning task to improve the quality of link embedding vectors. We further propose the triangle loss, a novel loss function to improve the efficiency of metric learning. We validated the effectiveness of RNML-ETA on large-scale real-world datasets, showing that our method outperforms the state-of-the-art model and that the improvement concentrates on the cold links with few data.
|
1411.4249
|
Chengwen Xing
|
Chengwen Xing, Feifei Gao, Yiqing Zhou
|
A Framework for Transceiver Designs for Multi-Hop Communications with
Covariance Shaping Constraints
|
31 pages, 9 figures, submitted to IEEE Transactions on Signal Processing
| null |
10.1109/TSP.2015.2425800
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For multiple-input multiple-output (MIMO) transceiver designs, the sum power
constraint is an elegant and idealized model. When various practical
limitations are taken into account, e.g., peak power constraints, per-antenna
power constraints, etc., covariance shaping constraints act as an effective
and reasonable model. In this paper, we develop a framework for transceiver
designs for multi-hop communications under covariance shaping constraints.
Particularly, we focus on multi-hop amplify-and-forward (AF) MIMO relaying
communications which are recognized as a key enabling technology for
device-to-device (D2D) communications for next generation wireless systems such
as 5G. The proposed framework includes a broad range of various linear and
nonlinear transceiver designs as its special cases. It reveals an interesting
fact that the relaying operation in each hop can be understood as a
matrix-version weighting operation. The nonlinear operations of
Tomlinson-Harashima Precoding (THP) and the Decision Feedback Equalizer (DFE)
also belong to this category of matrix-version weighting operations. Moreover,
for the cases with either pure shaping constraints or joint power constraints,
closed-form optimal solutions are derived. Finally, the performance of the
various designs is assessed by simulations.
|
[
{
"created": "Sun, 16 Nov 2014 11:56:03 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Nov 2014 13:20:34 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Nov 2014 05:40:26 GMT",
"version": "v3"
},
{
"created": "Thu, 11 Dec 2014 07:01:31 GMT",
"version": "v4"
},
{
"created": "Tue, 21 Feb 2017 10:45:39 GMT",
"version": "v5"
}
] |
2017-02-22
|
[
[
"Xing",
"Chengwen",
""
],
[
"Gao",
"Feifei",
""
],
[
"Zhou",
"Yiqing",
""
]
] |
For multiple-input multiple-output (MIMO) transceiver designs, the sum power constraint is an elegant and idealized model. When various practical limitations are taken into account, e.g., peak power constraints, per-antenna power constraints, etc., covariance shaping constraints act as an effective and reasonable model. In this paper, we develop a framework for transceiver designs for multi-hop communications under covariance shaping constraints. Particularly, we focus on multi-hop amplify-and-forward (AF) MIMO relaying communications which are recognized as a key enabling technology for device-to-device (D2D) communications for next generation wireless systems such as 5G. The proposed framework includes a broad range of various linear and nonlinear transceiver designs as its special cases. It reveals an interesting fact that the relaying operation in each hop can be understood as a matrix-version weighting operation. The nonlinear operations of Tomlinson-Harashima Precoding (THP) and the Decision Feedback Equalizer (DFE) also belong to this category of matrix-version weighting operations. Moreover, for the cases with either pure shaping constraints or joint power constraints, closed-form optimal solutions are derived. Finally, the performance of the various designs is assessed by simulations.
|
1708.00544
|
Jeremy Kepner
|
Michael Jones, Jeremy Kepner, William Arcand, David Bestor, Bill
Bergeron, Vijay Gadepally, Michael Houle, Matthew Hubbell, Peter Michaleas,
Andrew Prout, Albert Reuther, Siddharth Samsi, Paul Monticiollo
|
Performance Measurements of Supercomputing and Cloud Storage Solutions
|
5 pages, 4 figures, to appear in IEEE HPEC 2017
| null |
10.1109/HPEC.2017.8091073
| null |
cs.DC astro-ph.IM cs.NI cs.OS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Increasing amounts of data from varied sources, particularly in the fields of
machine learning and graph analytics, are causing storage requirements to grow
rapidly. A variety of technologies exist for storing and sharing these data,
ranging from parallel file systems used by supercomputers to distributed block
storage systems found in clouds. Relatively few comparative measurements exist
to inform decisions about which storage systems are best suited for particular
tasks. This work provides these measurements for two of the most popular
storage technologies: Lustre and Amazon S3. Lustre is an open-source, high
performance, parallel file system used by many of the largest supercomputers in
the world. Amazon's Simple Storage Service, or S3, is part of the Amazon Web
Services offering, and offers a scalable, distributed option to store and
retrieve data from anywhere on the Internet. Parallel processing is essential
for achieving high performance on modern storage systems. The performance tests
used span the gamut of parallel I/O scenarios, ranging from single-client,
single-node Amazon S3 and Lustre performance to a large-scale, multi-client
test designed to demonstrate the capabilities of a modern storage appliance
under heavy load. These results show that, when parallel I/O is used correctly
(i.e., many simultaneous read or write processes), full network bandwidth
performance is achievable and ranged from 10 gigabits/s over a 10 GigE S3
connection to 0.35 terabits/s using Lustre on a 1200 port 10 GigE switch. These
results demonstrate that S3 is well-suited to sharing vast quantities of data
over the Internet, while Lustre is well-suited to processing large quantities
of data locally.
|
[
{
"created": "Tue, 1 Aug 2017 22:48:06 GMT",
"version": "v1"
}
] |
2018-03-06
|
[
[
"Jones",
"Michael",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Arcand",
"William",
""
],
[
"Bestor",
"David",
""
],
[
"Bergeron",
"Bill",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Houle",
"Michael",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Prout",
"Andrew",
""
],
[
"Reuther",
"Albert",
""
],
[
"Samsi",
"Siddharth",
""
],
[
"Monticiollo",
"Paul",
""
]
] |
Increasing amounts of data from varied sources, particularly in the fields of machine learning and graph analytics, are causing storage requirements to grow rapidly. A variety of technologies exist for storing and sharing these data, ranging from parallel file systems used by supercomputers to distributed block storage systems found in clouds. Relatively few comparative measurements exist to inform decisions about which storage systems are best suited for particular tasks. This work provides these measurements for two of the most popular storage technologies: Lustre and Amazon S3. Lustre is an open-source, high performance, parallel file system used by many of the largest supercomputers in the world. Amazon's Simple Storage Service, or S3, is part of the Amazon Web Services offering, and offers a scalable, distributed option to store and retrieve data from anywhere on the Internet. Parallel processing is essential for achieving high performance on modern storage systems. The performance tests used span the gamut of parallel I/O scenarios, ranging from single-client, single-node Amazon S3 and Lustre performance to a large-scale, multi-client test designed to demonstrate the capabilities of a modern storage appliance under heavy load. These results show that, when parallel I/O is used correctly (i.e., many simultaneous read or write processes), full network bandwidth performance is achievable and ranged from 10 gigabits/s over a 10 GigE S3 connection to 0.35 terabits/s using Lustre on a 1200 port 10 GigE switch. These results demonstrate that S3 is well-suited to sharing vast quantities of data over the Internet, while Lustre is well-suited to processing large quantities of data locally.
|
2301.09072
|
Shangqing Liu
|
Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, Yang Liu
|
ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning
| null | null | null | null |
cs.SE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Large-scale pre-trained models such as CodeBERT, GraphCodeBERT have earned
widespread attention from both academia and industry. Attributed to the
superior ability in code representation, they have been further applied in
multiple downstream tasks such as clone detection, code search and code
translation. However, it is also observed that these state-of-the-art
pre-trained models are susceptible to adversarial attacks. The performance of
these pre-trained models drops significantly with simple perturbations such as
renaming variable names. This weakness may be inherited by their downstream
models and thereby amplified at an unprecedented scale. To this end, we
propose an approach named ContraBERT that aims to improve the robustness of
pre-trained models via contrastive learning. Specifically, we design nine
kinds of simple and complex data augmentation operators on the programming
language (PL) and natural language (NL) data to construct different variants.
Furthermore, we continue to train the existing pre-trained models with masked
language modeling (MLM) and a contrastive pre-training task on the original
samples and their augmented variants to enhance the robustness of the model.
The extensive experiments demonstrate that ContraBERT can effectively improve
the robustness of the existing pre-trained models. Further study also confirms
that these robustness-enhanced models provide improvements as compared to
original models over four popular downstream tasks.
|
[
{
"created": "Sun, 22 Jan 2023 08:03:20 GMT",
"version": "v1"
}
] |
2023-01-24
|
[
[
"Liu",
"Shangqing",
""
],
[
"Wu",
"Bozhi",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Meng",
"Guozhu",
""
],
[
"Liu",
"Yang",
""
]
] |
Large-scale pre-trained models such as CodeBERT, GraphCodeBERT have earned widespread attention from both academia and industry. Attributed to the superior ability in code representation, they have been further applied in multiple downstream tasks such as clone detection, code search and code translation. However, it is also observed that these state-of-the-art pre-trained models are susceptible to adversarial attacks. The performance of these pre-trained models drops significantly with simple perturbations such as renaming variable names. This weakness may be inherited by their downstream models and thereby amplified at an unprecedented scale. To this end, we propose an approach named ContraBERT that aims to improve the robustness of pre-trained models via contrastive learning. Specifically, we design nine kinds of simple and complex data augmentation operators on the programming language (PL) and natural language (NL) data to construct different variants. Furthermore, we continue to train the existing pre-trained models with masked language modeling (MLM) and a contrastive pre-training task on the original samples and their augmented variants to enhance the robustness of the model. The extensive experiments demonstrate that ContraBERT can effectively improve the robustness of the existing pre-trained models. Further study also confirms that these robustness-enhanced models provide improvements as compared to original models over four popular downstream tasks.
|
1707.07857
|
Xiaohua Xie
|
Chunchao Guo, Jianhuang Lai, Xiaohua Xie
|
Motion-Appearance Interactive Encoding for Object Segmentation in
Unconstrained Videos
|
11 pages, 7 figures
| null |
10.1109/TCSVT.2019.2908779
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel method of integrating motion and appearance cues for
foreground object segmentation in unconstrained videos. Unlike conventional
methods encoding motion and appearance patterns individually, our method puts
particular emphasis on their mutual assistance. Specifically, we propose using
an interactively constrained encoding (ICE) scheme to incorporate motion and
appearance patterns into a graph that leads to a spatiotemporal energy
optimization. The reason for utilizing ICE is that both motion and appearance
cues for the same target share an underlying correlative structure and thus
can be exploited in a deeply collaborative manner. We perform ICE not only in the
initialization but also in the refinement stage of a two-layer framework for
object segmentation. This scheme allows our method to consistently capture
structural patterns about object perceptions throughout the whole framework.
Our method can be operated on superpixels instead of raw pixels to reduce the
number of graph nodes by two orders of magnitude. Moreover, we propose to
partially explore the multi-object localization problem with inter-occlusion by
weighted bipartite graph matching. Comprehensive experiments on three benchmark
datasets (i.e., SegTrack, MOViCS, and GaTech) demonstrate the effectiveness of
our approach compared with extensive state-of-the-art methods.
|
[
{
"created": "Tue, 25 Jul 2017 09:01:59 GMT",
"version": "v1"
}
] |
2019-04-17
|
[
[
"Guo",
"Chunchao",
""
],
[
"Lai",
"Jianhuang",
""
],
[
"Xie",
"Xiaohua",
""
]
] |
We present a novel method of integrating motion and appearance cues for foreground object segmentation in unconstrained videos. Unlike conventional methods encoding motion and appearance patterns individually, our method puts particular emphasis on their mutual assistance. Specifically, we propose using an interactively constrained encoding (ICE) scheme to incorporate motion and appearance patterns into a graph that leads to a spatiotemporal energy optimization. The reason for utilizing ICE is that both motion and appearance cues for the same target share an underlying correlative structure and thus can be exploited in a deeply collaborative manner. We perform ICE not only in the initialization but also in the refinement stage of a two-layer framework for object segmentation. This scheme allows our method to consistently capture structural patterns about object perceptions throughout the whole framework. Our method can be operated on superpixels instead of raw pixels to reduce the number of graph nodes by two orders of magnitude. Moreover, we propose to partially explore the multi-object localization problem with inter-occlusion by weighted bipartite graph matching. Comprehensive experiments on three benchmark datasets (i.e., SegTrack, MOViCS, and GaTech) demonstrate the effectiveness of our approach compared with extensive state-of-the-art methods.
|
2212.00306
|
Wentao Hu
|
Wentao Hu and Hui Fang
|
Decentralized Matrix Factorization with Heterogeneous Differential
Privacy
|
Accepted by the 22nd IEEE International Conference on Trust, Security
and Privacy in Computing and Communications (TrustCom-2023)
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Conventional matrix factorization relies on centralized collection of users'
data for recommendation, which might introduce an increased risk of privacy
leakage especially when the recommender is untrusted. Existing differentially
private matrix factorization methods either assume the recommender is trusted,
or can only provide a uniform level of privacy protection for all users and
items with untrusted recommender. In this paper, we propose a novel
Heterogeneous Differentially Private Matrix Factorization algorithm (denoted as
HDPMF) for an untrusted recommender. To the best of our knowledge, we are the
first to achieve heterogeneous differential privacy for decentralized matrix
factorization in the untrusted-recommender scenario. Specifically, our
framework uses a modified stretching mechanism with an innovative rescaling
scheme to achieve a better trade-off between privacy and accuracy. Meanwhile,
by allocating the privacy budget properly, we can capture homogeneous privacy
preferences within a user/item but heterogeneous privacy preferences across
different users/items. Theoretical analysis confirms that HDPMF renders a
rigorous privacy guarantee, and exhaustive experiments demonstrate its
superiority, especially under strong privacy guarantees, high-dimensional
models, and sparse datasets.
|
[
{
"created": "Thu, 1 Dec 2022 06:48:18 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Sep 2023 03:19:23 GMT",
"version": "v2"
}
] |
2023-09-19
|
[
[
"Hu",
"Wentao",
""
],
[
"Fang",
"Hui",
""
]
] |
Conventional matrix factorization relies on centralized collection of users' data for recommendation, which might introduce an increased risk of privacy leakage especially when the recommender is untrusted. Existing differentially private matrix factorization methods either assume the recommender is trusted, or can only provide a uniform level of privacy protection for all users and items with an untrusted recommender. In this paper, we propose a novel Heterogeneous Differentially Private Matrix Factorization algorithm (denoted as HDPMF) for an untrusted recommender. To the best of our knowledge, we are the first to achieve heterogeneous differential privacy for decentralized matrix factorization in the untrusted-recommender scenario. Specifically, our framework uses a modified stretching mechanism with an innovative rescaling scheme to achieve a better trade-off between privacy and accuracy. Meanwhile, by allocating the privacy budget properly, we can capture homogeneous privacy preferences within a user/item but heterogeneous privacy preferences across different users/items. Theoretical analysis confirms that HDPMF renders a rigorous privacy guarantee, and exhaustive experiments demonstrate its superiority, especially under strong privacy guarantees, high-dimensional models, and sparse datasets.
|
2407.09913
|
Haoyang Liu
|
Haoyang Liu
|
Emotion Detection through Body Gesture and Face
|
25 pages, 4 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The project leverages advanced machine learning and deep learning techniques
to address the challenge of emotion recognition by focusing on non-facial
cues, specifically hand and body gestures. Traditional emotion recognition
systems mainly rely on facial expression analysis and often ignore the rich
emotional information conveyed through body language. To bridge this gap,
this method leverages the Aff-Wild2 and DFEW databases to train and evaluate
a model capable of recognizing seven basic emotions (anger, disgust, fear,
happiness, sadness, surprise, and neutral) and estimating valence and arousal
on continuous scales.
  We leverage OpenPose for pose estimation to extract detailed body posture
features from images and videos. These features serve as input to
state-of-the-art neural network architectures, including ResNet and an ANN
for emotion classification, and fully connected layers for valence-arousal
regression analysis. This bifurcation strategy can solve classification and
regression problems in the field of emotion recognition.
The project aims to contribute to the field of affective computing by
enhancing the ability of machines to interpret and respond to human emotions in
a more comprehensive and nuanced way. By integrating multimodal data and
cutting-edge computational models, I aspire to develop a system that not only
enriches human-computer interaction but also has potential applications in
areas as diverse as mental health support, educational technology, and
autonomous vehicle systems.
|
[
{
"created": "Sat, 13 Jul 2024 15:15:50 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Liu",
"Haoyang",
""
]
] |
The project leverages advanced machine learning and deep learning techniques to address the challenge of emotion recognition by focusing on non-facial cues, specifically hand and body gestures. Traditional emotion recognition systems mainly rely on facial expression analysis and often ignore the rich emotional information conveyed through body language. To bridge this gap, this method leverages the Aff-Wild2 and DFEW databases to train and evaluate a model capable of recognizing seven basic emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) and estimating valence and arousal on continuous scales. We leverage OpenPose for pose estimation to extract detailed body posture features from images and videos. These features serve as input to state-of-the-art neural network architectures, including ResNet and an ANN for emotion classification, and fully connected layers for valence-arousal regression analysis. This bifurcation strategy can solve classification and regression problems in the field of emotion recognition. The project aims to contribute to the field of affective computing by enhancing the ability of machines to interpret and respond to human emotions in a more comprehensive and nuanced way. By integrating multimodal data and cutting-edge computational models, I aspire to develop a system that not only enriches human-computer interaction but also has potential applications in areas as diverse as mental health support, educational technology, and autonomous vehicle systems.
|
1701.00963
|
Songwei Fu
|
Songwei Fu, Chia-Yen Shih, Yuming Jiang, Matteo Ceriotti, Xintao Huan
and Pedro Jos\'e Marr\'on
|
RADIUS: A System for Detecting Anomalous Link Quality Degradation in
Wireless Sensor Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To ensure proper functioning of a Wireless Sensor Network (WSN), it is
crucial that the network is able to detect anomalies in communication quality
(e.g., RSSI), which may cause performance degradation, so that the network can
react accordingly. In this paper, we introduce RADIUS, a lightweight system for
the purpose. The design of RADIUS is aimed at minimizing the detection error
(caused by normal randomness of RSSI) in discriminating good links from weak
links and at reaching high detection accuracy under diverse link conditions and
dynamic environment changes. Central to the design is a threshold-based
decision approach that has its foundation on the Bayes decision theory. In
RADIUS, various techniques are developed to address challenges inherent in
applying this approach. In addition, through extensive experiments, proper
configuration of the parameters involved in these techniques is identified for
an indoor environment. In a prototype implementation of the RADIUS system
deployed in an indoor testbed, the results show that RADIUS is accurate in
detecting anomalous link quality degradation for all links across the network,
maintaining a stable error rate of 6.13% on average.
|
[
{
"created": "Wed, 4 Jan 2017 10:56:31 GMT",
"version": "v1"
}
] |
2017-01-05
|
[
[
"Fu",
"Songwei",
""
],
[
"Shih",
"Chia-Yen",
""
],
[
"Jiang",
"Yuming",
""
],
[
"Ceriotti",
"Matteo",
""
],
[
"Huan",
"Xintao",
""
],
[
"Marrón",
"Pedro José",
""
]
] |
To ensure proper functioning of a Wireless Sensor Network (WSN), it is crucial that the network is able to detect anomalies in communication quality (e.g., RSSI), which may cause performance degradation, so that the network can react accordingly. In this paper, we introduce RADIUS, a lightweight system for the purpose. The design of RADIUS is aimed at minimizing the detection error (caused by normal randomness of RSSI) in discriminating good links from weak links and at reaching high detection accuracy under diverse link conditions and dynamic environment changes. Central to the design is a threshold-based decision approach that has its foundation on the Bayes decision theory. In RADIUS, various techniques are developed to address challenges inherent in applying this approach. In addition, through extensive experiments, proper configuration of the parameters involved in these techniques is identified for an indoor environment. In a prototype implementation of the RADIUS system deployed in an indoor testbed, the results show that RADIUS is accurate in detecting anomalous link quality degradation for all links across the network, maintaining a stable error rate of 6.13% on average.
|