| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2110.07774
|
Amelia Regan
|
Hesam Sahfienya and Amelia C. Regan
|
4D flight trajectory prediction using a hybrid Deep Learning prediction
method based on ADS-B technology: a case study of Hartsfield-Jackson Atlanta
International Airport(ATL)
|
17 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The core of any flight schedule is the trajectories. In particular, 4D
trajectories are the most crucial component for flight attribute
prediction. Each trajectory contains spatial and temporal features that are
associated with uncertainties, which make the prediction process complex.
Today, because of the increasing demand for air transportation, it is
essential for airports and airlines to have an optimized schedule that uses
the full potential of the airport's infrastructure. This is possible using
advanced trajectory prediction methods. This paper proposes a novel hybrid
deep learning model that extracts the spatial and temporal features while
accounting for the uncertainty of the prediction model, applied to
Hartsfield-Jackson Atlanta International Airport (ATL). Automatic Dependent
Surveillance-Broadcast (ADS-B) data are used as input to the models. This
research is conducted in three steps: (a) data preprocessing; (b)
prediction by a hybrid Convolutional Neural Network and Gated Recurrent
Unit (CNN-GRU) model along with a 3D-CNN model; (c) comparison of the
proposed model's performance with that of the baseline models on the
experimental results. The deep model's uncertainty is handled using Monte
Carlo dropout (MC-Dropout). MC-Dropout layers are added to the network to
enhance the model's prediction performance through a robust approach of
randomly switching off neurons. The results show that the proposed model
has lower error measurements than the other models (i.e., 3D-CNN, CNN-GRU).
The model with MC-Dropout reduces the error further by an average of 21%.
|
[
{
"created": "Thu, 14 Oct 2021 23:48:44 GMT",
"version": "v1"
}
] |
2021-10-18
|
[
[
"Sahfienya",
"Hesam",
""
],
[
"Regan",
"Amelia C.",
""
]
] |
The core of any flight schedule is the trajectories. In particular, 4D trajectories are the most crucial component for flight attribute prediction. Each trajectory contains spatial and temporal features that are associated with uncertainties, which make the prediction process complex. Today, because of the increasing demand for air transportation, it is essential for airports and airlines to have an optimized schedule that uses the full potential of the airport's infrastructure. This is possible using advanced trajectory prediction methods. This paper proposes a novel hybrid deep learning model that extracts the spatial and temporal features while accounting for the uncertainty of the prediction model, applied to Hartsfield-Jackson Atlanta International Airport (ATL). Automatic Dependent Surveillance-Broadcast (ADS-B) data are used as input to the models. This research is conducted in three steps: (a) data preprocessing; (b) prediction by a hybrid Convolutional Neural Network and Gated Recurrent Unit (CNN-GRU) model along with a 3D-CNN model; (c) comparison of the proposed model's performance with that of the baseline models on the experimental results. The deep model's uncertainty is handled using Monte Carlo dropout (MC-Dropout). MC-Dropout layers are added to the network to enhance the model's prediction performance through a robust approach of randomly switching off neurons. The results show that the proposed model has lower error measurements than the other models (i.e., 3D-CNN, CNN-GRU). The model with MC-Dropout reduces the error further by an average of 21%.
|
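The MC-Dropout technique named in the abstract above, keeping dropout active at inference and aggregating many stochastic forward passes, can be sketched independently of the paper's CNN-GRU architecture. The toy two-layer network, its weights, and the sample counts below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression "network": one hidden layer with dropout kept
# active at inference time (the essence of MC-Dropout).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_p=0.5, train_mode=True):
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    if train_mode:                         # dropout mask resampled per call
        mask = rng.random(h.shape) > drop_p
        h = h * mask / (1.0 - drop_p)      # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    # Run the stochastic forward pass many times and aggregate:
    # the mean is the prediction, the std a per-point uncertainty.
    preds = np.stack([forward(x, train_mode=True) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(8, 4))                # 8 dummy input points
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)               # (8, 1) (8, 1)
```

The spread of the sampled predictions (`std`) is the per-point uncertainty estimate; the 21% error reduction reported in the abstract is a property of the paper's full model, not of this sketch.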
1610.01732
|
Lei Tai
|
Lei Tai, Haoyang Ye, Qiong Ye, Ming Liu
|
PCA-aided Fully Convolutional Networks for Semantic Segmentation of
Multi-channel fMRI
|
ICAR 2017 - 18th International Conference on Advanced Robotics, Best
Student Paper Award, 6 figures
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Semantic segmentation of functional magnetic resonance imaging (fMRI) is of
great value for pathology diagnosis and for the decision systems of medical
robots. Multi-channel fMRI provides more information about pathological
features, but the increased amount of data complicates feature detection.
This paper proposes a principal component analysis (PCA)-aided fully
convolutional network designed specifically for multi-channel fMRI. We
transfer the learned weights of contemporary classification networks to the
segmentation task by fine-tuning. The results of the convolutional network
are compared with various methods, e.g., k-NN. A new labeling strategy is
proposed to solve the semantic segmentation problem with unclear boundaries.
Even with a small training dataset, the test results demonstrate that our
model outperforms other pathological feature detection methods. Moreover,
its forward inference takes only 90 milliseconds for a single set of fMRI
data. To our knowledge, this is the first work to realize pixel-wise
labeling of multi-channel magnetic resonance images using an FCN.
|
[
{
"created": "Thu, 6 Oct 2016 05:08:15 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jun 2017 15:44:09 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jun 2017 12:50:09 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Jul 2017 15:52:08 GMT",
"version": "v4"
}
] |
2017-07-12
|
[
[
"Tai",
"Lei",
""
],
[
"Ye",
"Haoyang",
""
],
[
"Ye",
"Qiong",
""
],
[
"Liu",
"Ming",
""
]
] |
Semantic segmentation of functional magnetic resonance imaging (fMRI) is of great value for pathology diagnosis and for the decision systems of medical robots. Multi-channel fMRI provides more information about pathological features, but the increased amount of data complicates feature detection. This paper proposes a principal component analysis (PCA)-aided fully convolutional network designed specifically for multi-channel fMRI. We transfer the learned weights of contemporary classification networks to the segmentation task by fine-tuning. The results of the convolutional network are compared with various methods, e.g., k-NN. A new labeling strategy is proposed to solve the semantic segmentation problem with unclear boundaries. Even with a small training dataset, the test results demonstrate that our model outperforms other pathological feature detection methods. Moreover, its forward inference takes only 90 milliseconds for a single set of fMRI data. To our knowledge, this is the first work to realize pixel-wise labeling of multi-channel magnetic resonance images using an FCN.
|
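The PCA-aided step described above, reducing the channel dimension of multi-channel data before feeding it to a convolutional network, can be sketched with a plain covariance eigendecomposition. The array shapes and the choice of `k` below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_reduce_channels(img, k):
    """Project a (H, W, C) multi-channel image onto its k leading
    principal components across the channel axis."""
    h, w, c = img.shape
    flat = img.reshape(-1, c)                      # pixels as C-dim samples
    flat = flat - flat.mean(axis=0)                # center each channel
    cov = flat.T @ flat / (flat.shape[0] - 1)      # C x C channel covariance
    vals, vecs = np.linalg.eigh(cov)               # eigenvalues ascending
    top = vecs[:, np.argsort(vals)[::-1][:k]]      # k leading eigenvectors
    return (flat @ top).reshape(h, w, k)

volume = rng.normal(size=(32, 32, 10))             # hypothetical 10-channel scan
reduced = pca_reduce_channels(volume, k=3)
print(reduced.shape)                               # (32, 32, 3)
```

The reduced volume would then be the input to the fully convolutional network; the fine-tuning and labeling stages are not modeled here.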
1908.00205
|
Emma Xue
|
Shan Xue, Jie Lu, Guangquan Zhang
|
Cross-domain Network Representations
| null |
Pattern Recognition 94 (2019): 135-148
|
10.1016/j.patcog.2019.05.009
| null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  The purpose of network representation is to learn a set of latent features
by extracting community information from network structures, providing
knowledge for machine learning tasks. Recent research has driven significant
progress in network representation by employing random walks as the network
sampling strategy. Nevertheless, existing approaches rely on domain-specific
rich community structures and fail on networks that lack topological
information in their own domain. In this paper, we propose a novel algorithm
for cross-domain network representation, named CDNR. By generating random
walks from a structurally rich domain and transferring the knowledge carried
by these walks across domains, it enables network representation for the
structurally scarce domain as well. To be specific, CDNR is realized by a
cross-domain two-layer node-scale balance algorithm and a cross-domain
two-layer knowledge transfer algorithm in the framework of cross-domain
two-layer random walk learning. Experiments on various real-world datasets
demonstrate the effectiveness of CDNR for universal networks in an
unsupervised way.
|
[
{
"created": "Thu, 1 Aug 2019 04:32:15 GMT",
"version": "v1"
}
] |
2019-08-02
|
[
[
"Xue",
"Shan",
""
],
[
"Lu",
"Jie",
""
],
[
"Zhang",
"Guangquan",
""
]
] |
The purpose of network representation is to learn a set of latent features by extracting community information from network structures, providing knowledge for machine learning tasks. Recent research has driven significant progress in network representation by employing random walks as the network sampling strategy. Nevertheless, existing approaches rely on domain-specific rich community structures and fail on networks that lack topological information in their own domain. In this paper, we propose a novel algorithm for cross-domain network representation, named CDNR. By generating random walks from a structurally rich domain and transferring the knowledge carried by these walks across domains, it enables network representation for the structurally scarce domain as well. To be specific, CDNR is realized by a cross-domain two-layer node-scale balance algorithm and a cross-domain two-layer knowledge transfer algorithm in the framework of cross-domain two-layer random walk learning. Experiments on various real-world datasets demonstrate the effectiveness of CDNR for universal networks in an unsupervised way.
|
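CDNR builds on random walks as the network sampling strategy. A minimal walk sampler over an adjacency-dict graph looks like the following; the toy graph and walk parameters are illustrative, and the cross-domain transfer layers of CDNR are not modeled here:

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Sample fixed-length random walks from every node of a graph
    given as an adjacency dict {node: [neighbours]}."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:               # dead end: stop the walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# A small toy graph standing in for the "structurally rich" source domain.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(graph)
print(len(walks))  # 8 walks: 2 per node
```

In the cross-domain setting, walks like these from the rich domain would supply the knowledge transferred to the structurally scarce domain.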
2110.11405
|
Gautam Singh
|
Gautam Singh, Fei Deng and Sungjin Ahn
|
Illiterate DALL-E Learns to Compose
|
Published as a conference paper at ICLR 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Although DALL-E has shown impressive composition-based systematic
generalization in image generation, it requires a dataset of text-image
pairs, and its compositionality is provided by the text. In contrast,
object-centric representation models like the Slot Attention model learn
composable representations without text prompts. However, unlike DALL-E,
their ability to generalize systematically for zero-shot generation is
significantly limited. In this paper, we propose a simple but novel
slot-based autoencoding architecture, called SLATE, that combines the best
of both worlds: learning object-centric representations that allow
systematic generalization in zero-shot image generation without text. As
such, this model can also be seen as an illiterate DALL-E model. Unlike the
pixel-mixture decoders of existing object-centric representation models, we
propose to use the Image GPT decoder conditioned on the slots to capture
complex interactions among the slots and pixels. In experiments, we show
that this simple and easy-to-implement architecture, which requires no text
prompt, achieves significant improvement in in-distribution and
out-of-distribution (zero-shot) image generation and yields slot-attention
structures qualitatively comparable to or better than those of models based
on mixture decoders.
|
[
{
"created": "Sun, 17 Oct 2021 16:40:47 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Oct 2021 18:46:24 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Mar 2022 21:10:39 GMT",
"version": "v3"
}
] |
2022-03-16
|
[
[
"Singh",
"Gautam",
""
],
[
"Deng",
"Fei",
""
],
[
"Ahn",
"Sungjin",
""
]
] |
Although DALL-E has shown impressive composition-based systematic generalization in image generation, it requires a dataset of text-image pairs, and its compositionality is provided by the text. In contrast, object-centric representation models like the Slot Attention model learn composable representations without text prompts. However, unlike DALL-E, their ability to generalize systematically for zero-shot generation is significantly limited. In this paper, we propose a simple but novel slot-based autoencoding architecture, called SLATE, that combines the best of both worlds: learning object-centric representations that allow systematic generalization in zero-shot image generation without text. As such, this model can also be seen as an illiterate DALL-E model. Unlike the pixel-mixture decoders of existing object-centric representation models, we propose to use the Image GPT decoder conditioned on the slots to capture complex interactions among the slots and pixels. In experiments, we show that this simple and easy-to-implement architecture, which requires no text prompt, achieves significant improvement in in-distribution and out-of-distribution (zero-shot) image generation and yields slot-attention structures qualitatively comparable to or better than those of models based on mixture decoders.
|
1805.11572
|
Sebastian Lunz
|
Sebastian Lunz, Ozan Öktem, Carola-Bibiane Schönlieb
|
Adversarial Regularizers in Inverse Problems
|
published at NeurIPS 2018
| null | null | null |
cs.CV cs.LG math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Inverse problems in medical imaging and computer vision are traditionally
solved using purely model-based methods. Among these, variational
regularization models are one of the most popular approaches. We propose a
new framework for applying data-driven approaches to inverse problems, using
a neural network as a regularization functional. The network learns to
discriminate between the distribution of ground truth images and the
distribution of unregularized reconstructions. Once trained, the network is
applied to the inverse problem by solving the corresponding variational
problem. Unlike other data-based approaches to inverse problems, the
algorithm can be applied even if only unsupervised training data is
available. Experiments demonstrate the potential of the framework for
denoising on the BSDS dataset and for computed tomography reconstruction on
the LIDC dataset.
|
[
{
"created": "Tue, 29 May 2018 16:40:37 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Jan 2019 17:24:06 GMT",
"version": "v2"
}
] |
2019-01-14
|
[
[
"Lunz",
"Sebastian",
""
],
[
"Öktem",
"Ozan",
""
],
[
"Schönlieb",
"Carola-Bibiane",
""
]
] |
Inverse problems in medical imaging and computer vision are traditionally solved using purely model-based methods. Among these, variational regularization models are one of the most popular approaches. We propose a new framework for applying data-driven approaches to inverse problems, using a neural network as a regularization functional. The network learns to discriminate between the distribution of ground truth images and the distribution of unregularized reconstructions. Once trained, the network is applied to the inverse problem by solving the corresponding variational problem. Unlike other data-based approaches to inverse problems, the algorithm can be applied even if only unsupervised training data is available. Experiments demonstrate the potential of the framework for denoising on the BSDS dataset and for computed tomography reconstruction on the LIDC dataset.
|
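The variational solve described above, minimizing data fidelity plus a learned regularizer, reduces to gradient descent on the objective. Since a trained critic network is not available here, a simple Tikhonov gradient stands in for the learned regularizer's gradient; the toy operator, step size, and iteration count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(A, y, reg_grad, lam=0.01, step=0.005, iters=2000):
    """Solve min_x ||Ax - y||^2 + lam * R(x) by gradient descent.
    `reg_grad` plays the role of the gradient of the learned
    regularizer; any differentiable surrogate can be plugged in."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + lam * reg_grad(x)
        x -= step * grad
    return x

A = rng.normal(size=(20, 10))          # toy forward operator
x_true = rng.normal(size=10)
y = A @ x_true + 0.01 * rng.normal(size=20)

# Stand-in regularizer gradient (Tikhonov, R(x) = ||x||^2); the paper
# would use the gradient of the trained adversarial critic instead.
x_hat = reconstruct(A, y, reg_grad=lambda x: 2 * x)
print(np.linalg.norm(x_hat - x_true))
```

The key structural point survives the substitution: once the regularizer (learned or not) is differentiable, reconstruction is an ordinary first-order variational solve.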
1603.06200
|
Florian Geigl
|
Florian Geigl, Kristina Lerman, Simon Walk, Markus Strohmaier, Denis
Helic
|
Assessing the Navigational Effects of Click Biases and Link Insertion on
the Web
|
This paper is currently under review at ACM Hypertext 2016
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Websites have an inherent interest in steering user navigation in order to,
for example, increase sales of specific products or categories, or to guide
users towards specific information. In general, website administrators can use
the following two strategies to influence their visitors' navigation behavior.
First, they can introduce click biases to reinforce specific links on their
website by changing their visual appearance, for example, by locating them on
the top of the page. Second, they can utilize link insertion to generate new
paths for users to navigate over. In this paper, we present a novel approach
for measuring the potential effects of these two strategies on user navigation.
Our results suggest that, depending on the pages for which we want to increase
user visits, optimal link modification strategies vary. Moreover, simple
topological measures can be used as proxies for assessing the impact of the
intended changes on the navigation of users, even before these changes are
implemented.
|
[
{
"created": "Sun, 20 Mar 2016 11:07:48 GMT",
"version": "v1"
}
] |
2016-03-22
|
[
[
"Geigl",
"Florian",
""
],
[
"Lerman",
"Kristina",
""
],
[
"Walk",
"Simon",
""
],
[
"Strohmaier",
"Markus",
""
],
[
"Helic",
"Denis",
""
]
] |
Websites have an inherent interest in steering user navigation in order to, for example, increase sales of specific products or categories, or to guide users towards specific information. In general, website administrators can use the following two strategies to influence their visitors' navigation behavior. First, they can introduce click biases to reinforce specific links on their website by changing their visual appearance, for example, by locating them on the top of the page. Second, they can utilize link insertion to generate new paths for users to navigate over. In this paper, we present a novel approach for measuring the potential effects of these two strategies on user navigation. Our results suggest that, depending on the pages for which we want to increase user visits, optimal link modification strategies vary. Moreover, simple topological measures can be used as proxies for assessing the impact of the intended changes on the navigation of users, even before these changes are implemented.
|
1405.6058
|
Francesco Gadaleta
|
Francesco Gadaleta, Raoul Strackx, Nick Nikiforakis, Frank Piessens,
Wouter Joosen
|
On the effectiveness of virtualization-based security
|
12 pages, 07-10 May 2012, Max Planck Institute IT Security, Freiburg
(Germany)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/3.0/
|
  Protecting commodity operating systems and applications against malware
and targeted attacks has proven to be difficult. In recent years,
virtualization has received attention from security researchers who utilize
it to harden existing systems and provide strong security guarantees. This
has led to interesting use cases such as cloud computing, where possibly
sensitive data is processed on remote, third-party systems. The migration
and processing of data on remote servers poses new technical and legal
questions, such as which security measures should be taken to protect this
data, or how it can be proven that the execution of code was not tampered
with. In this paper we focus on the technological aspects. We discuss the
various possibilities of security within the virtualization layer, and we
use as a case study HelloRootkitty, a lightweight invariance-enforcing
framework which allows an operating system to recover from kernel-level
attacks. In addition to HelloRootkitty, we also explore the use of special
hardware chips as a way of further protecting and guaranteeing the
integrity of a virtualized system.
|
[
{
"created": "Thu, 22 May 2014 07:56:53 GMT",
"version": "v1"
}
] |
2014-05-26
|
[
[
"Gadaleta",
"Francesco",
""
],
[
"Strackx",
"Raoul",
""
],
[
"Nikiforakis",
"Nick",
""
],
[
"Piessens",
"Frank",
""
],
[
"Joosen",
"Wouter",
""
]
] |
Protecting commodity operating systems and applications against malware and targeted attacks has proven to be difficult. In recent years, virtualization has received attention from security researchers who utilize it to harden existing systems and provide strong security guarantees. This has led to interesting use cases such as cloud computing, where possibly sensitive data is processed on remote, third-party systems. The migration and processing of data on remote servers poses new technical and legal questions, such as which security measures should be taken to protect this data, or how it can be proven that the execution of code was not tampered with. In this paper we focus on the technological aspects. We discuss the various possibilities of security within the virtualization layer, and we use as a case study HelloRootkitty, a lightweight invariance-enforcing framework which allows an operating system to recover from kernel-level attacks. In addition to HelloRootkitty, we also explore the use of special hardware chips as a way of further protecting and guaranteeing the integrity of a virtualized system.
|
2404.06437
|
Dimitrios Michail
|
Dimitrios Michail and Lefki-Ioanna Panagiotou and Charalampos Davalas
and Ioannis Prapas and Spyros Kondylatos and Nikolaos Ioannis Bountos and
Ioannis Papoutsis
|
Seasonal Fire Prediction using Spatio-Temporal Deep Neural Networks
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
  With climate change expected to exacerbate fire weather conditions, the
accurate anticipation of wildfires on a global scale becomes increasingly
crucial for disaster mitigation. In this study, we utilize SeasFire, a
comprehensive global wildfire dataset with climate, vegetation, oceanic
indices, and human-related variables, to enable seasonal wildfire
forecasting with machine learning. For the predictive analysis, we train
deep learning models with different architectures that capture the
spatio-temporal context leading to wildfires. Our investigation focuses on
assessing the effectiveness of these models in predicting the presence of
burned areas at varying forecasting time horizons globally, extending up to
six months into the future, and on how different spatial and/or temporal
contexts affect the performance of the models. Our findings demonstrate the
great potential of deep learning models in seasonal fire forecasting;
longer input time series lead to more robust predictions across varying
forecasting horizons, while integrating spatial information to capture
wildfire spatio-temporal dynamics boosts performance. Finally, our results
suggest that in order to enhance performance at longer forecasting
horizons, a spatially larger receptive field needs to be considered.
|
[
{
"created": "Tue, 9 Apr 2024 16:28:54 GMT",
"version": "v1"
}
] |
2024-04-10
|
[
[
"Michail",
"Dimitrios",
""
],
[
"Panagiotou",
"Lefki-Ioanna",
""
],
[
"Davalas",
"Charalampos",
""
],
[
"Prapas",
"Ioannis",
""
],
[
"Kondylatos",
"Spyros",
""
],
[
"Bountos",
"Nikolaos Ioannis",
""
],
[
"Papoutsis",
"Ioannis",
""
]
] |
With climate change expected to exacerbate fire weather conditions, the accurate anticipation of wildfires on a global scale becomes increasingly crucial for disaster mitigation. In this study, we utilize SeasFire, a comprehensive global wildfire dataset with climate, vegetation, oceanic indices, and human-related variables, to enable seasonal wildfire forecasting with machine learning. For the predictive analysis, we train deep learning models with different architectures that capture the spatio-temporal context leading to wildfires. Our investigation focuses on assessing the effectiveness of these models in predicting the presence of burned areas at varying forecasting time horizons globally, extending up to six months into the future, and on how different spatial and/or temporal contexts affect the performance of the models. Our findings demonstrate the great potential of deep learning models in seasonal fire forecasting; longer input time series lead to more robust predictions across varying forecasting horizons, while integrating spatial information to capture wildfire spatio-temporal dynamics boosts performance. Finally, our results suggest that in order to enhance performance at longer forecasting horizons, a spatially larger receptive field needs to be considered.
|
2006.10712
|
Ertunc Erdil
|
Ertunc Erdil, Krishna Chaitanya, Neerav Karani, Ender Konukoglu
|
Task-agnostic Out-of-Distribution Detection Using Kernel Density
Estimation
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  In recent years, researchers have proposed a number of successful methods
for out-of-distribution (OOD) detection in deep neural networks (DNNs). So
far, the scope of the highly accurate methods has been limited to
image-level classification tasks. However, attempts at generally applicable
methods beyond classification have not attained similar performance. In
this paper, we address this limitation by proposing a simple yet effective
task-agnostic OOD detection method. We estimate the probability density
functions (pdfs) of intermediate features of a pre-trained DNN by
performing kernel density estimation (KDE) on the training dataset. As the
direct application of KDE to feature maps is hindered by their high
dimensionality, we use a set of lower-dimensional marginalized KDE models
instead of a single high-dimensional one. At test time, we evaluate the
pdfs on a test sample and produce a confidence score that indicates whether
the sample is OOD. The use of KDE eliminates the need for simplifying
assumptions about the underlying feature pdfs and makes the proposed method
task-agnostic. We perform extensive experiments on classification tasks
using benchmark datasets for OOD detection. Additionally, we perform
experiments on medical image segmentation tasks using brain MRI datasets.
The results demonstrate that the proposed method consistently achieves high
OOD detection performance in both classification and segmentation tasks and
improves on the state of the art in almost all cases. Code is available at
https://github.com/eerdil/task_agnostic_ood
|
[
{
"created": "Thu, 18 Jun 2020 17:46:06 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Oct 2020 20:39:03 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Nov 2020 11:29:14 GMT",
"version": "v3"
},
{
"created": "Tue, 30 Mar 2021 21:55:47 GMT",
"version": "v4"
}
] |
2021-04-01
|
[
[
"Erdil",
"Ertunc",
""
],
[
"Chaitanya",
"Krishna",
""
],
[
"Karani",
"Neerav",
""
],
[
"Konukoglu",
"Ender",
""
]
] |
In recent years, researchers have proposed a number of successful methods for out-of-distribution (OOD) detection in deep neural networks (DNNs). So far, the scope of the highly accurate methods has been limited to image-level classification tasks. However, attempts at generally applicable methods beyond classification have not attained similar performance. In this paper, we address this limitation by proposing a simple yet effective task-agnostic OOD detection method. We estimate the probability density functions (pdfs) of intermediate features of a pre-trained DNN by performing kernel density estimation (KDE) on the training dataset. As the direct application of KDE to feature maps is hindered by their high dimensionality, we use a set of lower-dimensional marginalized KDE models instead of a single high-dimensional one. At test time, we evaluate the pdfs on a test sample and produce a confidence score that indicates whether the sample is OOD. The use of KDE eliminates the need for simplifying assumptions about the underlying feature pdfs and makes the proposed method task-agnostic. We perform extensive experiments on classification tasks using benchmark datasets for OOD detection. Additionally, we perform experiments on medical image segmentation tasks using brain MRI datasets. The results demonstrate that the proposed method consistently achieves high OOD detection performance in both classification and segmentation tasks and improves on the state of the art in almost all cases. Code is available at https://github.com/eerdil/task_agnostic_ood
|
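The marginalized-KDE scoring described above can be sketched directly: fit a 1-D Gaussian KDE per feature dimension on training features and average the log-densities at test time. The stand-in "features" below are synthetic Gaussians, not activations of a pre-trained DNN, and the bandwidth is an arbitrary choice:

```python
import numpy as np

def kde_logpdf_1d(train, x, bw=0.2):
    """Gaussian KDE log-density of points x under 1-D samples `train`."""
    diffs = (x[:, None] - train[None, :]) / bw
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return np.log(k.mean(axis=1) / bw + 1e-12)

def ood_score(train_feats, test_feats, bw=0.2):
    """Average the marginal KDE log-densities over feature dimensions;
    lower scores mean more out-of-distribution."""
    scores = [kde_logpdf_1d(train_feats[:, d], test_feats[:, d], bw)
              for d in range(train_feats.shape[1])]
    return np.mean(scores, axis=0)

rng = np.random.default_rng(0)
train = rng.normal(0, 1, size=(500, 8))       # stand-in training features
in_dist = rng.normal(0, 1, size=(50, 8))      # same distribution
ood = rng.normal(6, 1, size=(50, 8))          # strongly shifted: OOD

print(ood_score(train, in_dist).mean() > ood_score(train, ood).mean())  # True
```

The marginalization is the key trick from the abstract: one cheap 1-D KDE per dimension sidesteps the curse of dimensionality that a single joint KDE over the feature map would face.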
1702.06969
|
Vedat Levi Alev
|
Vedat Levi Alev, Lap Chi Lau
|
Approximating Unique Games Using Low Diameter Graph Decomposition
|
15 pages, 2 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We design approximation algorithms for Unique Games when the constraint graph
admits good low diameter graph decomposition. For the ${\sf Max2Lin}_k$ problem
in $K_r$-minor free graphs, when there is an assignment satisfying
$1-\varepsilon$ fraction of constraints, we present an algorithm that produces
an assignment satisfying $1-O(r\varepsilon)$ fraction of constraints, with the
approximation ratio independent of the alphabet size. A corollary is an
improved approximation algorithm for the ${\sf MaxCut}$ problem for $K_r$-minor
free graphs. For general Unique Games in $K_r$-minor free graphs, we provide
another algorithm that produces an assignment satisfying $1-O(r
\sqrt{\varepsilon})$ fraction of constraints.
Our approach is to round a linear programming relaxation to find a minimum
subset of edges that intersects all the inconsistent cycles. We show that it is
possible to apply the low diameter graph decomposition technique on the
constraint graph directly, rather than to work on the label extended graph as
in previous algorithms for Unique Games. The same approach applies when the
constraint graph is of genus $g$, and we get similar results with $r$ replaced
by $\log g$ in the ${\sf Max2Lin}_k$ problem and by $\sqrt{\log g}$ in the
general problem. The former result generalizes the result of Gupta-Talwar for
Unique Games in the ${\sf Max2Lin}_k$ case, and the latter result generalizes
the result of Trevisan for general Unique Games.
|
[
{
"created": "Wed, 22 Feb 2017 19:08:25 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Feb 2017 05:09:46 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jun 2017 16:23:37 GMT",
"version": "v3"
},
{
"created": "Fri, 17 Nov 2017 13:25:16 GMT",
"version": "v4"
},
{
"created": "Wed, 29 Nov 2017 20:02:24 GMT",
"version": "v5"
}
] |
2017-12-01
|
[
[
"Alev",
"Vedat Levi",
""
],
[
"Lau",
"Lap Chi",
""
]
] |
We design approximation algorithms for Unique Games when the constraint graph admits good low diameter graph decomposition. For the ${\sf Max2Lin}_k$ problem in $K_r$-minor free graphs, when there is an assignment satisfying $1-\varepsilon$ fraction of constraints, we present an algorithm that produces an assignment satisfying $1-O(r\varepsilon)$ fraction of constraints, with the approximation ratio independent of the alphabet size. A corollary is an improved approximation algorithm for the ${\sf MaxCut}$ problem for $K_r$-minor free graphs. For general Unique Games in $K_r$-minor free graphs, we provide another algorithm that produces an assignment satisfying $1-O(r \sqrt{\varepsilon})$ fraction of constraints. Our approach is to round a linear programming relaxation to find a minimum subset of edges that intersects all the inconsistent cycles. We show that it is possible to apply the low diameter graph decomposition technique on the constraint graph directly, rather than to work on the label extended graph as in previous algorithms for Unique Games. The same approach applies when the constraint graph is of genus $g$, and we get similar results with $r$ replaced by $\log g$ in the ${\sf Max2Lin}_k$ problem and by $\sqrt{\log g}$ in the general problem. The former result generalizes the result of Gupta-Talwar for Unique Games in the ${\sf Max2Lin}_k$ case, and the latter result generalizes the result of Trevisan for general Unique Games.
|
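A low-diameter graph decomposition, the tool the rounding above relies on, can be illustrated with a simple deterministic BFS ball-carving pass (the paper's scheme is randomized and comes with guarantees this sketch does not attempt):

```python
from collections import deque

def ball_carve(adj, radius):
    """Partition vertices into clusters of BFS radius <= `radius`
    by repeatedly carving a ball around an uncovered vertex."""
    unassigned = set(adj)
    clusters = []
    while unassigned:
        center = min(unassigned)              # deterministic center choice
        ball, frontier = {center}, deque([(center, 0)])
        while frontier:
            v, d = frontier.popleft()
            if d == radius:                   # stop expanding at the radius
                continue
            for u in adj[v]:
                if u in unassigned and u not in ball:
                    ball.add(u)
                    frontier.append((u, d + 1))
        clusters.append(ball)
        unassigned -= ball                    # carve the ball out
    return clusters

# Path graph on 6 vertices; radius-1 balls give clusters of diameter <= 2.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(ball_carve(path, radius=1))  # [{0, 1}, {2, 3}, {4, 5}]
```

Each cluster has small diameter by construction; the quality of such a decomposition (few cut edges) is what the minor-free and bounded-genus assumptions buy in the paper.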
0902.1853
|
Arash Amini
|
F. Marvasti, A. Amini, F. Haddadi, M. Soltanolkotabi, B. H. Khalaj, A.
Aldroubi, S. Holm, S. Sanei and J. Chambers
|
A Unified Approach to Sparse Signal Processing
|
43 pages, 40 figures, 15 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A unified view of sparse signal processing is presented in tutorial form by
bringing together various fields. For each of these fields, various algorithms
and techniques, which have been developed to leverage sparsity, are described
succinctly. The common benefits of significant reduction in sampling rate and
processing manipulations are revealed.
The key applications of sparse signal processing are sampling, coding,
spectral estimation, array processing, component analysis, and multipath
channel estimation. In terms of reconstruction algorithms, linkages are made
with random sampling, compressed sensing and rate of innovation. The redundancy
introduced by channel coding in finite/real Galois fields is then related to
sampling with similar reconstruction algorithms. The methods of Prony,
Pisarenko, and MUSIC are next discussed for sparse frequency domain
representations. Specifically, the relations of the approach of Prony to an
annihilating filter and Error Locator Polynomials in coding are emphasized; the
Pisarenko and MUSIC methods are further improvements of the Prony method. Such
spectral estimation methods are then related to multi-source location and DOA
estimation in array processing. The notions of sparse array beamforming and
sparse sensor networks are also introduced. Sparsity in unobservable source
signals is also shown to facilitate source separation in SCA; the algorithms
developed in this area are also widely used in compressed sensing. Finally, the
multipath channel estimation problem is shown to have a sparse formulation;
algorithms similar to sampling and coding are used to estimate OFDM channels.
|
[
{
"created": "Wed, 11 Feb 2009 16:58:19 GMT",
"version": "v1"
}
] |
2009-02-12
|
[
[
"Marvasti",
"F.",
""
],
[
"Amini",
"A.",
""
],
[
"Haddadi",
"F.",
""
],
[
"Soltanolkotabi",
"M.",
""
],
[
"Khalaj",
"B. H.",
""
],
[
"Aldroubi",
"A.",
""
],
[
"Holm",
"S.",
""
],
[
"Sanei",
"S.",
""
],
[
"Chambers",
"J.",
""
]
] |
A unified view of sparse signal processing is presented in tutorial form by bringing together various fields. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common benefits of significant reduction in sampling rate and processing manipulations are revealed. The key applications of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of reconstruction algorithms, linkages are made with random sampling, compressed sensing and rate of innovation. The redundancy introduced by channel coding in finite/real Galois fields is then related to sampling with similar reconstruction algorithms. The methods of Prony, Pisarenko, and MUSIC are next discussed for sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter and Error Locator Polynomials in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method. Such spectral estimation methods are then related to multi-source location and DOA estimation in array processing. The notions of sparse array beamforming and sparse sensor networks are also introduced. Sparsity in unobservable source signals is also shown to facilitate source separation in SCA; the algorithms developed in this area are also widely used in compressed sensing. Finally, the multipath channel estimation problem is shown to have a sparse formulation; algorithms similar to sampling and coding are used to estimate OFDM channels.
|
1905.02450
|
Kaitao Song
|
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
|
MASS: Masked Sequence to Sequence Pre-training for Language Generation
|
Accepted by ICML 2019
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-training and fine-tuning, e.g., BERT, have achieved great success in
language understanding by transferring knowledge from a rich-resource
pre-training task to low/zero-resource downstream tasks. Inspired by the
success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for
the encoder-decoder based language generation tasks. MASS adopts the
encoder-decoder framework to reconstruct a sentence fragment given the
remaining part of the sentence: its encoder takes a sentence with a randomly
masked fragment (several consecutive tokens) as input, and its decoder tries to
predict this masked fragment. In this way, MASS can jointly train the encoder
and decoder to develop the capability of representation extraction and language
modeling. By further fine-tuning on a variety of zero/low-resource language
generation tasks, including neural machine translation, text summarization and
conversational response generation (3 tasks and totally 8 datasets), MASS
achieves significant improvements over the baselines without pre-training or
with other pre-training methods. Specifically, we achieve state-of-the-art
accuracy (37.5 in terms of BLEU score) on the unsupervised English-French
translation, even beating the early attention-based supervised model.
|
[
{
"created": "Tue, 7 May 2019 10:13:04 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2019 06:46:26 GMT",
"version": "v2"
},
{
"created": "Mon, 13 May 2019 11:43:27 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Jun 2019 03:43:41 GMT",
"version": "v4"
},
{
"created": "Fri, 21 Jun 2019 04:36:52 GMT",
"version": "v5"
}
] |
2019-06-24
|
[
[
"Song",
"Kaitao",
""
],
[
"Tan",
"Xu",
""
],
[
"Qin",
"Tao",
""
],
[
"Lu",
"Jianfeng",
""
],
[
"Liu",
"Tie-Yan",
""
]
] |
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks. Inspired by the success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for the encoder-decoder based language generation tasks. MASS adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with a randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. In this way, MASS can jointly train the encoder and decoder to develop the capability of representation extraction and language modeling. By further fine-tuning on a variety of zero/low-resource language generation tasks, including neural machine translation, text summarization and conversational response generation (3 tasks and totally 8 datasets), MASS achieves significant improvements over the baselines without pre-training or with other pre-training methods. Specifically, we achieve state-of-the-art accuracy (37.5 in terms of BLEU score) on the unsupervised English-French translation, even beating the early attention-based supervised model.
|
2305.05592
|
Ela Liberman Pincu
|
Ela Liberman-Pincu and Tal Oron-Gilad
|
A Robotic Medical Clown (RMC): Forming a Design Space Model
|
Working paper based on the poster presented at ICRA 2023
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Medical clowns help hospitalized children reduce pain and anxiety
symptoms and increase the level of satisfaction in children's wards.
Unfortunately, there is a shortage of medical clowns around the world.
Furthermore, isolated children cannot enjoy this service. This study explored
the concept of a Robotic Medical Clown (RMC) and its role. We used mixed
methods of elicitation to create a design space model for future robotic
medical clowns. We investigated the needs, perceptions, and preferences of
children and teenagers using four methods: interviewing medical clowns to learn
how they perceive their role and the potential role of an RMC, conducting focus
groups with teenagers, a one-on-one experience of children with a robot, and an
online questionnaire. The concept of RMCs was acceptable to children,
teenagers, and medical clowns. We found that the RMC's appearance affects the
perception of its characters and role. Future work should investigate the
interaction in hospitals.
|
[
{
"created": "Tue, 9 May 2023 16:31:36 GMT",
"version": "v1"
}
] |
2023-05-10
|
[
[
"Liberman-Pincu",
"Ela",
""
],
[
"Oron-Gilad",
"Tal",
""
]
] |
Medical clowns help hospitalized children reduce pain and anxiety symptoms and increase the level of satisfaction in children's wards. Unfortunately, there is a shortage of medical clowns around the world. Furthermore, isolated children cannot enjoy this service. This study explored the concept of a Robotic Medical Clown (RMC) and its role. We used mixed methods of elicitation to create a design space model for future robotic medical clowns. We investigated the needs, perceptions, and preferences of children and teenagers using four methods: interviewing medical clowns to learn how they perceive their role and the potential role of an RMC, conducting focus groups with teenagers, a one-on-one experience of children with a robot, and an online questionnaire. The concept of RMCs was acceptable to children, teenagers, and medical clowns. We found that the RMC's appearance affects the perception of its characters and role. Future work should investigate the interaction in hospitals.
|
1805.11550
|
Justin Hsu
|
Gerco van Heerdt, Justin Hsu, Jo\"el Ouaknine, Alexandra Silva
|
Convex Language Semantics for Nondeterministic Probabilistic Automata
| null | null | null | null |
cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore language semantics for automata combining probabilistic and
nondeterministic behavior. We first show that there are precisely two natural
semantics for probabilistic automata with nondeterminism. For both choices, we
show that these automata are strictly more expressive than deterministic
probabilistic automata, and we prove that the problem of checking language
equivalence is undecidable by reduction from the threshold problem. However, we
provide a discounted metric that can be computed to arbitrarily high precision.
|
[
{
"created": "Tue, 29 May 2018 15:56:32 GMT",
"version": "v1"
}
] |
2018-05-30
|
[
[
"van Heerdt",
"Gerco",
""
],
[
"Hsu",
"Justin",
""
],
[
"Ouaknine",
"Joël",
""
],
[
"Silva",
"Alexandra",
""
]
] |
We explore language semantics for automata combining probabilistic and nondeterministic behavior. We first show that there are precisely two natural semantics for probabilistic automata with nondeterminism. For both choices, we show that these automata are strictly more expressive than deterministic probabilistic automata, and we prove that the problem of checking language equivalence is undecidable by reduction from the threshold problem. However, we provide a discounted metric that can be computed to arbitrarily high precision.
|
1708.09217
|
Long Zhou
|
Long Zhou, Jiajun Zhang, Chengqing Zong
|
Look-ahead Attention for Generation in Neural Machine Translation
|
12 pages, 5 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The attention model has become a standard component in neural machine
translation (NMT), and it guides the translation process by selectively focusing
on parts of the source sentence when predicting each target word. However, we
find that the generation of a target word depends not only on the source
sentence but also relies heavily on the previously generated target words,
especially the distant words which are difficult to model by using recurrent
neural networks. To solve this problem, we propose in this paper a novel
look-ahead attention mechanism for generation in NMT, which aims at directly
capturing the dependency relationship between target words. We further design
three patterns to integrate our look-ahead attention into the conventional
attention model. Experiments on NIST Chinese-to-English and WMT
English-to-German translation tasks show that our proposed look-ahead attention
mechanism achieves substantial improvements over state-of-the-art baselines.
|
[
{
"created": "Wed, 30 Aug 2017 11:27:02 GMT",
"version": "v1"
}
] |
2017-08-31
|
[
[
"Zhou",
"Long",
""
],
[
"Zhang",
"Jiajun",
""
],
[
"Zong",
"Chengqing",
""
]
] |
The attention model has become a standard component in neural machine translation (NMT), and it guides the translation process by selectively focusing on parts of the source sentence when predicting each target word. However, we find that the generation of a target word depends not only on the source sentence but also relies heavily on the previously generated target words, especially the distant words which are difficult to model by using recurrent neural networks. To solve this problem, we propose in this paper a novel look-ahead attention mechanism for generation in NMT, which aims at directly capturing the dependency relationship between target words. We further design three patterns to integrate our look-ahead attention into the conventional attention model. Experiments on NIST Chinese-to-English and WMT English-to-German translation tasks show that our proposed look-ahead attention mechanism achieves substantial improvements over state-of-the-art baselines.
|
2206.04891
|
Sascha Marton
|
Sascha Marton, Stefan L\"udtke, Christian Bartelt, Andrej Tschalzev,
Heiner Stuckenschmidt
|
Explaining Neural Networks without Access to Training Data
| null |
Machine Learning (2024)
|
10.1007/s10994-023-06428-4
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We consider generating explanations for neural networks in cases where the
network's training data is not accessible, for instance due to privacy or
safety issues. Recently, $\mathcal{I}$-Nets have been proposed as a sample-free
approach to post-hoc, global model interpretability that does not require
access to training data. They formulate interpretation as a machine learning
task that maps network representations (parameters) to a representation of an
interpretable function. In this paper, we extend the $\mathcal{I}$-Net
framework to the cases of standard and soft decision trees as surrogate models.
We propose a suitable decision tree representation and design of the
corresponding $\mathcal{I}$-Net output layers. Furthermore, we make
$\mathcal{I}$-Nets applicable to real-world tasks by considering more realistic
distributions when generating the $\mathcal{I}$-Net's training data. We
empirically evaluate our approach against traditional global, post-hoc
interpretability approaches and show that it achieves superior results when the
training data is not accessible.
|
[
{
"created": "Fri, 10 Jun 2022 06:10:04 GMT",
"version": "v1"
}
] |
2024-01-15
|
[
[
"Marton",
"Sascha",
""
],
[
"Lüdtke",
"Stefan",
""
],
[
"Bartelt",
"Christian",
""
],
[
"Tschalzev",
"Andrej",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues. Recently, $\mathcal{I}$-Nets have been proposed as a sample-free approach to post-hoc, global model interpretability that does not require access to training data. They formulate interpretation as a machine learning task that maps network representations (parameters) to a representation of an interpretable function. In this paper, we extend the $\mathcal{I}$-Net framework to the cases of standard and soft decision trees as surrogate models. We propose a suitable decision tree representation and design of the corresponding $\mathcal{I}$-Net output layers. Furthermore, we make $\mathcal{I}$-Nets applicable to real-world tasks by considering more realistic distributions when generating the $\mathcal{I}$-Net's training data. We empirically evaluate our approach against traditional global, post-hoc interpretability approaches and show that it achieves superior results when the training data is not accessible.
|
2406.18836
|
Huaying Zhang
|
Huaying Zhang, Rintaro Yanagi, Ren Togo, Takahiro Ogawa, Miki Haseyama
|
Zero-shot Composed Image Retrieval Considering Query-target Relationship
Leveraging Masked Image-text Pairs
|
Accepted as a conference paper in IEEE ICIP 2024
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel zero-shot composed image retrieval (CIR) method
considering the query-target relationship by masked image-text pairs. The
objective of CIR is to retrieve the target image using a query image and a
query text. Existing methods use a textual inversion network to convert the
query image into a pseudo word to compose the image and text and use a
pre-trained visual-language model to realize the retrieval. However, they do
not consider the query-target relationship to train the textual inversion
network to acquire information for retrieval. In this paper, we propose a novel
zero-shot CIR method that is trained end-to-end using masked image-text pairs.
By exploiting the abundant image-text pairs that are convenient to obtain with
a masking strategy for learning the query-target relationship, it is expected
that accurate zero-shot CIR using a retrieval-focused textual inversion network
can be realized. Experimental results show the effectiveness of the proposed
method.
|
[
{
"created": "Thu, 27 Jun 2024 02:10:30 GMT",
"version": "v1"
}
] |
2024-06-28
|
[
[
"Zhang",
"Huaying",
""
],
[
"Yanagi",
"Rintaro",
""
],
[
"Togo",
"Ren",
""
],
[
"Ogawa",
"Takahiro",
""
],
[
"Haseyama",
"Miki",
""
]
] |
This paper proposes a novel zero-shot composed image retrieval (CIR) method considering the query-target relationship by masked image-text pairs. The objective of CIR is to retrieve the target image using a query image and a query text. Existing methods use a textual inversion network to convert the query image into a pseudo word to compose the image and text and use a pre-trained visual-language model to realize the retrieval. However, they do not consider the query-target relationship to train the textual inversion network to acquire information for retrieval. In this paper, we propose a novel zero-shot CIR method that is trained end-to-end using masked image-text pairs. By exploiting the abundant image-text pairs that are convenient to obtain with a masking strategy for learning the query-target relationship, it is expected that accurate zero-shot CIR using a retrieval-focused textual inversion network can be realized. Experimental results show the effectiveness of the proposed method.
|
0904.2129
|
Tamara Mchedlidze David
|
Tamara Mchedlidze, Antonios Symvonis
|
Crossing-Optimal Acyclic HP-Completion for Outerplanar st-Digraphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an embedded planar acyclic digraph G, we define the problem of acyclic
hamiltonian path completion with crossing minimization (Acyclic-HPCCM) to be
the problem of determining a hamiltonian path completion set of edges such
that, when these edges are embedded on G, they create the smallest possible
number of edge crossings and turn G to a hamiltonian acyclic digraph. Our
results include: 1. We provide a characterization under which a planar
st-digraph G is hamiltonian. 2. For an outerplanar st-digraph G, we define the
st-polygon decomposition of G and, based on its properties, we develop a
linear-time algorithm that solves the Acyclic-HPCCM problem. 3. For the class
of planar st-digraphs, we establish an equivalence between the Acyclic-HPCCM
problem and the problem of determining an upward 2-page topological book
embedding with minimum number of spine crossings. We infer (based on this
equivalence) for the class of outerplanar st-digraphs an upward topological
2-page book embedding with minimum number of spine crossings. To the best of
our knowledge, it is the first time that edge-crossing minimization is studied
in conjunction with the acyclic hamiltonian completion problem and the first
time that an optimal algorithm with respect to spine crossing minimization is
presented for upward topological book embeddings.
|
[
{
"created": "Tue, 14 Apr 2009 14:29:56 GMT",
"version": "v1"
}
] |
2009-04-15
|
[
[
"Mchedlidze",
"Tamara",
""
],
[
"Symvonis",
"Antonios",
""
]
] |
Given an embedded planar acyclic digraph G, we define the problem of acyclic hamiltonian path completion with crossing minimization (Acyclic-HPCCM) to be the problem of determining a hamiltonian path completion set of edges such that, when these edges are embedded on G, they create the smallest possible number of edge crossings and turn G to a hamiltonian acyclic digraph. Our results include: 1. We provide a characterization under which a planar st-digraph G is hamiltonian. 2. For an outerplanar st-digraph G, we define the st-polygon decomposition of G and, based on its properties, we develop a linear-time algorithm that solves the Acyclic-HPCCM problem. 3. For the class of planar st-digraphs, we establish an equivalence between the Acyclic-HPCCM problem and the problem of determining an upward 2-page topological book embedding with minimum number of spine crossings. We infer (based on this equivalence) for the class of outerplanar st-digraphs an upward topological 2-page book embedding with minimum number of spine crossings. To the best of our knowledge, it is the first time that edge-crossing minimization is studied in conjunction with the acyclic hamiltonian completion problem and the first time that an optimal algorithm with respect to spine crossing minimization is presented for upward topological book embeddings.
|
1404.5248
|
Meddeb Mohamed
|
M. Meddeb, H. Karray and Adel M. Alimi
|
Intelligent Remote Control for TV Program based on Emotion in Arabic
Speech
|
6 pages, 3 figures
|
International Journal of Scientific Research & Engineering
Technology (IJSET), ISSN: (2277-1581) volume 1, 2014
| null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems for TV programs have been studied for the realization of
personalized TV Electronic Program Guides. In this paper, we propose automatic
emotion recognition in Arabic speech in order to achieve an intelligent remote
control. In addition, the TV can estimate our interests and preferences by
observing our behavior to watch and have a conversation on topics that might be
interesting to us.
|
[
{
"created": "Mon, 21 Apr 2014 17:25:15 GMT",
"version": "v1"
}
] |
2014-04-22
|
[
[
"Meddeb",
"M.",
""
],
[
"Karray",
"H.",
""
],
[
"Alimi",
"Adel M.",
""
]
] |
Recommender systems for TV programs have been studied for the realization of personalized TV Electronic Program Guides. In this paper, we propose automatic emotion recognition in Arabic speech in order to achieve an intelligent remote control. In addition, the TV can estimate our interests and preferences by observing our behavior to watch and have a conversation on topics that might be interesting to us.
|
2011.00784
|
Tobias Schlagenhauf
|
Tobias Schlagenhauf, Yefeng Xia, J\"urgen Fleischer
|
Context-based Image Segment Labeling (CBISL)
|
11 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Working with images, one often faces problems with incomplete or unclear
information. Image inpainting can be used to restore missing image regions but
focuses on low-level image features such as pixel intensity, pixel
gradient orientation, and color. This paper aims to recover semantic image
features (objects and positions) in images. Based on published gated PixelCNNs,
we demonstrate a new approach referred to as quadro-directional PixelCNN to
recover missing objects and return probable positions for objects based on the
context. We call this approach context-based image segment labeling (CBISL).
The results suggest that our four-directional model outperforms one-directional
models (gated PixelCNN) and returns a human-comparable performance.
|
[
{
"created": "Mon, 2 Nov 2020 07:26:55 GMT",
"version": "v1"
}
] |
2020-11-03
|
[
[
"Schlagenhauf",
"Tobias",
""
],
[
"Xia",
"Yefeng",
""
],
[
"Fleischer",
"Jürgen",
""
]
] |
Working with images, one often faces problems with incomplete or unclear information. Image inpainting can be used to restore missing image regions but focuses on low-level image features such as pixel intensity, pixel gradient orientation, and color. This paper aims to recover semantic image features (objects and positions) in images. Based on published gated PixelCNNs, we demonstrate a new approach referred to as quadro-directional PixelCNN to recover missing objects and return probable positions for objects based on the context. We call this approach context-based image segment labeling (CBISL). The results suggest that our four-directional model outperforms one-directional models (gated PixelCNN) and returns a human-comparable performance.
|
2304.00988
|
Andrea Poltronieri
|
Jacopo de Berardinis, Albert Mero\~no-Pe\~nuela, Andrea Poltronieri,
Valentina Presutti
|
The Music Annotation Pattern
|
12 pages, 3 figures. Proceedings of the 13th Workshop on Ontology
Design and Patterns, edited by V. Sv\'atek et al., WOP, 2022
| null | null | null |
cs.AI cs.MM cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The annotation of music content is a complex process to represent due to its
inherently multifaceted, subjective, and interdisciplinary nature. Numerous
systems and conventions for annotating music have been developed as independent
standards over the past decades. Little has been done to make them
interoperable, which jeopardises cross-corpora studies as it requires users to
familiarise themselves with a multitude of conventions. Most of these systems
lack the
semantic expressiveness needed to represent the complexity of the musical
language and cannot model multi-modal annotations originating from audio and
symbolic sources. In this article, we introduce the Music Annotation Pattern,
an Ontology Design Pattern (ODP) to homogenise different annotation systems and
to represent several types of musical objects (e.g. chords, patterns,
structures). This ODP preserves the semantics of the object's content at
different levels and temporal granularity. Moreover, our ODP accounts for
multi-modality upfront, to describe annotations derived from different sources,
and it is the first to enable the integration of music datasets at a large
scale.
|
[
{
"created": "Thu, 30 Mar 2023 11:13:59 GMT",
"version": "v1"
}
] |
2023-04-04
|
[
[
"de Berardinis",
"Jacopo",
""
],
[
"Meroño-Peñuela",
"Albert",
""
],
[
"Poltronieri",
"Andrea",
""
],
[
"Presutti",
"Valentina",
""
]
] |
The annotation of music content is a complex process to represent due to its inherently multifaceted, subjective, and interdisciplinary nature. Numerous systems and conventions for annotating music have been developed as independent standards over the past decades. Little has been done to make them interoperable, which jeopardises cross-corpora studies as it requires users to familiarise themselves with a multitude of conventions. Most of these systems lack the semantic expressiveness needed to represent the complexity of the musical language and cannot model multi-modal annotations originating from audio and symbolic sources. In this article, we introduce the Music Annotation Pattern, an Ontology Design Pattern (ODP) to homogenise different annotation systems and to represent several types of musical objects (e.g. chords, patterns, structures). This ODP preserves the semantics of the object's content at different levels and temporal granularity. Moreover, our ODP accounts for multi-modality upfront, to describe annotations derived from different sources, and it is the first to enable the integration of music datasets at a large scale.
|
2401.17035
|
Ivica Kopriva Dr
|
Ivica Kopriva
|
Robust Kernel Sparse Subspace Clustering
|
5 pages, 2 tables
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Kernel methods are applied to many problems in pattern recognition, including
subspace clustering (SC). That way, nonlinear problems in the input data space
become linear in mapped high-dimensional feature space. Thereby,
computationally tractable nonlinear algorithms are enabled through implicit
mapping by virtue of the kernel trick. However, kernelization of linear
algorithms is possible only if the square of the Frobenius norm of the error
term is used in the related optimization problem. That, however, implies a
normal distribution of the error, which is not appropriate for non-Gaussian
errors such as gross sparse corruptions that are modeled by -norm. Herein, to
the best of our knowledge, we propose for the first time a robust kernel sparse
SC (RKSSC) algorithm for data with gross sparse corruptions. The concept, in
principle, can be applied to other SC algorithms to achieve robustness to the
presence of this type of corruption. We validated the proposed approach on two
well-known datasets with the linear robust SSC algorithm as a baseline model.
According to the Wilcoxon test, clustering performance obtained by the RKSSC
algorithm is statistically significantly better than the corresponding
performance obtained by the robust SSC algorithm. MATLAB code of the proposed
RKSSC algorithm is posted on
https://github.com/ikopriva/RKSSC.
|
[
{
"created": "Tue, 30 Jan 2024 14:12:39 GMT",
"version": "v1"
}
] |
2024-01-31
|
[
[
"Kopriva",
"Ivica",
""
]
] |
Kernel methods are applied to many problems in pattern recognition, including subspace clustering (SC). That way, nonlinear problems in the input data space become linear in mapped high-dimensional feature space. Thereby, computationally tractable nonlinear algorithms are enabled through implicit mapping by virtue of the kernel trick. However, kernelization of linear algorithms is possible only if the square of the Frobenius norm of the error term is used in the related optimization problem. That, however, implies a normal distribution of the error, which is not appropriate for non-Gaussian errors such as gross sparse corruptions that are modeled by -norm. Herein, to the best of our knowledge, we propose for the first time a robust kernel sparse SC (RKSSC) algorithm for data with gross sparse corruptions. The concept, in principle, can be applied to other SC algorithms to achieve robustness to the presence of this type of corruption. We validated the proposed approach on two well-known datasets with the linear robust SSC algorithm as a baseline model. According to the Wilcoxon test, clustering performance obtained by the RKSSC algorithm is statistically significantly better than the corresponding performance obtained by the robust SSC algorithm. MATLAB code of the proposed RKSSC algorithm is posted on https://github.com/ikopriva/RKSSC.
|
1712.06843
|
Saahil Ognawala
|
Saahil Ognawala, Ana Petrovska, Kristian Beckers
|
An Exploratory Survey of Hybrid Testing Techniques Involving Symbolic
Execution and Fuzzing
|
Author's preprint
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent efforts in practical symbolic execution have successfully mitigated
the path-explosion problem to some extent with search-based heuristics and
compositional approaches. Similarly, due to an increase in the performance of
cheap multi-core commodity computers, fuzzing as a viable method of random
mutation-based testing has also seen promise. However, the possibility of
combining symbolic execution and fuzzing, thereby providing an opportunity to
mitigate drawbacks in each other, has not been sufficiently explored. Fuzzing
could, for example, expedite path-exploration in symbolic execution, and
symbolic execution could make seed input generation in fuzzing more efficient.
There have only been, in our view, very few hybrid solution proposals with
symbolic execution and fuzzing at their centre. By analyzing 77 relevant and
systematically selected papers, we (1) present an overview of hybrid solution
proposals of symbolic execution and fuzzing, (2) perform a gap analysis in
research of hybrid techniques to improve both, plain symbolic execution and
fuzzing, (3) propose new ideas for hybrid test-case generation techniques.
|
[
{
"created": "Tue, 19 Dec 2017 09:50:10 GMT",
"version": "v1"
}
] |
2017-12-20
|
[
[
"Ognawala",
"Saahil",
""
],
[
"Petrovska",
"Ana",
""
],
[
"Beckers",
"Kristian",
""
]
] |
Recent efforts in practical symbolic execution have successfully mitigated the path-explosion problem to some extent with search-based heuristics and compositional approaches. Similarly, due to an increase in the performance of cheap multi-core commodity computers, fuzzing as a viable method of random mutation-based testing has also seen promise. However, the possibility of combining symbolic execution and fuzzing, thereby providing an opportunity to mitigate drawbacks in each other, has not been sufficiently explored. Fuzzing could, for example, expedite path-exploration in symbolic execution, and symbolic execution could make seed input generation in fuzzing more efficient. There have only been, in our view, very few hybrid solution proposals with symbolic execution and fuzzing at their centre. By analyzing 77 relevant and systematically selected papers, we (1) present an overview of hybrid solution proposals of symbolic execution and fuzzing, (2) perform a gap analysis in research on hybrid techniques to improve both plain symbolic execution and fuzzing, and (3) propose new ideas for hybrid test-case generation techniques.
|
2310.18036
|
Benjamin Aram Berendsohn
|
Benjamin Aram Berendsohn
|
Fast and simple unrooted dynamic forests
| null | null |
10.1137/1.9781611977929.4
| null |
cs.DS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A dynamic forest data structure maintains a forest (and associated data like
edge weights) under edge insertions and deletions. Dynamic forests are widely
used to solve online and offline graph problems. Well-known examples of dynamic
forest data structures are link-cut trees [Sleator and Tarjan '83] and top
trees [Alstrup, Holm, de Lichtenberg, and Thorup '05], both of which need O(log
n) time per operation. While top trees are more flexible and arguably easier to
use, link-cut trees are faster in practice [Tarjan and Werneck '10].
In this paper, we propose an alternative to link-cut trees. Our data
structure is based on search trees on trees (STTs, also known as elimination
trees) and an STT algorithm [Berendsohn and Kozma '22] based on the classical
Splay trees [Sleator and Tarjan '85]. While link-cut trees maintain a hierarchy
of binary search trees, we maintain a single STT. Most of the complexity of our
data structure lies in the implementation of the STT rotation primitive, which
can easily be reused, simplifying the development of new STT-based approaches.
We implement several variants of our data structure in the Rust programming
language, along with an implementation of link-cut trees for comparison.
Experimental evaluation suggests that our algorithms are faster when the
dynamic forest is unrooted, while link-cut trees are faster for rooted dynamic
forests.
|
[
{
"created": "Fri, 27 Oct 2023 10:28:24 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jan 2024 13:48:50 GMT",
"version": "v2"
}
] |
2024-01-09
|
[
[
"Berendsohn",
"Benjamin Aram",
""
]
] |
A dynamic forest data structure maintains a forest (and associated data like edge weights) under edge insertions and deletions. Dynamic forests are widely used to solve online and offline graph problems. Well-known examples of dynamic forest data structures are link-cut trees [Sleator and Tarjan '83] and top trees [Alstrup, Holm, de Lichtenberg, and Thorup '05], both of which need O(log n) time per operation. While top trees are more flexible and arguably easier to use, link-cut trees are faster in practice [Tarjan and Werneck '10]. In this paper, we propose an alternative to link-cut trees. Our data structure is based on search trees on trees (STTs, also known as elimination trees) and an STT algorithm [Berendsohn and Kozma '22] based on the classical Splay trees [Sleator and Tarjan '85]. While link-cut trees maintain a hierarchy of binary search trees, we maintain a single STT. Most of the complexity of our data structure lies in the implementation of the STT rotation primitive, which can easily be reused, simplifying the development of new STT-based approaches. We implement several variants of our data structure in the Rust programming language, along with an implementation of link-cut trees for comparison. Experimental evaluation suggests that our algorithms are faster when the dynamic forest is unrooted, while link-cut trees are faster for rooted dynamic forests.
|
2209.14399
|
Marie Siew
|
Marie Siew, Shikhar Sharma, Zekai Li, Kun Guo, Chao Xu, Tania
Lorido-Botran, Tony Q.S. Quek and Carlee Joe-Wong
|
FIRE: A Failure-Adaptive Reinforcement Learning Framework for Edge
Computing Migrations
| null | null | null | null |
cs.NI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In edge computing, users' service profiles are migrated due to user mobility.
Reinforcement learning (RL) frameworks have been proposed to do so, often
trained on simulated data. However, existing RL frameworks overlook occasional
server failures, which although rare, impact latency-sensitive applications
like autonomous driving and real-time obstacle detection. Nevertheless, these
failures (rare events), not being adequately represented in historical training
data, pose a challenge for data-driven RL algorithms. As it is impractical to
adjust failure frequency in real-world applications for training, we introduce
FIRE, a framework that adapts to rare events by training an RL policy in an edge
computing digital twin environment. We propose ImRE, an importance
sampling-based Q-learning algorithm, which samples rare events proportionally
to their impact on the value function. FIRE considers delay, migration,
failure, and backup placement costs across individual and shared service
profiles. We prove ImRE's boundedness and convergence to optimality. Next, we
introduce novel deep Q-learning (ImDQL) and actor-critic (ImACRE) versions of
our algorithm to enhance scalability. We extend our framework to accommodate
users with varying risk tolerances. Through trace-driven experiments, we show
that FIRE reduces costs compared to vanilla RL and the greedy baseline in the
event of failures.
|
[
{
"created": "Wed, 28 Sep 2022 19:49:39 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 06:22:02 GMT",
"version": "v2"
}
] |
2024-03-08
|
[
[
"Siew",
"Marie",
""
],
[
"Sharma",
"Shikhar",
""
],
[
"Li",
"Zekai",
""
],
[
"Guo",
"Kun",
""
],
[
"Xu",
"Chao",
""
],
[
"Lorido-Botran",
"Tania",
""
],
[
"Quek",
"Tony Q. S.",
""
],
[
"Joe-Wong",
"Carlee",
""
]
] |
In edge computing, users' service profiles are migrated due to user mobility. Reinforcement learning (RL) frameworks have been proposed to do so, often trained on simulated data. However, existing RL frameworks overlook occasional server failures, which although rare, impact latency-sensitive applications like autonomous driving and real-time obstacle detection. Nevertheless, these failures (rare events), not being adequately represented in historical training data, pose a challenge for data-driven RL algorithms. As it is impractical to adjust failure frequency in real-world applications for training, we introduce FIRE, a framework that adapts to rare events by training an RL policy in an edge computing digital twin environment. We propose ImRE, an importance sampling-based Q-learning algorithm, which samples rare events proportionally to their impact on the value function. FIRE considers delay, migration, failure, and backup placement costs across individual and shared service profiles. We prove ImRE's boundedness and convergence to optimality. Next, we introduce novel deep Q-learning (ImDQL) and actor-critic (ImACRE) versions of our algorithm to enhance scalability. We extend our framework to accommodate users with varying risk tolerances. Through trace-driven experiments, we show that FIRE reduces costs compared to vanilla RL and the greedy baseline in the event of failures.
|
2012.09720
|
Ilias Diakonikolas
|
Ilias Diakonikolas and Daniel M. Kane
|
Near-Optimal Statistical Query Hardness of Learning Halfspaces with
Massart Noise
|
This version improves on the previous version. It obtains a
near-optimal hardness result essentially matching known algorithms
| null | null | null |
cs.LG cs.CC math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of PAC learning halfspaces with Massart noise. Given
labeled samples $(x, y)$ from a distribution $D$ on $\mathbb{R}^{d} \times \{
\pm 1\}$ such that the marginal $D_x$ on the examples is arbitrary and the
label $y$ of example $x$ is generated from the target halfspace corrupted by a
Massart adversary with flipping probability $\eta(x) \leq \eta \leq 1/2$, the
goal is to compute a hypothesis with small misclassification error. The best
known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve
error of $\eta+\epsilon$, which can be far from the optimal bound of
$\mathrm{OPT}+\epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x}
[\eta(x)]$. While it is known that achieving $\mathrm{OPT}+o(1)$ error requires
super-polynomial time in the Statistical Query model, a large gap remains
between known upper and lower bounds.
In this work, we essentially characterize the efficient learnability of
Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show
that no efficient SQ algorithm for learning Massart halfspaces on
$\mathbb{R}^d$ can achieve error better than $\Omega(\eta)$, even if
$\mathrm{OPT} = 2^{-\log^{c} (d)}$, for any universal constant $c \in (0, 1)$.
Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error
lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to
$0$ when $\eta$ approaches $1/2$. Our results provide strong evidence that
known learning algorithms for Massart halfspaces are nearly best possible,
thereby resolving a longstanding open problem in learning theory.
|
[
{
"created": "Thu, 17 Dec 2020 16:43:11 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Aug 2021 16:18:45 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Nov 2021 18:19:54 GMT",
"version": "v3"
}
] |
2021-11-09
|
[
[
"Diakonikolas",
"Ilias",
""
],
[
"Kane",
"Daniel M.",
""
]
] |
We study the problem of PAC learning halfspaces with Massart noise. Given labeled samples $(x, y)$ from a distribution $D$ on $\mathbb{R}^{d} \times \{ \pm 1\}$ such that the marginal $D_x$ on the examples is arbitrary and the label $y$ of example $x$ is generated from the target halfspace corrupted by a Massart adversary with flipping probability $\eta(x) \leq \eta \leq 1/2$, the goal is to compute a hypothesis with small misclassification error. The best known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve error of $\eta+\epsilon$, which can be far from the optimal bound of $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x} [\eta(x)]$. While it is known that achieving $\mathrm{OPT}+o(1)$ error requires super-polynomial time in the Statistical Query model, a large gap remains between known upper and lower bounds. In this work, we essentially characterize the efficient learnability of Massart halfspaces in the Statistical Query (SQ) model. Specifically, we show that no efficient SQ algorithm for learning Massart halfspaces on $\mathbb{R}^d$ can achieve error better than $\Omega(\eta)$, even if $\mathrm{OPT} = 2^{-\log^{c} (d)}$, for any universal constant $c \in (0, 1)$. Furthermore, when the noise upper bound $\eta$ is close to $1/2$, our error lower bound becomes $\eta - o_{\eta}(1)$, where the $o_{\eta}(1)$ term goes to $0$ when $\eta$ approaches $1/2$. Our results provide strong evidence that known learning algorithms for Massart halfspaces are nearly best possible, thereby resolving a longstanding open problem in learning theory.
|
2203.04311
|
Zhiyu Mou
|
Zhiyu Mou, Jun Liu, Xiang Yun, Feifei Gao, Qihui Wu
|
Cluster Head Detection for Hierarchical UAV Swarm With Graph
Self-supervised Learning
| null | null | null | null |
cs.LG cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the cluster head detection problem of a two-level
unmanned aerial vehicle (UAV) swarm network (USNET) with multiple UAV clusters,
where the inherent follow strategy (IFS) of low-level follower UAVs (FUAVs)
with respect to high-level cluster head UAVs (HUAVs) is unknown. We first
propose a graph attention self-supervised learning algorithm (GASSL) to detect
the HUAVs of a single UAV cluster, where the GASSL can fit the IFS at the same
time. Then, to detect the HUAVs in the USNET with multiple UAV clusters, we
develop a multi-cluster graph attention self-supervised learning algorithm
(MC-GASSL) based on the GASSL. The MC-GASSL clusters the USNET with a gated
recurrent unit (GRU)-based metric learning scheme and finds the HUAVs in each
cluster with GASSL. Numerical results show that the GASSL can detect the HUAVs
in single UAV clusters obeying various kinds of IFSs with over 98% average
accuracy. The simulation results also show that the clustering purity of the
USNET with MC-GASSL exceeds that with traditional clustering algorithms by at
least 10% on average. Furthermore, the MC-GASSL can efficiently detect all the
HUAVs in USNETs with various IFSs and cluster numbers with low detection
redundancies.
|
[
{
"created": "Tue, 8 Mar 2022 14:50:29 GMT",
"version": "v1"
}
] |
2022-03-10
|
[
[
"Mou",
"Zhiyu",
""
],
[
"Liu",
"Jun",
""
],
[
"Yun",
"Xiang",
""
],
[
"Gao",
"Feifei",
""
],
[
"Wu",
"Qihui",
""
]
] |
In this paper, we study the cluster head detection problem of a two-level unmanned aerial vehicle (UAV) swarm network (USNET) with multiple UAV clusters, where the inherent follow strategy (IFS) of low-level follower UAVs (FUAVs) with respect to high-level cluster head UAVs (HUAVs) is unknown. We first propose a graph attention self-supervised learning algorithm (GASSL) to detect the HUAVs of a single UAV cluster, where the GASSL can fit the IFS at the same time. Then, to detect the HUAVs in the USNET with multiple UAV clusters, we develop a multi-cluster graph attention self-supervised learning algorithm (MC-GASSL) based on the GASSL. The MC-GASSL clusters the USNET with a gated recurrent unit (GRU)-based metric learning scheme and finds the HUAVs in each cluster with GASSL. Numerical results show that the GASSL can detect the HUAVs in single UAV clusters obeying various kinds of IFSs with over 98% average accuracy. The simulation results also show that the clustering purity of the USNET with MC-GASSL exceeds that with traditional clustering algorithms by at least 10% on average. Furthermore, the MC-GASSL can efficiently detect all the HUAVs in USNETs with various IFSs and cluster numbers with low detection redundancies.
|
1804.08032
|
Bart Jacobs
|
Bart Jacobs
|
A Channel-based Exact Inference Algorithm for Bayesian Networks
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a new algorithm for exact Bayesian inference that is
based on a recently proposed compositional semantics of Bayesian networks in
terms of channels. The paper concentrates on the ideas behind this algorithm,
involving a linearisation (`stretching') of the Bayesian network, followed by a
combination of forward state transformation and backward predicate
transformation, while evidence is accumulated along the way. The performance of
a prototype implementation of the algorithm in Python is briefly compared to a
standard implementation (pgmpy): first results show competitive performance.
|
[
{
"created": "Sat, 21 Apr 2018 21:59:24 GMT",
"version": "v1"
}
] |
2018-04-24
|
[
[
"Jacobs",
"Bart",
""
]
] |
This paper describes a new algorithm for exact Bayesian inference that is based on a recently proposed compositional semantics of Bayesian networks in terms of channels. The paper concentrates on the ideas behind this algorithm, involving a linearisation (`stretching') of the Bayesian network, followed by a combination of forward state transformation and backward predicate transformation, while evidence is accumulated along the way. The performance of a prototype implementation of the algorithm in Python is briefly compared to a standard implementation (pgmpy): first results show competitive performance.
|
2103.04789
|
Qianyu Feng
|
Qianyu Feng, Yawei Luo, Keyang Luo, Yi Yang
|
Look, Cast and Mold: Learning 3D Shape Manifold from Single-view
Synthetic Data
|
this work is no longer under development
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Inferring the stereo structure of objects in the real world is a challenging
yet practical task. To equip deep models with this ability usually requires
abundant 3D supervision which is hard to acquire. It is promising that we can
simply benefit from synthetic data, where pairwise ground-truth is easy to
access. Nevertheless, the domain gaps are nontrivial considering the varying
texture, shape, and context. To overcome these difficulties, we propose a
Visio-Perceptual Adaptive Network for single-view 3D reconstruction, dubbed
VPAN. To generalize the model towards a real scenario, we propose to fulfill
several aspects: (1) Look: visually incorporate spatial structure from the
single view to enhance the expressiveness of representation; (2) Cast:
perceptually align the 2D image features to the 3D shape priors with
cross-modal semantic contrastive mapping; (3) Mold: reconstruct stereo-shape of
target by transforming embeddings into the desired manifold. Extensive
experiments on several benchmarks demonstrate the effectiveness and robustness
of the proposed method in learning the 3D shape manifold from synthetic data
via a single-view. The proposed method outperforms state-of-the-arts on Pix3D
dataset with IoU 0.292 and CD 0.108, and reaches IoU 0.329 and CD 0.104 on
Pascal 3D+.
|
[
{
"created": "Mon, 8 Mar 2021 14:30:18 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Mar 2021 11:31:09 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Jun 2022 05:44:25 GMT",
"version": "v3"
}
] |
2022-06-08
|
[
[
"Feng",
"Qianyu",
""
],
[
"Luo",
"Yawei",
""
],
[
"Luo",
"Keyang",
""
],
[
"Yang",
"Yi",
""
]
] |
Inferring the stereo structure of objects in the real world is a challenging yet practical task. To equip deep models with this ability usually requires abundant 3D supervision which is hard to acquire. It is promising that we can simply benefit from synthetic data, where pairwise ground-truth is easy to access. Nevertheless, the domain gaps are nontrivial considering the varying texture, shape, and context. To overcome these difficulties, we propose a Visio-Perceptual Adaptive Network for single-view 3D reconstruction, dubbed VPAN. To generalize the model towards a real scenario, we propose to fulfill several aspects: (1) Look: visually incorporate spatial structure from the single view to enhance the expressiveness of representation; (2) Cast: perceptually align the 2D image features to the 3D shape priors with cross-modal semantic contrastive mapping; (3) Mold: reconstruct stereo-shape of target by transforming embeddings into the desired manifold. Extensive experiments on several benchmarks demonstrate the effectiveness and robustness of the proposed method in learning the 3D shape manifold from synthetic data via a single view. The proposed method outperforms the state of the art on the Pix3D dataset with IoU 0.292 and CD 0.108, and reaches IoU 0.329 and CD 0.104 on Pascal 3D+.
|
2001.05451
|
Yujie Wang
|
Yujie Wang
|
Improvement of an Approximated Self-Improving Sorter and Error Analysis
of its Estimated Entropy
|
I found there is a critical error in this submission, therefore, I
withdraw this draft
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The self-improving sorter proposed by Ailon et al. consists of two phases: a
relatively long training phase and a rapid operation phase. In this study, we
have developed an efficient way to further improve this sorter by approximating
its training phase to be faster but not sacrificing much performance in the
operation phase. It is necessary to ensure the accuracy of the estimated
entropy when we test the performance of this approximated sorter. Thus we
further developed a useful formula to calculate an upper bound for the 'error'
of the estimated entropy derived from the input data with unknown
distributions. Our work will contribute to the better use of this
self-improving sorter for huge data in a quicker way.
|
[
{
"created": "Wed, 15 Jan 2020 17:49:28 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Mar 2021 13:18:01 GMT",
"version": "v2"
}
] |
2021-03-16
|
[
[
"Wang",
"Yujie",
""
]
] |
The self-improving sorter proposed by Ailon et al. consists of two phases: a relatively long training phase and a rapid operation phase. In this study, we have developed an efficient way to further improve this sorter by approximating its training phase to be faster but not sacrificing much performance in the operation phase. It is necessary to ensure the accuracy of the estimated entropy when we test the performance of this approximated sorter. Thus we further developed a useful formula to calculate an upper bound for the 'error' of the estimated entropy derived from the input data with unknown distributions. Our work will contribute to the better use of this self-improving sorter for huge data in a quicker way.
|
1709.05675
|
Andrey Savchenko
|
Anastasiia D. Sokolova, Angelina S. Kharchevnikova, Andrey V.
Savchenko
|
Organizing Multimedia Data in Video Surveillance Systems Based on Face
Verification with Convolutional Neural Networks
|
8 pages; 1 figure, accepted for publication at AIST17
|
Proceedings of the International Conference on Analysis of Images,
Social Networks and Texts (AIST), 2018, pp. 223-230
|
10.1007/978-3-319-73013-4_20
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose a two-stage approach to organizing information in
video surveillance systems. At first, the faces are detected in each frame and
a video stream is split into sequences of frames with the face region of one
person. Secondly, these sequences (tracks) that contain identical faces are
grouped using face verification algorithms and hierarchical agglomerative
clustering. Gender and age are estimated for each cluster (person) in order to
facilitate the usage of the organized video collection. Particular attention
is focused on the aggregation of features extracted from each frame with deep
convolutional neural networks. The experimental results of the
proposed approach using YTF and IJB-A datasets demonstrated that the most
accurate and fast solution is achieved for matching of normalized average of
feature vectors of all frames in a track.
|
[
{
"created": "Sun, 17 Sep 2017 14:57:55 GMT",
"version": "v1"
}
] |
2018-01-04
|
[
[
"Sokolova",
"Anastasiia D.",
""
],
[
"Kharchevnikova",
"Angelina S.",
""
],
[
"Savchenko",
"Andrey V.",
""
]
] |
In this paper we propose a two-stage approach to organizing information in video surveillance systems. At first, the faces are detected in each frame and a video stream is split into sequences of frames with the face region of one person. Secondly, these sequences (tracks) that contain identical faces are grouped using face verification algorithms and hierarchical agglomerative clustering. Gender and age are estimated for each cluster (person) in order to facilitate the usage of the organized video collection. Particular attention is focused on the aggregation of features extracted from each frame with deep convolutional neural networks. The experimental results of the proposed approach using YTF and IJB-A datasets demonstrated that the most accurate and fast solution is achieved for matching of normalized average of feature vectors of all frames in a track.
|
2304.08327
|
Yi-Pei Chen
|
Yi-Pei Chen, An-Zi Yen, Hen-Hsen Huang, Hideki Nakayama, Hsin-Hsi Chen
|
LED: A Dataset for Life Event Extraction from Dialogs
|
Accepted to EACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lifelogging has gained more attention due to its wide applications, such as
personalized recommendations or memory assistance. The issues of collecting and
extracting personal life events have emerged. People often share their life
experiences with others through conversations. However, extracting life events
from conversations is rarely explored. In this paper, we present Life Event
Dialog, a dataset containing fine-grained life event annotations on
conversational data. In addition, we initiate a novel conversational life event
extraction task and differentiate the task from the public event extraction or
the life event extraction from other sources like microblogs. We explore three
information extraction (IE) frameworks to address the conversational life event
extraction task: OpenIE, relation extraction, and event extraction. A
comprehensive empirical analysis of the three baselines is established. The
results suggest that the current event extraction model still struggles with
extracting life events from human daily conversations. Our proposed life event
dialog dataset and in-depth analysis of IE frameworks will facilitate future
research on life event extraction from conversations.
|
[
{
"created": "Mon, 17 Apr 2023 14:46:59 GMT",
"version": "v1"
}
] |
2023-04-18
|
[
[
"Chen",
"Yi-Pei",
""
],
[
"Yen",
"An-Zi",
""
],
[
"Huang",
"Hen-Hsen",
""
],
[
"Nakayama",
"Hideki",
""
],
[
"Chen",
"Hsin-Hsi",
""
]
] |
Lifelogging has gained more attention due to its wide applications, such as personalized recommendations or memory assistance. The issues of collecting and extracting personal life events have emerged. People often share their life experiences with others through conversations. However, extracting life events from conversations is rarely explored. In this paper, we present Life Event Dialog, a dataset containing fine-grained life event annotations on conversational data. In addition, we initiate a novel conversational life event extraction task and differentiate the task from the public event extraction or the life event extraction from other sources like microblogs. We explore three information extraction (IE) frameworks to address the conversational life event extraction task: OpenIE, relation extraction, and event extraction. A comprehensive empirical analysis of the three baselines is established. The results suggest that the current event extraction model still struggles with extracting life events from human daily conversations. Our proposed life event dialog dataset and in-depth analysis of IE frameworks will facilitate future research on life event extraction from conversations.
|
2007.07841
|
Paul Tardy
|
Paul Tardy, David Janiszek, Yannick Est\`eve, Vincent Nguyen
|
Align then Summarize: Automatic Alignment Methods for Summarization
Corpus Creation
| null |
LREC 2020 -- Proceedings of The 12th Language Resources and
Evaluation Conference, 2020, pp. 6718--6724
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Summarizing texts is not a straightforward task. Before even considering text
summarization, one should determine what kind of summary is expected. How much
should the information be compressed? Is it relevant to reformulate or should
the summary stick to the original phrasing? The state of the art in automatic text
summarization mostly revolves around news articles. We suggest that considering
a wider variety of tasks would lead to an improvement in the field, in terms of
generalization and robustness. We explore meeting summarization: generating
reports from automatic transcriptions. Our work consists in segmenting and
aligning transcriptions with respect to reports, to get a suitable dataset for
neural summarization. Using a bootstrapping approach, we provide pre-alignments
that are corrected by human annotators, making a validation set against which
we evaluate automatic models. This consistently reduces annotators' efforts by
providing iteratively better pre-alignment and maximizes the corpus size by
using annotations from our automatic alignment models. Evaluation is conducted
on \publicmeetings, a novel corpus of aligned public meetings. We report
automatic alignment and summarization performances on this corpus and show that
automatic alignment is relevant for data annotation since it leads to a large
improvement of almost +4 on all ROUGE scores on the summarization task.
|
[
{
"created": "Wed, 15 Jul 2020 17:03:34 GMT",
"version": "v1"
}
] |
2020-07-16
|
[
[
"Tardy",
"Paul",
""
],
[
"Janiszek",
"David",
""
],
[
"Estève",
"Yannick",
""
],
[
"Nguyen",
"Vincent",
""
]
] |
Summarizing texts is not a straightforward task. Before even considering text summarization, one should determine what kind of summary is expected. How much should the information be compressed? Is it relevant to reformulate or should the summary stick to the original phrasing? The state of the art in automatic text summarization mostly revolves around news articles. We suggest that considering a wider variety of tasks would lead to an improvement in the field, in terms of generalization and robustness. We explore meeting summarization: generating reports from automatic transcriptions. Our work consists in segmenting and aligning transcriptions with respect to reports, to get a suitable dataset for neural summarization. Using a bootstrapping approach, we provide pre-alignments that are corrected by human annotators, making a validation set against which we evaluate automatic models. This consistently reduces annotators' efforts by providing iteratively better pre-alignment and maximizes the corpus size by using annotations from our automatic alignment models. Evaluation is conducted on \publicmeetings, a novel corpus of aligned public meetings. We report automatic alignment and summarization performances on this corpus and show that automatic alignment is relevant for data annotation since it leads to a large improvement of almost +4 on all ROUGE scores on the summarization task.
|
1711.02257
|
Zhao Chen
|
Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee and Andrew Rabinovich
|
GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep
Multitask Networks
|
ICML 2018
|
Proceedings of the 35th International Conference on Machine
Learning (2018), 793-802
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep multitask networks, in which one neural network produces multiple
predictive outputs, can offer better speed and performance than their
single-task counterparts but are challenging to train properly. We present a
gradient normalization (GradNorm) algorithm that automatically balances
training in deep multitask models by dynamically tuning gradient magnitudes. We
show that for various network architectures, for both regression and
classification tasks, and on both synthetic and real datasets, GradNorm
improves accuracy and reduces overfitting across multiple tasks when compared
to single-task networks, static baselines, and other adaptive multitask loss
balancing techniques. GradNorm also matches or surpasses the performance of
exhaustive grid search methods, despite only involving a single asymmetry
hyperparameter $\alpha$. Thus, what was once a tedious search process that
incurred exponentially more compute for each task added can now be accomplished
within a few training runs, irrespective of the number of tasks. Ultimately, we
will demonstrate that gradient manipulation affords us great control over the
training dynamics of multitask networks and may be one of the keys to unlocking
the potential of multitask learning.
|
[
{
"created": "Tue, 7 Nov 2017 02:08:12 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2017 01:00:22 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Apr 2018 21:25:49 GMT",
"version": "v3"
},
{
"created": "Tue, 12 Jun 2018 06:45:49 GMT",
"version": "v4"
}
] |
2018-07-16
|
[
[
"Chen",
"Zhao",
""
],
[
"Badrinarayanan",
"Vijay",
""
],
[
"Lee",
"Chen-Yu",
""
],
[
"Rabinovich",
"Andrew",
""
]
] |
Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter $\alpha$. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.
|
2304.08379
|
Pedro Neto
|
Ant\'onio Amorim, Diana Guimar\~aes, Tiago Mendon\c{c}a, Pedro Neto,
Paulo Costa, Ant\'onio Paulo Moreira
|
Robust human position estimation in cooperative robotic cells
| null |
Robotics and Computer-Integrated Manufacturing, 67, 102035 (2021)
|
10.1016/j.rcim.2020.102035
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robots are increasingly present in our lives, sharing the workspace and tasks
with human co-workers. However, existing interfaces for human-robot interaction
/ cooperation (HRI/C) have limited levels of intuitiveness to use and safety is
a major concern when humans and robots share the same workspace. Many times,
this is due to the lack of a reliable estimation of the human pose in space
which is the primary input to calculate the human-robot minimum distance
(required for safety and collision avoidance) and HRI/C featuring machine
learning algorithms classifying human behaviours / gestures. Each sensor type
has its own characteristics resulting in problems such as occlusions (vision)
and drift (inertial) when used in an isolated fashion. In this paper, it is
proposed a combined system that merges the human tracking provided by a 3D
vision sensor with the pose estimation provided by a set of inertial
measurement units (IMUs) placed in human body limbs. The IMUs compensate the
gaps in occluded areas to have tracking continuity. To mitigate the lingering
effects of the IMU offset we propose a continuous online calculation of the
offset value. Experimental tests were designed to simulate human motion in a
human-robot collaborative environment where the robot moves away to avoid
unexpected collisions with the human. Results indicate that our approach is able
to capture the human's position, for example the forearm, with a precision in
the millimetre range and robustness to occlusions.
|
[
{
"created": "Mon, 17 Apr 2023 15:42:44 GMT",
"version": "v1"
}
] |
2023-04-18
|
[
[
"Amorim",
"António",
""
],
[
"Guimarães",
"Diana",
""
],
[
"Mendonça",
"Tiago",
""
],
[
"Neto",
"Pedro",
""
],
[
"Costa",
"Paulo",
""
],
[
"Moreira",
"António Paulo",
""
]
] |
Robots are increasingly present in our lives, sharing the workspace and tasks with human co-workers. However, existing interfaces for human-robot interaction / cooperation (HRI/C) have limited levels of intuitiveness to use and safety is a major concern when humans and robots share the same workspace. Many times, this is due to the lack of a reliable estimation of the human pose in space which is the primary input to calculate the human-robot minimum distance (required for safety and collision avoidance) and HRI/C featuring machine learning algorithms classifying human behaviours / gestures. Each sensor type has its own characteristics resulting in problems such as occlusions (vision) and drift (inertial) when used in an isolated fashion. In this paper, it is proposed a combined system that merges the human tracking provided by a 3D vision sensor with the pose estimation provided by a set of inertial measurement units (IMUs) placed in human body limbs. The IMUs compensate the gaps in occluded areas to have tracking continuity. To mitigate the lingering effects of the IMU offset we propose a continuous online calculation of the offset value. Experimental tests were designed to simulate human motion in a human-robot collaborative environment where the robot moves away to avoid unexpected collisions with the human. Results indicate that our approach is able to capture the human's position, for example the forearm, with a precision in the millimetre range and robustness to occlusions.
|
2305.08527
|
Jinlei Xu
|
Jinlei Xu, Zhengyu Zhu, Zheng Chu, Hehao Niu, Pei Xiao, Inkyu Lee
|
Sum Secrecy Rate Maximization for IRS-aided Multi-Cluster MIMO-NOMA
Terahertz Systems
|
11 pages, 8 figures; references added
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent reflecting surface (IRS) is a promising technique to extend the
network coverage and improve spectral efficiency. This paper investigates an
IRS-assisted terahertz (THz) multiple-input multiple-output
(MIMO)-nonorthogonal multiple access (NOMA) system based on hybrid precoding
with the presence of eavesdropper. Two types of sparse RF chain antenna
structures are adopted, i.e., sub-connected structure and fully connected
structure. First, cluster heads are selected for each beam, and analog
precoding based on discrete phase is designed. Then, users are clustered based
on channel correlation, and NOMA technology is employed to serve the users. In
addition, a low-complexity forced-zero method is utilized to design digital
precoding in order to eliminate inter-cluster interference. On this basis, we
propose a secure transmission scheme to maximize the sum secrecy rate by
jointly optimizing the power allocation and phase shifts of IRS subject to the
total transmit power budget, minimal achievable rate requirement of each user,
and IRS reflection coefficients. Due to multiple coupled variables, the
formulated problem leads to a non-convex issue. We apply the Taylor series
expansion and semidefinite programming to convert the original non-convex
problem into a convex one. Then, an alternating optimization algorithm is
developed to obtain a feasible solution of the original problem. Simulation
results verify the convergence of the proposed algorithm, and deploying IRS can
bring significant beamforming gains to suppress the eavesdropping.
|
[
{
"created": "Mon, 15 May 2023 10:35:26 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jun 2023 00:09:03 GMT",
"version": "v2"
}
] |
2023-06-13
|
[
[
"Xu",
"Jinlei",
""
],
[
"Zhu",
"Zhengyu",
""
],
[
"Chu",
"Zheng",
""
],
[
"Niu",
"Hehao",
""
],
[
"Xiao",
"Pei",
""
],
[
"Lee",
"Inkyu",
""
]
] |
Intelligent reflecting surface (IRS) is a promising technique to extend the network coverage and improve spectral efficiency. This paper investigates an IRS-assisted terahertz (THz) multiple-input multiple-output (MIMO)-nonorthogonal multiple access (NOMA) system based on hybrid precoding with the presence of eavesdropper. Two types of sparse RF chain antenna structures are adopted, i.e., sub-connected structure and fully connected structure. First, cluster heads are selected for each beam, and analog precoding based on discrete phase is designed. Then, users are clustered based on channel correlation, and NOMA technology is employed to serve the users. In addition, a low-complexity forced-zero method is utilized to design digital precoding in order to eliminate inter-cluster interference. On this basis, we propose a secure transmission scheme to maximize the sum secrecy rate by jointly optimizing the power allocation and phase shifts of IRS subject to the total transmit power budget, minimal achievable rate requirement of each user, and IRS reflection coefficients. Due to multiple coupled variables, the formulated problem leads to a non-convex issue. We apply the Taylor series expansion and semidefinite programming to convert the original non-convex problem into a convex one. Then, an alternating optimization algorithm is developed to obtain a feasible solution of the original problem. Simulation results verify the convergence of the proposed algorithm, and deploying IRS can bring significant beamforming gains to suppress the eavesdropping.
|
2406.10910
|
Kaiming Shen
|
Yannan Chen, Yi Feng, Xiaoyang Li, Licheng Zhao, Kaiming Shen
|
Fast Fractional Programming for Multi-Cell Integrated Sensing and
Communications
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper concerns the coordinate multi-cell beamforming design for
integrated sensing and communications (ISAC). In particular, we assume that
each base station (BS) has massive antennas. The optimization objective is to
maximize a weighted sum of the data rates (for communications) and the Fisher
information (for sensing). We first show that the conventional beamforming
method for the multiple-input multiple-output (MIMO) transmission, i.e., the
weighted minimum mean square error (WMMSE) algorithm, has a natural extension
to the ISAC problem scenario from a fractional programming (FP) perspective.
However, the extended WMMSE algorithm requires computing the $N\times N$ matrix
inverse extensively, where $N$ is proportional to the antenna array size, so
the algorithm becomes quite costly when antennas are massively deployed. To
address this issue, we develop a nonhomogeneous bound and use it in conjunction
with the FP technique to solve the ISAC beamforming problem without the need to
invert any large matrices. It is further shown that the resulting new FP
algorithm has an intimate connection with gradient projection, based on which
we can accelerate the convergence via Nesterov's gradient extrapolation.
|
[
{
"created": "Sun, 16 Jun 2024 12:14:09 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Chen",
"Yannan",
""
],
[
"Feng",
"Yi",
""
],
[
"Li",
"Xiaoyang",
""
],
[
"Zhao",
"Licheng",
""
],
[
"Shen",
"Kaiming",
""
]
] |
This paper concerns the coordinate multi-cell beamforming design for integrated sensing and communications (ISAC). In particular, we assume that each base station (BS) has massive antennas. The optimization objective is to maximize a weighted sum of the data rates (for communications) and the Fisher information (for sensing). We first show that the conventional beamforming method for the multiple-input multiple-output (MIMO) transmission, i.e., the weighted minimum mean square error (WMMSE) algorithm, has a natural extension to the ISAC problem scenario from a fractional programming (FP) perspective. However, the extended WMMSE algorithm requires computing the $N\times N$ matrix inverse extensively, where $N$ is proportional to the antenna array size, so the algorithm becomes quite costly when antennas are massively deployed. To address this issue, we develop a nonhomogeneous bound and use it in conjunction with the FP technique to solve the ISAC beamforming problem without the need to invert any large matrices. It is further shown that the resulting new FP algorithm has an intimate connection with gradient projection, based on which we can accelerate the convergence via Nesterov's gradient extrapolation.
|
2203.01880
|
Elmira Amirloo Abolfathi
|
Elmira Amirloo, Amir Rasouli, Peter Lakner, Mohsen Rohani, Jun Luo
|
LatentFormer: Multi-Agent Transformer-Based Interaction Modeling and
Trajectory Prediction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-agent trajectory prediction is a fundamental problem in autonomous
driving. The key challenges in prediction are accurately anticipating the
behavior of surrounding agents and understanding the scene context. To address
these problems, we propose LatentFormer, a transformer-based model for
predicting future vehicle trajectories. The proposed method leverages a novel
technique for modeling interactions among dynamic objects in the scene.
Contrary to many existing approaches which model cross-agent interactions
during the observation time, our method additionally exploits the future states
of the agents. This is accomplished using a hierarchical attention mechanism
where the evolving states of the agents autoregressively control the
contributions of past trajectories and scene encodings in the final prediction.
Furthermore, we propose a multi-resolution map encoding scheme that relies on a
vision transformer module to effectively capture both local and global scene
context to guide the generation of more admissible future trajectories. We
evaluate the proposed method on the nuScenes benchmark dataset and show that
our approach achieves state-of-the-art performance and improves upon trajectory
metrics by up to 40%. We further investigate the contributions of various
components of the proposed technique via extensive ablation studies.
|
[
{
"created": "Thu, 3 Mar 2022 17:44:58 GMT",
"version": "v1"
}
] |
2022-03-04
|
[
[
"Amirloo",
"Elmira",
""
],
[
"Rasouli",
"Amir",
""
],
[
"Lakner",
"Peter",
""
],
[
"Rohani",
"Mohsen",
""
],
[
"Luo",
"Jun",
""
]
] |
Multi-agent trajectory prediction is a fundamental problem in autonomous driving. The key challenges in prediction are accurately anticipating the behavior of surrounding agents and understanding the scene context. To address these problems, we propose LatentFormer, a transformer-based model for predicting future vehicle trajectories. The proposed method leverages a novel technique for modeling interactions among dynamic objects in the scene. Contrary to many existing approaches which model cross-agent interactions during the observation time, our method additionally exploits the future states of the agents. This is accomplished using a hierarchical attention mechanism where the evolving states of the agents autoregressively control the contributions of past trajectories and scene encodings in the final prediction. Furthermore, we propose a multi-resolution map encoding scheme that relies on a vision transformer module to effectively capture both local and global scene context to guide the generation of more admissible future trajectories. We evaluate the proposed method on the nuScenes benchmark dataset and show that our approach achieves state-of-the-art performance and improves upon trajectory metrics by up to 40%. We further investigate the contributions of various components of the proposed technique via extensive ablation studies.
|
2405.05615
|
Shibo Jie
|
Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang
|
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
|
Accepted to ICML2024
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Current solutions for efficiently constructing large vision-language (VL)
models follow a two-step paradigm: projecting the output of pre-trained vision
encoders to the input space of pre-trained language models as visual prompts;
and then transferring the models to downstream VL tasks via end-to-end
parameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits
inefficiency since it significantly increases the input length of the language
models. In this paper, in contrast to integrating visual prompts into inputs,
we regard visual prompts as additional knowledge that facilitates language
models in addressing tasks associated with visual information. Motivated by the
finding that Feed-Forward Network (FFN) of language models acts as "key-value
memory", we introduce a novel approach termed memory-space visual prompting
(MemVP), wherein visual prompts are concatenated with the weights of FFN for
visual knowledge injection. Experimental results across various VL tasks and
language models reveal that MemVP significantly reduces the training time and
inference latency of the finetuned VL models and surpasses the performance of
previous PEFT methods. Code: https://github.com/JieShibo/MemVP
|
[
{
"created": "Thu, 9 May 2024 08:23:20 GMT",
"version": "v1"
}
] |
2024-05-10
|
[
[
"Jie",
"Shibo",
""
],
[
"Tang",
"Yehui",
""
],
[
"Ding",
"Ning",
""
],
[
"Deng",
"Zhi-Hong",
""
],
[
"Han",
"Kai",
""
],
[
"Wang",
"Yunhe",
""
]
] |
Current solutions for efficiently constructing large vision-language (VL) models follow a two-step paradigm: projecting the output of pre-trained vision encoders to the input space of pre-trained language models as visual prompts; and then transferring the models to downstream VL tasks via end-to-end parameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits inefficiency since it significantly increases the input length of the language models. In this paper, in contrast to integrating visual prompts into inputs, we regard visual prompts as additional knowledge that facilitates language models in addressing tasks associated with visual information. Motivated by the finding that Feed-Forward Network (FFN) of language models acts as "key-value memory", we introduce a novel approach termed memory-space visual prompting (MemVP), wherein visual prompts are concatenated with the weights of FFN for visual knowledge injection. Experimental results across various VL tasks and language models reveal that MemVP significantly reduces the training time and inference latency of the finetuned VL models and surpasses the performance of previous PEFT methods. Code: https://github.com/JieShibo/MemVP
|
2206.09848
|
Yue Chen
|
Anthony L. Gunderman, Saikat Sengupta, Eleni Siampli, Dimitri
Sigounas, Christopher Kellner, Chima Oluigbo, Karun Sharma, Isuru Godage,
Kevin Cleary, Yue Chen
|
A Surgical Platform for Intracerebral Hemorrhage Robotic Evacuation
(ASPIHRE): A Non-metallic MR-guided Concentric Tube Robot
|
19 pages, 20 figures, 3 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intracerebral hemorrhage (ICH) is the deadliest stroke sub-type, with a
one-month mortality rate as high as 52%. Due to the potential cortical
disruption caused by craniotomy, conservative management (watchful waiting) has
historically been a common method of treatment. Minimally invasive evacuation
has recently become an accepted method of treatment for patients with
deep-seated hematoma 30-50 mL in volume, but proper visualization and tool
dexterity remain constrained in conventional endoscopic approaches,
particularly with larger hematoma volumes (> 50 mL). In this article we
describe the development of ASPIHRE (A Surgical Platform for Intracerebral
Hemorrhage Robotic Evacuation), the first-ever concentric tube robot that uses
off-the-shelf plastic tubes for MR-guided ICH evacuation, improving tool
dexterity and procedural visualization. The robot kinematics model is developed
based on a calibration-based method and tube mechanics modeling, allowing the
models to consider both variable curvature and torsional deflection. The
MR-safe pneumatic motors are controlled using a variable gain PID algorithm
producing a rotational accuracy of 0.317 +/- 0.3 degrees. The hardware and
theoretical models are validated in a series of systematic bench-top and MRI
experiments resulting in positional accuracy of the tube tip of 1.39 +/- 0.54
mm. Following validation of targeting accuracy, the evacuation efficacy of the
robot was tested in an MR-guided phantom clot evacuation experiment. The robot
was able to evacuate an initially 38.36 mL clot in 5 minutes, leaving a
residual hematoma of 8.14 mL, well below the 15 mL guideline suggesting good
post-ICH evacuation clinical outcomes.
|
[
{
"created": "Mon, 20 Jun 2022 15:26:43 GMT",
"version": "v1"
}
] |
2022-06-22
|
[
[
"Gunderman",
"Anthony L.",
""
],
[
"Sengupta",
"Saikat",
""
],
[
"Siampli",
"Eleni",
""
],
[
"Sigounas",
"Dimitri",
""
],
[
"Kellner",
"Christopher",
""
],
[
"Oluigbo",
"Chima",
""
],
[
"Sharma",
"Karun",
""
],
[
"Godage",
"Isuru",
""
],
[
"Cleary",
"Kevin",
""
],
[
"Chen",
"Yue",
""
]
] |
Intracerebral hemorrhage (ICH) is the deadliest stroke sub-type, with a one-month mortality rate as high as 52%. Due to the potential cortical disruption caused by craniotomy, conservative management (watchful waiting) has historically been a common method of treatment. Minimally invasive evacuation has recently become an accepted method of treatment for patients with deep-seated hematoma 30-50 mL in volume, but proper visualization and tool dexterity remain constrained in conventional endoscopic approaches, particularly with larger hematoma volumes (> 50 mL). In this article we describe the development of ASPIHRE (A Surgical Platform for Intracerebral Hemorrhage Robotic Evacuation), the first-ever concentric tube robot that uses off-the-shelf plastic tubes for MR-guided ICH evacuation, improving tool dexterity and procedural visualization. The robot kinematics model is developed based on a calibration-based method and tube mechanics modeling, allowing the models to consider both variable curvature and torsional deflection. The MR-safe pneumatic motors are controlled using a variable gain PID algorithm producing a rotational accuracy of 0.317 +/- 0.3 degrees. The hardware and theoretical models are validated in a series of systematic bench-top and MRI experiments resulting in positional accuracy of the tube tip of 1.39 +/- 0.54 mm. Following validation of targeting accuracy, the evacuation efficacy of the robot was tested in an MR-guided phantom clot evacuation experiment. The robot was able to evacuate an initially 38.36 mL clot in 5 minutes, leaving a residual hematoma of 8.14 mL, well below the 15 mL guideline suggesting good post-ICH evacuation clinical outcomes.
|
2406.13457
|
Dachun Kai
|
Dachun Kai, Jiayao Lu, Yueyi Zhang, Xiaoyan Sun
|
EvTexture: Event-driven Texture Enhancement for Video Super-Resolution
|
ICML 2024. Project page:
https://dachunkai.github.io/evtexture.github.io/
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event-based vision has drawn increasing attention due to its unique
characteristics, such as high temporal resolution and high dynamic range. It
has been used in video super-resolution (VSR) recently to enhance the flow
estimation and temporal alignment. Rather than for motion learning, we propose
in this paper the first VSR method that utilizes event signals for texture
enhancement. Our method, called EvTexture, leverages high-frequency details of
events to better recover texture regions in VSR. In our EvTexture, a new
texture enhancement branch is presented. We further introduce an iterative
texture enhancement module to progressively explore the
high-temporal-resolution event information for texture restoration. This allows
for gradual refinement of texture regions across multiple iterations, leading
to more accurate and rich high-resolution details. Experimental results show
that our EvTexture achieves state-of-the-art performance on four datasets. For
the Vid4 dataset with rich textures, our method can get up to 4.67dB gain
compared with recent event-based methods. Code:
https://github.com/DachunKai/EvTexture.
|
[
{
"created": "Wed, 19 Jun 2024 11:27:44 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Kai",
"Dachun",
""
],
[
"Lu",
"Jiayao",
""
],
[
"Zhang",
"Yueyi",
""
],
[
"Sun",
"Xiaoyan",
""
]
] |
Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. It has been used in video super-resolution (VSR) recently to enhance the flow estimation and temporal alignment. Rather than for motion learning, we propose in this paper the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages high-frequency details of events to better recover texture regions in VSR. In our EvTexture, a new texture enhancement branch is presented. We further introduce an iterative texture enhancement module to progressively explore the high-temporal-resolution event information for texture restoration. This allows for gradual refinement of texture regions across multiple iterations, leading to more accurate and rich high-resolution details. Experimental results show that our EvTexture achieves state-of-the-art performance on four datasets. For the Vid4 dataset with rich textures, our method can get up to 4.67dB gain compared with recent event-based methods. Code: https://github.com/DachunKai/EvTexture.
|
1211.6653
|
Yuyang Wang
|
Yuyang Wang, Roni Khardon
|
Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process
Approach
|
Preliminary version appeared in ECML2012
| null |
10.1007/978-3-642-33460-3_51
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-task learning models using Gaussian processes (GP) have been developed
and successfully applied in various applications. The main difficulty with this
approach is the computational cost of inference using the union of examples
from all tasks. Therefore sparse solutions, that avoid using the entire data
directly and instead use a set of informative "representatives" are desirable.
The paper investigates this problem for the grouped mixed-effect GP model where
each individual response is given by a fixed-effect, taken from one of a set of
unknown groups, plus a random individual effect function that captures
variations among individuals. Such models have been widely used in previous
work but no sparse solutions have been developed. The paper presents the first
sparse solution for such problems, showing how the sparse approximation can be
obtained by maximizing a variational lower bound on the marginal likelihood,
generalizing ideas from single-task Gaussian processes to handle the
mixed-effect model as well as grouping. Experiments using artificial and real
data validate the approach showing that it can recover the performance of
inference with the full sample, that it outperforms baseline methods, and that
it outperforms state of the art sparse solutions for other multi-task GP
formulations.
|
[
{
"created": "Wed, 28 Nov 2012 16:50:23 GMT",
"version": "v1"
}
] |
2012-11-29
|
[
[
"Wang",
"Yuyang",
""
],
[
"Khardon",
"Roni",
""
]
] |
Multi-task learning models using Gaussian processes (GP) have been developed and successfully applied in various applications. The main difficulty with this approach is the computational cost of inference using the union of examples from all tasks. Therefore sparse solutions, that avoid using the entire data directly and instead use a set of informative "representatives" are desirable. The paper investigates this problem for the grouped mixed-effect GP model where each individual response is given by a fixed-effect, taken from one of a set of unknown groups, plus a random individual effect function that captures variations among individuals. Such models have been widely used in previous work but no sparse solutions have been developed. The paper presents the first sparse solution for such problems, showing how the sparse approximation can be obtained by maximizing a variational lower bound on the marginal likelihood, generalizing ideas from single-task Gaussian processes to handle the mixed-effect model as well as grouping. Experiments using artificial and real data validate the approach showing that it can recover the performance of inference with the full sample, that it outperforms baseline methods, and that it outperforms state of the art sparse solutions for other multi-task GP formulations.
|
1502.07449
|
Lan Shi
|
Lan Shi, Christopher Soell, Andreas Baenisch, Robert Weigel, J\"urgen
Seiler, Thomas Ussmueller
|
Concept for a CMOS Image Sensor Suited for Analog Image Pre-Processing
|
Presented at DATE Friday Workshop on Heterogeneous Architectures and
Design Methods for Embedded Image Systems (HIS 2015) (arXiv:1502.07241)
| null | null |
DATEHIS/2015/04
|
cs.ET cs.AR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A concept for a novel CMOS image sensor suited for analog image
pre-processing is presented in this paper. As an example, an image restoration
algorithm for reducing image noise is applied as image pre-processing in the
analog domain. To supply low-latency data input for analog image preprocessing,
the proposed concept for a CMOS image sensor offers a new sensor signal
acquisition method in 2D. In comparison to image pre-processing in the digital
domain, the proposed analog image pre-processing promises an improved image
quality. Furthermore, the image noise at the stage of analog sensor signal
acquisition can be used to select the most effective restoration algorithm
applied to the analog circuit due to image processing prior to the A/D
converter.
|
[
{
"created": "Thu, 26 Feb 2015 06:18:04 GMT",
"version": "v1"
}
] |
2015-02-27
|
[
[
"Shi",
"Lan",
""
],
[
"Soell",
"Christopher",
""
],
[
"Baenisch",
"Andreas",
""
],
[
"Weigel",
"Robert",
""
],
[
"Seiler",
"Jürgen",
""
],
[
"Ussmueller",
"Thomas",
""
]
] |
A concept for a novel CMOS image sensor suited for analog image pre-processing is presented in this paper. As an example, an image restoration algorithm for reducing image noise is applied as image pre-processing in the analog domain. To supply low-latency data input for analog image preprocessing, the proposed concept for a CMOS image sensor offers a new sensor signal acquisition method in 2D. In comparison to image pre-processing in the digital domain, the proposed analog image pre-processing promises an improved image quality. Furthermore, the image noise at the stage of analog sensor signal acquisition can be used to select the most effective restoration algorithm applied to the analog circuit due to image processing prior to the A/D converter.
|
2401.02734
|
Jian Li
|
Jian Li, Yong Liu, Wei Wang, Haoran Wu, Weiping Wang
|
FedNS: A Fast Sketching Newton-Type Algorithm for Federated Learning
|
Accepted at AAAI 2024
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent Newton-type federated learning algorithms have demonstrated linear
convergence with respect to the communication rounds. However, communicating
Hessian matrices is often unfeasible due to their quadratic communication
complexity. In this paper, we introduce a novel approach to tackle this issue
while still achieving fast convergence rates. Our proposed method, named as
Federated Newton Sketch methods (FedNS), approximates the centralized Newton's
method by communicating the sketched square-root Hessian instead of the exact
Hessian. To enhance communication efficiency, we reduce the sketch size to
match the effective dimension of the Hessian matrix. We provide convergence
analysis based on statistical learning for the federated Newton sketch
approaches. Specifically, our approaches reach super-linear convergence rates
w.r.t. the communication rounds for the first time. We validate the
effectiveness of our algorithms through various experiments, which coincide
with our theoretical findings.
|
[
{
"created": "Fri, 5 Jan 2024 10:06:41 GMT",
"version": "v1"
}
] |
2024-01-08
|
[
[
"Li",
"Jian",
""
],
[
"Liu",
"Yong",
""
],
[
"Wang",
"Wei",
""
],
[
"Wu",
"Haoran",
""
],
[
"Wang",
"Weiping",
""
]
] |
Recent Newton-type federated learning algorithms have demonstrated linear convergence with respect to the communication rounds. However, communicating Hessian matrices is often unfeasible due to their quadratic communication complexity. In this paper, we introduce a novel approach to tackle this issue while still achieving fast convergence rates. Our proposed method, named as Federated Newton Sketch methods (FedNS), approximates the centralized Newton's method by communicating the sketched square-root Hessian instead of the exact Hessian. To enhance communication efficiency, we reduce the sketch size to match the effective dimension of the Hessian matrix. We provide convergence analysis based on statistical learning for the federated Newton sketch approaches. Specifically, our approaches reach super-linear convergence rates w.r.t. the communication rounds for the first time. We validate the effectiveness of our algorithms through various experiments, which coincide with our theoretical findings.
|
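The sketched-Hessian idea can be illustrated on a toy least-squares problem. This is a minimal, non-federated sketch of the core approximation only (a Gaussian sketch of the square-root Hessian); the problem sizes, sketch dimension, and iteration count are illustrative assumptions, not the paper's FedNS algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem f(x) = 0.5 * ||A x - b||^2, whose Hessian
# is H = A^T A, so A itself plays the role of the square-root Hessian.
n, d = 500, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def sketched_newton_step(x, sketch_size):
    """One Newton step using a sketched square-root Hessian: instead of
    forming (and, in a federated setting, communicating) the exact d x d
    Hessian, draw a Gaussian sketch S and use (S A)^T (S A), which is an
    unbiased approximation of A^T A."""
    grad = A.T @ (A @ x - b)
    S = rng.normal(scale=1.0 / np.sqrt(sketch_size), size=(sketch_size, n))
    SA = S @ A                    # sketched square-root Hessian (k x d)
    H_approx = SA.T @ SA          # approximate Hessian (d x d)
    return x - np.linalg.solve(H_approx, grad)

x = np.zeros(d)
for _ in range(30):
    x = sketched_newton_step(x, sketch_size=100)

print(np.linalg.norm(x - x_true) < 1e-3)
```

Because the sketch is unbiased, each step contracts the error roughly by the spectral error of the Hessian approximation, so a sketch size somewhat larger than the (effective) dimension already gives fast convergence on this toy problem.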
1108.4891
|
Christoph Wernhard
|
Christoph Wernhard
|
Computing with Logic as Operator Elimination: The ToyElim System
|
Appears in the Proceedings of the 25th Workshop on Logic Programming
(WLP 2011)
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A prototype system is described whose core functionality is, based on
propositional logic, the elimination of second-order operators, such as Boolean
quantifiers and operators for projection, forgetting and circumscription. This
approach makes it possible to express many representational and computational tasks in
knowledge representation - for example computation of abductive explanations
and models with respect to logic programming semantics - in a uniform
operational system, backed by a uniform classical semantic framework.
|
[
{
"created": "Wed, 24 Aug 2011 17:21:58 GMT",
"version": "v1"
}
] |
2011-08-25
|
[
[
"Wernhard",
"Christoph",
""
]
] |
A prototype system is described whose core functionality is, based on propositional logic, the elimination of second-order operators, such as Boolean quantifiers and operators for projection, forgetting and circumscription. This approach allows to express many representational and computational tasks in knowledge representation - for example computation of abductive explanations and models with respect to logic programming semantics - in a uniform operational system, backed by a uniform classical semantic framework.
|
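Forgetting, i.e. Boolean existential quantification, is one of the second-order operators mentioned above. It can be sketched in a few lines via Shannon expansion, assuming a naive representation of formulas as Python predicates over assignments (the actual system's representation is of course different):

```python
from itertools import product

# A propositional formula is modeled as a Python predicate over an
# assignment (a dict mapping atom names to booleans).
def forget(formula, atom):
    """Eliminate the Boolean quantifier 'exists atom' by Shannon
    expansion: forget(F, p) = F[p := True] or F[p := False]."""
    def result(assignment):
        return (formula({**assignment, atom: True})
                or formula({**assignment, atom: False}))
    return result

# Example: forgetting p in (a -> p) and (p -> b) should yield a -> b.
F = lambda v: (not v["a"] or v["p"]) and (not v["p"] or v["b"])
G = forget(F, "p")

for a, b in product([False, True], repeat=2):
    assert G({"a": a, "b": b}) == (not a or b)
print("forgetting p in (a -> p) & (p -> b) yields a -> b")
```

The truth-table check confirms the classic example: eliminating the "linking" atom p leaves exactly the consequence a -> b over the remaining vocabulary.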
2106.04765
|
Yair Schiff
|
Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen
|
Predicting Deep Neural Network Generalization with Perturbation Response
Curves
|
NeurIPS 2021
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of Deep Learning is rich with empirical evidence of human-like
performance on a variety of prediction tasks. However, despite these successes,
the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020
competition suggests that there is a need for more robust and efficient
measures of network generalization. In this work, we propose a new framework
for evaluating the generalization capabilities of trained networks. We use
perturbation response (PR) curves that capture the accuracy change of a given
network as a function of varying levels of training sample perturbation. From
these PR curves, we derive novel statistics that capture generalization
capability. Specifically, we introduce two new measures for accurately
predicting generalization gaps: the Gi-score and Pal-score, which are inspired
by the Gini coefficient and Palma ratio (measures of income inequality). Using
our framework applied to intra-
and inter-class sample mixup, we attain better predictive scores than the
current state-of-the-art measures on a majority of tasks in the PGDL
competition. In addition, we show that our framework and the proposed
statistics can be used to capture to what extent a trained network is invariant
to a given parametric input transformation, such as rotation or translation.
Therefore, these generalization gap prediction statistics also provide a useful
means for selecting optimal network architectures and hyperparameters that are
invariant to a certain perturbation.
|
[
{
"created": "Wed, 9 Jun 2021 01:37:36 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Oct 2021 01:19:08 GMT",
"version": "v2"
}
] |
2021-10-28
|
[
[
"Schiff",
"Yair",
""
],
[
"Quanz",
"Brian",
""
],
[
"Das",
"Payel",
""
],
[
"Chen",
"Pin-Yu",
""
]
] |
The field of Deep Learning is rich with empirical evidence of human-like performance on a variety of prediction tasks. However, despite these successes, the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition suggests that there is a need for more robust and efficient measures of network generalization. In this work, we propose a new framework for evaluating the generalization capabilities of trained networks. We use perturbation response (PR) curves that capture the accuracy change of a given network as a function of varying levels of training sample perturbation. From these PR curves, we derive novel statistics that capture generalization capability. Specifically, we introduce two new measures for accurately predicting generalization gaps: the Gi-score and Pal-score, which are inspired by the Gini coefficient and Palma ratio (measures of income inequality), that accurately predict generalization gaps. Using our framework applied to intra and inter-class sample mixup, we attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the PGDL competition. In addition, we show that our framework and the proposed statistics can be used to capture to what extent a trained network is invariant to a given parametric input transformation, such as rotation or translation. Therefore, these generalization gap prediction statistics also provide a useful means for selecting optimal network architectures and hyperparameters that are invariant to a certain perturbation.
|
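The Gini coefficient and Palma ratio that inspire the Gi- and Pal-scores can be computed directly. The exact Gi-/Pal-score definitions over PR curves are the paper's; the two hypothetical accuracy curves below and the direct application of `gini` to them are illustrative assumptions only.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values
    (0 = perfectly equal, approaching 1 = maximally unequal)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (2 * n * n * x.mean())

def palma(values):
    """Palma ratio: share held by the top 10% of values divided by
    the share held by the bottom 40%."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    top = x[int(np.ceil(0.9 * n)):].sum()
    bottom = x[:int(np.floor(0.4 * n))].sum()
    return top / bottom

# Hypothetical perturbation-response curves: accuracy retained at
# increasing perturbation levels. A curve that collapses quickly
# distributes its accuracy "unequally" across perturbation levels.
robust_curve = [0.95, 0.93, 0.90, 0.88, 0.85]
fragile_curve = [0.95, 0.60, 0.20, 0.05, 0.01]
print(gini(robust_curve) < gini(fragile_curve))
```

The intuition carried over from economics: a network whose accuracy survives perturbation evenly (low inequality across levels) tends to generalize better than one whose accuracy is concentrated at the unperturbed end.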
2405.01754
|
Atefeh Alirezazadeh
|
Atefeh Alirezazadeh, Vahid Disfani
|
A Peer-to-Peer Energy Management Solution for Maximum Social Welfare
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In smart energy communities, prosumers who both generate and consume energy
play a crucial role in shaping energy management strategies. These communities
use advanced platforms that enable prosumers to actively engage in the local
electricity markets by setting and adjusting their own energy prices. Through
peer-to-peer (P2P) energy trading systems, members can directly exchange energy
derived from sources such as solar photovoltaic panels, electric vehicle
battery storage, and demand response (DR) programs. This direct exchange not
only enhances the efficiency of the network but also fosters a dynamic energy
market within the community. This article addresses parking-sharing services
for EVs and the mechanisms of P2P energy scheduling, which facilitate the
transfer and communication of power among different energy communities (ECs).
It focuses on integrating solar power, responsive electrical loads, and
electric vehicles (EVs) to optimize both economic returns and social benefits
for all participants. The system is designed to ensure that all energy
transactions are transparent and beneficial to the proactive consumers
involved. Moreover, due to urban traffic conditions and the challenges of
finding suitable locations for EV charging and parking, houses in these
communities provide parking-sharing services for EVs. This integration of
energy management and urban scheduling illustrates a holistic approach to
addressing both energy and transportation challenges, ultimately leading to
more sustainable urban environments.
|
[
{
"created": "Thu, 2 May 2024 21:42:35 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Alirezazadeh",
"Atefeh",
""
],
[
"Disfani",
"Vahid",
""
]
] |
In smart energy communities, prosumers who both generate and consume energy play a crucial role in shaping energy management strategies. These communities use advanced platforms that enable prosumers to actively engage in the local electricity markets by setting and adjusting their own energy prices. Through peer to peer (P2P) energy trading systems, members can directly exchange energy derived from sources such as solar photovoltaic panels, electric vehicle battery storage, and demand response (DR) programs. This direct exchange not only enhances the efficiency of the network but also fosters a dynamic energy market within the community. In this article, parking-sharing services for EVs and the mechanisms of P2P energy scheduling, which facilitates the transfer and communication of power among different energy communities (ECs) are addressed. It focuses on integrating solar power, responsive electrical loads, and electric vehicles (EVs) to optimize both economic returns and social benefits for all participants. The system is designed to ensure that all energy transactions are transparent and beneficial to the proactive consumers involved. Moreover, due to urban traffic conditions and the challenges of finding suitable locations for EV charging and parking, houses in these communities provide parking-sharing services for EVs. This integration of energy management and urban scheduling illustrates a holistic approach to addressing both energy and transportation challenges, ultimately leading to more sustainable urban environments.
|
1702.01638
|
Xinyu Li
|
Xinyu Li, Yanyi Zhang, Jianyu Zhang, Shuhong Chen, Ivan Marsic,
Richard A. Farneth, Randall S. Burd
|
Concurrent Activity Recognition with Multimodal CNN-LSTM Structure
|
14 pages, 12 figures, under review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a system that recognizes concurrent activities from real-world
data captured by multiple sensors of different types. The recognition is
achieved in two steps. First, we extract spatial and temporal features from the
multimodal data. We feed each datatype into a convolutional neural network that
extracts spatial features, followed by a long short-term memory network that
extracts temporal information in the sensory data. The extracted features are
then fused for decision making in the second step. Second, we achieve
concurrent activity recognition with a single classifier that encodes a binary
output vector in which elements indicate whether the corresponding activity
types are currently in progress. We tested our system with three datasets from
different domains recorded using different sensors and achieved performance
comparable to existing systems designed specifically for those domains. Our
system is the first to address concurrent activity recognition with
multisensory data using a single model, which is scalable, simple to train and
easy to deploy.
|
[
{
"created": "Mon, 6 Feb 2017 15:01:45 GMT",
"version": "v1"
}
] |
2017-02-07
|
[
[
"Li",
"Xinyu",
""
],
[
"Zhang",
"Yanyi",
""
],
[
"Zhang",
"Jianyu",
""
],
[
"Chen",
"Shuhong",
""
],
[
"Marsic",
"Ivan",
""
],
[
"Farneth",
"Richard A.",
""
],
[
"Burd",
"Randall S.",
""
]
] |
We introduce a system that recognizes concurrent activities from real-world data captured by multiple sensors of different types. The recognition is achieved in two steps. First, we extract spatial and temporal features from the multimodal data. We feed each datatype into a convolutional neural network that extracts spatial features, followed by a long-short term memory network that extracts temporal information in the sensory data. The extracted features are then fused for decision making in the second step. Second, we achieve concurrent activity recognition with a single classifier that encodes a binary output vector in which elements indicate whether the corresponding activity types are currently in progress. We tested our system with three datasets from different domains recorded using different sensors and achieved performance comparable to existing systems designed specifically for those domains. Our system is the first to address the concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train and easy to deploy.
|
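The binary-output-vector decoding described above can be sketched as follows. The activity labels, logits, and threshold are hypothetical stand-ins for the fused CNN-LSTM features; the point is that independent per-element thresholding (unlike softmax) lets several activities be active at once.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_concurrent(logits, labels, threshold=0.5):
    """Decode a single classifier's output into the set of activities
    currently in progress: each element of the binary output vector is
    thresholded independently, so multiple activities can be reported
    simultaneously (unlike softmax, which selects exactly one)."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [name for name, p in zip(labels, probs) if p >= threshold]

labels = ["walking", "talking", "typing"]
# Hypothetical logits produced from the fused multimodal features.
print(decode_concurrent([2.1, 0.4, -1.7], labels))
```

Here the first two logits map to probabilities above 0.5, so "walking" and "talking" are reported as concurrently in progress while "typing" is not.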
2311.01591
|
Debolina Halder Lina
|
Debolina Halder Lina and Arlei Silva
|
Better Fair than Sorry: Adversarial Missing Data Imputation for Fair
GNNs
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the problem of learning fair Graph Neural Networks
(GNNs) under missing protected attributes. GNNs have achieved state-of-the-art
results in many relevant tasks where decisions might disproportionately impact
specific communities. However, existing work on fair GNNs assumes that either
protected attributes are fully-observed or that the missing data imputation is
fair. In practice, biases in the imputation will be propagated to the model
outcomes, leading them to overestimate the fairness of their predictions. We
address this challenge by proposing Better Fair than Sorry (BFtS), a fair
missing data imputation model for protected attributes used by fair GNNs. The
key design principle behind BFtS is that imputations should approximate the
worst-case scenario for the fair GNN -- i.e. when optimizing fairness is the
hardest. We implement this idea using a 3-player adversarial scheme where two
adversaries collaborate against the fair GNN. Experiments using synthetic and
real datasets show that BFtS often achieves a better fairness $\times$ accuracy
trade-off than existing alternatives.
|
[
{
"created": "Thu, 2 Nov 2023 20:57:44 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Feb 2024 17:48:33 GMT",
"version": "v2"
}
] |
2024-02-16
|
[
[
"Lina",
"Debolina Halder",
""
],
[
"Silva",
"Arlei",
""
]
] |
This paper addresses the problem of learning fair Graph Neural Networks (GNNs) under missing protected attributes. GNNs have achieved state-of-the-art results in many relevant tasks where decisions might disproportionately impact specific communities. However, existing work on fair GNNs assumes that either protected attributes are fully-observed or that the missing data imputation is fair. In practice, biases in the imputation will be propagated to the model outcomes, leading them to overestimate the fairness of their predictions. We address this challenge by proposing Better Fair than Sorry (BFtS), a fair missing data imputation model for protected attributes used by fair GNNs. The key design principle behind BFtS is that imputations should approximate the worst-case scenario for the fair GNN -- i.e. when optimizing fairness is the hardest. We implement this idea using a 3-player adversarial scheme where two adversaries collaborate against the fair GNN. Experiments using synthetic and real datasets show that BFtS often achieves a better fairness $\times$ accuracy trade-off than existing alternatives.
|
1712.07062
|
Biao He
|
Biao He, Shihao Yan, Xiangyun Zhou, and Hamid Jafarkhani
|
Covert Wireless Communication with a Poisson Field of Interferers
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study covert communication in wireless networks consisting
of a transmitter, Alice, an intended receiver, Bob, a warden, Willie, and a
Poisson field of interferers. Bob and Willie are subject to uncertain shot
noise due to the ambient signals from interferers in the network. With the aid
of stochastic geometry, we analyze the throughput of the covert communication
between Alice and Bob subject to given requirements on the covertness against
Willie and the reliability of decoding at Bob. We consider non-fading and
fading channels. We analytically obtain interesting findings on the impacts of
the density and the transmit power of the concurrent interferers on the covert
throughput. That is, the density and the transmit power of the interferers have
no impact on the covert throughput as long as the network stays in the
interference-limited regime, for both the non-fading and the fading cases. When
the interference is sufficiently small and comparable with the receiver noise,
the covert throughput increases as the density or the transmit power of the
concurrent interferers increases.
|
[
{
"created": "Tue, 19 Dec 2017 17:21:59 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jun 2018 16:42:03 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Jun 2018 00:04:39 GMT",
"version": "v3"
}
] |
2018-06-20
|
[
[
"He",
"Biao",
""
],
[
"Yan",
"Shihao",
""
],
[
"Zhou",
"Xiangyun",
""
],
[
"Jafarkhani",
"Hamid",
""
]
] |
In this paper, we study covert communication in wireless networks consisting of a transmitter, Alice, an intended receiver, Bob, a warden, Willie, and a Poisson field of interferers. Bob and Willie are subject to uncertain shot noise due to the ambient signals from interferers in the network. With the aid of stochastic geometry, we analyze the throughput of the covert communication between Alice and Bob subject to given requirements on the covertness against Willie and the reliability of decoding at Bob. We consider non-fading and fading channels. We analytically obtain interesting findings on the impacts of the density and the transmit power of the concurrent interferers on the covert throughput. That is, the density and the transmit power of the interferers have no impact on the covert throughput as long as the network stays in the interference-limited regime, for both the non-fading and the fading cases. When the interference is sufficiently small and comparable with the receiver noise, the covert throughput increases as the density or the transmit power of the concurrent interferers increases.
|
1711.10288
|
Jacopo Cavazza
|
Pietro Morerio and Jacopo Cavazza and Vittorio Murino
|
Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain
Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we face the problem of unsupervised domain adaptation with a
novel deep learning approach which leverages our finding that entropy
minimization is induced by the optimal alignment of second order statistics
between source and target domains. We formally demonstrate this hypothesis and,
aiming at achieving an optimal alignment in practical cases, we adopt a more
principled strategy which, differently from the current Euclidean approaches,
deploys alignment along geodesics. Our pipeline can be implemented by adding to
the standard classification loss (on the labeled source domain), a
source-to-target regularizer that is weighted in an unsupervised and
data-driven fashion. We provide extensive experiments to assess the superiority
of our framework on standard domain and modality adaptation benchmarks.
|
[
{
"created": "Tue, 28 Nov 2017 13:39:10 GMT",
"version": "v1"
}
] |
2017-11-29
|
[
[
"Morerio",
"Pietro",
""
],
[
"Cavazza",
"Jacopo",
""
],
[
"Murino",
"Vittorio",
""
]
] |
In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages on our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks.
|
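Alignment of second-order statistics along geodesics can be sketched as a log-Euclidean distance between source and target covariances, in contrast to the Euclidean Frobenius distance of standard correlation alignment. The regularization constant and toy data below are assumptions, and the paper's exact geodesic formulation may differ in detail.

```python
import numpy as np

def spd_logm(C):
    """Matrix logarithm of a symmetric positive-definite matrix,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def geodesic_coral_loss(Xs, Xt, eps=1e-6):
    """Log-Euclidean distance between source and target covariances,
    a geodesic alternative to the Euclidean ||Cs - Ct||_F^2 used by
    standard (Euclidean) correlation alignment."""
    def cov(X):
        Xc = X - X.mean(axis=0)
        # Small ridge keeps the covariance strictly positive definite.
        return Xc.T @ Xc / (len(X) - 1) + eps * np.eye(X.shape[1])
    d = spd_logm(cov(Xs)) - spd_logm(cov(Xt))
    return float(np.sum(d * d))

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
aligned = src.copy()                       # identical second-order stats
shifted = src @ np.diag([3.0, 1.0, 0.5])   # rescaled second-order stats
print(geodesic_coral_loss(src, aligned) < geodesic_coral_loss(src, shifted))
```

In a training pipeline this quantity would be added, suitably weighted, to the classification loss on the labeled source domain as the source-to-target regularizer.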
2006.13646
|
Weiyu Chen
|
Weiyu Chen, Haiyang Ding, Shilian Wang, Daniel Benevides da Costa,
Fengkui Gong and Pedro Henrique Juliano Nardelli
|
Backscatter Cooperation in NOMA Communications Systems
|
31 pages, 6 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a backscatter cooperation (BC) scheme is proposed for
non-orthogonal multiple access (NOMA) downlink transmission. The key idea is to
enable one user to split and then backscatter part of its received signals to
improve the reception at another user. To evaluate the performance of the
proposed BC-NOMA scheme, three benchmark schemes are introduced. They are the
non-cooperation (NC)-NOMA scheme, the conventional relaying (CR)-NOMA scheme,
and the incremental relaying (IR)-NOMA scheme. For all these schemes, the
analytical expressions of the minimum total power to avoid information outage
are derived, based on which their respective outage performance, expected
rates, and diversity-multiplexing trade-off (DMT) are investigated. Analytical
results show that the proposed BC-NOMA scheme strictly outperforms the NC-NOMA
scheme in terms of all the three metrics. Furthermore, theoretical analyses are
validated via Monte-Carlo simulations. It is shown that unlike the CR-NOMA
scheme and the IR-NOMA scheme, the proposed BC-NOMA scheme can enhance the
transmission reliability without impairing the transmission rate, which makes
backscattering an appealing solution to cooperative NOMA downlinks.
|
[
{
"created": "Wed, 24 Jun 2020 11:40:59 GMT",
"version": "v1"
}
] |
2020-06-25
|
[
[
"Chen",
"Weiyu",
""
],
[
"Ding",
"Haiyang",
""
],
[
"Wang",
"Shilian",
""
],
[
"da Costa",
"Daniel Benevides",
""
],
[
"Gong",
"Fengkui",
""
],
[
"Nardelli",
"Pedro Henrique Juliano",
""
]
] |
In this paper, a backscatter cooperation (BC) scheme is proposed for non-orthogonal multiple access (NOMA) downlink transmission. The key idea is to enable one user to split and then backscatter part of its received signals to improve the reception at another user. To evaluate the performance of the proposed BC-NOMA scheme, three benchmark schemes are introduced. They are the non-cooperation (NC)-NOMA scheme, the conventional relaying (CR)-NOMA scheme, and the incremental relaying (IR)-NOMA scheme. For all these schemes, the analytical expressions of the minimum total power to avoid information outage are derived, based on which their respective outage performance, expected rates, and diversity-multiplexing trade-off (DMT) are investigated. Analytical results show that the proposed BC-NOMA scheme strictly outperforms the NC-NOMA scheme in terms of all the three metrics. Furthermore, theoretical analyses are validated via Monte-Carlo simulations. It is shown that unlike the CR-NOMA scheme and the IR-NOMA scheme, the proposed BC-NOMA scheme can enhance the transmission reliability without impairing the transmission rate, which makes backscattering an appealing solution to cooperative NOMA downlinks.
|
1906.07953
|
Robin Haunschild
|
Jian Du, Peixin Li, Robin Haunschild, Yinan Sun, and Xiaoli Tang
|
Paper-Patent Citation Linkages as Early Signs for Predicting Delayed
Recognized Knowledge: Macro and Micro Evidence
|
21 pages, 8 figures, and 4 tables; previous version was presented at
the ISSI 2019 in Rome, Italy; current version has been accepted for
publication in Journal of Informetrics
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we investigate the extent to which patent citations to papers
can serve as early signs for predicting delayed recognized knowledge in science
using a comparative study with a control group, i.e., instant recognition
papers. We identify the two opposite groups of papers by the Bcp measure, a
parameter-free index for identifying papers which were recognized with delay.
We provide macro (Science/Nature papers dataset) and micro (a case chosen
from the dataset) evidence on paper-patent citation linkages as early signs for
predicting delayed recognized knowledge in science. It appears that papers with
delayed recognition show a stronger and longer technical impact than instant
recognition papers. We provide indications that, in more recent years, papers
with delayed recognition are awakened more often and earlier by a patent rather
than by a scientific paper (also called "prince"). We also found that patent
citations seem to play an important role in preventing instant recognition
papers from leveling off or becoming a so-called "flash in the pan", i.e.,
instant recognition. It also appears that the sleeping beauties may first encounter
negative citations and then patent citations and finally get widely recognized.
In contrast to the two focused fields (biology and chemistry) for instant
recognition papers, delayed recognition papers are rather evenly distributed in
biology, chemistry, psychology, geology, materials science, and physics. We
discovered several pairs of "science sleeping"-"technology [...]. In further
research, we propose to discover potentially ahead-of-time and transformative
research by using citation delay analysis, patent & NPL analysis, and citation
context analysis.
|
[
{
"created": "Wed, 19 Jun 2019 07:45:43 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jan 2020 13:19:43 GMT",
"version": "v2"
}
] |
2020-01-22
|
[
[
"Du",
"Jian",
""
],
[
"Li",
"Peixin",
""
],
[
"Haunschild",
"Robin",
""
],
[
"Sun",
"Yinan",
""
],
[
"Tang",
"Xiaoli",
""
]
] |
In this study, we investigate the extent to which patent citations to papers can serve as early signs for predicting delayed recognized knowledge in science using a comparative study with a control group, i.e., instant recognition papers. We identify the two opposite groups of papers by the Bcp measure, a parameter-free index for identifying papers which were recognized with delay. We provide a macro (Science/Nature papers dataset) and micro (a case chosen from the dataset) evidence on paper-patent citation linkages as early signs for predicting delayed recognized knowledge in science. It appears that papers with delayed recognition show a stronger and longer technical impact than instant recognition papers. We provide indication that in the more recent years papers with delayed recognition are awakened more often and earlier by a patent rather than by a scientific paper (also called "prince"). We also found that patent citations seem to play an important role to avoid instant recognition papers to level off or to become a so called "flash in the pan", i.e., instant recognition. It also appears that the sleeping beauties may firstly encounter negative citations and then patent citations and finally get widely recognized. In contrast to the two focused fields (biology and chemistry) for instant recognition papers, delayed recognition papers are rather evenly distributed in biology, chemistry, psychology, geology, materials science, and physics. We discovered several pairs of "science sleeping"-"technology [...]. We propose in further research to discover the potential ahead of time and transformative research by using citation delay analysis, patent & NPL analysis, and citation context analysis.
|
2004.02166
|
Suman Banerjee
|
Suman Banerjee
|
Designing and Connectivity Checking of Implicit Social Networks from the
User-Item Rating Data
| null | null | null | null |
cs.SI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
\emph{Implicit Social Network} is a connected social structure among a group
of persons, where two of them are linked if they have some common interest. One
real\mbox{-}life example of such networks is the implicit social network among
the customers of an online commercial house, where there exists an edge between
two customers if they like similar items. Such networks are often useful for
different commercial applications such as \textit{target advertisement},
\textit{viral marketing}, etc. In this article, we study two fundamental
problems in this direction. The first one is: given the user\mbox{-}item
rating data of an E\mbox{-}Commerce house, how can we design implicit social
networks among its users? The second one is: can we obtain the connectivity
information among the users during the design process itself? Formally, we call
the first problem as the \textsc{Implicit User Network Design} Problem and the
second one as \textsc{Implicit User Network Design with Connectivity Checking}
Problem. For the first problem, we propose three different algorithms, namely
\emph{`Exhaustive Search Approach'}, \emph{`Clique Addition Approach'}, and
\textit{`Matrix Multiplication\mbox{-}Based Approach'}. For the second problem,
we propose two different approaches. The first one is the sequential approach:
designing and then connectivity checking, and the other one is a concurrent
approach, which is basically an incremental algorithm that performs designing
and connectivity checking simultaneously. The proposed methodologies have been
evaluated on three publicly available rating network datasets:
\emph{Flixter}, \textit{Movielens}, and \textit{Epinions}.
|
[
{
"created": "Sun, 5 Apr 2020 11:44:51 GMT",
"version": "v1"
}
] |
2020-04-07
|
[
[
"Banerjee",
"Suman",
""
]
] |
\emph{Implicit Social Network} is a connected social structure among a group of persons, where two of them are linked if they have some common interest. One real\mbox{-}life example of such networks is the implicit social network among the customers of an online commercial house, where there exists an edge between two customers if they like similar items. Such networks are often useful for different commercial applications such as \textit{target advertisement}, \textit{viral marketing}, etc. In this article, we study two fundamental problems in this direction. The first one is that, given the user\mbox{-}item rating data of an E\mbox{-}Commerce house, how we can design implicit social networks among its users and the second one is at the time of designing itself can we obtain the connectivity information among the users. Formally, we call the first problem as the \textsc{Implicit User Network Design} Problem and the second one as \textsc{Implicit User Network Design with Connectivity Checking} Problem. For the first problem, we propose three different algorithms, namely \emph{`Exhaustive Search Approach'}, \emph{`Clique Addition Approach'}, and \textit{`Matrix Multiplication\mbox{-}Based Approach'}. For the second problem, we propose two different approaches. The first one is the sequential approach: designing and then connectivity checking, and the other one is a concurrent approach, which is basically an incremental algorithm that performs designing and connectivity checking simultaneously. Proposed methodologies have experimented with three publicly available rating network datasets such as \emph{Flixter}, \textit{Movielens}, and \textit{Epinions}.
|
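The matrix-multiplication-based design and the concurrent connectivity check can be sketched on a toy rating matrix. The binary "likes" matrix and the union-find bookkeeping are illustrative simplifications of the approaches named above.

```python
import numpy as np

# Binary user-item "likes" matrix: rows are users, columns are items.
R = np.array([
    [1, 1, 0, 0],   # user 0 likes items 0 and 1
    [0, 1, 1, 0],   # user 1 likes items 1 and 2
    [0, 0, 0, 1],   # user 2 likes item 3 only
])

# Matrix-multiplication approach: (R @ R.T)[u, v] counts the items that
# users u and v both like; a positive off-diagonal entry is an edge.
common = R @ R.T
adj = (common > 0) & ~np.eye(len(R), dtype=bool)

# Concurrent approach: check connectivity incrementally with union-find
# while the edges are being produced.
parent = list(range(len(R)))

def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]   # path halving
        u = parent[u]
    return u

for u, v in zip(*np.nonzero(np.triu(adj))):
    parent[find(u)] = find(v)

components = len({find(u) for u in range(len(R))})
print(components)   # users 0 and 1 connect via item 1; user 2 is isolated
```

Running this prints 2: one component for users 0 and 1, who share item 1, and one for the isolated user 2.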
2202.13716
|
Claudio Canella
|
Claudio Canella, Sebastian Dorn, Daniel Gruss, Michael Schwarz
|
SFIP: Coarse-Grained Syscall-Flow-Integrity Protection in Modern Systems
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Growing code bases of modern applications have led to a steady increase in
the number of vulnerabilities. Control-Flow Integrity (CFI) is one promising
mitigation that is more and more widely deployed and prevents numerous
exploits. CFI focuses purely on one security domain. That is, transitions
between user space and kernel space are not protected by CFI. Furthermore, if
user space CFI is bypassed, the system and kernel interfaces remain
unprotected, and an attacker can run arbitrary transitions.
In this paper, we introduce the concept of syscall-flow-integrity protection
(SFIP) that complements the concept of CFI with integrity for user-kernel
transitions. Our proof-of-concept implementation relies on static analysis
during compilation to automatically extract possible syscall transitions. An
application can opt-in to SFIP by providing the extracted information to the
kernel for runtime enforcement. The concept is built on three fully-automated
pillars: First, a syscall state machine, representing possible transitions
according to a syscall digraph model. Second, a syscall-origin mapping, which
maps syscalls to the locations at which they can occur. Third, an efficient
enforcement of syscall-flow integrity in a modified Linux kernel. In our
evaluation, we show that SFIP can be applied to large-scale applications with
minimal slowdowns. In a micro- and a macrobenchmark, it only introduces an
overhead of 13.1% and 1.8%, respectively. In terms of security, we discuss and
demonstrate its effectiveness in preventing control-flow-hijacking attacks in
real-world applications. Finally, to highlight the reduction in attack surface,
we perform an analysis of the state machines and syscall-origin mappings of
several real-world applications. On average, SFIP decreases the number of
possible transitions by 38.6% compared to seccomp and 90.9% when no protection
is applied.
|
[
{
"created": "Mon, 28 Feb 2022 12:17:32 GMT",
"version": "v1"
}
] |
2022-03-01
|
[
[
"Canella",
"Claudio",
""
],
[
"Dorn",
"Sebastian",
""
],
[
"Gruss",
"Daniel",
""
],
[
"Schwarz",
"Michael",
""
]
] |
Growing code bases of modern applications have led to a steady increase in the number of vulnerabilities. Control-Flow Integrity (CFI) is one promising mitigation that is more and more widely deployed and prevents numerous exploits. CFI focuses purely on one security domain. That is, transitions between user space and kernel space are not protected by CFI. Furthermore, if user space CFI is bypassed, the system and kernel interfaces remain unprotected, and an attacker can run arbitrary transitions. In this paper, we introduce the concept of syscall-flow-integrity protection (SFIP) that complements the concept of CFI with integrity for user-kernel transitions. Our proof-of-concept implementation relies on static analysis during compilation to automatically extract possible syscall transitions. An application can opt-in to SFIP by providing the extracted information to the kernel for runtime enforcement. The concept is built on three fully-automated pillars: First, a syscall state machine, representing possible transitions according to a syscall digraph model. Second, a syscall-origin mapping, which maps syscalls to the locations at which they can occur. Third, an efficient enforcement of syscall-flow integrity in a modified Linux kernel. In our evaluation, we show that SFIP can be applied to large scale applications with minimal slowdowns. In a micro- and a macrobenchmark, it only introduces an overhead of 13.1% and 1.8%, respectively. In terms of security, we discuss and demonstrate its effectiveness in preventing control-flow-hijacking attacks in real-world applications. Finally, to highlight the reduction in attack surface, we perform an analysis of the state machines and syscall-origin mappings of several real-world applications. On average, SFIP decreases the number of possible transitions by 38.6% compared to seccomp and 90.9% when no protection is applied.
|
2402.03111
|
R\'emi Pr\'ebet
|
R\'emi Pr\'ebet and Mohab Safey El Din and \'Eric Schost
|
Computing roadmaps in unbounded smooth real algebraic sets II: algorithm
and complexity
|
60 pages
| null | null | null |
cs.SC math.AG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A roadmap for an algebraic set $V$ defined by polynomials with coefficients
in some real field, say $\mathbb{R}$, is an algebraic curve contained in $V$
whose intersection with all connected components of $V\cap\mathbb{R}^{n}$ is
connected. These objects, introduced by Canny, can be used to answer
connectivity queries over $V\cap \mathbb{R}^{n}$ provided that they are
required to contain the finite set of query points $\mathcal{P}\subset V$; in
this case, we say that the roadmap is associated to $(V, \mathcal{P})$.
In this paper, we make effective a connectivity result we previously proved,
to design a Monte Carlo algorithm which, on input (i) a finite sequence of
polynomials defining $V$ (and satisfying some regularity assumptions) and (ii)
an algebraic representation of finitely many query points $\mathcal{P}$ in $V$,
computes a roadmap for $(V, \mathcal{P})$. This algorithm generalizes the
nearly optimal one introduced by the last two authors by dropping a boundedness
assumption on the real trace of $V$.
The output size and running times of our algorithm are both polynomial in
$(nD)^{n\log d}$, where $D$ is the maximal degree of the input equations and
$d$ is the dimension of $V$. As far as we know, the best previously known
algorithm dealing with such sets has an output size and running time polynomial
in $(nD)^{n\log^2 n}$.
|
[
{
"created": "Mon, 5 Feb 2024 15:44:16 GMT",
"version": "v1"
}
] |
2024-02-06
|
[
[
"Prébet",
"Rémi",
""
],
[
"Din",
"Mohab Safey El",
""
],
[
"Schost",
"Éric",
""
]
] |
A roadmap for an algebraic set $V$ defined by polynomials with coefficients in some real field, say $\mathbb{R}$, is an algebraic curve contained in $V$ whose intersection with all connected components of $V\cap\mathbb{R}^{n}$ is connected. These objects, introduced by Canny, can be used to answer connectivity queries over $V\cap \mathbb{R}^{n}$ provided that they are required to contain the finite set of query points $\mathcal{P}\subset V$; in this case, we say that the roadmap is associated to $(V, \mathcal{P})$. In this paper, we make effective a connectivity result we previously proved, to design a Monte Carlo algorithm which, on input (i) a finite sequence of polynomials defining $V$ (and satisfying some regularity assumptions) and (ii) an algebraic representation of finitely many query points $\mathcal{P}$ in $V$, computes a roadmap for $(V, \mathcal{P})$. This algorithm generalizes the nearly optimal one introduced by the last two authors by dropping a boundedness assumption on the real trace of $V$. The output size and running times of our algorithm are both polynomial in $(nD)^{n\log d}$, where $D$ is the maximal degree of the input equations and $d$ is the dimension of $V$. As far as we know, the best previously known algorithm dealing with such sets has an output size and running time polynomial in $(nD)^{n\log^2 n}$.
|
1810.07791
|
Dominika Woszczyk
|
Dominika Woszczyk, Gerasimos Spanakis
|
MaaSim: A Liveability Simulation for Improving the Quality of Life in
Cities
|
16 pages
| null | null | null |
cs.CY cs.HC cs.LG cs.NE stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Urbanism is no longer planned on paper thanks to powerful models and 3D
simulation platforms. However, current work is not open to the public and lacks
an optimisation agent that could help in decision making. This paper describes
the creation of an open-source simulation based on an existing Dutch
liveability score with a built-in AI module. Features are selected using
feature engineering and Random Forests. Then, a modified scoring function is
built based on the former liveability classes. The score is predicted using
Random Forest for regression and achieved a recall of 0.83 with 10-fold
cross-validation. Afterwards, Exploratory Factor Analysis is applied to select
the actions present in the model. The resulting indicators are divided into 5
groups, and 12 actions are generated. The performance of four optimisation
algorithms is compared, namely NSGA-II, PAES, SPEA2 and eps-MOEA, on three
established criteria of quality (cardinality, the spread of the solutions,
and spacing), as well as the resulting score and number of turns. Although all four
algorithms show different strengths, eps-MOEA is selected to be the most
suitable for this problem. Ultimately, the simulation incorporates the model
and the selected AI module in a GUI written in the Kivy framework for Python.
Tests performed on users show positive responses and encourage further
initiatives towards joining technology and public applications.
|
[
{
"created": "Sat, 13 Oct 2018 15:19:41 GMT",
"version": "v1"
}
] |
2018-10-19
|
[
[
"Woszczyk",
"Dominika",
""
],
[
"Spanakis",
"Gerasimos",
""
]
] |
Urbanism is no longer planned on paper thanks to powerful models and 3D simulation platforms. However, current work is not open to the public and lacks an optimisation agent that could help in decision making. This paper describes the creation of an open-source simulation based on an existing Dutch liveability score with a built-in AI module. Features are selected using feature engineering and Random Forests. Then, a modified scoring function is built based on the former liveability classes. The score is predicted using Random Forest for regression and achieved a recall of 0.83 with 10-fold cross-validation. Afterwards, Exploratory Factor Analysis is applied to select the actions present in the model. The resulting indicators are divided into 5 groups, and 12 actions are generated. The performance of four optimisation algorithms is compared, namely NSGA-II, PAES, SPEA2 and eps-MOEA, on three established criteria of quality (cardinality, the spread of the solutions, and spacing), as well as the resulting score and number of turns. Although all four algorithms show different strengths, eps-MOEA is selected to be the most suitable for this problem. Ultimately, the simulation incorporates the model and the selected AI module in a GUI written in the Kivy framework for Python. Tests performed on users show positive responses and encourage further initiatives towards joining technology and public applications.
|
2210.01633
|
Michael Cohen
|
Michael K. Cohen, Samuel Daulton, Michael A. Osborne
|
Log-Linear-Time Gaussian Processes Using Binary Tree Kernels
|
NeurIPS 2022; 9 pages + appendices
|
Adv.Neur.Info.Proc.Sys. 35 (2022) 8118-8129
| null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Gaussian processes (GPs) produce good probabilistic models of functions, but
most GP kernels require $O((n+m)n^2)$ time, where $n$ is the number of data
points and $m$ the number of predictive locations. We present a new kernel that
allows for Gaussian process regression in $O((n+m)\log(n+m))$ time. Our "binary
tree" kernel places all data points on the leaves of a binary tree, with the
kernel depending only on the depth of the deepest common ancestor. We can store
the resulting kernel matrix in $O(n)$ space in $O(n \log n)$ time, as a sum of
sparse rank-one matrices, and approximately invert the kernel matrix in $O(n)$
time. Sparse GP methods also offer linear run time, but they predict less well
than higher dimensional kernels. On a classic suite of regression tasks, we
compare our kernel against Mat\'ern, sparse, and sparse variational kernels.
The binary tree GP assigns the highest likelihood to the test data on a
plurality of datasets, usually achieves lower mean squared error than the
sparse methods, and often ties or beats the Mat\'ern GP. On large datasets, the
binary tree GP is fastest, and much faster than a Mat\'ern GP.
|
[
{
"created": "Tue, 4 Oct 2022 14:30:06 GMT",
"version": "v1"
}
] |
2023-04-03
|
[
[
"Cohen",
"Michael K.",
""
],
[
"Daulton",
"Samuel",
""
],
[
"Osborne",
"Michael A.",
""
]
] |
Gaussian processes (GPs) produce good probabilistic models of functions, but most GP kernels require $O((n+m)n^2)$ time, where $n$ is the number of data points and $m$ the number of predictive locations. We present a new kernel that allows for Gaussian process regression in $O((n+m)\log(n+m))$ time. Our "binary tree" kernel places all data points on the leaves of a binary tree, with the kernel depending only on the depth of the deepest common ancestor. We can store the resulting kernel matrix in $O(n)$ space in $O(n \log n)$ time, as a sum of sparse rank-one matrices, and approximately invert the kernel matrix in $O(n)$ time. Sparse GP methods also offer linear run time, but they predict less well than higher dimensional kernels. On a classic suite of regression tasks, we compare our kernel against Mat\'ern, sparse, and sparse variational kernels. The binary tree GP assigns the highest likelihood to the test data on a plurality of datasets, usually achieves lower mean squared error than the sparse methods, and often ties or beats the Mat\'ern GP. On large datasets, the binary tree GP is fastest, and much faster than a Mat\'ern GP.
|
2006.01938
|
Tenzin Singhay Bhotia
|
Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Kumar, Tanmoy
Chakraborty
|
Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased
Proximities in Word Embeddings
|
TACL 2020
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Word embeddings are the standard model for semantic and syntactic
representations of words. Unfortunately, these models have been shown to
exhibit undesirable word associations resulting from gender, racial, and
religious biases. Existing post-processing methods for debiasing word
embeddings are unable to mitigate gender bias hidden in the spatial arrangement
of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing
methodology which not only eliminates the bias present in a word vector but
also alters the spatial distribution of its neighbouring vectors, achieving a
bias-free setting while maintaining minimal semantic offset. We also propose a
new bias evaluation metric - Gender-based Illicit Proximity Estimate (GIPE),
which measures the extent of undue proximity in word vectors resulting from the
presence of gender-based predilections. Experiments based on a suite of
evaluation metrics show that RAN-Debias significantly outperforms the
state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also
reduces direct bias, adding minimal semantic disturbance, and achieves the best
performance in a downstream application task (coreference resolution).
|
[
{
"created": "Tue, 2 Jun 2020 20:50:43 GMT",
"version": "v1"
}
] |
2020-06-04
|
[
[
"Kumar",
"Vaibhav",
""
],
[
"Bhotia",
"Tenzin Singhay",
""
],
[
"Kumar",
"Vaibhav",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] |
Word embeddings are the standard model for semantic and syntactic representations of words. Unfortunately, these models have been shown to exhibit undesirable word associations resulting from gender, racial, and religious biases. Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing methodology which not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighbouring vectors, achieving a bias-free setting while maintaining minimal semantic offset. We also propose a new bias evaluation metric - Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of evaluation metrics show that RAN-Debias significantly outperforms the state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also reduces direct bias, adding minimal semantic disturbance, and achieves the best performance in a downstream application task (coreference resolution).
|
2403.19930
|
Shulin Liu
|
Shulin Liu, Chengcheng Xu, Hao Liu, Tinghao Yu, Tao Yang
|
Are LLMs Effective Backbones for Fine-tuning? An Experimental
Investigation of Supervised LLMs on Chinese Short Text Matching
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The recent success of Large Language Models (LLMs) has garnered significant
attention in both academia and industry. Prior research on LLMs has primarily
focused on enhancing or leveraging their generalization capabilities in zero-
and few-shot settings. However, there has been limited investigation into
effectively fine-tuning LLMs for a specific natural language understanding task
in supervised settings. In this study, we conduct an experimental analysis by
fine-tuning LLMs for the task of Chinese short text matching. We explore
various factors that influence performance when fine-tuning LLMs, including
task modeling methods, prompt formats, and output formats.
|
[
{
"created": "Fri, 29 Mar 2024 02:36:54 GMT",
"version": "v1"
}
] |
2024-04-01
|
[
[
"Liu",
"Shulin",
""
],
[
"Xu",
"Chengcheng",
""
],
[
"Liu",
"Hao",
""
],
[
"Yu",
"Tinghao",
""
],
[
"Yang",
"Tao",
""
]
] |
The recent success of Large Language Models (LLMs) has garnered significant attention in both academia and industry. Prior research on LLMs has primarily focused on enhancing or leveraging their generalization capabilities in zero- and few-shot settings. However, there has been limited investigation into effectively fine-tuning LLMs for a specific natural language understanding task in supervised settings. In this study, we conduct an experimental analysis by fine-tuning LLMs for the task of Chinese short text matching. We explore various factors that influence performance when fine-tuning LLMs, including task modeling methods, prompt formats, and output formats.
|
2102.02938
|
Stephen MacDonell
|
Stephen G. MacDonell
|
The Impact of Sampling and Rule Set Size on Generated Fuzzy Inference
System Predictive Accuracy: Analysis of a Software Engineering Data Set
|
Conference paper, 7 pages, 5 tables, 7 figures
|
Proceedings of the 12th Engineering Applications of Neural
Networks (EANN)/7th Artificial Intelligence Applications and Innovations
(AIAI) Joint Conferences (EANN-AIAI2011)
|
10.1007/978-3-642-23960-1_43
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software project management makes extensive use of predictive modeling to
estimate product size, defect proneness and development effort. Although
uncertainty is acknowledged in these tasks, fuzzy inference systems, designed
to cope well with uncertainty, have received only limited attention in the
software engineering domain. In this study we empirically investigate the
impact of two choices on the predictive accuracy of generated fuzzy inference
systems when applied to a software engineering data set: sampling of
observations for training and testing; and the size of the rule set generated
using fuzzy c-means clustering. Over ten samples we found no consistent pattern
of predictive performance for a given rule set size. We did find, however,
that a rule set compiled from multiple samples generally resulted in more
accurate predictions than single sample rule sets. More generally, the results
provide further evidence of the sensitivity of empirical analysis outcomes to
specific model-building decisions.
|
[
{
"created": "Fri, 5 Feb 2021 00:42:52 GMT",
"version": "v1"
}
] |
2021-02-08
|
[
[
"MacDonell",
"Stephen G.",
""
]
] |
Software project management makes extensive use of predictive modeling to estimate product size, defect proneness and development effort. Although uncertainty is acknowledged in these tasks, fuzzy inference systems, designed to cope well with uncertainty, have received only limited attention in the software engineering domain. In this study we empirically investigate the impact of two choices on the predictive accuracy of generated fuzzy inference systems when applied to a software engineering data set: sampling of observations for training and testing; and the size of the rule set generated using fuzzy c-means clustering. Over ten samples we found no consistent pattern of predictive performance for a given rule set size. We did find, however, that a rule set compiled from multiple samples generally resulted in more accurate predictions than single sample rule sets. More generally, the results provide further evidence of the sensitivity of empirical analysis outcomes to specific model-building decisions.
|
2303.07230
|
Fatemeh Hadadi
|
Fatemeh Hadadi, Joshua H. Dawes, Donghwan Shin, Domenico Bianculli,
Lionel Briand
|
Systematic Evaluation of Deep Learning Models for Log-based Failure
Prediction
|
Accepted by EMSE'24
|
Empir Software Eng 29, 105 (2024)
|
10.1007/s10664-024-10501-4
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing complexity and scope of software systems, their
dependability is crucial. The analysis of log data recorded during system
execution can enable engineers to automatically predict failures at run time.
Several Machine Learning (ML) techniques, including traditional ML and Deep
Learning (DL), have been proposed to automate such tasks. However, current
empirical studies are limited in terms of covering all main DL types --
Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and
transformer -- as well as examining them on a wide range of diverse datasets.
In this paper, we aim to address these issues by systematically investigating
the combination of log data embedding strategies and DL types for failure
prediction. To that end, we propose a modular architecture to accommodate
various configurations of embedding strategies and DL-based encoders. To
further investigate how dataset characteristics such as dataset size and
failure percentage affect model accuracy, we synthesised 360 datasets, with
varying characteristics, for three distinct system behavioral models, based on
a systematic and automated generation approach. Using the F1 score metric, our
results show that the best overall performing configuration is a CNN-based
encoder with Logkey2vec. Additionally, we provide specific dataset conditions,
namely a dataset size >350 or a failure percentage >7.5%, under which this
configuration demonstrates high accuracy for failure prediction.
|
[
{
"created": "Mon, 13 Mar 2023 16:04:14 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Oct 2023 20:07:45 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Apr 2024 16:25:17 GMT",
"version": "v3"
},
{
"created": "Mon, 24 Jun 2024 04:36:05 GMT",
"version": "v4"
}
] |
2024-06-25
|
[
[
"Hadadi",
"Fatemeh",
""
],
[
"Dawes",
"Joshua H.",
""
],
[
"Shin",
"Donghwan",
""
],
[
"Bianculli",
"Domenico",
""
],
[
"Briand",
"Lionel",
""
]
] |
With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to automatically predict failures at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types -- Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and transformer -- as well as examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating the combination of log data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture to accommodate various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets, with varying characteristics, for three distinct system behavioral models, based on a systematic and automated generation approach. Using the F1 score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. Additionally, we provide specific dataset conditions, namely a dataset size >350 or a failure percentage >7.5%, under which this configuration demonstrates high accuracy for failure prediction.
|
1907.01201
|
Qinmeng Zou
|
Qinmeng Zou and Frederic Magoules
|
Convergence Detection of Asynchronous Iterations based on Modified
Recursive Doubling
| null |
17th International Symposium on Distributed Computing and
Applications for Business Engineering and Science (DCABES), 2018, IEEE
|
10.1109/dcabes.2018.00081
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the distributed convergence detection problem in
asynchronous iterations. A modified recursive doubling algorithm is
investigated in order to adapt to the non-power-of-two case. Some convergence
detection algorithms are illustrated based on the reduction operation. Finally,
a concluding discussion about the implementation and the applicability is
presented.
|
[
{
"created": "Tue, 2 Jul 2019 07:04:31 GMT",
"version": "v1"
}
] |
2019-07-12
|
[
[
"Zou",
"Qinmeng",
""
],
[
"Magoules",
"Frederic",
""
]
] |
This paper addresses the distributed convergence detection problem in asynchronous iterations. A modified recursive doubling algorithm is investigated in order to adapt to the non-power-of-two case. Some convergence detection algorithms are illustrated based on the reduction operation. Finally, a concluding discussion about the implementation and the applicability is presented.
|
2307.09020
|
Sunder Ali Khowaja
|
Sunder Ali Khowaja, Lewis Nkenyereye, Ghulam Mujtaba, Ik Hyun Lee,
Giancarlo Fortino, Kapal Dev
|
FISTNet: FusIon of STyle-path generative Networks for Facial Style
Transfer
|
21 pages, 6 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the surge in emerging technologies such as Metaverse, spatial computing,
and generative AI, the application of facial style transfer has gained a lot of
interest from researchers and startup enthusiasts alike. StyleGAN
methods have paved the way for transfer-learning strategies that could reduce
the dependency on the huge volume of data that is available for the training
process. However, StyleGAN methods have a tendency to overfit, which
results in the introduction of artifacts in the facial images. Studies, such as
DualStyleGAN, proposed the use of multipath networks but they require the
networks to be trained for a specific style rather than generating a fusion of
facial styles at once. In this paper, we propose a FusIon of STyles (FIST)
network for facial images that leverages pre-trained multipath style transfer
networks to eliminate the problem associated with lack of huge data volume in
the training phase along with the fusion of multiple styles at the output. We
leverage pre-trained StyleGAN networks with an external style pass that uses a
residual modulation block instead of a transform coding block. The method also
preserves facial structure, identity, and details via the gated mapping unit
introduced in this study. The aforementioned components enable us to train the
network with a very limited amount of data while generating high-quality stylized
images. Our training process adopts a curriculum learning strategy to perform
efficient, flexible style and model fusion in the generative space. We perform
extensive experiments to show the superiority of FISTNet in comparison to
existing state-of-the-art methods.
|
[
{
"created": "Tue, 18 Jul 2023 07:20:31 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2023 13:51:08 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Apr 2024 15:46:19 GMT",
"version": "v3"
}
] |
2024-04-03
|
[
[
"Khowaja",
"Sunder Ali",
""
],
[
"Nkenyereye",
"Lewis",
""
],
[
"Mujtaba",
"Ghulam",
""
],
[
"Lee",
"Ik Hyun",
""
],
[
"Fortino",
"Giancarlo",
""
],
[
"Dev",
"Kapal",
""
]
] |
With the surge in emerging technologies such as Metaverse, spatial computing, and generative AI, the application of facial style transfer has gained a lot of interest from researchers and startup enthusiasts alike. StyleGAN methods have paved the way for transfer-learning strategies that could reduce the dependency on the huge volume of data that is available for the training process. However, StyleGAN methods have a tendency to overfit, which results in the introduction of artifacts in the facial images. Studies, such as DualStyleGAN, proposed the use of multipath networks but they require the networks to be trained for a specific style rather than generating a fusion of facial styles at once. In this paper, we propose a FusIon of STyles (FIST) network for facial images that leverages pre-trained multipath style transfer networks to eliminate the problem associated with lack of huge data volume in the training phase along with the fusion of multiple styles at the output. We leverage pre-trained StyleGAN networks with an external style pass that uses a residual modulation block instead of a transform coding block. The method also preserves facial structure, identity, and details via the gated mapping unit introduced in this study. The aforementioned components enable us to train the network with a very limited amount of data while generating high-quality stylized images. Our training process adopts a curriculum learning strategy to perform efficient, flexible style and model fusion in the generative space. We perform extensive experiments to show the superiority of FISTNet in comparison to existing state-of-the-art methods.
|
1511.03774
|
Lijie Chen
|
Lijie Chen, Jian Li
|
On the Optimal Sample Complexity for Best Arm Identification
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the best arm identification (BEST-1-ARM) problem, which is defined
as follows. We are given $n$ stochastic bandit arms. The $i$th arm has a reward
distribution $D_i$ with an unknown mean $\mu_{i}$. Upon each play of the $i$th
arm, we can get a reward, sampled i.i.d. from $D_i$. We would like to identify
the arm with the largest mean with probability at least $1-\delta$, using as
few samples as possible. We provide a nontrivial algorithm for BEST-1-ARM,
which improves upon several prior upper bounds on the same problem. We also
study an important special case where there are only two arms, which we call
the sign problem. We provide a new lower bound for sign, simplifying and
significantly extending a classical result by Farrell in 1964, with a
completely new proof. Using the new lower bound for sign, we obtain the first
lower bound for BEST-1-ARM that goes beyond the classic Mannor-Tsitsiklis lower
bound, by an interesting reduction from sign to BEST-1-ARM. We propose an
interesting conjecture concerning the optimal sample complexity of BEST-1-ARM
from the perspective of instance-wise optimality.
|
[
{
"created": "Thu, 12 Nov 2015 04:49:46 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Nov 2015 05:47:39 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Aug 2016 18:05:29 GMT",
"version": "v3"
}
] |
2016-08-24
|
[
[
"Chen",
"Lijie",
""
],
[
"Li",
"Jian",
""
]
] |
We study the best arm identification (BEST-1-ARM) problem, which is defined as follows. We are given $n$ stochastic bandit arms. The $i$th arm has a reward distribution $D_i$ with an unknown mean $\mu_{i}$. Upon each play of the $i$th arm, we can get a reward, sampled i.i.d. from $D_i$. We would like to identify the arm with the largest mean with probability at least $1-\delta$, using as few samples as possible. We provide a nontrivial algorithm for BEST-1-ARM, which improves upon several prior upper bounds on the same problem. We also study an important special case where there are only two arms, which we call the sign problem. We provide a new lower bound for sign, simplifying and significantly extending a classical result by Farrell in 1964, with a completely new proof. Using the new lower bound for sign, we obtain the first lower bound for BEST-1-ARM that goes beyond the classic Mannor-Tsitsiklis lower bound, by an interesting reduction from sign to BEST-1-ARM. We propose an interesting conjecture concerning the optimal sample complexity of BEST-1-ARM from the perspective of instance-wise optimality.
|
1110.1734
|
Debaditya Ghosh
|
Debaditya Ghosh, Pritam Majumder, Ayan Kumar Das
|
A New Energy Efficient Approach Towards WASN Routing with Modified QCS
Protocol
|
18 pages, 14 figures
|
International Journal of Ad hoc, Sensor & Ubiquitous Computing
(IJASUC) Vol.2, No.3, September 2011
|
10.5121/ijasuc.2011.230
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's world, a Wireless Ad-hoc Sensor Network (WASN), consisting of
small sensor nodes with limited resources, has great potential to solve
problems in various domains, including disaster management. In this paper,
the "QCS protocol" introduced in our previous paper [1] is modified and
named the "Modified QCS protocol". This is the backbone of our Intelligent
Energy Efficient Ad-hoc Sensor Network. Two other protocols, "Irregular
Information Transfer" and "Final Broadcast-Petrol Flow", are also modified
to enhance the performance of the new version of the QCS protocol, to run
the system properly, and to make the network more energy efficient. The
challenges in WASNs are limited node power, ad-hoc network organization,
and reliability. Most existing approaches address these problems
separately rather than as a whole. This paper shows how the network can
achieve unlimited lifetime and all-time readiness, with overall stability,
sending information to the base station with minimum power dissipation,
with the help of multimode "same type" sensor nodes and type
categorization of generated information. Moreover, an effort is made to
shed light on implementation issues, and the overall performance of the
network is analyzed via MATLAB simulation.
|
[
{
"created": "Sat, 8 Oct 2011 13:30:09 GMT",
"version": "v1"
}
] |
2011-10-11
|
[
[
"Ghosh",
"Debaditya",
""
],
[
"Majumder",
"Pritam",
""
],
[
"Das",
"Ayan Kumar",
""
]
] |
In today's world, the Wireless Ad-hoc Sensor Network (WASN), consisting of small sensor nodes with limited resources, has great potential to solve problems in various domains, including disaster management. In this paper, the "QCS-protocol" introduced in our previous paper [1] is modified and named the "Modified QCS-protocol". This protocol is the backbone of our Intelligent Energy Efficient Ad-hoc Sensor Network. Two other protocols, the "Irregular Information Transfer" and "Final Broadcast-Petrol Flow" protocols, are also modified to enhance the performance of the new version of the QCS protocol, to run the system properly, and to make the network more energy efficient. The challenges in WASN are limited node power, ad-hoc organization of the network, and reliability. Most of the existing approaches address these problems separately, but not in their totality. This paper shows how the network can have unlimited life and all-time readiness, with overall stability, to send information to the base station with minimum power dissipation, with the help of multimode "same type" sensor nodes and type categorization of the generated information. Moreover, an effort is made to shed some light on the implementation issues, and the overall performance of the network is analyzed by MATLAB simulation.
|
2403.18882
|
Igor Ivkic
|
Igor Ivki\'c, Tobias Buhmann, Burkhard List, Clemens Gnauer
|
Towards a Cost-Benefit Analysis of Additive Manufacturing as a Service
|
In Proceedings of the 14th International Conference on Cloud
Computing and Services Science (CLOSER 2024). Angers, France
| null | null | null |
cs.OH
|
http://creativecommons.org/licenses/by/4.0/
|
The landscape of traditional industrial manufacturing is undergoing a pivotal
shift from resource-intensive production and long supply chains to more
sustainable and regionally focused economies. In this evolving scenario, the
move towards local, on-demand manufacturing is emerging as a remedy to the
environmentally damaging practice of mass-producing products in distant
countries and then transporting them over long distances to customers. This
paradigm shift significantly empowers customers, giving them greater control
over the manufacturing process by enabling on-demand production and favouring
local production sites over traditional mass production and extensive shipping
practices. In this position paper we propose a cloud-native Manufacturing as a
Service (MaaS) platform that integrates advances in three-dimensional (3D)
printing technology into a responsive and eco-conscious manufacturing
ecosystem. In this context, we propose a high-level architectural design for a
cloud-based MaaS platform that connects web shops of local stores with small
and medium-sized enterprises (SMEs) operating 3D printers. Furthermore, we
outline an experimental design, including a cost-benefit analysis, to
empirically evaluate the operational effectiveness and economic feasibility in
a cloud-based additive manufacturing ecosystem. The proposed cloud-based MaaS
platform enables on-demand additive manufacturing and opens up a
profit-sharing opportunity between different stakeholders.
|
[
{
"created": "Wed, 27 Mar 2024 13:52:53 GMT",
"version": "v1"
}
] |
2024-03-29
|
[
[
"Ivkić",
"Igor",
""
],
[
"Buhmann",
"Tobias",
""
],
[
"List",
"Burkhard",
""
],
[
"Gnauer",
"Clemens",
""
]
] |
The landscape of traditional industrial manufacturing is undergoing a pivotal shift from resource-intensive production and long supply chains to more sustainable and regionally focused economies. In this evolving scenario, the move towards local, on-demand manufacturing is emerging as a remedy to the environmentally damaging practice of mass-producing products in distant countries and then transporting them over long distances to customers. This paradigm shift significantly empowers customers, giving them greater control over the manufacturing process by enabling on-demand production and favouring local production sites over traditional mass production and extensive shipping practices. In this position paper we propose a cloud-native Manufacturing as a Service (MaaS) platform that integrates advances in three-dimensional (3D) printing technology into a responsive and eco-conscious manufacturing ecosystem. In this context, we propose a high-level architectural design for a cloud-based MaaS platform that connects web shops of local stores with small and medium-sized enterprises (SMEs) operating 3D printers. Furthermore, we outline an experimental design, including a cost-benefit analysis, to empirically evaluate the operational effectiveness and economic feasibility in a cloud-based additive manufacturing ecosystem. The proposed cloud-based MaaS platform enables on-demand additive manufacturing and opens up a profit-sharing opportunity between different stakeholders.
|
1711.07710
|
Arindam Khan
|
Waldo G\'alvez and Fabrizio Grandoni and Sandy Heydrich and Salvatore
Ingala and Arindam Khan and Andreas Wiese
|
Approximating Geometric Knapsack via L-packings
|
64pages, full version of FOCS 2017 paper
| null | null | null |
cs.DS cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the two-dimensional geometric knapsack problem (2DK) in which we are
given a set of n axis-aligned rectangular items, each one with an associated
profit, and an axis-aligned square knapsack. The goal is to find a
(non-overlapping) packing of a maximum profit subset of items inside the
knapsack (without rotating items). The best-known polynomial-time approximation
factor for this problem (even just in the cardinality case) is (2 + \epsilon)
[Jansen and Zhang, SODA 2004].
In this paper, we break the 2 approximation barrier, achieving a
polynomial-time (17/9 + \epsilon) < 1.89 approximation, which improves to
(558/325 + \epsilon) < 1.72 in the cardinality case. Essentially all prior work
on 2DK approximation packs items inside a constant number of rectangular
containers, where items inside each container are packed using a simple greedy
strategy. We deviate for the first time from this setting: we show that there
exists a large profit solution where items are packed inside a constant number
of containers plus one L-shaped region at the boundary of the knapsack which
contains items that are high and narrow and items that are wide and thin. As a
second major and the main algorithmic contribution of this paper, we present a
PTAS for this case. We believe that this will turn out to be useful in future
work in geometric packing problems.
We also consider the variant of the problem with rotations (2DKR), where
items can be rotated by 90 degrees. Also, in this case, the best-known
polynomial-time approximation factor (even for the cardinality case) is (2 +
\epsilon) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery
developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2
+ \epsilon)-approximation for 2DKR, which improves to (4/3 + \epsilon) in the
cardinality case.
|
[
{
"created": "Tue, 21 Nov 2017 10:46:35 GMT",
"version": "v1"
}
] |
2017-11-22
|
[
[
"Gálvez",
"Waldo",
""
],
[
"Grandoni",
"Fabrizio",
""
],
[
"Heydrich",
"Sandy",
""
],
[
"Ingala",
"Salvatore",
""
],
[
"Khan",
"Arindam",
""
],
[
"Wiese",
"Andreas",
""
]
] |
We study the two-dimensional geometric knapsack problem (2DK) in which we are given a set of n axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack (without rotating items). The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is (2 + \epsilon) [Jansen and Zhang, SODA 2004]. In this paper, we break the 2 approximation barrier, achieving a polynomial-time (17/9 + \epsilon) < 1.89 approximation, which improves to (558/325 + \epsilon) < 1.72 in the cardinality case. Essentially all prior work on 2DK approximation packs items inside a constant number of rectangular containers, where items inside each container are packed using a simple greedy strategy. We deviate for the first time from this setting: we show that there exists a large profit solution where items are packed inside a constant number of containers plus one L-shaped region at the boundary of the knapsack which contains items that are high and narrow and items that are wide and thin. As a second major and the main algorithmic contribution of this paper, we present a PTAS for this case. We believe that this will turn out to be useful in future work in geometric packing problems. We also consider the variant of the problem with rotations (2DKR), where items can be rotated by 90 degrees. Also, in this case, the best-known polynomial-time approximation factor (even for the cardinality case) is (2 + \epsilon) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2 + \epsilon)-approximation for 2DKR, which improves to (4/3 + \epsilon) in the cardinality case.
|
1208.2766
|
EPTCS
|
Gabriele Fici (Universit\'e Nice Sophia Antipolis, France), Francesca
Fiorenzi (Universit\'e Paris-Sud 11, France)
|
Topological properties of cellular automata on trees
|
In Proceedings AUTOMATA&JAC 2012, arXiv:1208.2498
|
EPTCS 90, 2012, pp. 255-266
|
10.4204/EPTCS.90.20
| null |
cs.FL cs.CC cs.DM nlin.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove that there do not exist positively expansive cellular automata
defined on the full k-ary tree shift (for k>=2). Moreover, we investigate some
topological properties of these automata and their relationships, namely
permutivity, surjectivity, preinjectivity, right-closingness and openness.
|
[
{
"created": "Tue, 14 Aug 2012 01:55:58 GMT",
"version": "v1"
}
] |
2012-08-15
|
[
[
"Fici",
"Gabriele",
"",
"Université Nice Sophia Antipolis, France"
],
[
"Fiorenzi",
"Francesca",
"",
"Université Paris-Sud 11, France"
]
] |
We prove that there do not exist positively expansive cellular automata defined on the full k-ary tree shift (for k>=2). Moreover, we investigate some topological properties of these automata and their relationships, namely permutivity, surjectivity, preinjectivity, right-closingness and openness.
|
2009.06343
|
Selahattin Serdar Helli
|
Selahattin Serdar Helli, \c{C}a\u{g}kan Dem\.irc\.i, Onur \c{C}oban
and Anda\c{c} Hamamci
|
Short-Term Forecasting COVID-19 Cases In Turkey Using Long Short-Term
Memory Network
|
4 pages,4 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
COVID-19 has been one of the most severe diseases, causing a harsh pandemic
all over the world since December 2019. The aim of this study is to evaluate
the value of Long Short-Term Memory (LSTM) Networks in forecasting the total
number of COVID-19 cases in Turkey. The COVID-19 data for 30 days, between
March 24 and April 23, 2020, are used to estimate the next fifteen days. The
mean absolute error of the LSTM Network for the 15-day estimation is
1.69$\pm$1.35%. For the same data, the error of the Box-Jenkins method is
3.24$\pm$1.56%, that of the Prophet method is 6.88$\pm$4.96%, and that of the
Holt-Winters Additive method with Damped Trend is 0.47$\pm$0.28%.
Additionally, when the number of deaths is provided together with the number
of total cases as input to the LSTM Network, the mean error reduces to
0.99$\pm$0.51%. Consequently, adding the number of deaths to the input results
in a lower forecasting error compared to using only the number of total cases
as the input. However, the Holt-Winters Additive method with Damped Trend
gives superior results to LSTM Networks in forecasting the total number of
COVID-19 cases.
|
[
{
"created": "Mon, 14 Sep 2020 12:01:40 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Sep 2020 12:10:32 GMT",
"version": "v2"
}
] |
2020-09-17
|
[
[
"Helli",
"Selahattin Serdar",
""
],
[
"Demirci",
"Çağkan",
""
],
[
"Çoban",
"Onur",
""
],
[
"Hamamci",
"Andaç",
""
]
] |
COVID-19 has been one of the most severe diseases, causing a harsh pandemic all over the world since December 2019. The aim of this study is to evaluate the value of Long Short-Term Memory (LSTM) Networks in forecasting the total number of COVID-19 cases in Turkey. The COVID-19 data for 30 days, between March 24 and April 23, 2020, are used to estimate the next fifteen days. The mean absolute error of the LSTM Network for the 15-day estimation is 1.69$\pm$1.35%. For the same data, the error of the Box-Jenkins method is 3.24$\pm$1.56%, that of the Prophet method is 6.88$\pm$4.96%, and that of the Holt-Winters Additive method with Damped Trend is 0.47$\pm$0.28%. Additionally, when the number of deaths is provided together with the number of total cases as input to the LSTM Network, the mean error reduces to 0.99$\pm$0.51%. Consequently, adding the number of deaths to the input results in a lower forecasting error compared to using only the number of total cases as the input. However, the Holt-Winters Additive method with Damped Trend gives superior results to LSTM Networks in forecasting the total number of COVID-19 cases.
|
2312.03430
|
Zhuoyan Liu
|
Zhuoyan Liu, Bo Wang, Lizhi Wang, Chenyu Mao, Ye Li
|
ShareCMP: Polarization-Aware RGB-P Semantic Segmentation
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal semantic segmentation is developing rapidly, but the modality of
RGB-Polarization remains underexplored. To delve into this problem, we
construct a UPLight RGB-P segmentation benchmark with 12 typical underwater
semantic classes. In this work, we design the ShareCMP, an RGB-P semantic
segmentation framework with a shared dual-branch architecture, which reduces
the number of parameters by about 26-33% compared to previous dual-branch
models. It encompasses a Polarization Generate Attention (PGA) module designed
to generate polarization modal images with richer polarization properties for
the encoder. In addition, we introduce the Class Polarization-Aware Loss
(CPALoss) to improve the learning and understanding of the encoder for
polarization modal information and to optimize the PGA module. With extensive
experiments on a total of three RGB-P benchmarks, our ShareCMP achieves
state-of-the-art performance in mIoU with fewer parameters on the UPLight
(92.45(+0.32)%), ZJU (92.7(+0.1)%), and MCubeS (50.99(+1.51)%) datasets
compared to the previous best methods. The code is available at
https://github.com/LEFTeyex/ShareCMP.
|
[
{
"created": "Wed, 6 Dec 2023 11:25:40 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Dec 2023 03:02:22 GMT",
"version": "v2"
}
] |
2023-12-12
|
[
[
"Liu",
"Zhuoyan",
""
],
[
"Wang",
"Bo",
""
],
[
"Wang",
"Lizhi",
""
],
[
"Mao",
"Chenyu",
""
],
[
"Li",
"Ye",
""
]
] |
Multimodal semantic segmentation is developing rapidly, but the modality of RGB-Polarization remains underexplored. To delve into this problem, we construct a UPLight RGB-P segmentation benchmark with 12 typical underwater semantic classes. In this work, we design the ShareCMP, an RGB-P semantic segmentation framework with a shared dual-branch architecture, which reduces the number of parameters by about 26-33% compared to previous dual-branch models. It encompasses a Polarization Generate Attention (PGA) module designed to generate polarization modal images with richer polarization properties for the encoder. In addition, we introduce the Class Polarization-Aware Loss (CPALoss) to improve the learning and understanding of the encoder for polarization modal information and to optimize the PGA module. With extensive experiments on a total of three RGB-P benchmarks, our ShareCMP achieves state-of-the-art performance in mIoU with fewer parameters on the UPLight (92.45(+0.32)%), ZJU (92.7(+0.1)%), and MCubeS (50.99(+1.51)%) datasets compared to the previous best methods. The code is available at https://github.com/LEFTeyex/ShareCMP.
|
1112.1335
|
Guodong Shi
|
Guodong Shi, Yiguang Hong and K. H. Johansson
|
Connectivity and Set Tracking of Multi-agent Systems Guided by Multiple
Moving Leaders
| null | null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate distributed multi-agent tracking of a convex
set specified by multiple moving leaders with unmeasurable velocities. Various
jointly-connected interaction topologies of the follower agents with
uncertainties are considered in the study of set tracking. Based on the
connectivity of the time-varying multi-agent system, necessary and sufficient
conditions are obtained for set input-to-state stability and set integral
input-to-state stability for a nonlinear neighbor-based coordination rule with
switching directed topologies. Conditions for asymptotic set tracking are also
proposed with respect to the polytope spanned by the leaders.
|
[
{
"created": "Tue, 6 Dec 2011 16:30:35 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Shi",
"Guodong",
""
],
[
"Hong",
"Yiguang",
""
],
[
"Johansson",
"K. H.",
""
]
] |
In this paper, we investigate distributed multi-agent tracking of a convex set specified by multiple moving leaders with unmeasurable velocities. Various jointly-connected interaction topologies of the follower agents with uncertainties are considered in the study of set tracking. Based on the connectivity of the time-varying multi-agent system, necessary and sufficient conditions are obtained for set input-to-state stability and set integral input-to-state stability for a nonlinear neighbor-based coordination rule with switching directed topologies. Conditions for asymptotic set tracking are also proposed with respect to the polytope spanned by the leaders.
|
2103.17182
|
Zeke Xie
|
Zeke Xie, Li Yuan, Zhanxing Zhu, and Masashi Sugiyama
|
Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to
Improve Generalization
|
ICML 2021; 20 pages; 13 figures; We fixed some typos in the updated
version
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well-known that stochastic gradient noise (SGN) acts as implicit
regularization for deep learning and is essentially important for both
optimization and generalization of deep networks. Some works attempted to
artificially simulate SGN by injecting random noise to improve deep learning.
However, it turned out that the injected simple random noise cannot work as
well as SGN, which is anisotropic and parameter-dependent. For simulating SGN
at low computational costs and without changing the learning rate or batch
size, we propose the Positive-Negative Momentum (PNM) approach that is a
powerful alternative to conventional Momentum in classic optimizers. The
introduced PNM method maintains two approximately independent momentum terms.
Then, we can control the magnitude of SGN explicitly by adjusting the momentum
difference. We theoretically prove the convergence guarantee and the
generalization advantage of PNM over Stochastic Gradient Descent (SGD). By
incorporating PNM into the two conventional optimizers, SGD with Momentum and
Adam, our extensive experiments empirically verified the significant advantage
of the PNM-based variants over the corresponding conventional Momentum-based
optimizers.
|
[
{
"created": "Wed, 31 Mar 2021 16:08:06 GMT",
"version": "v1"
},
{
"created": "Mon, 10 May 2021 12:21:32 GMT",
"version": "v2"
},
{
"created": "Sun, 6 Jun 2021 15:19:52 GMT",
"version": "v3"
},
{
"created": "Tue, 12 Oct 2021 05:39:54 GMT",
"version": "v4"
},
{
"created": "Tue, 30 Aug 2022 13:14:57 GMT",
"version": "v5"
}
] |
2022-08-31
|
[
[
"Xie",
"Zeke",
""
],
[
"Yuan",
"Li",
""
],
[
"Zhu",
"Zhanxing",
""
],
[
"Sugiyama",
"Masashi",
""
]
] |
It is well-known that stochastic gradient noise (SGN) acts as implicit regularization for deep learning and is essentially important for both optimization and generalization of deep networks. Some works attempted to artificially simulate SGN by injecting random noise to improve deep learning. However, it turned out that the injected simple random noise cannot work as well as SGN, which is anisotropic and parameter-dependent. For simulating SGN at low computational costs and without changing the learning rate or batch size, we propose the Positive-Negative Momentum (PNM) approach that is a powerful alternative to conventional Momentum in classic optimizers. The introduced PNM method maintains two approximately independent momentum terms. Then, we can control the magnitude of SGN explicitly by adjusting the momentum difference. We theoretically prove the convergence guarantee and the generalization advantage of PNM over Stochastic Gradient Descent (SGD). By incorporating PNM into the two conventional optimizers, SGD with Momentum and Adam, our extensive experiments empirically verified the significant advantage of the PNM-based variants over the corresponding conventional Momentum-based optimizers.
|
2311.17216
|
Hang Li
|
Hang Li, Chengzhi Shen, Philip Torr, Volker Tresp, Jindong Gu
|
Self-Discovering Interpretable Diffusion Latent Directions for
Responsible Text-to-Image Generation
|
Accepted to CVPR 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion-based models have gained significant popularity for text-to-image
generation due to their exceptional image-generation capabilities. A risk with
these models is the potential generation of inappropriate content, such as
biased or harmful images. However, the underlying reasons for generating such
undesired content from the perspective of the diffusion model's internal
representation remain unclear. Previous work interprets vectors in an
interpretable latent space of diffusion models as semantic concepts. However,
existing approaches cannot discover directions for arbitrary concepts, such as
those related to inappropriate concepts. In this work, we propose a novel
self-supervised approach to find interpretable latent directions for a given
concept. With the discovered vectors, we further propose a simple approach to
mitigate inappropriate generation. Extensive experiments have been conducted to
verify the effectiveness of our mitigation approach, namely, for fair
generation, safe generation, and responsible text-enhancing generation. Project
page: \url{https://interpretdiffusion.github.io}.
|
[
{
"created": "Tue, 28 Nov 2023 20:40:45 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2024 14:58:59 GMT",
"version": "v2"
}
] |
2024-03-29
|
[
[
"Li",
"Hang",
""
],
[
"Shen",
"Chengzhi",
""
],
[
"Torr",
"Philip",
""
],
[
"Tresp",
"Volker",
""
],
[
"Gu",
"Jindong",
""
]
] |
Diffusion-based models have gained significant popularity for text-to-image generation due to their exceptional image-generation capabilities. A risk with these models is the potential generation of inappropriate content, such as biased or harmful images. However, the underlying reasons for generating such undesired content from the perspective of the diffusion model's internal representation remain unclear. Previous work interprets vectors in an interpretable latent space of diffusion models as semantic concepts. However, existing approaches cannot discover directions for arbitrary concepts, such as those related to inappropriate concepts. In this work, we propose a novel self-supervised approach to find interpretable latent directions for a given concept. With the discovered vectors, we further propose a simple approach to mitigate inappropriate generation. Extensive experiments have been conducted to verify the effectiveness of our mitigation approach, namely, for fair generation, safe generation, and responsible text-enhancing generation. Project page: \url{https://interpretdiffusion.github.io}.
|
2211.14308
|
Guillaume Le Moing
|
Guillaume Le Moing and Jean Ponce and Cordelia Schmid
|
WALDO: Future Video Synthesis using Object Layer Decomposition and
Parametric Flow Prediction
|
Accepted to ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents WALDO (WArping Layer-Decomposed Objects), a novel
approach to the prediction of future video frames from past ones. Individual
images are decomposed into multiple layers combining object masks and a small
set of control points. The layer structure is shared across all frames in each
video to build dense inter-frame connections. Complex scene motions are modeled
by combining parametric geometric transformations associated with individual
layers, and video synthesis is broken down into discovering the layers
associated with past frames, predicting the corresponding transformations for
upcoming ones and warping the associated object regions accordingly, and
filling in the remaining image parts. Extensive experiments on multiple
benchmarks including urban videos (Cityscapes and KITTI) and videos featuring
nonrigid motions (UCF-Sports and H3.6M), show that our method consistently
outperforms the state of the art by a significant margin in every case. Code,
pretrained models, and video samples synthesized by our approach can be found
in the project webpage https://16lemoing.github.io/waldo.
|
[
{
"created": "Fri, 25 Nov 2022 18:59:46 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Mar 2023 15:22:30 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Aug 2023 07:58:49 GMT",
"version": "v3"
}
] |
2023-08-30
|
[
[
"Moing",
"Guillaume Le",
""
],
[
"Ponce",
"Jean",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
This paper presents WALDO (WArping Layer-Decomposed Objects), a novel approach to the prediction of future video frames from past ones. Individual images are decomposed into multiple layers combining object masks and a small set of control points. The layer structure is shared across all frames in each video to build dense inter-frame connections. Complex scene motions are modeled by combining parametric geometric transformations associated with individual layers, and video synthesis is broken down into discovering the layers associated with past frames, predicting the corresponding transformations for upcoming ones and warping the associated object regions accordingly, and filling in the remaining image parts. Extensive experiments on multiple benchmarks including urban videos (Cityscapes and KITTI) and videos featuring nonrigid motions (UCF-Sports and H3.6M), show that our method consistently outperforms the state of the art by a significant margin in every case. Code, pretrained models, and video samples synthesized by our approach can be found in the project webpage https://16lemoing.github.io/waldo.
|
2207.08569
|
Kosmas Dimitropoulos
|
Dimitrios Konstantinidis, Ilias Papastratis, Kosmas Dimitropoulos,
Petros Daras
|
Multi-manifold Attention for Vision Transformers
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformers are very popular nowadays due to their state-of-the-art
performance in several computer vision tasks, such as image classification and
action recognition. Although their performance has been greatly enhanced
through highly descriptive patch embeddings and hierarchical structures, there
is still limited research on utilizing additional data representations so as to
refine the self-attention map of a Transformer. To address this problem, a
novel attention mechanism, called multi-manifold multi-head attention, is
proposed in
this work to substitute the vanilla self-attention of a Transformer. The
proposed mechanism models the input space in three distinct manifolds, namely
Euclidean, Symmetric Positive Definite and Grassmann, thus leveraging different
statistical and geometrical properties of the input for the computation of a
highly descriptive attention map. In this way, the proposed attention mechanism
can guide a Vision Transformer to become more attentive towards important
appearance, color and texture features of an image, leading to improved
classification and segmentation results, as shown by the experimental results
on well-known datasets.
|
[
{
"created": "Mon, 18 Jul 2022 12:53:53 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Nov 2022 13:45:41 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Sep 2023 09:05:15 GMT",
"version": "v3"
}
] |
2023-09-06
|
[
[
"Konstantinidis",
"Dimitrios",
""
],
[
"Papastratis",
"Ilias",
""
],
[
"Dimitropoulos",
"Kosmas",
""
],
[
"Daras",
"Petros",
""
]
] |
Vision Transformers are very popular nowadays due to their state-of-the-art performance in several computer vision tasks, such as image classification and action recognition. Although their performance has been greatly enhanced through highly descriptive patch embeddings and hierarchical structures, there is still limited research on utilizing additional data representations so as to refine the self-attention map of a Transformer. To address this problem, a novel attention mechanism, called multi-manifold multi-head attention, is proposed in this work to substitute the vanilla self-attention of a Transformer. The proposed mechanism models the input space in three distinct manifolds, namely Euclidean, Symmetric Positive Definite and Grassmann, thus leveraging different statistical and geometrical properties of the input for the computation of a highly descriptive attention map. In this way, the proposed attention mechanism can guide a Vision Transformer to become more attentive towards important appearance, color and texture features of an image, leading to improved classification and segmentation results, as shown by the experimental results on well-known datasets.
|
2102.04925
|
Chuhan Wu
|
Chuhan Wu, Fangzhao Wu, Yang Cao, Yongfeng Huang, Xing Xie
|
FedGNN: Federated Graph Neural Network for Privacy-Preserving
Recommendation
| null | null |
10.1038/s41467-022-30714-9
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Graph neural network (GNN) is widely used for recommendation to model
high-order interactions between users and items. Existing GNN-based
recommendation methods rely on centralized storage of user-item graphs and
centralized model learning. However, user data is privacy-sensitive, and the
centralized storage of user-item graphs may raise privacy concerns and risks.
In this paper, we propose a federated framework for privacy-preserving
GNN-based recommendation, which can collectively train GNN models from
decentralized user data and meanwhile exploit high-order user-item interaction
information with privacy well protected. In our method, we locally train a
GNN model in each user client based on the user-item graph inferred from the
local user-item interaction data. Each client uploads the local gradients of
the GNN to a
server for aggregation, which are further sent to user clients for updating
local GNN models. Since local gradients may contain private information, we
apply local differential privacy techniques to the local gradients to protect
user privacy. In addition, in order to protect the items that users have
interactions with, we propose to incorporate randomly sampled items as pseudo
interacted items for anonymity. To incorporate high-order user-item
interactions, we propose a user-item graph expansion method that can find
neighboring users with co-interacted items and exchange their embeddings for
expanding the local user-item graphs in a privacy-preserving way. Extensive
experiments on six benchmark datasets validate that our approach can achieve
competitive results with existing centralized GNN-based recommendation methods
and meanwhile effectively protect user privacy.
|
[
{
"created": "Tue, 9 Feb 2021 16:30:53 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 08:27:46 GMT",
"version": "v2"
}
] |
2022-10-12
|
[
[
"Wu",
"Chuhan",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Cao",
"Yang",
""
],
[
"Huang",
"Yongfeng",
""
],
[
"Xie",
"Xing",
""
]
] |
Graph neural network (GNN) is widely used for recommendation to model high-order interactions between users and items. Existing GNN-based recommendation methods rely on centralized storage of user-item graphs and centralized model learning. However, user data is privacy-sensitive, and the centralized storage of user-item graphs may arouse privacy concerns and risk. In this paper, we propose a federated framework for privacy-preserving GNN-based recommendation, which can collectively train GNN models from decentralized user data and meanwhile exploit high-order user-item interaction information with privacy well protected. In our method, we locally train a GNN model on each user client based on the user-item graph inferred from the local user-item interaction data. Each client uploads the local gradients of GNN to a server for aggregation, which are further sent to user clients for updating local GNN models. Since local gradients may contain private information, we apply local differential privacy techniques to the local gradients to protect user privacy. In addition, in order to protect the items that users have interactions with, we propose to incorporate randomly sampled items as pseudo interacted items for anonymity. To incorporate high-order user-item interactions, we propose a user-item graph expansion method that can find neighboring users with co-interacted items and exchange their embeddings for expanding the local user-item graphs in a privacy-preserving way. Extensive experiments on six benchmark datasets validate that our approach can achieve competitive results with existing centralized GNN-based recommendation methods and meanwhile effectively protect user privacy.
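Two of the privacy mechanisms this abstract names, local differential privacy on uploaded gradients and pseudo interacted items, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the clipping norm, the Laplace mechanism parameters, and the function names are assumptions.

```python
import math
import random

def ldp_perturb(grad, clip_norm=1.0, epsilon=2.0, rng=None):
    """Clip a local gradient to a fixed L2 norm, then add Laplace noise so
    the server cannot recover the exact per-user update."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    clipped = [g * min(1.0, clip_norm / (norm + 1e-12)) for g in grad]
    # Laplace scale chosen for an assumed sensitivity of 2 * clip_norm
    b = 2.0 * clip_norm / epsilon
    def laplace():
        u = rng.random() - 0.5
        return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return [g + laplace() for g in clipped]

def pad_with_pseudo_items(real_items, all_items, n_pseudo, rng=None):
    """Mix randomly sampled pseudo-interacted items with the real ones so
    the server cannot tell which items the user actually touched."""
    rng = rng or random.Random(0)
    pool = [i for i in all_items if i not in real_items]
    return sorted(set(real_items) | set(rng.sample(pool, n_pseudo)))
```

In the paper's setting these steps run on each client before anything is uploaded; the server only ever sees noised gradients and the padded item set.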
|
2307.07699
|
Joohyung Lee
|
Adam Ishay, Zhun Yang, Joohyung Lee
|
Leveraging Large Language Models to Generate Answer Set Programs
|
17 pages, KR 2023
| null | null | null |
cs.AI cs.CL cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated
exceptional performance in various natural language processing tasks and have
shown the ability to solve certain reasoning problems. However, their reasoning
capabilities are limited and relatively shallow, despite the application of
various prompting techniques. In contrast, formal logic is adept at handling
complex reasoning, but translating natural language descriptions into formal
logic is a challenging task that non-experts struggle with. This paper proposes
a neuro-symbolic method that combines the strengths of large language models
and answer set programming. Specifically, we employ an LLM to transform natural
language descriptions of logic puzzles into answer set programs. We carefully
design prompts for an LLM to convert natural language descriptions into answer
set programs in a step-by-step manner. Surprisingly, with just a few in-context
learning examples, LLMs can generate reasonably complex answer set programs.
The majority of errors made are relatively simple and can be easily corrected
by humans, thus enabling LLMs to effectively assist in the creation of answer
set programs.
|
[
{
"created": "Sat, 15 Jul 2023 03:40:55 GMT",
"version": "v1"
}
] |
2023-07-18
|
[
[
"Ishay",
"Adam",
""
],
[
"Yang",
"Zhun",
""
],
[
"Lee",
"Joohyung",
""
]
] |
Large language models (LLMs), such as GPT-3 and GPT-4, have demonstrated exceptional performance in various natural language processing tasks and have shown the ability to solve certain reasoning problems. However, their reasoning capabilities are limited and relatively shallow, despite the application of various prompting techniques. In contrast, formal logic is adept at handling complex reasoning, but translating natural language descriptions into formal logic is a challenging task that non-experts struggle with. This paper proposes a neuro-symbolic method that combines the strengths of large language models and answer set programming. Specifically, we employ an LLM to transform natural language descriptions of logic puzzles into answer set programs. We carefully design prompts for an LLM to convert natural language descriptions into answer set programs in a step-by-step manner. Surprisingly, with just a few in-context learning examples, LLMs can generate reasonably complex answer set programs. The majority of errors made are relatively simple and can be easily corrected by humans, thus enabling LLMs to effectively assist in the creation of answer set programs.
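The in-context learning setup described here boils down to assembling a few-shot prompt of (puzzle, program) pairs. The sketch below is only illustrative; the paper's actual prompts are more elaborate, and the instruction wording and example program are assumptions.

```python
def build_asp_prompt(examples, puzzle):
    """Assemble a few-shot prompt that asks an LLM to translate a natural
    language puzzle into an answer set program, step by step."""
    parts = ["Translate each puzzle into an answer set program, step by step.\n"]
    for nl, asp in examples:
        # each demonstration pairs a puzzle with its target program
        parts.append(f"Puzzle: {nl}\nProgram:\n{asp}\n")
    # the model completes the program for the new puzzle
    parts.append(f"Puzzle: {puzzle}\nProgram:\n")
    return "\n".join(parts)
```

The returned string would be sent to the LLM, and the completion would then be handed to an ASP solver such as clingo for actual reasoning.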
|
1904.03084
|
Lukas Schmelzeisen
|
Ipek Baris and Lukas Schmelzeisen and Steffen Staab
|
CLEARumor at SemEval-2019 Task 7: ConvoLving ELMo Against Rumors
|
5 pages, 2 figures, 3 tables. Accepted for publication at
SemEval@NAACL-HLT 2019
|
SemEval@NAACL-HLT (2019) 1105-1109
|
10.18653/v1/S19-2193
| null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes our submission to SemEval-2019 Task 7: RumourEval:
Determining Rumor Veracity and Support for Rumors. We participated in both
subtasks. The goal of subtask A is to classify the type of interaction between
a rumorous social media post and a reply post as support, query, deny, or
comment. The goal of subtask B is to predict the veracity of a given rumor. For
subtask A, we implement a CNN-based neural architecture using ELMo embeddings
of post text combined with auxiliary features and achieve an F1-score of 44.6%.
For subtask B, we employ an MLP neural network leveraging our estimates for
subtask A and achieve an F1-score of 30.1% (second place in the competition). We
provide results and analysis of our system performance and present ablation
experiments.
|
[
{
"created": "Fri, 5 Apr 2019 14:25:25 GMT",
"version": "v1"
}
] |
2020-11-30
|
[
[
"Baris",
"Ipek",
""
],
[
"Schmelzeisen",
"Lukas",
""
],
[
"Staab",
"Steffen",
""
]
] |
This paper describes our submission to SemEval-2019 Task 7: RumourEval: Determining Rumor Veracity and Support for Rumors. We participated in both subtasks. The goal of subtask A is to classify the type of interaction between a rumorous social media post and a reply post as support, query, deny, or comment. The goal of subtask B is to predict the veracity of a given rumor. For subtask A, we implement a CNN-based neural architecture using ELMo embeddings of post text combined with auxiliary features and achieve an F1-score of 44.6%. For subtask B, we employ an MLP neural network leveraging our estimates for subtask A and achieve an F1-score of 30.1% (second place in the competition). We provide results and analysis of our system performance and present ablation experiments.
|
2210.06436
|
Yuesong Shen
|
Yuesong Shen, Daniel Cremers
|
Deep Combinatorial Aggregation
|
NeurIPS 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks are known to produce poor uncertainty estimations, and a
variety of approaches have been proposed to remedy this issue. This includes
deep ensemble, a simple and effective method that achieves state-of-the-art
results for uncertainty-aware learning tasks. In this work, we explore a
combinatorial generalization of deep ensemble called deep combinatorial
aggregation (DCA). DCA creates multiple instances of network components and
aggregates their combinations to produce diversified model proposals and
predictions. DCA components can be defined at different levels of granularity,
and we discovered that coarse-grain DCAs can outperform deep ensemble for
uncertainty-aware learning both in terms of predictive performance and
uncertainty estimation. For fine-grain DCAs, we discover that an average
parameterization approach named deep combinatorial weight averaging (DCWA) can
improve the baseline training. It is on par with stochastic weight averaging
(SWA) but does not require any custom training schedule or adaptation of
BatchNorm layers. Furthermore, we propose a consistency enforcing loss that
helps the training of DCWA and modelwise DCA. We experiment on in-domain,
distributional shift, and out-of-distribution image classification tasks, and
empirically confirm the effectiveness of DCWA and DCA approaches.
|
[
{
"created": "Wed, 12 Oct 2022 17:35:03 GMT",
"version": "v1"
}
] |
2022-10-13
|
[
[
"Shen",
"Yuesong",
""
],
[
"Cremers",
"Daniel",
""
]
] |
Neural networks are known to produce poor uncertainty estimations, and a variety of approaches have been proposed to remedy this issue. This includes deep ensemble, a simple and effective method that achieves state-of-the-art results for uncertainty-aware learning tasks. In this work, we explore a combinatorial generalization of deep ensemble called deep combinatorial aggregation (DCA). DCA creates multiple instances of network components and aggregates their combinations to produce diversified model proposals and predictions. DCA components can be defined at different levels of granularity, and we discovered that coarse-grain DCAs can outperform deep ensemble for uncertainty-aware learning both in terms of predictive performance and uncertainty estimation. For fine-grain DCAs, we discover that an average parameterization approach named deep combinatorial weight averaging (DCWA) can improve the baseline training. It is on par with stochastic weight averaging (SWA) but does not require any custom training schedule or adaptation of BatchNorm layers. Furthermore, we propose a consistency enforcing loss that helps the training of DCWA and modelwise DCA. We experiment on in-domain, distributional shift, and out-of-distribution image classification tasks, and empirically confirm the effectiveness of DCWA and DCA approaches.
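The core DCA idea, several trained instances per component whose combinations are all evaluated and averaged, can be sketched with plain functions standing in for network layers. This is a structural illustration under that simplification, not the authors' training procedure.

```python
from itertools import product

def dca_predict(layer_instances, x):
    """Deep combinatorial aggregation, sketched with plain functions: each
    layer has several trained instances, and the prediction averages the
    outputs of every combination of instances."""
    combos = list(product(*layer_instances))
    total = 0.0
    for combo in combos:
        h = x
        for layer in combo:  # run the network built from this combination
            h = layer(h)
        total += h
    return total / len(combos)
```

With K instances per layer and L layers this enumerates K^L model proposals, which is why the paper distinguishes coarse-grain from fine-grain component definitions.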
|
2405.03228
|
Jinying Xiao
|
Jinying Xiao, Ping Li, Jie Nie
|
TED: Accelerate Model Training by Internal Generalization
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have demonstrated strong performance in recent years,
but the high cost of training drives the need for efficient methods to compress
dataset sizes. We propose TED pruning, a method that addresses the challenge of
overfitting under high pruning ratios by quantifying the model's ability to
improve performance on pruned data while fitting retained data, known as
Internal Generalization (IG). TED uses an optimization objective based on
Internal Generalization Distance (IGD), measuring changes in IG before and
after pruning to align with true generalization performance and achieve
implicit regularization. The IGD optimization objective was verified to allow
the model to achieve the smallest upper bound on generalization error. The
impact of small mask fluctuations on IG is studied through masks and Taylor
approximation, and fast estimation of IGD is enabled. In analyzing continuous
training dynamics, the prior effect of IGD is validated, and a progressive
pruning strategy is proposed. Experiments on image classification, natural
language understanding, and large language model fine-tuning show TED achieves
lossless performance with 60-70\% of the data. Upon acceptance, our code will
be made publicly available.
|
[
{
"created": "Mon, 6 May 2024 07:40:13 GMT",
"version": "v1"
}
] |
2024-05-07
|
[
[
"Xiao",
"Jinying",
""
],
[
"Li",
"Ping",
""
],
[
"Nie",
"Jie",
""
]
] |
Large language models have demonstrated strong performance in recent years, but the high cost of training drives the need for efficient methods to compress dataset sizes. We propose TED pruning, a method that addresses the challenge of overfitting under high pruning ratios by quantifying the model's ability to improve performance on pruned data while fitting retained data, known as Internal Generalization (IG). TED uses an optimization objective based on Internal Generalization Distance (IGD), measuring changes in IG before and after pruning to align with true generalization performance and achieve implicit regularization. The IGD optimization objective was verified to allow the model to achieve the smallest upper bound on generalization error. The impact of small mask fluctuations on IG is studied through masks and Taylor approximation, and fast estimation of IGD is enabled. In analyzing continuous training dynamics, the prior effect of IGD is validated, and a progressive pruning strategy is proposed. Experiments on image classification, natural language understanding, and large language model fine-tuning show TED achieves lossless performance with 60-70\% of the data. Upon acceptance, our code will be made publicly available.
|
2111.03322
|
Mouhammad Sakr
|
Swen Jacobs (1), Mouhammad Sakr (2), Marcus V\"olp (2) ((1) CISPA
Helmholtz Center for Information Security, Saarbr\"ucken, Germany, (2) SnT,
University of Luxembourg)
|
Automatic Repair and Deadlock Detection for Parameterized Systems
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present an algorithm for the repair of parameterized systems. The repair
problem is, for a given process implementation, to find a refinement such that
a given safety property is satisfied by the resulting parameterized system, and
deadlocks are avoided. Our algorithm uses a parameterized model checker to
determine the correctness of candidate solutions and employs a constraint
system to rule out candidates. We apply this algorithm on systems that can be
represented as well-structured transition systems (WSTS), including disjunctive
systems, pairwise rendezvous systems, and broadcast protocols. Moreover, we
show that parameterized deadlock detection can be decided in EXPTIME for
disjunctive systems, and that deadlock detection is in general undecidable for
broadcast protocols.
|
[
{
"created": "Fri, 5 Nov 2021 08:46:22 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jul 2022 10:46:08 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Jul 2022 13:49:29 GMT",
"version": "v3"
}
] |
2022-07-29
|
[
[
"Jacobs",
"Swen",
""
],
[
"Sakr",
"Mouhammad",
""
],
[
"Völp",
"Marcus",
""
]
] |
We present an algorithm for the repair of parameterized systems. The repair problem is, for a given process implementation, to find a refinement such that a given safety property is satisfied by the resulting parameterized system, and deadlocks are avoided. Our algorithm uses a parameterized model checker to determine the correctness of candidate solutions and employs a constraint system to rule out candidates. We apply this algorithm on systems that can be represented as well-structured transition systems (WSTS), including disjunctive systems, pairwise rendezvous systems, and broadcast protocols. Moreover, we show that parameterized deadlock detection can be decided in EXPTIME for disjunctive systems, and that deadlock detection is in general undecidable for broadcast protocols.
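The interplay of candidate solutions, a model checker, and a constraint system that rules out candidates can be sketched as a counterexample-guided loop. Here `model_check` is an abstract stand-in for the parameterized model checker, and returning a predicate as the "constraint" is a simplification of the paper's constraint system.

```python
def repair(candidates, model_check):
    """Candidate-pruning repair loop: try refinements one by one; the
    checker either accepts a candidate or returns a constraint (here a
    predicate) that rules out similar candidates later."""
    constraints = []
    for cand in candidates:
        if any(c(cand) for c in constraints):
            continue  # already excluded by an earlier counterexample
        ok, constraint = model_check(cand)
        if ok:
            return cand
        constraints.append(constraint)
    return None
```

Only the control flow is captured; the actual algorithm checks safety and deadlock-freedom of the induced parameterized system at each step.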
|
2406.10730
|
Pedro Hack
|
Pedro Hack
|
Order-theoretic models for decision-making: Learning, optimization,
complexity and computation
|
PhD thesis
| null |
10.18725/OPARU-52612
| null |
cs.IT cs.AI cs.LO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The study of intelligent systems explains behaviour in terms of economic
rationality. This results in an optimization principle involving a function or
utility, which states that the system will evolve until the configuration of
maximum utility is achieved. Recently, this theory has incorporated
constraints, i.e., the optimum is achieved when the utility is maximized while
respecting some information-processing constraints. This is reminiscent of
thermodynamic systems. As such, the study of intelligent systems has benefited
from the tools of thermodynamics. The first aim of this thesis is to clarify
the applicability of these results in the study of intelligent systems.
We can think of the local transition steps in thermodynamic or intelligent
systems as being driven by uncertainty. In fact, the transitions in both
systems can be described in terms of majorization. Hence, real-valued
uncertainty measures like Shannon entropy are simply a proxy for their more
involved behaviour. More generally, real-valued functions are fundamental to
study optimization and complexity in the order-theoretic approach to several
topics, including economics, thermodynamics, and quantum mechanics. The second
aim of this thesis is to improve on this classification.
The basic similarity between thermodynamic and intelligent systems is based
on an uncertainty notion expressed by a preorder. We can also think of the
transitions in the steps of a computational process as a decision-making
procedure. In fact, by adding some requirements on the considered order
structures, we can build an abstract model of uncertainty reduction that allows us
to incorporate computability, that is, to distinguish the objects that can be
constructed by following a finite set of instructions from those that cannot.
The third aim of this thesis is to clarify the requirements on the order
structure that allow such a framework.
|
[
{
"created": "Sat, 15 Jun 2024 20:20:43 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Hack",
"Pedro",
""
]
] |
The study of intelligent systems explains behaviour in terms of economic rationality. This results in an optimization principle involving a function or utility, which states that the system will evolve until the configuration of maximum utility is achieved. Recently, this theory has incorporated constraints, i.e., the optimum is achieved when the utility is maximized while respecting some information-processing constraints. This is reminiscent of thermodynamic systems. As such, the study of intelligent systems has benefited from the tools of thermodynamics. The first aim of this thesis is to clarify the applicability of these results in the study of intelligent systems. We can think of the local transition steps in thermodynamic or intelligent systems as being driven by uncertainty. In fact, the transitions in both systems can be described in terms of majorization. Hence, real-valued uncertainty measures like Shannon entropy are simply a proxy for their more involved behaviour. More generally, real-valued functions are fundamental to study optimization and complexity in the order-theoretic approach to several topics, including economics, thermodynamics, and quantum mechanics. The second aim of this thesis is to improve on this classification. The basic similarity between thermodynamic and intelligent systems is based on an uncertainty notion expressed by a preorder. We can also think of the transitions in the steps of a computational process as a decision-making procedure. In fact, by adding some requirements on the considered order structures, we can build an abstract model of uncertainty reduction that allows us to incorporate computability, that is, to distinguish the objects that can be constructed by following a finite set of instructions from those that cannot. The third aim of this thesis is to clarify the requirements on the order structure that allow such a framework.
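The majorization preorder central to this abstract has a direct computational check: x majorizes y when the two vectors have equal totals and every prefix sum of x sorted in decreasing order dominates the matching prefix of y. A small sketch of that definition:

```python
def majorizes(x, y, tol=1e-9):
    """Check whether x majorizes y: equal totals, and each prefix sum of
    the decreasing rearrangement of x dominates that of y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if len(xs) != len(ys) or abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for a, b in zip(xs, ys):
        px, py = px + a, py + b
        if px < py - tol:
            return False
    return True
```

On probability vectors this orders distributions by uncertainty, which is why Shannon entropy, being monotone with respect to majorization, acts as the real-valued proxy the thesis discusses.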
|
1908.06062
|
Daniel Liu
|
Daniel Liu, Ronald Yu, Hao Su
|
Adversarial shape perturbations on 3D point clouds
|
18 pages, accepted to the 2020 ECCV workshop on Adversarial
Robustness in the Real World, source code available at
https://github.com/Daniel-Liu-c0deb0t/Adversarial-point-perturbations-on-3D-objects
| null | null | null |
cs.CV cs.CR cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The importance of training robust neural networks grows as 3D data is
increasingly utilized in deep learning for vision tasks in robotics, drone
control, and autonomous driving. One commonly used 3D data type is 3D point
clouds, which describe shape information. We examine the problem of creating
robust models from the perspective of the attacker, which is necessary in
understanding how 3D neural networks can be exploited. We explore two
categories of attacks: distributional attacks that involve imperceptible
perturbations to the distribution of points, and shape attacks that involve
deforming the shape represented by a point cloud. We explore three possible
shape attacks for attacking 3D point cloud classification and show that some of
them are able to be effective even against preprocessing steps, like the
previously proposed point-removal defenses.
|
[
{
"created": "Fri, 16 Aug 2019 17:19:34 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Sep 2020 00:04:59 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Oct 2020 04:55:16 GMT",
"version": "v3"
}
] |
2020-10-26
|
[
[
"Liu",
"Daniel",
""
],
[
"Yu",
"Ronald",
""
],
[
"Su",
"Hao",
""
]
] |
The importance of training robust neural networks grows as 3D data is increasingly utilized in deep learning for vision tasks in robotics, drone control, and autonomous driving. One commonly used 3D data type is 3D point clouds, which describe shape information. We examine the problem of creating robust models from the perspective of the attacker, which is necessary in understanding how 3D neural networks can be exploited. We explore two categories of attacks: distributional attacks that involve imperceptible perturbations to the distribution of points, and shape attacks that involve deforming the shape represented by a point cloud. We explore three possible shape attacks for attacking 3D point cloud classification and show that some of them are able to be effective even against preprocessing steps, like the previously proposed point-removal defenses.
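A common ingredient of the imperceptible distributional attacks mentioned here is projecting the attacked cloud back into a small ball around each original point. The sketch below shows only that projection step, with the per-point epsilon bound as an assumed constraint; the attack direction itself would come from model gradients, which are omitted.

```python
import math

def clip_perturbation(original, perturbed, eps):
    """Project an attacked point cloud back into an eps-ball around each
    original point, keeping the perturbation imperceptible."""
    out = []
    for p, q in zip(original, perturbed):
        d = [b - a for a, b in zip(p, q)]
        norm = math.sqrt(sum(v * v for v in d))
        # scale the displacement down if it exceeds the eps budget
        s = min(1.0, eps / norm) if norm > 0 else 0.0
        out.append(tuple(a + v * s for a, v in zip(p, d)))
    return out
```

Shape attacks, by contrast, deliberately exceed such per-point bounds and instead deform the surface the points represent.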
|
2402.19421
|
Xingchen Xu
|
Lijia Ma, Xingchen Xu, Yong Tan
|
Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based
Search Engines
|
38 pages, 2 figures, 7 tables
| null | null | null |
cs.IR cs.AI econ.GN q-fin.EC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the domain of digital information dissemination, search engines act as
pivotal conduits linking information seekers with providers. The advent of
chat-based search engines utilizing Large Language Models (LLMs) and Retrieval
Augmented Generation (RAG), exemplified by Bing Chat, marks an evolutionary
leap in the search ecosystem. They demonstrate metacognitive abilities in
interpreting web information and crafting responses with human-like
understanding and creativity. Nonetheless, the intricate nature of LLMs renders
their "cognitive" processes opaque, challenging even their designers'
understanding. This research aims to dissect the mechanisms through which an
LLM-powered chat-based search engine, specifically Bing Chat, selects
information sources for its responses. To this end, an extensive dataset has
been compiled through engagements with New Bing, documenting the websites it
cites alongside those listed by the conventional search engine. Employing
natural language processing (NLP) techniques, the research reveals that Bing
Chat exhibits a preference for content that is not only readable and formally
structured, but also demonstrates lower perplexity levels, indicating a unique
inclination towards text that is predictable by the underlying LLM. Further
enriching our analysis, we procure an additional dataset through interactions
with the GPT-4 based knowledge retrieval API, unveiling a congruent text
preference between the RAG API and Bing Chat. This consensus suggests that
these text preferences intrinsically emerge from the underlying language
models, rather than being explicitly crafted by Bing Chat's developers.
Moreover, our investigation documents a greater similarity among websites cited
by RAG technologies compared to those ranked highest by conventional search
engines.
|
[
{
"created": "Thu, 29 Feb 2024 18:20:37 GMT",
"version": "v1"
}
] |
2024-03-01
|
[
[
"Ma",
"Lijia",
""
],
[
"Xu",
"Xingchen",
""
],
[
"Tan",
"Yong",
""
]
] |
In the domain of digital information dissemination, search engines act as pivotal conduits linking information seekers with providers. The advent of chat-based search engines utilizing Large Language Models (LLMs) and Retrieval Augmented Generation (RAG), exemplified by Bing Chat, marks an evolutionary leap in the search ecosystem. They demonstrate metacognitive abilities in interpreting web information and crafting responses with human-like understanding and creativity. Nonetheless, the intricate nature of LLMs renders their "cognitive" processes opaque, challenging even their designers' understanding. This research aims to dissect the mechanisms through which an LLM-powered chat-based search engine, specifically Bing Chat, selects information sources for its responses. To this end, an extensive dataset has been compiled through engagements with New Bing, documenting the websites it cites alongside those listed by the conventional search engine. Employing natural language processing (NLP) techniques, the research reveals that Bing Chat exhibits a preference for content that is not only readable and formally structured, but also demonstrates lower perplexity levels, indicating a unique inclination towards text that is predictable by the underlying LLM. Further enriching our analysis, we procure an additional dataset through interactions with the GPT-4 based knowledge retrieval API, unveiling a congruent text preference between the RAG API and Bing Chat. This consensus suggests that these text preferences intrinsically emerge from the underlying language models, rather than being explicitly crafted by Bing Chat's developers. Moreover, our investigation documents a greater similarity among websites cited by RAG technologies compared to those ranked highest by conventional search engines.
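The perplexity measure driving this study's main finding has a standard definition: the exponential of the negative mean per-token log-likelihood under the language model. A minimal sketch, taking the per-token log-probabilities as given:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities: exp of the
    negative mean log-likelihood. Lower values mean the text is more
    predictable to the scoring language model."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

Under this measure, the paper's finding is that Bing Chat tends to cite pages whose text scores a lower perplexity under the underlying LLM.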
|
2310.12362
|
Ruisi Zhang
|
Ruisi Zhang, Shehzeen Samarah Hussain, Paarth Neekhara, Farinaz
Koushanfar
|
REMARK-LLM: A Robust and Efficient Watermarking Framework for Generative
Large Language Models
|
accept to usenix security 2024
| null | null | null |
cs.CR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present REMARK-LLM, a novel, efficient, and robust watermarking framework
designed for texts generated by large language models (LLMs). Synthesizing
human-like content using LLMs necessitates vast computational resources and
extensive datasets, encapsulating critical intellectual property (IP). However,
the generated content is prone to malicious exploitation, including spamming
and plagiarism. To address the challenges, REMARK-LLM proposes three new
components: (i) a learning-based message encoding module to infuse binary
signatures into LLM-generated texts; (ii) a reparameterization module to
transform the dense distributions from the message encoding to the sparse
distribution of the watermarked textual tokens; (iii) a decoding module
dedicated to signature extraction. Furthermore, we introduce an optimized beam
search algorithm to guarantee the coherence and consistency of the generated
content. REMARK-LLM is rigorously trained to encourage the preservation of
semantic integrity in watermarked content, while ensuring effective watermark
retrieval. Extensive evaluations on multiple unseen datasets highlight
REMARK-LLM's proficiency and transferability in inserting 2 times more signature
bits into the same texts when compared to prior art, all while maintaining
semantic integrity. Furthermore, REMARK-LLM exhibits better resilience against
a spectrum of watermark detection and removal attacks.
|
[
{
"created": "Wed, 18 Oct 2023 22:14:37 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Apr 2024 00:16:46 GMT",
"version": "v2"
}
] |
2024-04-09
|
[
[
"Zhang",
"Ruisi",
""
],
[
"Hussain",
"Shehzeen Samarah",
""
],
[
"Neekhara",
"Paarth",
""
],
[
"Koushanfar",
"Farinaz",
""
]
] |
We present REMARK-LLM, a novel, efficient, and robust watermarking framework designed for texts generated by large language models (LLMs). Synthesizing human-like content using LLMs necessitates vast computational resources and extensive datasets, encapsulating critical intellectual property (IP). However, the generated content is prone to malicious exploitation, including spamming and plagiarism. To address the challenges, REMARK-LLM proposes three new components: (i) a learning-based message encoding module to infuse binary signatures into LLM-generated texts; (ii) a reparameterization module to transform the dense distributions from the message encoding to the sparse distribution of the watermarked textual tokens; (iii) a decoding module dedicated to signature extraction. Furthermore, we introduce an optimized beam search algorithm to guarantee the coherence and consistency of the generated content. REMARK-LLM is rigorously trained to encourage the preservation of semantic integrity in watermarked content, while ensuring effective watermark retrieval. Extensive evaluations on multiple unseen datasets highlight REMARK-LLM's proficiency and transferability in inserting 2 times more signature bits into the same texts when compared to prior art, all while maintaining semantic integrity. Furthermore, REMARK-LLM exhibits better resilience against a spectrum of watermark detection and removal attacks.
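The encode/decode pairing at the heart of such a framework can be illustrated with a hand-coded toy: at designated positions, pick one of two interchangeable token variants according to the signature bit, then invert the mapping to extract. REMARK-LLM learns this mapping end to end with neural modules; everything below (the synonym table, positions, and function names) is an illustrative assumption.

```python
def embed_signature(tokens, positions, synonyms, bits):
    """Toy message encoding: at each chosen position, replace the token
    with one of two interchangeable variants, selected by the bit."""
    out = list(tokens)
    for pos, bit in zip(positions, bits):
        out[pos] = synonyms[out[pos]][bit]
    return out

def extract_signature(tokens, positions, synonyms):
    """Recover the signature by looking up which variant appears."""
    inverse = {v: b for pair in synonyms.values() for b, v in enumerate(pair)}
    return [inverse[tokens[p]] for p in positions]
```

A learned system replaces the fixed synonym table with dense message encodings and a trained decoder, which is what makes the watermark robust to detection and removal attacks.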
|
2301.02423
|
Shuai Liu
|
Shuai Liu, Xiao Guo, Shun Qi, Huaning Wang and Xiangyu Chang
|
Learning Personalized Brain Functional Connectivity of MDD Patients from
Multiple Sites via Federated Bayesian Networks
| null | null | null | null |
cs.LG q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying functional connectivity biomarkers of major depressive disorder
(MDD) patients is essential to advance understanding of the disorder mechanisms
and early intervention. However, due to the small sample size and the high
dimension of available neuroimaging data, the performance of existing methods
is often limited. Multi-site data could enhance the statistical power and
sample size, while they are often subject to inter-site heterogeneity and
data-sharing policies. In this paper, we propose a federated joint estimator,
NOTEARS-PFL, for simultaneous learning of multiple Bayesian networks (BNs) with
continuous optimization, to identify disease-induced alterations in MDD
patients. We incorporate information shared between sites and site-specific
information into the proposed federated learning framework to learn
personalized BN structures by introducing the group fused lasso penalty. We
develop the alternating direction method of multipliers, where in the local
update step, the neuroimaging data is processed at each local site. Then the
learned network structures are transmitted to the center for the global update.
In particular, we derive a closed-form expression for the local update step and
use the iterative proximal projection method to deal with the group fused lasso
penalty in the global update step. We evaluate the performance of the proposed
method on both synthetic and real-world multi-site rs-fMRI datasets. The
results suggest that the proposed NOTEARS-PFL yields superior effectiveness and
accuracy compared to the comparable methods.
|
[
{
"created": "Fri, 6 Jan 2023 08:58:06 GMT",
"version": "v1"
}
] |
2023-01-09
|
[
[
"Liu",
"Shuai",
""
],
[
"Guo",
"Xiao",
""
],
[
"Qi",
"Shun",
""
],
[
"Wang",
"Huaning",
""
],
[
"Chang",
"Xiangyu",
""
]
] |
Identifying functional connectivity biomarkers of major depressive disorder (MDD) patients is essential to advance understanding of the disorder mechanisms and early intervention. However, due to the small sample size and the high dimension of available neuroimaging data, the performance of existing methods is often limited. Multi-site data could enhance the statistical power and sample size, while they are often subject to inter-site heterogeneity and data-sharing policies. In this paper, we propose a federated joint estimator, NOTEARS-PFL, for simultaneous learning of multiple Bayesian networks (BNs) with continuous optimization, to identify disease-induced alterations in MDD patients. We incorporate information shared between sites and site-specific information into the proposed federated learning framework to learn personalized BN structures by introducing the group fused lasso penalty. We develop the alternating direction method of multipliers, where in the local update step, the neuroimaging data is processed at each local site. Then the learned network structures are transmitted to the center for the global update. In particular, we derive a closed-form expression for the local update step and use the iterative proximal projection method to deal with the group fused lasso penalty in the global update step. We evaluate the performance of the proposed method on both synthetic and real-world multi-site rs-fMRI datasets. The results suggest that the proposed NOTEARS-PFL yields superior effectiveness and accuracy compared to the comparable methods.
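The proximal machinery behind the group fused lasso update in the global ADMM step builds on block soft-thresholding, the proximal operator of the group (L2-norm) penalty. A minimal sketch of that single building block, not the paper's full iterative proximal projection:

```python
import math

def block_soft_threshold(v, lam):
    """Proximal operator of the group (L2-norm) penalty: shrink the whole
    block toward zero; blocks with norm below lam vanish entirely."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    s = 1.0 - lam / norm
    return [x * s for x in v]
```

Applied to differences between sites' edge weights, this shrinkage is what pulls personalized network structures toward a shared backbone while zeroing out negligible inter-site differences.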
|
2202.10554
|
Tugdual Ceillier
|
Arthur Vilhelm, Matthieu Limbert, Cl\'ement Audebert, Tugdual Ceillier
|
Ensemble Learning techniques for object detection in high-resolution
satellite images
|
Conference on Artificial Intelligence for Defense, Nov 2019, Rennes,
France
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ensembling is a method that aims to maximize the detection performance by
fusing individual detectors. While rarely mentioned in deep-learning articles
applied to remote sensing, ensembling methods have been widely used to achieve
high scores in recent data science competitions, such as Kaggle. The few
remote sensing articles mentioning ensembling mainly focus on mid-resolution
images and earth observation applications such as land use classification, but
never on Very High Resolution (VHR) images for defense-related applications or
object detection. This study aims at reviewing the most relevant ensembling
techniques to be used for object detection on very high resolution imagery and
shows an example of the value of such techniques on a relevant operational
use-case (vehicle detection in desert areas).
|
[
{
"created": "Wed, 16 Feb 2022 10:19:21 GMT",
"version": "v1"
}
] |
2022-02-23
|
[
[
"Vilhelm",
"Arthur",
""
],
[
"Limbert",
"Matthieu",
""
],
[
"Audebert",
"Clément",
""
],
[
"Ceillier",
"Tugdual",
""
]
] |
Ensembling is a method that aims to maximize the detection performance by fusing individual detectors. While rarely mentioned in deep-learning articles applied to remote sensing, ensembling methods have been widely used to achieve high scores in recent data science competitions, such as Kaggle. The few remote sensing articles mentioning ensembling mainly focus on mid-resolution images and earth observation applications such as land use classification, but never on Very High Resolution (VHR) images for defense-related applications or object detection. This study aims at reviewing the most relevant ensembling techniques to be used for object detection on very high resolution imagery and shows an example of the value of such techniques on a relevant operational use-case (vehicle detection in desert areas).
|
2203.06053
|
Saeed Mohammadi
|
Saeed Mohammadi and Mohammad Reza Hesamzadeh
|
A Machine Learning Approach for Prosumer Management in Intraday
Electricity Markets
|
5 pages, 6 figures
| null | null | null |
cs.LG cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prosumer operators face extensive challenges in participating in short-term
electricity markets while taking uncertainties into account, such as variation
in demand, solar energy, wind power, and electricity prices, as well as the
faster response time required in intraday electricity markets. Machine learning
approaches can address these challenges thanks to their ability to continuously
learn complex relations and provide real-time responses. Such approaches are
applicable in the presence of high-performance computing and big data. To
tackle these challenges, a Markov decision process is proposed and solved with
a reinforcement learning algorithm with proper observations and actions
employing tabular Q-learning. The trained agent converges to a policy that is
similar to the global optimal solution. It increases the prosumer's profit by
13.39% compared to the well-known stochastic optimization approach.
|
[
{
"created": "Fri, 11 Mar 2022 16:29:02 GMT",
"version": "v1"
}
] |
2022-03-14
|
[
[
"Mohammadi",
"Saeed",
""
],
[
"Hesamzadeh",
"Mohammad Reza",
""
]
] |
Prosumer operators face extensive challenges in participating in short-term electricity markets while taking uncertainties into account, such as variation in demand, solar energy, wind power, and electricity prices, as well as the faster response time required in intraday electricity markets. Machine learning approaches can address these challenges thanks to their ability to continuously learn complex relations and provide real-time responses. Such approaches are applicable in the presence of high-performance computing and big data. To tackle these challenges, a Markov decision process is proposed and solved with a reinforcement learning algorithm with proper observations and actions employing tabular Q-learning. The trained agent converges to a policy that is similar to the global optimal solution. It increases the prosumer's profit by 13.39% compared to the well-known stochastic optimization approach.
|
2301.11683
|
Alec Edwards
|
Alessandro Abate, Alec Edwards, Mirco Giacobbe
|
Neural Abstractions
|
NeurIPS 2022
| null | null | null |
cs.LO cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel method for the safety verification of nonlinear dynamical
models that uses neural networks to represent abstractions of their dynamics.
Neural networks have extensively been used before as approximators; in this
work, we make a step further and use them for the first time as abstractions.
For a given dynamical model, our method synthesises a neural network that
overapproximates its dynamics by ensuring an arbitrarily tight, formally
certified bound on the approximation error. For this purpose, we employ a
counterexample-guided inductive synthesis procedure. We show that this produces
a neural ODE with non-deterministic disturbances that constitutes a formal
abstraction of the concrete model under analysis. This guarantees a fundamental
property: if the abstract model is safe, i.e., free from any initialised
trajectory that reaches an undesirable state, then the concrete model is also
safe. By using neural ODEs with ReLU activation functions as abstractions, we
cast the safety verification problem for nonlinear dynamical models into that
of hybrid automata with affine dynamics, which we verify using SpaceEx. We
demonstrate that our approach performs comparably to the mature tool Flow* on
existing benchmark nonlinear models. We additionally demonstrate that it is
effective on models that do not exhibit local Lipschitz continuity, which are
out of reach of the existing technologies.
|
[
{
"created": "Fri, 27 Jan 2023 12:38:09 GMT",
"version": "v1"
}
] |
2023-01-30
|
[
[
"Abate",
"Alessandro",
""
],
[
"Edwards",
"Alec",
""
],
[
"Giacobbe",
"Mirco",
""
]
] |
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics. Neural networks have extensively been used before as approximators; in this work, we make a step further and use them for the first time as abstractions. For a given dynamical model, our method synthesises a neural network that overapproximates its dynamics by ensuring an arbitrarily tight, formally certified bound on the approximation error. For this purpose, we employ a counterexample-guided inductive synthesis procedure. We show that this produces a neural ODE with non-deterministic disturbances that constitutes a formal abstraction of the concrete model under analysis. This guarantees a fundamental property: if the abstract model is safe, i.e., free from any initialised trajectory that reaches an undesirable state, then the concrete model is also safe. By using neural ODEs with ReLU activation functions as abstractions, we cast the safety verification problem for nonlinear dynamical models into that of hybrid automata with affine dynamics, which we verify using SpaceEx. We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models. We additionally demonstrate that it is effective on models that do not exhibit local Lipschitz continuity, which are out of reach of the existing technologies.
|
0801.4714
|
Miroslava Sotakova
|
Miroslava Sotakova
|
Breaking One-Round Key-Agreement Protocols in the Random Oracle Model
|
6 pages
| null | null | null |
cs.CC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study one-round key-agreement protocols analogous to
Merkle's puzzles in the random oracle model. The players Alice and Bob are
allowed to query a random permutation oracle $n$ times and upon their queries
and communication, they both output the same key with high probability. We
prove that Eve can always break such a protocol by querying the oracle $O(n^2)$
times. The long-time unproven optimality of the quadratic bound in the fully
general, multi-round scenario has been shown recently by Barak and
Mahmoody-Ghidary. The results in this paper have been found independently of
their work.
|
[
{
"created": "Wed, 30 Jan 2008 19:34:34 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Mar 2008 21:02:49 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Mar 2009 12:17:31 GMT",
"version": "v3"
}
] |
2009-03-24
|
[
[
"Sotakova",
"Miroslava",
""
]
] |
In this paper we study one-round key-agreement protocols analogous to Merkle's puzzles in the random oracle model. The players Alice and Bob are allowed to query a random permutation oracle $n$ times and upon their queries and communication, they both output the same key with high probability. We prove that Eve can always break such a protocol by querying the oracle $O(n^2)$ times. The long-time unproven optimality of the quadratic bound in the fully general, multi-round scenario has been shown recently by Barak and Mahmoody-Ghidary. The results in this paper have been found independently of their work.
|
0809.5096
|
Hong Ju Park
|
Hong Ju Park and Ender Ayanoglu
|
Diversity Analysis of Bit-Interleaved Coded Multiple Beamforming
|
The maximum achievable diversity order from given convolutional code
with any interleaver is shown by using the Singleton bound
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, diversity analysis of bit-interleaved coded multiple
beamforming (BICMB) is extended to the case of general spatial interleavers,
removing a condition on their previously known design criteria and quantifying
the resulting diversity order. The diversity order is determined by a parameter
Qmax which is inherited from the convolutional code and the spatial
de-multiplexer used in BICMB. We introduce a method to find this parameter by
employing a transfer function approach as in finding the weight spectrum of a
convolutional code. By using this method, several Qmax values are shown and
verified to be identical with the results from a computer search. The diversity
analysis and the method to find the parameter are supported by simulation
results. By using the Singleton bound, we also show that Qmax is lower bounded
by the product of the number of streams and the code rate of an encoder. The
design rule of the spatial de-multiplexer for a given convolutional code is
proposed to meet the condition on the maximum achievable diversity order.
|
[
{
"created": "Tue, 30 Sep 2008 00:14:10 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Oct 2008 18:00:01 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Feb 2009 01:49:02 GMT",
"version": "v3"
}
] |
2009-09-29
|
[
[
"Park",
"Hong Ju",
""
],
[
"Ayanoglu",
"Ender",
""
]
] |
In this paper, diversity analysis of bit-interleaved coded multiple beamforming (BICMB) is extended to the case of general spatial interleavers, removing a condition on their previously known design criteria and quantifying the resulting diversity order. The diversity order is determined by a parameter Qmax which is inherited from the convolutional code and the spatial de-multiplexer used in BICMB. We introduce a method to find this parameter by employing a transfer function approach as in finding the weight spectrum of a convolutional code. By using this method, several Qmax values are shown and verified to be identical with the results from a computer search. The diversity analysis and the method to find the parameter are supported by simulation results. By using the Singleton bound, we also show that Qmax is lower bounded by the product of the number of streams and the code rate of an encoder. The design rule of the spatial de-multiplexer for a given convolutional code is proposed to meet the condition on the maximum achievable diversity order.
|
1906.02292
|
Konstantinos Slavakis
|
Cong Ye, Konstantinos Slavakis, Pratik V. Patil, Sarah F. Muldoon,
John Medaglia
|
Brain-Network Clustering via Kernel-ARMA Modeling and the Grassmannian
| null | null | null | null |
cs.LG eess.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in neuroscience and in the technology of functional magnetic
resonance imaging (fMRI) and electro-encephalography (EEG) have propelled a
growing interest in brain-network clustering via time-series analysis.
Notwithstanding, most of the brain-network clustering methods revolve around
state clustering and/or node clustering (a.k.a. community detection or topology
inference) within states. This work answers first the need of capturing
non-linear nodal dependencies by bringing forth a novel feature-extraction
mechanism via kernel autoregressive-moving-average modeling. The extracted
features are mapped to the Grassmann manifold (Grassmannian), which consists of
all linear subspaces of a fixed rank. By virtue of the Riemannian geometry of
the Grassmannian, a unifying clustering framework is offered to tackle all
possible clustering problems in a network: Cluster multiple states, detect
communities within states, and even identify/track subnetwork state sequences.
The effectiveness of the proposed approach is underlined by extensive numerical
tests on synthetic and real fMRI/EEG data which demonstrate that the advocated
learning method compares favorably versus several state-of-the-art clustering
schemes.
|
[
{
"created": "Wed, 5 Jun 2019 20:19:05 GMT",
"version": "v1"
}
] |
2019-06-07
|
[
[
"Ye",
"Cong",
""
],
[
"Slavakis",
"Konstantinos",
""
],
[
"Patil",
"Pratik V.",
""
],
[
"Muldoon",
"Sarah F.",
""
],
[
"Medaglia",
"John",
""
]
] |
Recent advances in neuroscience and in the technology of functional magnetic resonance imaging (fMRI) and electro-encephalography (EEG) have propelled a growing interest in brain-network clustering via time-series analysis. Notwithstanding, most of the brain-network clustering methods revolve around state clustering and/or node clustering (a.k.a. community detection or topology inference) within states. This work answers first the need of capturing non-linear nodal dependencies by bringing forth a novel feature-extraction mechanism via kernel autoregressive-moving-average modeling. The extracted features are mapped to the Grassmann manifold (Grassmannian), which consists of all linear subspaces of a fixed rank. By virtue of the Riemannian geometry of the Grassmannian, a unifying clustering framework is offered to tackle all possible clustering problems in a network: Cluster multiple states, detect communities within states, and even identify/track subnetwork state sequences. The effectiveness of the proposed approach is underlined by extensive numerical tests on synthetic and real fMRI/EEG data which demonstrate that the advocated learning method compares favorably versus several state-of-the-art clustering schemes.
|
2407.05000
|
Shaowen Wang
|
Shaowen Wang, Linxi Yu, Jian Li
|
LoRA-GA: Low-Rank Adaptation with Gradient Approximation
| null | null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Fine-tuning large-scale pretrained models is prohibitively expensive in terms
of computational and memory costs. LoRA, as one of the most popular
Parameter-Efficient Fine-Tuning (PEFT) methods, offers a cost-effective
alternative by fine-tuning an auxiliary low-rank model that has significantly
fewer parameters. Although LoRA reduces the computational and memory
requirements significantly at each iteration, extensive empirical evidence
indicates that it converges at a considerably slower rate compared to full
fine-tuning, ultimately leading to increased overall compute and often worse
test performance. In our paper, we perform an in-depth investigation of the
initialization method of LoRA and show that careful initialization (without any
change of the architecture and the training algorithm) can significantly
enhance both efficiency and performance. In particular, we introduce a novel
initialization method, LoRA-GA (Low Rank Adaptation with Gradient
Approximation), which aligns the gradients of low-rank matrix product with
those of full fine-tuning at the first step. Our extensive experiments
demonstrate that LoRA-GA achieves a convergence rate comparable to that of full
fine-tuning (hence being significantly faster than vanilla LoRA as well as
various recent improvements) while simultaneously attaining comparable or even
better performance. For example, on the subset of the GLUE dataset with
T5-Base, LoRA-GA outperforms LoRA by 5.69% on average. On larger models such as
Llama 2-7B, LoRA-GA shows performance improvements of 0.34, 11.52%, and 5.05%
on MT-bench, GSM8K, and Human-eval, respectively. Additionally, we observe up
to 2-4 times convergence speed improvement compared to vanilla LoRA, validating
its effectiveness in accelerating convergence and enhancing model performance.
Code is available at https://github.com/Outsider565/LoRA-GA.
|
[
{
"created": "Sat, 6 Jul 2024 08:37:21 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2024 07:32:23 GMT",
"version": "v2"
}
] |
2024-07-17
|
[
[
"Wang",
"Shaowen",
""
],
[
"Yu",
"Linxi",
""
],
[
"Li",
"Jian",
""
]
] |
Fine-tuning large-scale pretrained models is prohibitively expensive in terms of computational and memory costs. LoRA, as one of the most popular Parameter-Efficient Fine-Tuning (PEFT) methods, offers a cost-effective alternative by fine-tuning an auxiliary low-rank model that has significantly fewer parameters. Although LoRA reduces the computational and memory requirements significantly at each iteration, extensive empirical evidence indicates that it converges at a considerably slower rate compared to full fine-tuning, ultimately leading to increased overall compute and often worse test performance. In our paper, we perform an in-depth investigation of the initialization method of LoRA and show that careful initialization (without any change of the architecture and the training algorithm) can significantly enhance both efficiency and performance. In particular, we introduce a novel initialization method, LoRA-GA (Low Rank Adaptation with Gradient Approximation), which aligns the gradients of low-rank matrix product with those of full fine-tuning at the first step. Our extensive experiments demonstrate that LoRA-GA achieves a convergence rate comparable to that of full fine-tuning (hence being significantly faster than vanilla LoRA as well as various recent improvements) while simultaneously attaining comparable or even better performance. For example, on the subset of the GLUE dataset with T5-Base, LoRA-GA outperforms LoRA by 5.69% on average. On larger models such as Llama 2-7B, LoRA-GA shows performance improvements of 0.34, 11.52%, and 5.05% on MT-bench, GSM8K, and Human-eval, respectively. Additionally, we observe up to 2-4 times convergence speed improvement compared to vanilla LoRA, validating its effectiveness in accelerating convergence and enhancing model performance. Code is available at https://github.com/Outsider565/LoRA-GA.
|
2107.12704
|
Staas De Jong
|
Staas de Jong
|
The cyclotactor: towards a tactile platform for musical interaction
|
Proceedings of the International Conference on New Interfaces for
Musical Expression, 2008
| null |
10.5281/zenodo.1179571
| null |
cs.HC cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper reports on work in progress on a finger-based tactile I/O device
for musical interaction. Central to the device is the ability to set up
cyclical relationships between tactile input and output. A direct practical
application of this to musical interaction is given, using the idea to
multiplex two degrees of freedom on a single tactile loop.
|
[
{
"created": "Tue, 27 Jul 2021 10:02:57 GMT",
"version": "v1"
}
] |
2021-07-28
|
[
[
"de Jong",
"Staas",
""
]
] |
This paper reports on work in progress on a finger-based tactile I/O device for musical interaction. Central to the device is the ability to set up cyclical relationships between tactile input and output. A direct practical application of this to musical interaction is given, using the idea to multiplex two degrees of freedom on a single tactile loop.
|
2305.18156
|
Mengsay Loem
|
Mengsay Loem, Masahiro Kaneko, Sho Takase, Naoaki Okazaki
|
Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A
Study on Performance and Controllability in Prompt-Based Methods
|
Accepted in BEA 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale pre-trained language models such as GPT-3 have shown remarkable
performance across various natural language processing tasks. However, applying
prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks
and their controllability remains underexplored. Controllability in GEC is
crucial for real-world applications, particularly in educational settings,
where the ability to tailor feedback according to learner levels and specific
error types can significantly enhance the learning process. This paper
investigates the performance and controllability of prompt-based methods with
GPT-3 for GEC tasks using zero-shot and few-shot settings. We explore the impact
of task instructions and examples on GPT-3's output, focusing on controlling
aspects such as minimal edits, fluency edits, and learner levels. Our findings
demonstrate that GPT-3 could effectively perform GEC tasks, outperforming
existing supervised and unsupervised approaches. We also showed that GPT-3
could achieve controllability when appropriate task instructions and examples
are given.
|
[
{
"created": "Mon, 29 May 2023 15:31:29 GMT",
"version": "v1"
}
] |
2023-05-30
|
[
[
"Loem",
"Mengsay",
""
],
[
"Kaneko",
"Masahiro",
""
],
[
"Takase",
"Sho",
""
],
[
"Okazaki",
"Naoaki",
""
]
] |
Large-scale pre-trained language models such as GPT-3 have shown remarkable performance across various natural language processing tasks. However, applying prompt-based methods with GPT-3 for Grammatical Error Correction (GEC) tasks and their controllability remains underexplored. Controllability in GEC is crucial for real-world applications, particularly in educational settings, where the ability to tailor feedback according to learner levels and specific error types can significantly enhance the learning process. This paper investigates the performance and controllability of prompt-based methods with GPT-3 for GEC tasks using zero-shot and few-shot settings. We explore the impact of task instructions and examples on GPT-3's output, focusing on controlling aspects such as minimal edits, fluency edits, and learner levels. Our findings demonstrate that GPT-3 could effectively perform GEC tasks, outperforming existing supervised and unsupervised approaches. We also showed that GPT-3 could achieve controllability when appropriate task instructions and examples are given.
|
2308.07760
|
Bowei He
|
Bowei He, Xu He, Renrui Zhang, Yingxue Zhang, Ruiming Tang, Chen Ma
|
Dynamic Embedding Size Search with Minimum Regret for Streaming
Recommender System
|
Accepted for publication on CIKM2023
| null |
10.1145/3583780.3615135
| null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the continuous increase of users and items, conventional recommender
systems trained on static datasets can hardly adapt to changing environments.
The high-throughput data requires the model to be updated in a timely manner
for capturing the user interest dynamics, which leads to the emergence of
streaming recommender systems. Due to the prevalence of deep learning-based
recommender systems, the embedding layer is widely adopted to represent the
characteristics of users, items, and other features in low-dimensional vectors.
However, it has been proved that setting an identical and static embedding size
is sub-optimal in terms of recommendation performance and memory cost,
especially for streaming recommendations. To tackle this problem, we first
rethink the streaming model update process and model the dynamic embedding size
search as a bandit problem. Then, we analyze and quantify the factors that
influence the optimal embedding sizes from the statistics perspective. Based on
this, we propose the \textbf{D}ynamic \textbf{E}mbedding \textbf{S}ize
\textbf{S}earch (\textbf{DESS}) method to minimize the embedding size selection
regret on both user and item sides in a non-stationary manner. Theoretically,
we obtain a sublinear regret upper bound superior to previous methods.
Empirical results across two recommendation tasks on four public datasets also
demonstrate that our approach can achieve better streaming recommendation
performance with lower memory cost and higher time efficiency.
|
[
{
"created": "Tue, 15 Aug 2023 13:27:18 GMT",
"version": "v1"
}
] |
2023-08-16
|
[
[
"He",
"Bowei",
""
],
[
"He",
"Xu",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Zhang",
"Yingxue",
""
],
[
"Tang",
"Ruiming",
""
],
[
"Ma",
"Chen",
""
]
] |
With the continuous increase of users and items, conventional recommender systems trained on static datasets can hardly adapt to changing environments. The high-throughput data requires the model to be updated in a timely manner for capturing the user interest dynamics, which leads to the emergence of streaming recommender systems. Due to the prevalence of deep learning-based recommender systems, the embedding layer is widely adopted to represent the characteristics of users, items, and other features in low-dimensional vectors. However, it has been proved that setting an identical and static embedding size is sub-optimal in terms of recommendation performance and memory cost, especially for streaming recommendations. To tackle this problem, we first rethink the streaming model update process and model the dynamic embedding size search as a bandit problem. Then, we analyze and quantify the factors that influence the optimal embedding sizes from the statistics perspective. Based on this, we propose the \textbf{D}ynamic \textbf{E}mbedding \textbf{S}ize \textbf{S}earch (\textbf{DESS}) method to minimize the embedding size selection regret on both user and item sides in a non-stationary manner. Theoretically, we obtain a sublinear regret upper bound superior to previous methods. Empirical results across two recommendation tasks on four public datasets also demonstrate that our approach can achieve better streaming recommendation performance with lower memory cost and higher time efficiency.
|
1704.08598
|
Phuong Nguyen
|
Phuong Nguyen and Klara Nahrstedt
|
Crowdsensing in Opportunistic Mobile Social Networks: A Context-aware
and Human-centric Approach
|
Long version of the IEEE MASS 2015 poster abstract titled
"Context-aware Crowd-sensing in Opportunistic Mobile Social Network"
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, there have been efforts to collect human contact traces
during social events (e.g., conferences) using Bluetooth devices (e.g., mobile
phones, iMotes). The results of these studies have enabled the ability to do
the crowd-sourcing task from within the crowd, in order to answer questions,
such as: what is the current density of the crowd, or how many people are
attending the event? However, in those studies, the sensing devices are usually
distributed and configured in a certain manner. For example, the number of
devices is fixed, people register for the devices on a volunteering basis. In
this paper, we treat the above problem as an optimization problem and draw the
connection to the vertex cover problem in graph theory. Since finding the
optimal solution for minimum vertex cover problem is NP-complete, approximation
algorithms have to be used. However, we will show that the well-known
approximation algorithms do not perform well with the crowd-sensing task. In
this paper, we propose the notions of node observability and coverage utility
score and design a new context-aware approximation algorithm to find vertex
cover that is tailored for crowd-sensing task. In addition, we design
human-centric bootstrapping strategies to make initial assignment of sensing
devices based on meta information about the participants (e.g., interests,
friendship). The motivation is to assign the sensing task to a more
"socialized" device to obtain better sensing coverage. We perform comprehensive
experiments on real-world data traces obtained from previous experimental
studies in conference and academic social context. The results show that our
proposed approach significantly outperforms the baseline approximation
algorithms in terms of sensing coverage.
|
[
{
"created": "Thu, 27 Apr 2017 14:28:28 GMT",
"version": "v1"
}
] |
2017-04-28
|
[
[
"Nguyen",
"Phuong",
""
],
[
"Nahrstedt",
"Klara",
""
]
] |
In recent years, there have been efforts to collect human contact traces during social events (e.g., conferences) using Bluetooth devices (e.g., mobile phones, iMotes). The results of these studies have enabled the ability to do the crowd-sourcing task from within the crowd, in order to answer questions, such as: what is the current density of the crowd, or how many people are attending the event? However, in those studies, the sensing devices are usually distributed and configured in a certain manner. For example, the number of devices is fixed, people register for the devices on a volunteering basis. In this paper, we treat the above problem as an optimization problem and draw the connection to the vertex cover problem in graph theory. Since finding the optimal solution for minimum vertex cover problem is NP-complete, approximation algorithms have to be used. However, we will show that the well-known approximation algorithms do not perform well with the crowd-sensing task. In this paper, we propose the notions of node observability and coverage utility score and design a new context-aware approximation algorithm to find vertex cover that is tailored for crowd-sensing task. In addition, we design human-centric bootstrapping strategies to make initial assignment of sensing devices based on meta information about the participants (e.g., interests, friendship). The motivation is to assign the sensing task to a more "socialized" device to obtain better sensing coverage. We perform comprehensive experiments on real-world data traces obtained from previous experimental studies in conference and academic social context. The results show that our proposed approach significantly outperforms the baseline approximation algorithms in terms of sensing coverage.
|
2105.13655
|
Dabeen Lee
|
Dabeen Lee, Milan Vojnovic
|
Scheduling Jobs with Stochastic Holding Costs
|
Extended abstract appeared in NeurIPS 2021
| null | null | null |
cs.LG cs.DS math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a single-server scheduling problem for the objective of minimizing
the expected cumulative holding cost incurred by jobs, where parameters
defining stochastic job holding costs are unknown to the scheduler. We consider
a general setting allowing for different job classes, where jobs of the same
class have statistically identical holding costs and service times, with an
arbitrary number of jobs across classes. In each time step, the server can
process a job and observes random holding costs of the jobs that are yet to be
completed. We consider a learning-based $c\mu$ rule scheduling which starts
with a preemption period of fixed duration, serving as a learning phase, and
having gathered data about jobs, it switches to nonpreemptive scheduling. Our
algorithms are designed to handle instances with large and small gaps in mean
job holding costs and achieve near-optimal performance guarantees. The
performance of algorithms is evaluated by regret, where the benchmark is the
minimum possible total holding cost attained by the $c\mu$ rule scheduling
policy when the parameters of jobs are known. We show regret lower bounds and
algorithms that achieve nearly matching regret upper bounds. Our numerical
results demonstrate the efficacy of our algorithms and show that our regret
analysis is nearly tight.
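For reference, the benchmark $c\mu$ rule with known parameters serves job classes in decreasing order of the index $c_i \mu_i$ (holding cost rate times service rate). A minimal sketch with hypothetical field names; the paper's learning-based variant instead estimates these parameters during the preemptive learning phase before switching to this nonpreemptive order:

```python
def cmu_order(jobs):
    """c-mu rule: serve jobs in decreasing order of c_i * mu_i, where
    c_i is the mean holding cost rate and mu_i = 1 / mean service time."""
    return sorted(jobs, key=lambda j: j["c"] * j["mu"], reverse=True)

jobs = [
    {"name": "A", "c": 1.0, "mu": 0.5},   # index 0.50
    {"name": "B", "c": 3.0, "mu": 0.25},  # index 0.75
    {"name": "C", "c": 2.0, "mu": 0.2},   # index 0.40
]
order = [j["name"] for j in cmu_order(jobs)]
# Highest index first: B (0.75), then A (0.50), then C (0.40).
assert order == ["B", "A", "C"]
```

Note that an expensive job class (B) can be served first despite its longer mean service time, because its holding cost dominates; regret in the abstract measures the extra holding cost incurred while the scheduler is still learning these indices.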
|
[
{
"created": "Fri, 28 May 2021 08:04:06 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Oct 2021 22:42:45 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Sep 2022 05:25:43 GMT",
"version": "v3"
}
] |
2022-09-22
|
[
[
"Lee",
"Dabeen",
""
],
[
"Vojnovic",
"Milan",
""
]
] |
We study a single-server scheduling problem for the objective of minimizing the expected cumulative holding cost incurred by jobs, where parameters defining stochastic job holding costs are unknown to the scheduler. We consider a general setting allowing for different job classes, where jobs of the same class have statistically identical holding costs and service times, with an arbitrary number of jobs across classes. In each time step, the server can process a job and observes random holding costs of the jobs that are yet to be completed. We consider a learning-based $c\mu$ rule scheduling which starts with a preemption period of fixed duration, serving as a learning phase, and having gathered data about jobs, it switches to nonpreemptive scheduling. Our algorithms are designed to handle instances with large and small gaps in mean job holding costs and achieve near-optimal performance guarantees. The performance of algorithms is evaluated by regret, where the benchmark is the minimum possible total holding cost attained by the $c\mu$ rule scheduling policy when the parameters of jobs are known. We show regret lower bounds and algorithms that achieve nearly matching regret upper bounds. Our numerical results demonstrate the efficacy of our algorithms and show that our regret analysis is nearly tight.
|
2109.13770
|
Andrew Lee
|
Andrew Lee, Jonathan K. Kummerfeld, Lawrence C. An, Rada Mihalcea
|
Micromodels for Efficient, Explainable, and Reusable Systems: A Case
Study on Mental Health
|
To appear in Findings of EMNLP 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Many statistical models have high accuracy on test benchmarks, but are not
explainable, struggle in low-resource scenarios, cannot be reused for multiple
tasks, and cannot easily integrate domain expertise. These factors limit their
use, particularly in settings such as mental health, where it is difficult to
annotate datasets and model outputs have significant impact. We introduce a
micromodel architecture to address these challenges. Our approach allows
researchers to build interpretable representations that embed domain knowledge
and provide explanations throughout the model's decision process. We
demonstrate the idea on multiple mental health tasks: depression
classification, PTSD classification, and suicidal risk assessment. Our systems
consistently produce strong results, even in low-resource scenarios, and are
more interpretable than alternative methods.
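The micromodel idea can be illustrated with a toy sketch: each micromodel is a small, self-contained query over an utterance, and their outputs form a feature vector whose every dimension is named and hence explainable. The rule-based micromodels below are invented stand-ins for illustration only; they are not the paper's actual (e.g., similarity-based) micromodels:

```python
# Each micromodel is a small interpretable predicate over an utterance.
# These keyword rules are hypothetical examples, not the paper's models.
micromodels = {
    "mentions_sleep": lambda text: "sleep" in text.lower(),
    "first_person": lambda text: any(
        w in text.lower().split() for w in ("i", "me", "my")
    ),
    "negative_mood": lambda text: any(
        w in text.lower() for w in ("sad", "hopeless", "empty")
    ),
}

def featurize(text):
    """Run every micromodel and return an explainable feature vector:
    each dimension is labeled by the micromodel that produced it, so a
    downstream classifier's decision can be traced back to named cues."""
    return {name: int(model(text)) for name, model in micromodels.items()}

feats = featurize("I can't sleep and I feel hopeless")
assert feats == {"mentions_sleep": 1, "first_person": 1, "negative_mood": 1}
```

Because the feature vector is built from named, reusable components, the same micromodels can feed multiple downstream tasks (e.g., depression and PTSD classification), which is the reusability the abstract highlights.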
|
[
{
"created": "Tue, 28 Sep 2021 14:45:59 GMT",
"version": "v1"
}
] |
2021-09-29
|
[
[
"Lee",
"Andrew",
""
],
[
"Kummerfeld",
"Jonathan K.",
""
],
[
"An",
"Lawrence C.",
""
],
[
"Mihalcea",
"Rada",
""
]
] |
Many statistical models have high accuracy on test benchmarks, but are not explainable, struggle in low-resource scenarios, cannot be reused for multiple tasks, and cannot easily integrate domain expertise. These factors limit their use, particularly in settings such as mental health, where it is difficult to annotate datasets and model outputs have significant impact. We introduce a micromodel architecture to address these challenges. Our approach allows researchers to build interpretable representations that embed domain knowledge and provide explanations throughout the model's decision process. We demonstrate the idea on multiple mental health tasks: depression classification, PTSD classification, and suicidal risk assessment. Our systems consistently produce strong results, even in low-resource scenarios, and are more interpretable than alternative methods.
|