| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2107.01799
|
Isaac Sledge
|
Isaac J. Sledge and Jose C. Principe
|
An Information-Theoretic Approach for Automatically Determining the
Number of States when Aggregating Markov Chains
|
Submitted to IEEE ICASSP. arXiv admin note: substantial text overlap
with arXiv:1903.09266
| null |
10.1109/ICASSP.2019.8682473
| null |
cs.IT cs.LG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
A fundamental problem when aggregating Markov chains is the specification of
the number of state groups. Too few state groups may fail to sufficiently
capture the pertinent dynamics of the original, high-order Markov chain. Too
many state groups may lead to a non-parsimonious, reduced-order Markov chain
whose complexity rivals that of the original. In this paper, we show that an
augmented value-of-information-based approach to aggregating Markov chains
facilitates the determination of the number of state groups. The optimal
state-group count coincides with the case where the complexity of the
reduced-order chain is balanced against the mutual dependence between the
original- and reduced-order chain dynamics.
|
[
{
"created": "Mon, 5 Jul 2021 05:36:04 GMT",
"version": "v1"
}
] |
2021-07-06
|
[
[
"Sledge",
"Isaac J.",
""
],
[
"Principe",
"Jose C.",
""
]
] |
A fundamental problem when aggregating Markov chains is the specification of the number of state groups. Too few state groups may fail to sufficiently capture the pertinent dynamics of the original, high-order Markov chain. Too many state groups may lead to a non-parsimonious, reduced-order Markov chain whose complexity rivals that of the original. In this paper, we show that an augmented value-of-information-based approach to aggregating Markov chains facilitates the determination of the number of state groups. The optimal state-group count coincides with the case where the complexity of the reduced-order chain is balanced against the mutual dependence between the original- and reduced-order chain dynamics.
|
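The trade-off described in the abstract above can be made concrete with a toy calculation. The sketch below (plain Python, not the paper's value-of-information criterion) aggregates a 4-state stationary distribution under candidate partitions and reports, for each state-group count, how much information the group label retains about the original state: parsimony favors few groups while retained information favors many, which is the balance the paper formalizes. All numbers and partitions are invented for illustration.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def group_distribution(pi, partition):
    """Aggregate a stationary distribution pi over original states
    into a distribution over state groups."""
    return [sum(pi[s] for s in states) for states in partition]

# Toy 4-state stationary distribution.
pi = [0.4, 0.3, 0.2, 0.1]

# Candidate aggregations into 2, 3, and 4 state groups.
partitions = {
    2: [[0, 1], [2, 3]],
    3: [[0], [1], [2, 3]],
    4: [[0], [1], [2], [3]],
}

for k, part in partitions.items():
    q = group_distribution(pi, part)
    # For a deterministic aggregation, I(X; Z) = H(Z): the information
    # the group label Z retains about the original state X.
    print(k, "groups ->", round(entropy(q), 3), "bits retained")
```

Retained information grows monotonically with the group count while parsimony shrinks; the paper's criterion picks the count where the two are balanced.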
2102.01884
|
Bekir Sait Ciftler
|
Bekir Sait Ciftler, Abdulmalik Alwarafy, Mohamed Abdallah, Mounir
Hamdi
|
DQN-Based Multi-User Power Allocation for Hybrid RF/VLC Networks
|
6 pages, 4 figures, accepted to IEEE ICC 2021
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a Deep Q-Network (DQN) based multi-agent multi-user power
allocation algorithm is proposed for hybrid networks composed of radio
frequency (RF) and visible light communication (VLC) access points (APs). The
users are capable of multihoming, which can bridge RF and VLC links for
accommodating their bandwidth requirements. By leveraging a non-cooperative
multi-agent DQN algorithm, where each AP is an agent, an online power
allocation strategy is developed to optimize the transmit power for providing
users' required data rate. Our simulation results demonstrate that the DQN's
median training convergence time is 90% shorter than that of the Q-Learning (QL)
based algorithm. The DQN-based algorithm converges to the desired user rate in
half the time on average, with a convergence rate of 96.1% compared to the
QL-based algorithm's 72.3%. Additionally, thanks to its
continuous state-space definition, the DQN-based power allocation algorithm
provides average user data rates closer to the target rates than the QL-based
algorithm when it converges.
|
[
{
"created": "Wed, 3 Feb 2021 05:42:49 GMT",
"version": "v1"
}
] |
2021-02-04
|
[
[
"Ciftler",
"Bekir Sait",
""
],
[
"Alwarafy",
"Abdulmalik",
""
],
[
"Abdallah",
"Mohamed",
""
],
[
"Hamdi",
"Mounir",
""
]
] |
In this paper, a Deep Q-Network (DQN) based multi-agent multi-user power allocation algorithm is proposed for hybrid networks composed of radio frequency (RF) and visible light communication (VLC) access points (APs). The users are capable of multihoming, which can bridge RF and VLC links for accommodating their bandwidth requirements. By leveraging a non-cooperative multi-agent DQN algorithm, where each AP is an agent, an online power allocation strategy is developed to optimize the transmit power for providing users' required data rate. Our simulation results demonstrate that the DQN's median training convergence time is 90% shorter than that of the Q-Learning (QL) based algorithm. The DQN-based algorithm converges to the desired user rate in half the time on average, with a convergence rate of 96.1% compared to the QL-based algorithm's 72.3%. Additionally, thanks to its continuous state-space definition, the DQN-based power allocation algorithm provides average user data rates closer to the target rates than the QL-based algorithm when it converges.
|
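As a rough illustration of learned power allocation (not the paper's method: a full DQN needs a deep-learning stack, a continuous state space, and a multi-agent hybrid RF/VLC setup), the sketch below runs tabular Q-learning — the QL baseline the abstract compares against — for a single hypothetical AP choosing a discrete transmit power to hit a target rate. The channel gain, power levels, and rate model are invented.

```python
import math, random

random.seed(0)

# Toy single-AP power control: pick a discrete transmit power so that
# the achieved rate log2(1 + p * gain) lands near a target rate.
powers = [0.5, 1.0, 2.0, 4.0, 8.0]
gain, target_rate = 1.0, 2.0

def rate(p):
    return math.log2(1 + p * gain)

def reward(p):
    # Closer to the target rate = higher (less negative) reward.
    return -abs(rate(p) - target_rate)

q = [0.0] * len(powers)   # single-state Q-table, one entry per power level
alpha, epsilon = 0.1, 0.2

for step in range(2000):
    if random.random() < epsilon:
        a = random.randrange(len(powers))                # explore
    else:
        a = max(range(len(powers)), key=lambda i: q[i])  # exploit
    # Stateless, bandit-style Q-update (no next-state term needed here).
    q[a] += alpha * (reward(powers[a]) - q[a])

best = max(range(len(powers)), key=lambda i: q[i])
print("chosen power:", powers[best], "rate:", round(rate(powers[best]), 2))
```

The learned policy settles on the power level whose rate is closest to the target, which is the per-AP objective the abstract describes.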
1904.07331
|
Niki Gitinabard
|
Adithya Sheshadri, Niki Gitinabard, Collin F. Lynch, Tiffany Barnes,
and Sarah Heckman
|
Predicting Student Performance Based on Online Study Habits: A Study of
Blended Courses
|
Published in the International Conference on Educational Data Mining
(EDM 2018)
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online tools provide unique access to students' study habits and
problem-solving behavior. In MOOCs, this online data can be used to inform
instructors and to provide automatic guidance to students. However, these
techniques may not apply in blended courses with face-to-face and online
components. We report on a study of integrated user-system interaction logs
from three computer science courses using four online systems: LMS, forum,
version control, and homework system. Our results show that students rarely
work across platforms in a single session, and that final class performance
can be predicted from students' system use.
|
[
{
"created": "Mon, 15 Apr 2019 21:18:13 GMT",
"version": "v1"
}
] |
2019-04-17
|
[
[
"Sheshadri",
"Adithya",
""
],
[
"Gitinabard",
"Niki",
""
],
[
"Lynch",
"Collin F.",
""
],
[
"Barnes",
"Tiffany",
""
],
[
"Heckman",
"Sarah",
""
]
] |
Online tools provide unique access to students' study habits and problem-solving behavior. In MOOCs, this online data can be used to inform instructors and to provide automatic guidance to students. However, these techniques may not apply in blended courses with face-to-face and online components. We report on a study of integrated user-system interaction logs from three computer science courses using four online systems: LMS, forum, version control, and homework system. Our results show that students rarely work across platforms in a single session, and that final class performance can be predicted from students' system use.
|
2107.06552
|
Young Eun Kim
|
Young Eun Kim and Seong-Whan Lee
|
Domain Generalization with Pseudo-Domain Label for Face Anti-Spoofing
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face anti-spoofing (FAS) plays an important role in protecting face
recognition systems from face representation attacks. Many recent studies in
FAS have approached this problem with domain generalization techniques. Domain
generalization aims to increase generalization performance to better detect
various types of attacks and unseen attacks. However, previous studies in this
area have defined each domain simply as an anti-spoofing dataset and focused
on developing learning techniques. In this paper, we propose a method that
enables the network to judge its domain by itself using the clustered
convolutional feature statistics from intermediate layers of the network,
without labeling domains as datasets. We obtain pseudo-domain labels not only
by using the feature-extracting network, but also by using depth estimators,
which were previously used only as an auxiliary task in FAS. In our
experiments, we trained with three datasets and evaluated the performance on
the remaining dataset, demonstrating the effectiveness of the proposed method
over a total of four sets of experiments.
|
[
{
"created": "Wed, 14 Jul 2021 08:35:07 GMT",
"version": "v1"
}
] |
2021-07-15
|
[
[
"Kim",
"Young Eun",
""
],
[
"Lee",
"Seong-Whan",
""
]
] |
Face anti-spoofing (FAS) plays an important role in protecting face recognition systems from face representation attacks. Many recent studies in FAS have approached this problem with domain generalization techniques. Domain generalization aims to increase generalization performance to better detect various types of attacks and unseen attacks. However, previous studies in this area have defined each domain simply as an anti-spoofing dataset and focused on developing learning techniques. In this paper, we propose a method that enables the network to judge its domain by itself using the clustered convolutional feature statistics from intermediate layers of the network, without labeling domains as datasets. We obtain pseudo-domain labels not only by using the feature-extracting network, but also by using depth estimators, which were previously used only as an auxiliary task in FAS. In our experiments, we trained with three datasets and evaluated the performance on the remaining dataset, demonstrating the effectiveness of the proposed method over a total of four sets of experiments.
|
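The pseudo-domain idea — clustering per-sample convolutional feature statistics instead of trusting dataset labels — can be sketched as follows. This toy runs plain k-means over the (mean, std) of invented "activation" vectors standing in for two capture conditions; the paper clusters statistics from intermediate network layers and additionally uses depth estimators, which this sketch omits.

```python
import math, random

random.seed(1)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means; returns one pseudo-label per point."""
    centers = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centers[c]))
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels

def feature_stats(feat):
    """(mean, std) of one sample's activations -- the per-sample style
    statistics that get clustered into pseudo-domains."""
    m = sum(feat) / len(feat)
    var = sum((x - m) ** 2 for x in feat) / len(feat)
    return (m, math.sqrt(var))

# Two synthetic "capture conditions": activations with different scales.
samples = [[random.gauss(0.0, 1.0) for _ in range(64)] for _ in range(20)] \
        + [[random.gauss(10.0, 0.3) for _ in range(64)] for _ in range(20)]

pseudo_domains = kmeans([feature_stats(s) for s in samples], k=2)
print(pseudo_domains)
```

Samples drawn under the same condition land in the same cluster, so the cluster index serves as a pseudo-domain label with no dataset annotation.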
2004.13297
|
Jian Ren
|
Menglei Chai, Jian Ren, Sergey Tulyakov
|
Neural Hair Rendering
|
ECCV 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a generic neural-based hair rendering pipeline that
can synthesize photo-realistic images from virtual 3D hair models. Unlike
existing supervised translation methods that require model-level similarity to
preserve consistent structure representation for both real images and fake
renderings, our method adopts an unsupervised solution to work on arbitrary
hair models. The key component of our method is a shared latent space to encode
appearance-invariant structure information of both domains, which generates
realistic renderings conditioned by extra appearance inputs. This is achieved
by domain-specific pre-disentangled structure representation, partially shared
domain encoder layers and a structure discriminator. We also propose a simple
yet effective temporal conditioning method to enforce consistency for video
sequence generation. We demonstrate the superiority of our method by testing it
on a large number of portraits and comparing it with alternative baselines and
state-of-the-art unsupervised image translation methods.
|
[
{
"created": "Tue, 28 Apr 2020 04:36:49 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jul 2020 19:29:30 GMT",
"version": "v2"
}
] |
2020-07-23
|
[
[
"Chai",
"Menglei",
""
],
[
"Ren",
"Jian",
""
],
[
"Tulyakov",
"Sergey",
""
]
] |
In this paper, we propose a generic neural-based hair rendering pipeline that can synthesize photo-realistic images from virtual 3D hair models. Unlike existing supervised translation methods that require model-level similarity to preserve consistent structure representation for both real images and fake renderings, our method adopts an unsupervised solution to work on arbitrary hair models. The key component of our method is a shared latent space to encode appearance-invariant structure information of both domains, which generates realistic renderings conditioned by extra appearance inputs. This is achieved by domain-specific pre-disentangled structure representation, partially shared domain encoder layers and a structure discriminator. We also propose a simple yet effective temporal conditioning method to enforce consistency for video sequence generation. We demonstrate the superiority of our method by testing it on a large number of portraits and comparing it with alternative baselines and state-of-the-art unsupervised image translation methods.
|
1811.12640
|
Sudeshna Roy
|
Sudeshna Roy, Meghana Madhyastha, Sheril Lawrence, Vaibhav Rajan
|
Inferring Concept Prerequisite Relations from Online Educational
Resources
|
Accepted at the AAAI Conference on Innovative Applications of
Artificial Intelligence (IAAI-19)
| null | null | null |
cs.CL cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet has rich and rapidly increasing sources of high-quality
educational content. Inferring prerequisite relations between educational
concepts is required for modern large-scale online educational technology
applications such as personalized recommendations and automatic curriculum
creation. We present PREREQ, a new supervised learning method for inferring
concept prerequisite relations. PREREQ is designed using latent representations
of concepts obtained from the Pairwise Latent Dirichlet Allocation model, and a
neural network based on the Siamese network architecture. PREREQ can learn
unknown concept prerequisites from course prerequisites and labeled concept
prerequisite data. It outperforms state-of-the-art approaches on benchmark
datasets and can learn effectively from very little training data. PREREQ can
also use unlabeled video playlists, a steadily growing source of training data,
to learn concept prerequisites, thus obviating the need for manual annotation
of course prerequisites.
|
[
{
"created": "Fri, 30 Nov 2018 06:55:20 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jan 2019 00:39:00 GMT",
"version": "v2"
}
] |
2019-01-24
|
[
[
"Roy",
"Sudeshna",
""
],
[
"Madhyastha",
"Meghana",
""
],
[
"Lawrence",
"Sheril",
""
],
[
"Rajan",
"Vaibhav",
""
]
] |
The Internet has rich and rapidly increasing sources of high-quality educational content. Inferring prerequisite relations between educational concepts is required for modern large-scale online educational technology applications such as personalized recommendations and automatic curriculum creation. We present PREREQ, a new supervised learning method for inferring concept prerequisite relations. PREREQ is designed using latent representations of concepts obtained from the Pairwise Latent Dirichlet Allocation model, and a neural network based on the Siamese network architecture. PREREQ can learn unknown concept prerequisites from course prerequisites and labeled concept prerequisite data. It outperforms state-of-the-art approaches on benchmark datasets and can learn effectively from very little training data. PREREQ can also use unlabeled video playlists, a steadily growing source of training data, to learn concept prerequisites, thus obviating the need for manual annotation of course prerequisites.
|
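A minimal sketch of the Siamese ingredient: both concepts of a pair pass through the same learned projection, and an asymmetric score predicts prerequisite direction. The embeddings, pairs, and plain logistic training loop below are invented toy stand-ins; the paper uses Pairwise-LDA concept representations and a neural Siamese network.

```python
import math

# Hypothetical 2-D concept embeddings (invented, not Pairwise-LDA).
concepts = {
    "sets":        [0.1, 0.5],
    "functions":   [0.4, 0.2],
    "limits":      [0.7, 0.6],
    "derivatives": [1.0, 0.3],
}

# Label 1 = "first concept is a prerequisite of the second".
pairs = [("sets", "functions", 1), ("functions", "sets", 0),
         ("functions", "limits", 1), ("limits", "functions", 0),
         ("limits", "derivatives", 1), ("derivatives", "limits", 0)]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def score(u, v, w):
    # Shared weights w for both branches (the Siamese part); the score
    # is antisymmetric in (u, v), so pair direction matters.
    return sigmoid(sum(wi * (ui - vi) for wi, ui, vi in zip(w, u, v)))

w = [0.0, 0.0]
for _ in range(500):                  # plain logistic-regression updates
    for a, b, y in pairs:
        u, v = concepts[a], concepts[b]
        p = score(u, v, w)
        for i in range(2):
            w[i] += 0.5 * (y - p) * (u[i] - v[i])

print(round(score(concepts["sets"], concepts["functions"], w), 2))
```

After training, every pair is scored on the correct side of 0.5, and reversing a pair flips the prediction — the directionality that distinguishes prerequisite inference from mere relatedness.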
1910.14634
|
Reinhard Heckel
|
Reinhard Heckel and Mahdi Soltanolkotabi
|
Denoising and Regularization via Exploiting the Structural Bias of
Convolutional Generators
|
final ICLR version; simplifications in the proof
|
International Conference on Learning Representations (ICLR) 2020
| null | null |
cs.LG cs.CV cs.IT math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) have emerged as highly successful tools
for image generation, recovery, and restoration. A major contributing factor to
this success is that convolutional networks impose strong prior assumptions
about natural images. A surprising experiment that highlights this
architectural bias towards natural images is that one can remove noise and
corruptions from a natural image without using any training data, by simply
fitting (via gradient descent) a randomly initialized, over-parameterized
convolutional generator to the corrupted image. While this over-parameterized
network can fit the corrupted image perfectly, surprisingly after a few
iterations of gradient descent it generates an almost uncorrupted image. This
intriguing phenomenon enables state-of-the-art CNN-based denoising and
regularization of other inverse problems. In this paper, we attribute this
effect to a particular architectural choice of convolutional networks, namely
convolutions with fixed interpolating filters. We then formally characterize
the dynamics of fitting a two-layer convolutional generator to a noisy signal
and prove that early-stopped gradient descent denoises/regularizes. Our proof
relies on showing that convolutional generators fit the structured part of an
image significantly faster than the corrupted portion.
|
[
{
"created": "Thu, 31 Oct 2019 17:22:00 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Feb 2020 01:49:25 GMT",
"version": "v2"
}
] |
2020-02-25
|
[
[
"Heckel",
"Reinhard",
""
],
[
"Soltanolkotabi",
"Mahdi",
""
]
] |
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent it generates an almost uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of other inverse problems. In this paper, we attribute this effect to a particular architectural choice of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. Our proof relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
|
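The claimed dynamics — structured content fits much faster than the corruption, so early-stopped gradient descent denoises — can be reproduced in a 1-D toy. The model below is gradient descent on cosine components whose scales decay with frequency, a crude stand-in for convolutions with fixed interpolating filters; the "corruption" is a deterministic high-frequency component rather than random noise, so the effect is exactly visible. All sizes and scales are invented.

```python
import math

n, K, lr = 16, 6, 0.05

def basis(k, t):
    return math.cos(math.pi * k * (t + 0.5) / n)

# Column k is a cosine scaled by 1/2**k: low frequencies get much larger
# effective step sizes, mimicking the smoothness bias of fixed
# interpolation filters.
A = [[basis(k, t) / 2 ** k for k in range(K)] for t in range(n)]

clean = [basis(1, t) for t in range(n)]                    # structured signal
corrupted = [c + 0.5 * basis(5, t) for t, c in enumerate(clean)]

def err_to_clean(coef):
    out = [sum(A[t][k] * coef[k] for k in range(K)) for t in range(n)]
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(out, clean)))

coef = [0.0] * K
errors = {}
for it in range(1, 10001):
    # Plain gradient descent on the least-squares fit to the corruption.
    resid = [sum(A[t][k] * coef[k] for k in range(K)) - corrupted[t]
             for t in range(n)]
    for k in range(K):
        coef[k] -= lr * sum(A[t][k] * resid[t] for t in range(n))
    if it in (100, 10000):
        errors[it] = err_to_clean(coef)

print(errors)  # the early-stopped fit is close to the clean signal
```

By iteration 100 the low-frequency structure is captured almost exactly while the high-frequency corruption is barely fit; running ten thousand iterations fits the corruption too, and the reconstruction error climbs back toward the corruption's magnitude — precisely the early-stopping story the abstract proves for two-layer convolutional generators.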
2307.03638
|
Jianyuan Ni
|
Jianyuan Ni, Hao Tang, Anne H.H. Ngu, Gaowen Liu, Yan Yan
|
Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based
Human Action Recognition
|
We will be making some significant changes to the paper, including
the title and methodology. We therefore wish to withdraw the paper for now
| null | null | null |
cs.MM cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wearable sensor-based Human Action Recognition (HAR) has made significant
strides in recent times. However, the accuracy of wearable sensor-based HAR
still lags behind that of systems based on visual modalities, such as RGB
video and depth data. Although diverse input modalities can provide
complementary cues and improve the accuracy of HAR, wearable devices can only
capture limited kinds of non-visual time-series input, such as accelerometer
and gyroscope data. This limitation hinders the deployment of multimodal
approaches that simultaneously use visual and non-visual modality data in
parallel on current wearable devices. To address this issue, we propose a
novel Physical-aware Cross-modal Adversarial (PCA) framework that utilizes
only time-series accelerometer data from four inertial sensors for the
wearable sensor-based HAR problem. Specifically, we propose an effective
IMU2SKELETON network to produce corresponding synthetic skeleton joints from
accelerometer data. Subsequently, we impose additional constraints on the
synthetic skeleton data from a physical perspective, as accelerometer data
can be regarded as the second derivative of the skeleton sequence
coordinates. After that, the original accelerometer data as well as the
constrained skeleton sequence are fused together to make the final
classification. In this
way, when individuals wear wearable devices, the devices can not only capture
accelerometer data, but can also generate synthetic skeleton sequences for
real-time wearable sensor-based HAR applications that need to be conducted
anytime and anywhere. To demonstrate the effectiveness of our proposed PCA
framework, we conduct extensive experiments on Berkeley-MHAD, UTD-MHAD, and
MMAct datasets. The results confirm that the proposed PCA approach achieves
competitive performance compared to previous methods on the mono
sensor-based HAR classification problem.
|
[
{
"created": "Fri, 7 Jul 2023 14:57:34 GMT",
"version": "v1"
},
{
"created": "Sun, 19 May 2024 20:39:25 GMT",
"version": "v2"
}
] |
2024-05-21
|
[
[
"Ni",
"Jianyuan",
""
],
[
"Tang",
"Hao",
""
],
[
"Ngu",
"Anne H. H.",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Yan",
"Yan",
""
]
] |
Wearable sensor-based Human Action Recognition (HAR) has made significant strides in recent times. However, the accuracy of wearable sensor-based HAR still lags behind that of systems based on visual modalities, such as RGB video and depth data. Although diverse input modalities can provide complementary cues and improve the accuracy of HAR, wearable devices can only capture limited kinds of non-visual time-series input, such as accelerometer and gyroscope data. This limitation hinders the deployment of multimodal approaches that simultaneously use visual and non-visual modality data in parallel on current wearable devices. To address this issue, we propose a novel Physical-aware Cross-modal Adversarial (PCA) framework that utilizes only time-series accelerometer data from four inertial sensors for the wearable sensor-based HAR problem. Specifically, we propose an effective IMU2SKELETON network to produce corresponding synthetic skeleton joints from accelerometer data. Subsequently, we impose additional constraints on the synthetic skeleton data from a physical perspective, as accelerometer data can be regarded as the second derivative of the skeleton sequence coordinates. After that, the original accelerometer data as well as the constrained skeleton sequence are fused together to make the final classification. In this way, when individuals wear wearable devices, the devices can not only capture accelerometer data, but can also generate synthetic skeleton sequences for real-time wearable sensor-based HAR applications that need to be conducted anytime and anywhere. To demonstrate the effectiveness of our proposed PCA framework, we conduct extensive experiments on Berkeley-MHAD, UTD-MHAD, and MMAct datasets. The results confirm that the proposed PCA approach achieves competitive performance compared to previous methods on the mono sensor-based HAR classification problem.
|
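The physical constraint the abstract leans on — that accelerometer readings can be regarded as the second time derivative of the (skeleton) joint coordinates — is just a finite-difference relation, sketched below on an invented 1-D joint trajectory.

```python
import math

dt = 0.01
t = [i * dt for i in range(1000)]
pos = [math.sin(x) for x in t]        # toy 1-D joint coordinate over time

def second_derivative(p, dt):
    """Central finite difference: acceleration from sampled positions."""
    return [(p[i - 1] - 2 * p[i] + p[i + 1]) / dt ** 2
            for i in range(1, len(p) - 1)]

acc = second_derivative(pos, dt)
# Analytic acceleration of sin(t) is -sin(t); check the midpoint.
i = 500
print(round(acc[i - 1], 3), round(-math.sin(t[i]), 3))
```

In the paper's direction the relation runs the other way — synthetic skeleton coordinates are constrained so that their second difference matches the measured accelerometer signal — but the identity being enforced is this one.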
1908.02802
|
Roozbeh Yousefzadeh
|
Roozbeh Yousefzadeh, Dianne P O'Leary
|
Investigating Decision Boundaries of Trained Neural Networks
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning models have been the subject of study from various
perspectives, for example, their training process, interpretation,
generalization error, robustness to adversarial attacks, etc. A trained model
is defined by its decision boundaries, and therefore, many of the studies about
deep learning models speculate about the decision boundaries, and sometimes
make simplifying assumptions about them. So far, finding exact points on the
decision boundaries of trained deep models has been considered an intractable
problem. Here, we compute exact points on the decision boundaries of these
models and provide mathematical tools to investigate the surfaces that define
the decision boundaries. Through numerical results, we confirm that some of the
speculations about the decision boundaries are accurate, some of the
computational methods can be improved, and some of the simplifying assumptions
may be unreliable, for models with nonlinear activation functions. We advocate
for verification of simplifying assumptions and approximation methods, wherever
they are used. Finally, we demonstrate that the computational practices used
for finding adversarial examples can be improved, and that computing the closest
point on the decision boundary reveals the weakest vulnerability of a model
against adversarial attack.
|
[
{
"created": "Wed, 7 Aug 2019 19:09:22 GMT",
"version": "v1"
}
] |
2019-08-09
|
[
[
"Yousefzadeh",
"Roozbeh",
""
],
[
"O'Leary",
"Dianne P",
""
]
] |
Deep learning models have been the subject of study from various perspectives, for example, their training process, interpretation, generalization error, robustness to adversarial attacks, etc. A trained model is defined by its decision boundaries, and therefore, many of the studies about deep learning models speculate about the decision boundaries, and sometimes make simplifying assumptions about them. So far, finding exact points on the decision boundaries of trained deep models has been considered an intractable problem. Here, we compute exact points on the decision boundaries of these models and provide mathematical tools to investigate the surfaces that define the decision boundaries. Through numerical results, we confirm that some of the speculations about the decision boundaries are accurate, some of the computational methods can be improved, and some of the simplifying assumptions may be unreliable, for models with nonlinear activation functions. We advocate for verification of simplifying assumptions and approximation methods, wherever they are used. Finally, we demonstrate that the computational practices used for finding adversarial examples can be improved, and that computing the closest point on the decision boundary reveals the weakest vulnerability of a model against adversarial attack.
|
2111.08211
|
Yan Kang
|
Yuezhou Wu, Yan Kang, Jiahuan Luo, Yuanqin He, Qiang Yang
|
FedCG: Leverage Conditional GAN for Protecting Privacy and Maintaining
Competitive Performance in Federated Learning
| null |
Proceedings of the Thirty-First International Joint Conference on
Artificial Intelligence, 2022
|
10.24963/ijcai.2022/324
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Federated learning (FL) aims to protect data privacy by enabling clients to
build machine learning models collaboratively without sharing their private
data. Recent works demonstrate that information exchanged during FL is subject
to gradient-based privacy attacks, and consequently, a variety of
privacy-preserving methods have been adopted to thwart such attacks. However,
these defensive methods either introduce orders of magnitude more computational
and communication overheads (e.g., with homomorphic encryption) or incur
substantial model performance losses in terms of prediction accuracy (e.g.,
with differential privacy). In this work, we propose $\textsc{FedCG}$, a novel
federated learning method that leverages conditional generative adversarial
networks to achieve high-level privacy protection while still maintaining
competitive model performance. $\textsc{FedCG}$ decomposes each client's local
network into a private extractor and a public classifier and keeps the
extractor local to protect privacy. Instead of exposing extractors,
$\textsc{FedCG}$ shares clients' generators with the server for aggregating
clients' shared knowledge, aiming to enhance the performance of each client's
local networks. Extensive experiments demonstrate that $\textsc{FedCG}$ can
achieve competitive model performance compared with FL baselines, and privacy
analysis shows that $\textsc{FedCG}$ has a high-level privacy-preserving
capability. Code is available at https://github.com/yankang18/FedCG
|
[
{
"created": "Tue, 16 Nov 2021 03:20:37 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Feb 2022 03:16:52 GMT",
"version": "v2"
},
{
"created": "Sun, 7 Jul 2024 03:57:12 GMT",
"version": "v3"
}
] |
2024-07-09
|
[
[
"Wu",
"Yuezhou",
""
],
[
"Kang",
"Yan",
""
],
[
"Luo",
"Jiahuan",
""
],
[
"He",
"Yuanqin",
""
],
[
"Yang",
"Qiang",
""
]
] |
Federated learning (FL) aims to protect data privacy by enabling clients to build machine learning models collaboratively without sharing their private data. Recent works demonstrate that information exchanged during FL is subject to gradient-based privacy attacks, and consequently, a variety of privacy-preserving methods have been adopted to thwart such attacks. However, these defensive methods either introduce orders of magnitude more computational and communication overheads (e.g., with homomorphic encryption) or incur substantial model performance losses in terms of prediction accuracy (e.g., with differential privacy). In this work, we propose $\textsc{FedCG}$, a novel federated learning method that leverages conditional generative adversarial networks to achieve high-level privacy protection while still maintaining competitive model performance. $\textsc{FedCG}$ decomposes each client's local network into a private extractor and a public classifier and keeps the extractor local to protect privacy. Instead of exposing extractors, $\textsc{FedCG}$ shares clients' generators with the server for aggregating clients' shared knowledge, aiming to enhance the performance of each client's local networks. Extensive experiments demonstrate that $\textsc{FedCG}$ can achieve competitive model performance compared with FL baselines, and privacy analysis shows that $\textsc{FedCG}$ has a high-level privacy-preserving capability. Code is available at https://github.com/yankang18/FedCG
|
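The decomposition can be sketched structurally: each client's network splits into a private extractor, which never leaves the client, and a public classifier, which is uploaded and averaged by the server. The toy below omits the conditional generator and all training; parameters are plain lists with invented values, so it only shows the split-and-aggregate flow, not FedCG's generator sharing.

```python
def average(params_list):
    """Server-side aggregation: element-wise mean of shared parameters."""
    n = len(params_list)
    return [sum(p[i] for p in params_list) / n
            for i in range(len(params_list[0]))]

clients = [
    {"extractor": [0.1, 0.2], "classifier": [1.0, 2.0]},  # private / public
    {"extractor": [0.3, 0.1], "classifier": [3.0, 4.0]},
]

# Only the public part ever leaves a client.
uploaded = [c["classifier"] for c in clients]
global_classifier = average(uploaded)

for c in clients:
    c["classifier"] = list(global_classifier)  # download the aggregated part
print(global_classifier, clients[0]["extractor"])  # extractor untouched
```

The privacy argument rests on exactly this asymmetry: gradient-based attacks target the shared parameters, and the extractor's parameters (the part closest to raw data) are never among them.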
1208.0079
|
Abhay Jha
|
Abhay Jha, Dan Suciu
|
Probabilistic Databases with MarkoViews
|
VLDB2012
|
Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1160-1171 (2012)
| null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the work on query evaluation in probabilistic databases has focused
on the simple tuple-independent data model, where tuples are independent random
events. Several efficient query evaluation techniques exist in this setting,
such as safe plans, algorithms based on OBDDs, tree-decomposition and a variety
of approximation algorithms. However, complex data analytics tasks often
require complex correlations, and query evaluation then is significantly more
expensive, or more restrictive. In this paper, we propose MVDB as a framework
both for representing complex correlations and for efficient query evaluation.
An MVDB specifies correlations by views, called MarkoViews, on the
probabilistic relations and by declaring the weights of the views' outputs. An
MVDB is a (very large) Markov Logic Network. We make two sets of contributions.
First, we show that query evaluation on an MVDB is equivalent to evaluating a
Union of Conjunctive Queries (UCQ) over a tuple-independent database. The
translation is exact (thus allowing the techniques developed for tuple
independent databases to be carried over to MVDB), yet it is novel and quite
non-obvious (some resulting probabilities may be negative!). This translation
in itself, though, may not lead to much gain, since the translated query gets
complicated as we try to capture more correlations. Our second contribution is
to propose a new query evaluation strategy that exploits offline compilation to
speed up online query evaluation. Here we utilize and extend our prior work on
compilation of UCQ. We validate experimentally our techniques on a large
probabilistic database with MarkoViews inferred from the DBLP data.
|
[
{
"created": "Wed, 1 Aug 2012 03:47:10 GMT",
"version": "v1"
}
] |
2012-08-02
|
[
[
"Jha",
"Abhay",
""
],
[
"Suciu",
"Dan",
""
]
] |
Most of the work on query evaluation in probabilistic databases has focused on the simple tuple-independent data model, where tuples are independent random events. Several efficient query evaluation techniques exist in this setting, such as safe plans, algorithms based on OBDDs, tree-decomposition and a variety of approximation algorithms. However, complex data analytics tasks often require complex correlations, and query evaluation then is significantly more expensive, or more restrictive. In this paper, we propose MVDB as a framework both for representing complex correlations and for efficient query evaluation. An MVDB specifies correlations by views, called MarkoViews, on the probabilistic relations and by declaring the weights of the views' outputs. An MVDB is a (very large) Markov Logic Network. We make two sets of contributions. First, we show that query evaluation on an MVDB is equivalent to evaluating a Union of Conjunctive Queries (UCQ) over a tuple-independent database. The translation is exact (thus allowing the techniques developed for tuple independent databases to be carried over to MVDB), yet it is novel and quite non-obvious (some resulting probabilities may be negative!). This translation in itself, though, may not lead to much gain, since the translated query gets complicated as we try to capture more correlations. Our second contribution is to propose a new query evaluation strategy that exploits offline compilation to speed up online query evaluation. Here we utilize and extend our prior work on compilation of UCQ. We validate experimentally our techniques on a large probabilistic database with MarkoViews inferred from the DBLP data.
|
1902.09093
|
Bishan Yang
|
Igor Labutov, Bishan Yang, Anusha Prakash, Amos Azaria
|
Multi-Relational Question Answering from Narratives: Machine Reading and
Reasoning in Simulated Worlds
|
published at ACL 2018
|
ACL 2018
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Question Answering (QA), as a research field, has primarily focused on either
knowledge bases (KBs) or free text as a source of knowledge. These two sources
have historically shaped the kinds of questions that are asked over these
sources, and the methods developed to answer them. In this work, we look
towards a practical use-case of QA over user-instructed knowledge that uniquely
combines elements of both structured QA over knowledge bases, and unstructured
QA over narrative, introducing the task of multi-relational QA over personal
narrative. As a first step towards this goal, we make three key contributions:
(i) we generate and release TextWorldsQA, a set of five diverse datasets, where
each dataset contains dynamic narrative that describes entities and relations
in a simulated world, paired with variably compositional questions over that
knowledge, (ii) we perform a thorough evaluation and analysis of several
state-of-the-art QA models and their variants at this task, and (iii) we
release a lightweight Python-based framework we call TextWorlds for easily
generating arbitrary additional worlds and narrative, with the goal of allowing
the community to create and share a growing collection of diverse worlds as a
test-bed for this task.
|
[
{
"created": "Mon, 25 Feb 2019 05:04:26 GMT",
"version": "v1"
}
] |
2019-02-26
|
[
[
"Labutov",
"Igor",
""
],
[
"Yang",
"Bishan",
""
],
[
"Prakash",
"Anusha",
""
],
[
"Azaria",
"Amos",
""
]
] |
Question Answering (QA), as a research field, has primarily focused on either knowledge bases (KBs) or free text as a source of knowledge. These two sources have historically shaped the kinds of questions that are asked over these sources, and the methods developed to answer them. In this work, we look towards a practical use-case of QA over user-instructed knowledge that uniquely combines elements of both structured QA over knowledge bases, and unstructured QA over narrative, introducing the task of multi-relational QA over personal narrative. As a first step towards this goal, we make three key contributions: (i) we generate and release TextWorldsQA, a set of five diverse datasets, where each dataset contains dynamic narrative that describes entities and relations in a simulated world, paired with variably compositional questions over that knowledge, (ii) we perform a thorough evaluation and analysis of several state-of-the-art QA models and their variants at this task, and (iii) we release a lightweight Python-based framework we call TextWorlds for easily generating arbitrary additional worlds and narrative, with the goal of allowing the community to create and share a growing collection of diverse worlds as a test-bed for this task.
|
1710.02555
|
Melissa Greeff
|
Melissa Greeff and Angela P. Schoellig
|
Model Predictive Path-Following for Constrained Differentially Flat
Systems
|
8 pages, submitted to ICRA 2018
| null | null | null |
cs.RO cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For many tasks, predictive path-following control can significantly improve
the performance and robustness of autonomous robots over traditional trajectory
tracking control. It does this by prioritizing closeness to the path over timed
progress along the path and by looking ahead to account for changes in the
path. We propose a novel predictive path-following approach that couples
feedforward linearization with path-based model predictive control. Our
approach has a few key advantages. By utilizing the differential flatness
property, we reduce the path-based model predictive control problem from a
nonlinear to a convex optimization problem. Robustness to disturbances is
achieved by a dynamic path reference, which adjusts its speed based on the
robot's progress. We also account for key system constraints. We demonstrate
these advantages in experiments on a quadrotor. We show improved performance
over a baseline trajectory tracking controller by keeping the quadrotor closer
to the desired path under nominal conditions, with an initial offset and under
a wind disturbance.
|
[
{
"created": "Fri, 6 Oct 2017 18:56:13 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2017 15:24:08 GMT",
"version": "v2"
}
] |
2017-11-03
|
[
[
"Greeff",
"Melissa",
""
],
[
"Schoellig",
"Angela P.",
""
]
] |
For many tasks, predictive path-following control can significantly improve the performance and robustness of autonomous robots over traditional trajectory tracking control. It does this by prioritizing closeness to the path over timed progress along the path and by looking ahead to account for changes in the path. We propose a novel predictive path-following approach that couples feedforward linearization with path-based model predictive control. Our approach has a few key advantages. By utilizing the differential flatness property, we reduce the path-based model predictive control problem from a nonlinear to a convex optimization problem. Robustness to disturbances is achieved by a dynamic path reference, which adjusts its speed based on the robot's progress. We also account for key system constraints. We demonstrate these advantages in experiments on a quadrotor. We show improved performance over a baseline trajectory tracking controller by keeping the quadrotor closer to the desired path under nominal conditions, with an initial offset and under a wind disturbance.
|
2008.09511
|
Peter Lindner
|
Nofar Carmeli, Martin Grohe, Peter Lindner, Christoph Standke
|
Tuple-Independent Representations of Infinite Probabilistic Databases
| null | null | null | null |
cs.DB cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Probabilistic databases (PDBs) are probability spaces over database
instances. They provide a framework for handling uncertainty in databases, as
occurs due to data integration, noisy data, data from unreliable sources or
randomized processes. Most of the existing theory literature investigated
finite, tuple-independent PDBs (TI-PDBs) where the occurrences of tuples are
independent events. Only recently, Grohe and Lindner (PODS '19) introduced
independence assumptions for PDBs beyond the finite domain assumption. In the
finite case, a major argument for discussing the theoretical properties of TI-PDBs
is that they can be used to represent any finite PDB via views. This is no
longer the case once the number of tuples is countably infinite. In this paper,
we systematically study the representability of infinite PDBs in terms of
TI-PDBs and the related block-independent disjoint PDBs.
The central question is which infinite PDBs are representable as first-order
views over tuple-independent PDBs. We give a necessary condition for the
representability of PDBs and provide a sufficient criterion for
representability in terms of the probability distribution of a PDB. With
various examples, we explore the limits of our criteria. We show that
conditioning on first order properties yields no additional power in terms of
expressivity. Finally, we discuss the relation between purely logical and
arithmetic reasons for (non-)representability.
|
[
{
"created": "Fri, 21 Aug 2020 14:39:47 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Apr 2022 07:17:45 GMT",
"version": "v2"
}
] |
2022-04-20
|
[
[
"Carmeli",
"Nofar",
""
],
[
"Grohe",
"Martin",
""
],
[
"Lindner",
"Peter",
""
],
[
"Standke",
"Christoph",
""
]
] |
Probabilistic databases (PDBs) are probability spaces over database instances. They provide a framework for handling uncertainty in databases, as occurs due to data integration, noisy data, data from unreliable sources or randomized processes. Most of the existing theory literature investigated finite, tuple-independent PDBs (TI-PDBs) where the occurrences of tuples are independent events. Only recently, Grohe and Lindner (PODS '19) introduced independence assumptions for PDBs beyond the finite domain assumption. In the finite case, a major argument for discussing the theoretical properties of TI-PDBs is that they can be used to represent any finite PDB via views. This is no longer the case once the number of tuples is countably infinite. In this paper, we systematically study the representability of infinite PDBs in terms of TI-PDBs and the related block-independent disjoint PDBs. The central question is which infinite PDBs are representable as first-order views over tuple-independent PDBs. We give a necessary condition for the representability of PDBs and provide a sufficient criterion for representability in terms of the probability distribution of a PDB. With various examples, we explore the limits of our criteria. We show that conditioning on first order properties yields no additional power in terms of expressivity. Finally, we discuss the relation between purely logical and arithmetic reasons for (non-)representability.
|
2302.08174
|
Rafael Mohr
|
Christian Eder, Pierre Lairez, Rafael Mohr, Mohab Safey El Din
|
A Direttissimo Algorithm for Equidimensional Decomposition
|
Some minor revisions, corrects a mistake in the proof of lemma 2.2
| null | null | null |
cs.SC math.AC
|
http://creativecommons.org/licenses/by/4.0/
|
We describe a recursive algorithm that decomposes an algebraic set into
locally closed equidimensional sets, i.e. sets which each have irreducible
components of the same dimension. At the core of this algorithm, we combine
ideas from the theory of triangular sets, a.k.a. regular chains, with Gr\"obner
bases to encode and work with locally closed algebraic sets. Equipped with
this, our algorithm avoids projections of the algebraic sets that are
decomposed and certain genericity assumptions frequently made when decomposing
polynomial systems, such as assumptions about Noether position. This makes it
produce fine decompositions on more structured systems where ensuring
genericity assumptions often destroys the structure of the system at hand.
Practical experiments demonstrate its efficiency compared to state-of-the-art
implementations.
|
[
{
"created": "Thu, 16 Feb 2023 09:42:55 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jun 2023 08:45:15 GMT",
"version": "v2"
}
] |
2023-06-12
|
[
[
"Eder",
"Christian",
""
],
[
"Lairez",
"Pierre",
""
],
[
"Mohr",
"Rafael",
""
],
[
"Din",
"Mohab Safey El",
""
]
] |
We describe a recursive algorithm that decomposes an algebraic set into locally closed equidimensional sets, i.e. sets which each have irreducible components of the same dimension. At the core of this algorithm, we combine ideas from the theory of triangular sets, a.k.a. regular chains, with Gr\"obner bases to encode and work with locally closed algebraic sets. Equipped with this, our algorithm avoids projections of the algebraic sets that are decomposed and certain genericity assumptions frequently made when decomposing polynomial systems, such as assumptions about Noether position. This makes it produce fine decompositions on more structured systems where ensuring genericity assumptions often destroys the structure of the system at hand. Practical experiments demonstrate its efficiency compared to state-of-the-art implementations.
|
2406.12479
|
Haifeng Li
|
Linrui Xu, Ling Zhao, Wang Guo, Qiujun Li, Kewang Long, Kaiqi Zou,
Yuhan Wang, Haifeng Li
|
RS-GPT4V: A Unified Multimodal Instruction-Following Dataset for Remote
Sensing Image Understanding
|
14 pages, 6 figures, 4 tables
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The remote sensing image intelligence understanding model is undergoing a new
profound paradigm shift which has been promoted by multi-modal large language
model (MLLM), i.e. from the paradigm learning a domain model (LaDM) shifts to
paradigm learning a pre-trained general foundation model followed by an
adaptive domain model (LaGD). Under the new LaGD paradigm, the old datasets,
which have led to advances in RSI intelligence understanding in the last
decade, are no longer suitable for fire-new tasks. We argued that a new dataset
must be designed to lighten tasks with the following features: 1)
Generalization: training model to learn shared knowledge among tasks and to
adapt to different tasks; 2) Understanding complex scenes: training model to
understand the fine-grained attribute of the objects of interest, and to be
able to describe the scene with natural language; 3) Reasoning: training model
to be able to realize high-level visual reasoning. In this paper, we designed a
high-quality, diversified, and unified multimodal instruction-following dataset
for RSI understanding produced by GPT-4V and existing datasets, which we called
RS-GPT4V. To achieve generalization, we used (Question, Answer) pairs deduced
from GPT-4V via instruction-following to unify tasks such as captioning and
localization; to achieve complex scene understanding, we proposed a
hierarchical instruction description with a local strategy, in which the
fine-grained attributes of the objects and their spatial relationships are
described, and a global strategy, in which all the local information is
integrated to yield a detailed instruction description; to achieve reasoning,
we designed multi-turn QA pairs to provide reasoning ability for a model. The
empirical results show that the fine-tuned MLLMs by RS-GPT4V can describe
fine-grained information. The dataset is available at:
https://github.com/GeoX-Lab/RS-GPT4V.
|
[
{
"created": "Tue, 18 Jun 2024 10:34:28 GMT",
"version": "v1"
}
] |
2024-06-19
|
[
[
"Xu",
"Linrui",
""
],
[
"Zhao",
"Ling",
""
],
[
"Guo",
"Wang",
""
],
[
"Li",
"Qiujun",
""
],
[
"Long",
"Kewang",
""
],
[
"Zou",
"Kaiqi",
""
],
[
"Wang",
"Yuhan",
""
],
[
"Li",
"Haifeng",
""
]
] |
The remote sensing image intelligence understanding model is undergoing a new profound paradigm shift which has been promoted by multi-modal large language model (MLLM), i.e. from the paradigm learning a domain model (LaDM) shifts to paradigm learning a pre-trained general foundation model followed by an adaptive domain model (LaGD). Under the new LaGD paradigm, the old datasets, which have led to advances in RSI intelligence understanding in the last decade, are no longer suitable for fire-new tasks. We argued that a new dataset must be designed to lighten tasks with the following features: 1) Generalization: training model to learn shared knowledge among tasks and to adapt to different tasks; 2) Understanding complex scenes: training model to understand the fine-grained attribute of the objects of interest, and to be able to describe the scene with natural language; 3) Reasoning: training model to be able to realize high-level visual reasoning. In this paper, we designed a high-quality, diversified, and unified multimodal instruction-following dataset for RSI understanding produced by GPT-4V and existing datasets, which we called RS-GPT4V. To achieve generalization, we used (Question, Answer) pairs deduced from GPT-4V via instruction-following to unify tasks such as captioning and localization; to achieve complex scene understanding, we proposed a hierarchical instruction description with a local strategy, in which the fine-grained attributes of the objects and their spatial relationships are described, and a global strategy, in which all the local information is integrated to yield a detailed instruction description; to achieve reasoning, we designed multi-turn QA pairs to provide reasoning ability for a model. The empirical results show that the fine-tuned MLLMs by RS-GPT4V can describe fine-grained information. The dataset is available at: https://github.com/GeoX-Lab/RS-GPT4V.
|
1309.0869
|
EPTCS
|
Thao Dang (CNRS-VERIMAG), Tommaso Dreossi (VERIMAG, University of
Udine)
|
Falsifying Oscillation Properties of Parametric Biological Models
|
In Proceedings HSB 2013, arXiv:1308.5724
|
EPTCS 125, 2013, pp. 53-67
|
10.4204/EPTCS.125.4
| null |
cs.LO cs.CE cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an approach to falsification of oscillation properties of
parametric biological models, based on the recently developed techniques for
testing continuous and hybrid systems. In this approach, an oscillation
property can be specified using a hybrid automaton, which is then used to guide
the exploration in the state and input spaces to search for the behaviors that
do not satisfy the property. We illustrate the approach with the Laub-Loomis
model for spontaneous oscillations during the aggregation stage of
Dictyostelium.
|
[
{
"created": "Tue, 3 Sep 2013 23:41:23 GMT",
"version": "v1"
}
] |
2013-09-05
|
[
[
"Dang",
"Thao",
"",
"CNRS-VERIMAG"
],
[
"Dreossi",
"Tommaso",
"",
"VERIMAG, University of\n Udine"
]
] |
We propose an approach to falsification of oscillation properties of parametric biological models, based on the recently developed techniques for testing continuous and hybrid systems. In this approach, an oscillation property can be specified using a hybrid automaton, which is then used to guide the exploration in the state and input spaces to search for the behaviors that do not satisfy the property. We illustrate the approach with the Laub-Loomis model for spontaneous oscillations during the aggregation stage of Dictyostelium.
|
2210.03026
|
Germ\'an Vidal
|
Germ\'an Vidal
|
Computing Race Variants in Message-Passing Concurrent Programming with
Selective Receives
|
Published as: Vidal, G. (2022). Computing Race Variants in
Message-Passing Concurrent Programming with Selective Receives. In: Mousavi,
M.R., Philippou, A. (eds) FORTE 2022. Lecture Notes in Computer Science, vol
13273. Springer, Cham. The final authenticated publication is available
online at https://doi.org/10.1007/978-3-031-08679-3_12. arXiv admin note:
text overlap with arXiv:2112.12869
| null |
10.1007/978-3-031-08679-3_12
| null |
cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Message-passing concurrency is a popular computation model that underlies
several programming languages like, e.g., Erlang, Akka, and (to some extent) Go
and Rust. In particular, we consider a message-passing concurrent language with
dynamic process spawning and selective receives, i.e., where messages can only
be consumed by the target process when they match a specific constraint (e.g.,
the case of Erlang). In this work, we introduce a notion of trace that can be
seen as an abstraction of a class of causally equivalent executions (i.e.,
which produce the same outcome). We then show that execution traces can be used
to identify message races. We provide constructive definitions to compute
message races as well as to produce so-called race variants, which can then be
used to drive new executions which are not causally equivalent to the previous
ones. This is an essential ingredient of state-space exploration techniques for
program verification.
|
[
{
"created": "Thu, 6 Oct 2022 16:19:15 GMT",
"version": "v1"
}
] |
2022-10-07
|
[
[
"Vidal",
"Germán",
""
]
] |
Message-passing concurrency is a popular computation model that underlies several programming languages like, e.g., Erlang, Akka, and (to some extent) Go and Rust. In particular, we consider a message-passing concurrent language with dynamic process spawning and selective receives, i.e., where messages can only be consumed by the target process when they match a specific constraint (e.g., the case of Erlang). In this work, we introduce a notion of trace that can be seen as an abstraction of a class of causally equivalent executions (i.e., which produce the same outcome). We then show that execution traces can be used to identify message races. We provide constructive definitions to compute message races as well as to produce so-called race variants, which can then be used to drive new executions which are not causally equivalent to the previous ones. This is an essential ingredient of state-space exploration techniques for program verification.
|
1206.0233
|
Arne Leitert
|
Arne Leitert
|
3-Colourability of Dually Chordal Graphs in Linear Time
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph G is dually chordal if there is a spanning tree T of G such that any
maximal clique of G induces a subtree in T. This paper investigates the
Colourability problem on dually chordal graphs. It will show that it is
NP-complete in case of four colours and solvable in linear time with a simple
algorithm in case of three colours. In addition, it will be shown that a dually
chordal graph is 3-colourable if and only if it is perfect and has no clique of
size four.
|
[
{
"created": "Fri, 1 Jun 2012 16:00:59 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Aug 2012 15:23:07 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Nov 2012 15:26:12 GMT",
"version": "v3"
}
] |
2012-11-14
|
[
[
"Leitert",
"Arne",
""
]
] |
A graph G is dually chordal if there is a spanning tree T of G such that any maximal clique of G induces a subtree in T. This paper investigates the Colourability problem on dually chordal graphs. It will show that it is NP-complete in the case of four colours and solvable in linear time with a simple algorithm in the case of three colours. In addition, it will be shown that a dually chordal graph is 3-colourable if and only if it is perfect and has no clique of size four.
|
1404.0818
|
Marcin Pilipczuk
|
Daniel Lokshtanov and Marcin Pilipczuk and Micha{\l} Pilipczuk and
Saket Saurabh
|
Fixed-parameter tractable canonization and isomorphism test for graphs
of bounded treewidth
|
Full version of a paper presented at FOCS 2014
| null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a fixed-parameter tractable algorithm that, given a parameter $k$ and
two graphs $G_1,G_2$, either concludes that one of these graphs has treewidth
at least $k$, or determines whether $G_1$ and $G_2$ are isomorphic. The running
time of the algorithm on an $n$-vertex graph is $2^{O(k^5\log k)}\cdot n^5$,
and this is the first fixed-parameter algorithm for Graph Isomorphism
parameterized by treewidth.
Our algorithm in fact solves the more general canonization problem. We namely
design a procedure working in $2^{O(k^5\log k)}\cdot n^5$ time that, for a
given graph $G$ on $n$ vertices, either concludes that the treewidth of $G$ is
at least $k$, or: * finds in an isomorphism-invariant way a graph
$\mathfrak{c}(G)$ that is isomorphic to $G$; * finds an isomorphism-invariant
construction term --- an algebraic expression that encodes $G$ together with a
tree decomposition of $G$ of width $O(k^4)$.
Hence, the isomorphism test reduces to verifying whether the computed
isomorphic copies or the construction terms for $G_1$ and $G_2$ are equal.
|
[
{
"created": "Thu, 3 Apr 2014 09:49:54 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Dec 2014 11:32:25 GMT",
"version": "v2"
}
] |
2014-12-11
|
[
[
"Lokshtanov",
"Daniel",
""
],
[
"Pilipczuk",
"Marcin",
""
],
[
"Pilipczuk",
"Michał",
""
],
[
"Saurabh",
"Saket",
""
]
] |
We give a fixed-parameter tractable algorithm that, given a parameter $k$ and two graphs $G_1,G_2$, either concludes that one of these graphs has treewidth at least $k$, or determines whether $G_1$ and $G_2$ are isomorphic. The running time of the algorithm on an $n$-vertex graph is $2^{O(k^5\log k)}\cdot n^5$, and this is the first fixed-parameter algorithm for Graph Isomorphism parameterized by treewidth. Our algorithm in fact solves the more general canonization problem. We namely design a procedure working in $2^{O(k^5\log k)}\cdot n^5$ time that, for a given graph $G$ on $n$ vertices, either concludes that the treewidth of $G$ is at least $k$, or: * finds in an isomorphism-invariant way a graph $\mathfrak{c}(G)$ that is isomorphic to $G$; * finds an isomorphism-invariant construction term --- an algebraic expression that encodes $G$ together with a tree decomposition of $G$ of width $O(k^4)$. Hence, the isomorphism test reduces to verifying whether the computed isomorphic copies or the construction terms for $G_1$ and $G_2$ are equal.
|
2212.02014
|
Heng Guo
|
Heng Guo, Jianfeng Zhang, Ke Yan, Le Lu, Minfeng Xu
|
Med-Query: Steerable Parsing of 9-DoF Medical Anatomies with Query
Embedding
|
updated version
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Automatic parsing of human anatomies at instance-level from 3D computed
tomography (CT) scans is a prerequisite step for many clinical applications.
The presence of pathologies, broken structures or limited field-of-view (FOV)
all can make anatomy parsing algorithms vulnerable. In this work, we explore
how to exploit and conduct the prosperous detection-then-segmentation paradigm
in 3D medical data, and propose a steerable, robust, and efficient computing
framework for detection, identification, and segmentation of anatomies in CT
scans. Considering complicated shapes, sizes and orientations of anatomies,
without loss of generality, we present the nine degrees-of-freedom (9-DoF) pose
estimation solution in full 3D space using a novel single-stage,
non-hierarchical forward representation. Our whole framework is executed in a
steerable manner where any anatomy of interest can be directly retrieved to
further boost the inference efficiency. We have validated the proposed method
on three medical imaging parsing tasks of ribs, spine, and abdominal organs.
For rib parsing, CT scans have been annotated at the rib instance-level for
quantitative evaluation, similarly for spine vertebrae and abdominal organs.
Extensive experiments on 9-DoF box detection and rib instance segmentation
demonstrate the effectiveness of our framework (with the identification rate of
97.0% and the segmentation Dice score of 90.9%) in high efficiency, compared
favorably against several strong baselines (e.g., CenterNet, FCOS, and
nnU-Net). For spine identification and segmentation, our method achieves a new
state-of-the-art result on the public CTSpine1K dataset. Last, we report highly
competitive results in multi-organ segmentation at FLARE22 competition. Our
annotations, code and models will be made publicly available at:
https://github.com/alibaba-damo-academy/Med_Query.
|
[
{
"created": "Mon, 5 Dec 2022 04:04:21 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Oct 2023 10:03:24 GMT",
"version": "v2"
}
] |
2023-10-11
|
[
[
"Guo",
"Heng",
""
],
[
"Zhang",
"Jianfeng",
""
],
[
"Yan",
"Ke",
""
],
[
"Lu",
"Le",
""
],
[
"Xu",
"Minfeng",
""
]
] |
Automatic parsing of human anatomies at instance-level from 3D computed tomography (CT) scans is a prerequisite step for many clinical applications. The presence of pathologies, broken structures or limited field-of-view (FOV) all can make anatomy parsing algorithms vulnerable. In this work, we explore how to exploit and conduct the prosperous detection-then-segmentation paradigm in 3D medical data, and propose a steerable, robust, and efficient computing framework for detection, identification, and segmentation of anatomies in CT scans. Considering complicated shapes, sizes and orientations of anatomies, without loss of generality, we present the nine degrees-of-freedom (9-DoF) pose estimation solution in full 3D space using a novel single-stage, non-hierarchical forward representation. Our whole framework is executed in a steerable manner where any anatomy of interest can be directly retrieved to further boost the inference efficiency. We have validated the proposed method on three medical imaging parsing tasks of ribs, spine, and abdominal organs. For rib parsing, CT scans have been annotated at the rib instance-level for quantitative evaluation, similarly for spine vertebrae and abdominal organs. Extensive experiments on 9-DoF box detection and rib instance segmentation demonstrate the effectiveness of our framework (with the identification rate of 97.0% and the segmentation Dice score of 90.9%) in high efficiency, compared favorably against several strong baselines (e.g., CenterNet, FCOS, and nnU-Net). For spine identification and segmentation, our method achieves a new state-of-the-art result on the public CTSpine1K dataset. Last, we report highly competitive results in multi-organ segmentation at FLARE22 competition. Our annotations, code and models will be made publicly available at: https://github.com/alibaba-damo-academy/Med_Query.
|
2303.04901
|
Laura Zheng
|
Laura Zheng, Julio Poveda, James Mullen, Shreelekha Revankar, Ming C.
Lin
|
Towards Driving Policies with Personality: Modeling Behavior and Style
in Risky Scenarios via Data Collection in Virtual Reality
| null | null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Autonomous driving research currently faces data sparsity in representation
of risky scenarios. Such data is both difficult to obtain ethically in the real
world, and unreliable to obtain via simulation. Recent advances in virtual
reality (VR) driving simulators lower barriers to tackling this problem in
simulation. We propose the first data collection framework for risky scenario
driving data from real humans using VR, as well as accompanying numerical
driving personality characterizations. We validate the resulting dataset with
statistical analyses and model driving behavior with an eight-factor
personality vector based on the Multi-dimensional Driving Style Inventory
(MDSI). Our method, dataset, and analyses show that realistic driving
personalities can be modeled without deep learning or large datasets to
complement autonomous driving research.
|
[
{
"created": "Wed, 8 Mar 2023 21:38:24 GMT",
"version": "v1"
}
] |
2023-03-10
|
[
[
"Zheng",
"Laura",
""
],
[
"Poveda",
"Julio",
""
],
[
"Mullen",
"James",
""
],
[
"Revankar",
"Shreelekha",
""
],
[
"Lin",
"Ming C.",
""
]
] |
Autonomous driving research currently faces data sparsity in representation of risky scenarios. Such data is both difficult to obtain ethically in the real world, and unreliable to obtain via simulation. Recent advances in virtual reality (VR) driving simulators lower barriers to tackling this problem in simulation. We propose the first data collection framework for risky scenario driving data from real humans using VR, as well as accompanying numerical driving personality characterizations. We validate the resulting dataset with statistical analyses and model driving behavior with an eight-factor personality vector based on the Multi-dimensional Driving Style Inventory (MDSI). Our method, dataset, and analyses show that realistic driving personalities can be modeled without deep learning or large datasets to complement autonomous driving research.
|
1801.01451
|
Andrew Kiruluta
|
Andrew Kiruluta
|
Reducing Deep Network Complexity with Fourier Transform Methods
|
mistake in tensorflow code with test data leakage into training set
leading to model over fitting
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel approach that uses shallow, densely connected neural
network architectures to achieve performance superior to convolution-based
neural network (CNN) approaches, with the added benefits of a lower
computational burden and dramatically fewer training examples required to
achieve high prediction accuracy ($>98\%$). The advantages of our proposed
method are demonstrated in results on benchmark datasets, which show
significant performance gains over existing state-of-the-art results on
MNIST, CIFAR-10 and CIFAR-100. By Fourier transforming the inputs, each point
in the training sample then has a representational energy of all the weighted
information from every other point. The consequence of using this input is a
reduced-complexity neural network, a reduced computation load and the lifting
of the requirement for a large number of training examples to achieve high
classification accuracy.
|
[
{
"created": "Fri, 15 Dec 2017 20:30:09 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Jun 2018 12:09:37 GMT",
"version": "v2"
}
] |
2018-06-08
|
[
[
"Kiruluta",
"Andrew",
""
]
] |
We propose a novel approach that uses shallow densely connected neural network architectures to achieve superior performance to convolution-based neural network (CNN) approaches, with the added benefits of a lower computation burden and dramatically fewer training examples required to achieve high prediction accuracy ($>98\%$). The advantages of our proposed method are demonstrated in results on benchmark datasets, which show significant performance gains over existing state-of-the-art results on MNIST, CIFAR-10 and CIFAR-100. By Fourier transforming the inputs, each point in the training sample then has a representational energy of all the weighted information from every other point. The consequence of using this input is a reduced-complexity neural network, a reduced computation load and the lifting of the requirement for a large number of training examples to achieve high classification accuracy.
|
2302.01301
|
Raffaele Galliera
|
Raffaele Galliera, Alessandro Morelli, Roberto Fronteddu, Niranjan
Suri
|
MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks
|
10 pages, 5 figures, AAAI 2023 workshop "Reinforcement Learning Ready
for Production", accepted at NOMS 2023 - IEEE/IFIP Network Operations and
Management Symposium
| null |
10.1109/NOMS56928.2023.10154210
| null |
cs.LG cs.AI cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Fast and efficient transport protocols are the foundation of an increasingly
distributed world. The burden of continuously delivering improved communication
performance to support next-generation applications and services, combined with
the increasing heterogeneity of systems and network technologies, has promoted
the design of Congestion Control (CC) algorithms that perform well under
specific environments. The challenge of designing a generic CC algorithm that
can adapt to a broad range of scenarios is still an open research question. To
tackle this challenge, we propose to apply a novel Reinforcement Learning (RL)
approach. Our solution, MARLIN, uses the Soft Actor-Critic algorithm to
maximize both entropy and return and models the learning process as an
infinite-horizon task. We trained MARLIN on a real network with varying
background traffic patterns to overcome the sim-to-real mismatch that
researchers have encountered when applying RL to CC. We evaluated our solution
on the task of file transfer and compared it to TCP Cubic. While further
research is required, results have shown that MARLIN can achieve comparable
results to TCP with little hyperparameter tuning, in a task significantly
different from its training setting. Therefore, we believe that our work
represents a promising first step toward building CC algorithms based on the
maximum entropy RL framework.
|
[
{
"created": "Thu, 2 Feb 2023 18:27:20 GMT",
"version": "v1"
}
] |
2023-06-27
|
[
[
"Galliera",
"Raffaele",
""
],
[
"Morelli",
"Alessandro",
""
],
[
"Fronteddu",
"Roberto",
""
],
[
"Suri",
"Niranjan",
""
]
] |
Fast and efficient transport protocols are the foundation of an increasingly distributed world. The burden of continuously delivering improved communication performance to support next-generation applications and services, combined with the increasing heterogeneity of systems and network technologies, has promoted the design of Congestion Control (CC) algorithms that perform well under specific environments. The challenge of designing a generic CC algorithm that can adapt to a broad range of scenarios is still an open research question. To tackle this challenge, we propose to apply a novel Reinforcement Learning (RL) approach. Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return and models the learning process as an infinite-horizon task. We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch that researchers have encountered when applying RL to CC. We evaluated our solution on the task of file transfer and compared it to TCP Cubic. While further research is required, results have shown that MARLIN can achieve comparable results to TCP with little hyperparameter tuning, in a task significantly different from its training setting. Therefore, we believe that our work represents a promising first step toward building CC algorithms based on the maximum entropy RL framework.
|
2006.14859
|
Bruno Lecouat
|
Bruno Lecouat, Jean Ponce, Julien Mairal
|
A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding
|
NeurIPS 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a general framework for designing and training neural network
layers whose forward passes can be interpreted as solving non-smooth convex
optimization problems, and whose architectures are derived from an optimization
algorithm. We focus on convex games, solved by local agents represented by the
nodes of a graph and interacting through regularization functions. This
approach is appealing for solving imaging problems, as it allows the use of
classical image priors within deep models that are trainable end to end. The
priors used in this presentation include variants of total variation, Laplacian
regularization, bilateral filtering, sparse coding on learned dictionaries, and
non-local self similarities. Our models are fully interpretable as well as
parameter and data efficient. Our experiments demonstrate their effectiveness
on a large diversity of tasks ranging from image denoising and compressed
sensing for fMRI to dense stereo matching.
|
[
{
"created": "Fri, 26 Jun 2020 08:34:54 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Nov 2020 10:00:10 GMT",
"version": "v2"
}
] |
2020-11-10
|
[
[
"Lecouat",
"Bruno",
""
],
[
"Ponce",
"Jean",
""
],
[
"Mairal",
"Julien",
""
]
] |
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems, and whose architectures are derived from an optimization algorithm. We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions. This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end. The priors used in this presentation include variants of total variation, Laplacian regularization, bilateral filtering, sparse coding on learned dictionaries, and non-local self similarities. Our models are fully interpretable as well as parameter and data efficient. Our experiments demonstrate their effectiveness on a large diversity of tasks ranging from image denoising and compressed sensing for fMRI to dense stereo matching.
|
1903.06800
|
Alessandro Betti
|
Lorenzo Gigoni, Alessandro Betti, Emanuele Crisostomi, Alessandro
Franco, Mauro Tucci, Fabrizio Bizzarri, Debora Mucci
|
Day-Ahead Hourly Forecasting of Power Generation from Photovoltaic
Plants
|
Preprint of IEEE Transactions of Sustainable Energy, Vol. 9, Issue 2,
pp. 831 - 842 (2018)
|
IEEE Transactions of Sustainable Energy, Vol. 9, Issue 2, pp. 831
- 842 (2018)
|
10.1109/TSTE.2017.2762435
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to accurately forecast power generation from renewable sources is
nowadays recognised as a fundamental skill to improve the operation of power
systems. Despite the general interest of the power community in this topic, it
is not always simple to compare different forecasting methodologies and infer
the impact of individual components in providing accurate predictions. In this
paper we extensively compare simple forecasting methodologies with more
sophisticated ones over 32 photovoltaic plants of different size and technology
over a whole year. Also, we try to evaluate the impact of weather conditions
and weather forecasts on the prediction of PV power generation.
|
[
{
"created": "Tue, 26 Feb 2019 11:29:18 GMT",
"version": "v1"
}
] |
2019-03-19
|
[
[
"Gigoni",
"Lorenzo",
""
],
[
"Betti",
"Alessandro",
""
],
[
"Crisostomi",
"Emanuele",
""
],
[
"Franco",
"Alessandro",
""
],
[
"Tucci",
"Mauro",
""
],
[
"Bizzarri",
"Fabrizio",
""
],
[
"Mucci",
"Debora",
""
]
] |
The ability to accurately forecast power generation from renewable sources is nowadays recognised as a fundamental skill to improve the operation of power systems. Despite the general interest of the power community in this topic, it is not always simple to compare different forecasting methodologies and infer the impact of individual components in providing accurate predictions. In this paper we extensively compare simple forecasting methodologies with more sophisticated ones over 32 photovoltaic plants of different size and technology over a whole year. Also, we try to evaluate the impact of weather conditions and weather forecasts on the prediction of PV power generation.
|
1806.06397
|
Karim Armanious
|
Karim Armanious, Chenming Jiang, Marc Fischer, Thomas K\"ustner,
Konstantin Nikolaou, Sergios Gatidis, Bin Yang
|
MedGAN: Medical Image Translation using GANs
|
16 pages, 8 figures
| null |
10.1016/j.compmedimag.2019.101684
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-to-image translation is considered a new frontier in the field of
medical image analysis, with numerous potential applications. However, a large
portion of recent approaches offers individualized solutions based on
specialized task-specific architectures or require refinement through
non-end-to-end training. In this paper, we propose a new framework, named
MedGAN, for medical image-to-image translation which operates on the image
level in an end-to-end manner. MedGAN builds upon recent advances in the field
of generative adversarial networks (GANs) by merging the adversarial framework
with a new combination of non-adversarial losses. We utilize a discriminator
network as a trainable feature extractor which penalizes the discrepancy
between the translated medical images and the desired modalities. Moreover,
style-transfer losses are utilized to match the textures and fine-structures of
the desired target images to the translated images. Additionally, we present a
new generator architecture, titled CasNet, which enhances the sharpness of the
translated medical outputs through progressive refinement via encoder-decoder
pairs. Without any application-specific modifications, we apply MedGAN on three
different tasks: PET-CT translation, correction of MR motion artefacts and PET
image denoising. Perceptual analysis by radiologists and quantitative
evaluations illustrate that the MedGAN outperforms other existing translation
approaches.
|
[
{
"created": "Sun, 17 Jun 2018 15:45:10 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Apr 2019 14:34:21 GMT",
"version": "v2"
}
] |
2019-11-26
|
[
[
"Armanious",
"Karim",
""
],
[
"Jiang",
"Chenming",
""
],
[
"Fischer",
"Marc",
""
],
[
"Küstner",
"Thomas",
""
],
[
"Nikolaou",
"Konstantin",
""
],
[
"Gatidis",
"Sergios",
""
],
[
"Yang",
"Bin",
""
]
] |
Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, a large portion of recent approaches offers individualized solutions based on specialized task-specific architectures or require refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine-structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN on three different tasks: PET-CT translation, correction of MR motion artefacts and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that the MedGAN outperforms other existing translation approaches.
|
1811.00472
|
Weidi Xie
|
Erika Lu, Weidi Xie and Andrew Zisserman
|
Class-Agnostic Counting
|
Asian Conference on Computer Vision (ACCV), 2018
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Nearly all existing counting methods are designed for a specific object
class. Our work, however, aims to create a counting model able to count any
class of object. To achieve this goal, we formulate counting as a matching
problem, enabling us to exploit the image self-similarity property that
naturally exists in object counting problems. We make the following three
contributions: first, a Generic Matching Network (GMN) architecture that can
potentially count any object in a class-agnostic manner; second, by
reformulating the counting problem as one of matching objects, we can take
advantage of the abundance of video data labeled for tracking, which contains
natural repetitions suitable for training a counting model. Such data enables
us to train the GMN. Third, to customize the GMN to different user
requirements, an adapter module is used to specialize the model with minimal
effort, i.e. using a few labeled examples, and adapting only a small fraction
of the trained parameters. This is a form of few-shot learning, which is
practical for domains where labels are limited due to requiring expert
knowledge (e.g. microbiology). We demonstrate the flexibility of our method on
a diverse set of existing counting benchmarks: specifically cells, cars, and
human crowds. The model achieves competitive performance on cell and crowd
counting datasets, and surpasses the state-of-the-art on the car dataset using
only three training images. When training on the entire dataset, the proposed
method outperforms all previous methods by a large margin.
|
[
{
"created": "Thu, 1 Nov 2018 16:11:42 GMT",
"version": "v1"
}
] |
2018-11-02
|
[
[
"Lu",
"Erika",
""
],
[
"Xie",
"Weidi",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
Nearly all existing counting methods are designed for a specific object class. Our work, however, aims to create a counting model able to count any class of object. To achieve this goal, we formulate counting as a matching problem, enabling us to exploit the image self-similarity property that naturally exists in object counting problems. We make the following three contributions: first, a Generic Matching Network (GMN) architecture that can potentially count any object in a class-agnostic manner; second, by reformulating the counting problem as one of matching objects, we can take advantage of the abundance of video data labeled for tracking, which contains natural repetitions suitable for training a counting model. Such data enables us to train the GMN. Third, to customize the GMN to different user requirements, an adapter module is used to specialize the model with minimal effort, i.e. using a few labeled examples, and adapting only a small fraction of the trained parameters. This is a form of few-shot learning, which is practical for domains where labels are limited due to requiring expert knowledge (e.g. microbiology). We demonstrate the flexibility of our method on a diverse set of existing counting benchmarks: specifically cells, cars, and human crowds. The model achieves competitive performance on cell and crowd counting datasets, and surpasses the state-of-the-art on the car dataset using only three training images. When training on the entire dataset, the proposed method outperforms all previous methods by a large margin.
|
1810.02490
|
Mostafa Zaman Chowdhury
|
Mostafa Zaman Chowdhury and Yeong Min Jang
|
CAC and Traffic Modeling for Integrated Macrocell/Femtocell Networks
|
International Conference on Ubiquitous and Future Networks (ICUFN),
July 2012, Thailand
| null |
10.1109/ICUFN.2012.6261709
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense femtocells and the integration of these femtocells with the macrocell
are the ultimate goal of femtocellular network deployment. Integrated
macrocell/femtocell networks are surely able to provide high data rates for
indoor users, as well as to offload huge traffic from macrocellular networks to
femtocellular networks. Efficient handling of handover calls is the key to
successful macrocell/femtocell integration. An appropriate traffic model for
the integrated macrocell/femtocell networks is also needed for performance
analysis. In this paper we present a call admission control process and a
traffic model for integrated macrocell/femtocell networks. The numerical and
simulation results show the importance of the integrated macrocell/femtocell
network and the performance improvement of the proposed schemes.
|
[
{
"created": "Fri, 5 Oct 2018 01:57:36 GMT",
"version": "v1"
}
] |
2018-10-08
|
[
[
"Chowdhury",
"Mostafa Zaman",
""
],
[
"Jang",
"Yeong Min",
""
]
] |
Dense femtocells and the integration of these femtocells with the macrocell are the ultimate goal of femtocellular network deployment. Integrated macrocell/femtocell networks are surely able to provide high data rates for indoor users, as well as to offload huge traffic from macrocellular networks to femtocellular networks. Efficient handling of handover calls is the key to successful macrocell/femtocell integration. An appropriate traffic model for the integrated macrocell/femtocell networks is also needed for performance analysis. In this paper we present a call admission control process and a traffic model for integrated macrocell/femtocell networks. The numerical and simulation results show the importance of the integrated macrocell/femtocell network and the performance improvement of the proposed schemes.
|
2103.04904
|
Laszlo Csirmaz
|
Laszlo Csirmaz, Franti\v{s}ek Mat\'u\v{s} and Carles Padr\'o
|
Bipartite secret sharing and staircases
|
To appear in Discrete Mathematics
| null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bipartite secret sharing schemes have a bipartite access structure in which
the set of participants is divided into two parts and all participants in the
same part play an equivalent role. Such a bipartite scheme can be described by
a \emph{staircase}: the collection of its minimal points. The complexity of a
scheme is the maximal share size relative to the secret size; and the
$\kappa$-complexity of an access structure is the best lower bound provided by
the entropy method. An access structure is $\kappa$-ideal if it has
$\kappa$-complexity 1. Motivated by the abundance of open problems in this
area, the main results can be summarized as follows. First, a new
characterization of $\kappa$-ideal multipartite access structures is given
which offers a straightforward and simple approach to describe ideal bipartite
and tripartite access structures. Second, the $\kappa$-complexity is determined
for a range of bipartite access structures, including those determined by two
points, staircases with equal widths and heights, and staircases with all
heights 1. Third, matching linear schemes are presented for some non-ideal
cases, including staircases where all heights are 1 and all widths are equal.
Finally, finding the Shannon complexity of a bipartite access structure can be
considered as a discrete submodular optimization problem. An interesting and
intriguing continuous version is defined which might give further insight to
the large-scale behavior of these optimization problems.
|
[
{
"created": "Mon, 8 Mar 2021 17:09:43 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Oct 2023 14:19:21 GMT",
"version": "v2"
}
] |
2023-10-06
|
[
[
"Csirmaz",
"Laszlo",
""
],
[
"Matúš",
"František",
""
],
[
"Padró",
"Carles",
""
]
] |
Bipartite secret sharing schemes have a bipartite access structure in which the set of participants is divided into two parts and all participants in the same part play an equivalent role. Such a bipartite scheme can be described by a \emph{staircase}: the collection of its minimal points. The complexity of a scheme is the maximal share size relative to the secret size; and the $\kappa$-complexity of an access structure is the best lower bound provided by the entropy method. An access structure is $\kappa$-ideal if it has $\kappa$-complexity 1. Motivated by the abundance of open problems in this area, the main results can be summarized as follows. First, a new characterization of $\kappa$-ideal multipartite access structures is given which offers a straightforward and simple approach to describe ideal bipartite and tripartite access structures. Second, the $\kappa$-complexity is determined for a range of bipartite access structures, including those determined by two points, staircases with equal widths and heights, and staircases with all heights 1. Third, matching linear schemes are presented for some non-ideal cases, including staircases where all heights are 1 and all widths are equal. Finally, finding the Shannon complexity of a bipartite access structure can be considered as a discrete submodular optimization problem. An interesting and intriguing continuous version is defined which might give further insight to the large-scale behavior of these optimization problems.
|
2103.02649
|
Xiaoyang Wang
|
Xiaoyang Wang, Jonathan D Thomas, Robert J Piechocki, Shipra Kapoor,
Raul Santos-Rodriguez, Arjun Parekh
|
Self-play Learning Strategies for Resource Assignment in Open-RAN
Networks
| null | null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open Radio Access Network (ORAN) is being developed with an aim to
democratise access and lower the cost of future mobile data networks,
supporting network services with various QoS requirements, such as massive IoT
and URLLC. In ORAN, network functionality is disaggregated into remote units
(RUs), distributed units (DUs) and central units (CUs), which allows flexible
software on Commercial-Off-The-Shelf (COTS) deployments. Furthermore, the
mapping of variable RU requirements to local mobile edge computing centres for
future centralized processing would significantly reduce the power consumption
in cellular networks. In this paper, we study the RU-DU resource assignment
problem in an ORAN system, modelled as a 2D bin packing problem. A deep
reinforcement learning-based self-play approach is proposed to achieve
efficient RU-DU resource management, with AlphaGo Zero inspired neural
Monte-Carlo Tree Search (MCTS). Experiments on representative 2D bin packing
environment and real sites data show that the self-play learning strategy
achieves intelligent RU-DU resource assignment for different network
conditions.
|
[
{
"created": "Wed, 3 Mar 2021 19:31:29 GMT",
"version": "v1"
}
] |
2021-03-05
|
[
[
"Wang",
"Xiaoyang",
""
],
[
"Thomas",
"Jonathan D",
""
],
[
"Piechocki",
"Robert J",
""
],
[
"Kapoor",
"Shipra",
""
],
[
"Santos-Rodriguez",
"Raul",
""
],
[
"Parekh",
"Arjun",
""
]
] |
Open Radio Access Network (ORAN) is being developed with an aim to democratise access and lower the cost of future mobile data networks, supporting network services with various QoS requirements, such as massive IoT and URLLC. In ORAN, network functionality is disaggregated into remote units (RUs), distributed units (DUs) and central units (CUs), which allows flexible software on Commercial-Off-The-Shelf (COTS) deployments. Furthermore, the mapping of variable RU requirements to local mobile edge computing centres for future centralized processing would significantly reduce the power consumption in cellular networks. In this paper, we study the RU-DU resource assignment problem in an ORAN system, modelled as a 2D bin packing problem. A deep reinforcement learning-based self-play approach is proposed to achieve efficient RU-DU resource management, with AlphaGo Zero inspired neural Monte-Carlo Tree Search (MCTS). Experiments on representative 2D bin packing environment and real sites data show that the self-play learning strategy achieves intelligent RU-DU resource assignment for different network conditions.
|
2305.10459
|
Hadjer Benmeziane
|
Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le
Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza,
Vijay Narayanan, Abu Sebastian and Kaoutar El Maghraoui
|
AnalogNAS: A Neural Network Design Framework for Accurate Inference with
Analog In-Memory Computing
|
Accepted to IEEE Edge
| null | null | null |
cs.AR cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The advancement of Deep Learning (DL) is driven by efficient Deep Neural
Network (DNN) design and new hardware accelerators. Current DNN design is
primarily tailored for general-purpose use and deployment on commercially
viable platforms. Inference at the edge requires low latency, compact and
power-efficient models, and must be cost-effective. Digital processors based on
typical von Neumann architectures are not conducive to edge AI given the large
amounts of required data movement in and out of memory. Conversely,
analog/mixed signal in-memory computing hardware accelerators can easily
transcend the memory wall of von Neumann architectures when accelerating
inference workloads. They offer increased area and power efficiency, which are
paramount in edge resource-constrained environments. In this paper, we propose
AnalogNAS, a framework for automated DNN design targeting deployment on analog
In-Memory Computing (IMC) inference accelerators. We conduct extensive hardware
simulations to demonstrate the performance of AnalogNAS on State-Of-The-Art
(SOTA) models in terms of accuracy and deployment efficiency on various Tiny
Machine Learning (TinyML) tasks. We also present experimental results that show
AnalogNAS models achieving higher accuracy than SOTA models when implemented on
a 64-core IMC chip based on Phase Change Memory (PCM). The AnalogNAS search
code is released: https://github.com/IBM/analog-nas
|
[
{
"created": "Wed, 17 May 2023 07:39:14 GMT",
"version": "v1"
}
] |
2023-05-19
|
[
[
"Benmeziane",
"Hadjer",
""
],
[
"Lammie",
"Corey",
""
],
[
"Boybat",
"Irem",
""
],
[
"Rasch",
"Malte",
""
],
[
"Gallo",
"Manuel Le",
""
],
[
"Tsai",
"Hsinyu",
""
],
[
"Muralidhar",
"Ramachandran",
""
],
[
"Niar",
"Smail",
""
],
[
"Hamza",
"Ouarnoughi",
""
],
[
"Narayanan",
"Vijay",
""
],
[
"Sebastian",
"Abu",
""
],
[
"Maghraoui",
"Kaoutar El",
""
]
] |
The advancement of Deep Learning (DL) is driven by efficient Deep Neural Network (DNN) design and new hardware accelerators. Current DNN design is primarily tailored for general-purpose use and deployment on commercially viable platforms. Inference at the edge requires low latency, compact and power-efficient models, and must be cost-effective. Digital processors based on typical von Neumann architectures are not conducive to edge AI given the large amounts of required data movement in and out of memory. Conversely, analog/mixed signal in-memory computing hardware accelerators can easily transcend the memory wall of von Neumann architectures when accelerating inference workloads. They offer increased area and power efficiency, which are paramount in edge resource-constrained environments. In this paper, we propose AnalogNAS, a framework for automated DNN design targeting deployment on analog In-Memory Computing (IMC) inference accelerators. We conduct extensive hardware simulations to demonstrate the performance of AnalogNAS on State-Of-The-Art (SOTA) models in terms of accuracy and deployment efficiency on various Tiny Machine Learning (TinyML) tasks. We also present experimental results that show AnalogNAS models achieving higher accuracy than SOTA models when implemented on a 64-core IMC chip based on Phase Change Memory (PCM). The AnalogNAS search code is released: https://github.com/IBM/analog-nas
|
2404.05502
|
Roman Kazakov
|
Roman Kazakov, Kseniia Petukhova, Ekaterina Kochmar
|
PetKaz at SemEval-2024 Task 3: Advancing Emotion Classification with an
LLM for Emotion-Cause Pair Extraction in Conversations
|
8 pages, 7 figures, 2 tables, to be published in the Proceedings of
the 18th International Workshop on Semantic Evaluation (SemEval-2024), for
associated code, see https://github.com/sachertort/petkaz-semeval-ecac
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present our submission to SemEval-2024 Task 3, "The
Competition of Multimodal Emotion Cause Analysis in Conversations", focusing on
extracting emotion-cause pairs from dialogs. Specifically, our approach relies
on combining fine-tuned GPT-3.5 for emotion classification with a BiLSTM-based
neural network to detect causes. We rank 2nd in Subtask 1, demonstrating the
effectiveness of our approach through one of the highest weighted-average
proportional F1 scores recorded, at 0.264.
|
[
{
"created": "Mon, 8 Apr 2024 13:25:03 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Kazakov",
"Roman",
""
],
[
"Petukhova",
"Kseniia",
""
],
[
"Kochmar",
"Ekaterina",
""
]
] |
In this paper, we present our submission to SemEval-2024 Task 3, "The Competition of Multimodal Emotion Cause Analysis in Conversations", focusing on extracting emotion-cause pairs from dialogs. Specifically, our approach relies on combining fine-tuned GPT-3.5 for emotion classification with a BiLSTM-based neural network to detect causes. We rank 2nd in Subtask 1, demonstrating the effectiveness of our approach through one of the highest weighted-average proportional F1 scores recorded, at 0.264.
|
2306.07042
|
Enric Boix-Adser\`a
|
Enric Boix-Adsera, Etai Littwin, Emmanuel Abbe, Samy Bengio, Joshua
Susskind
|
Transformers learn through gradual rank increase
|
39 pages, to appear in NeurIPS 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We identify incremental learning dynamics in transformers, where the
difference between trained and initial weights progressively increases in rank.
We rigorously prove this occurs under the simplifying assumptions of diagonal
weight matrices and small initialization. Our experiments support the theory
and also show that the phenomenon can occur in practice without the simplifying
assumptions.
|
[
{
"created": "Mon, 12 Jun 2023 11:41:42 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Dec 2023 00:23:45 GMT",
"version": "v2"
}
] |
2023-12-12
|
[
[
"Boix-Adsera",
"Enric",
""
],
[
"Littwin",
"Etai",
""
],
[
"Abbe",
"Emmanuel",
""
],
[
"Bengio",
"Samy",
""
],
[
"Susskind",
"Joshua",
""
]
] |
We identify incremental learning dynamics in transformers, where the difference between trained and initial weights progressively increases in rank. We rigorously prove this occurs under the simplifying assumptions of diagonal weight matrices and small initialization. Our experiments support the theory and also show that the phenomenon can occur in practice without the simplifying assumptions.
|
2203.12328
|
Ananthanarayanan Chockalingam
|
Sandesh Rao Mattu and A. Chockalingam
|
Learning based Channel Estimation and Phase Noise Compensation in
Doubly-Selective Channels
|
Comm. Lett. Copyright IEEE. Personal use of this material is
permitted. Permission from IEEE must be obtained for all other uses, in any
current or future media, including reprinting/republishing this material for
advertising or promotional purposes, creating new collective works, for
resale or redistribution to servers or lists, or reuse of any copyrighted
component of this work in other works
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we propose a learning based channel estimation scheme for
orthogonal frequency division multiplexing (OFDM) systems in the presence of
phase noise in doubly-selective fading channels. Two-dimensional (2D)
convolutional neural networks (CNNs) are employed for effective training and
tracking of channel variation in both frequency as well as time domain. The
proposed network learns and estimates the channel coefficients in the entire
time-frequency (TF) grid based on pilots sparsely populated in the TF grid. In
order to make the network robust to phase noise (PN) impairment, a novel
training scheme where the training data is rotated by random phases before
being fed to the network is employed. Further, using the estimated channel
coefficients, a simple and effective PN estimation and compensation scheme is
devised. Numerical results demonstrate that the proposed network and PN
compensation scheme achieve robust OFDM performance in the presence of phase
noise.
|
[
{
"created": "Wed, 23 Mar 2022 11:13:27 GMT",
"version": "v1"
}
] |
2022-03-24
|
[
[
"Mattu",
"Sandesh Rao",
""
],
[
"Chockalingam",
"A.",
""
]
] |
In this letter, we propose a learning based channel estimation scheme for orthogonal frequency division multiplexing (OFDM) systems in the presence of phase noise in doubly-selective fading channels. Two-dimensional (2D) convolutional neural networks (CNNs) are employed for effective training and tracking of channel variation in both frequency as well as time domain. The proposed network learns and estimates the channel coefficients in the entire time-frequency (TF) grid based on pilots sparsely populated in the TF grid. In order to make the network robust to phase noise (PN) impairment, a novel training scheme where the training data is rotated by random phases before being fed to the network is employed. Further, using the estimated channel coefficients, a simple and effective PN estimation and compensation scheme is devised. Numerical results demonstrate that the proposed network and PN compensation scheme achieve robust OFDM performance in the presence of phase noise.
|
2205.14122
|
Alexandra Fedorova
|
Alexandra Fedorova, Keith Smith, Keith Bostic, Alexander Gorrod, Sue
LoVerso, Michael Cahill
|
Writes Hurt: Lessons in Cache Design for Optane NVRAM
| null | null | null | null |
cs.AR cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intel OptaneTM DC Persistent Memory resides on the memory bus and approaches
DRAM in access latency. One avenue for its adoption is to employ it in place of
persistent storage; another is to use it as a cheaper and denser extension of
DRAM. In pursuit of the latter goal, we present the design of a volatile Optane
NVRAM cache as a component in a storage engine underlying MongoDB. The primary
innovation in our design is a new cache admission policy. We discover that on
Optane NVRAM, known for its limited write throughput, the presence of writes
disproportionately affects the throughput of reads, much more so than on DRAM.
Therefore, an admission policy that indiscriminately admits new data (and thus
generates writes), severely limits the rate of data retrieval and results in
exceedingly poor performance for the cache overall. We design an admission
policy that balances the rate of admission with the rate of lookups using
dynamically observed characteristics of the workload. Our implementation
outperforms OpenCAS (an off-the-shelf Optane-based block cache) in all cases,
and Intel Memory Mode in cases where the database size exceeds the available
NVRAM. Our cache is decoupled from the rest of the storage engine and uses
generic metrics to guide its admission policy; therefore our design can be
easily adopted in other systems.
|
[
{
"created": "Tue, 24 May 2022 22:16:11 GMT",
"version": "v1"
}
] |
2022-05-30
|
[
[
"Fedorova",
"Alexandra",
""
],
[
"Smith",
"Keith",
""
],
[
"Bostic",
"Keith",
""
],
[
"Gorrod",
"Alexander",
""
],
[
"LoVerso",
"Sue",
""
],
[
"Cahill",
"Michael",
""
]
] |
Intel OptaneTM DC Persistent Memory resides on the memory bus and approaches DRAM in access latency. One avenue for its adoption is to employ it in place of persistent storage; another is to use it as a cheaper and denser extension of DRAM. In pursuit of the latter goal, we present the design of a volatile Optane NVRAM cache as a component in a storage engine underlying MongoDB. The primary innovation in our design is a new cache admission policy. We discover that on Optane NVRAM, known for its limited write throughput, the presence of writes disproportionately affects the throughput of reads, much more so than on DRAM. Therefore, an admission policy that indiscriminately admits new data (and thus generates writes), severely limits the rate of data retrieval and results in exceedingly poor performance for the cache overall. We design an admission policy that balances the rate of admission with the rate of lookups using dynamically observed characteristics of the workload. Our implementation outperforms OpenCAS (an off-the-shelf Optane-based block cache) in all cases, and Intel Memory Mode in cases where the database size exceeds the available NVRAM. Our cache is decoupled from the rest of the storage engine and uses generic metrics to guide its admission policy; therefore our design can be easily adopted in other systems.
|
2303.13367
|
Brady Lund
|
Brady Lund, Ting Wang, Nishith Reddy Mannuru, Bing Nie, Somipam
Shimray, and Ziang Wang
|
ChatGPT and a New Academic Reality: Artificial Intelligence-Written
Research Papers and the Ethics of the Large Language Models in Scholarly
Publishing
| null |
Journal of the Association for Information Science and Technology
(2023)
|
10.1002/asi.24750
| null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper discusses OpenAI's ChatGPT, a generative pre-trained transformer,
which uses natural language processing to fulfill text-based user requests
(i.e., a chatbot). The history and principles behind ChatGPT and similar models
are discussed. This technology is then discussed in relation to its potential
impact on academia and scholarly research and publishing. ChatGPT is seen as a
potential model for the automated preparation of essays and other types of
scholarly manuscripts. Potential ethical issues that could arise with the
emergence of large language models like GPT-3, the underlying technology behind
ChatGPT, and its usage by academics and researchers, are discussed and situated
within the context of broader advancements in artificial intelligence, machine
learning, and natural language processing for research and scholarly
publishing.
|
[
{
"created": "Tue, 21 Mar 2023 14:35:07 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Mar 2023 17:56:28 GMT",
"version": "v2"
}
] |
2023-04-03
|
[
[
"Lund",
"Brady",
""
],
[
"Wang",
"Ting",
""
],
[
"Mannuru",
"Nishith Reddy",
""
],
[
"Nie",
"Bing",
""
],
[
"Shimray",
"Somipam",
""
],
[
"Wang",
"Ziang",
""
]
] |
This paper discusses OpenAI's ChatGPT, a generative pre-trained transformer, which uses natural language processing to fulfill text-based user requests (i.e., a chatbot). The history and principles behind ChatGPT and similar models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise with the emergence of large language models like GPT-3, the underlying technology behind ChatGPT, and its usage by academics and researchers, are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, and natural language processing for research and scholarly publishing.
|
2310.20587
|
Ruizhe Shi
|
Ruizhe Shi, Yuyao Liu, Yanjie Ze, Simon S. Du, Huazhe Xu
|
Unleashing the Power of Pre-trained Language Models for Offline
Reinforcement Learning
|
24 pages, 16 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offline reinforcement learning (RL) aims to find a near-optimal policy using
pre-collected datasets. In real-world scenarios, data collection could be
costly and risky; therefore, offline RL becomes particularly challenging when
the in-domain data is limited. Given recent advances in Large Language Models
(LLMs) and their few-shot learning prowess, this paper introduces
$\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a
general framework based on Decision Transformers to effectively use pre-trained
Language Models (LMs) for offline RL. Our framework highlights four crucial
components: (1) Initializing Decision Transformers with sequentially
pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to
full-weight fine-tuning, to combine the pre-trained knowledge from LMs and
in-domain knowledge effectively, (3) using the non-linear MLP transformation
instead of linear projections, to generate embeddings, and (4) integrating an
auxiliary language prediction loss during fine-tuning to stabilize the LMs and
retain their original abilities on languages. Empirical results indicate
$\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks
and closes the gap between value-based offline RL methods and decision
transformers in dense-reward tasks. In particular, our method demonstrates
superior performance in scenarios with limited data samples.
|
[
{
"created": "Tue, 31 Oct 2023 16:24:17 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Nov 2023 08:11:10 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Nov 2023 03:26:51 GMT",
"version": "v3"
},
{
"created": "Mon, 27 Nov 2023 07:38:06 GMT",
"version": "v4"
}
] |
2023-11-28
|
[
[
"Shi",
"Ruizhe",
""
],
[
"Liu",
"Yuyao",
""
],
[
"Ze",
"Yanjie",
""
],
[
"Du",
"Simon S.",
""
],
[
"Xu",
"Huazhe",
""
]
] |
Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. In real-world scenarios, data collection could be costly and risky; therefore, offline RL becomes particularly challenging when the in-domain data is limited. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces $\textbf{La}$nguage Models for $\textbf{Mo}$tion Control ($\textbf{LaMo}$), a general framework based on Decision Transformers to effectively use pre-trained Language Models (LMs) for offline RL. Our framework highlights four crucial components: (1) Initializing Decision Transformers with sequentially pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs and in-domain knowledge effectively, (3) using the non-linear MLP transformation instead of linear projections, to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original abilities on languages. Empirical results indicate $\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples.
|
2111.09888
|
Apoorv Khandelwal
|
Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi
|
Simple but Effective: CLIP Embeddings for Embodied AI
|
Published in CVPR 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrastive language image pretraining (CLIP) encoders have been shown to be
beneficial for a range of visual tasks from classification and detection to
captioning and image manipulation. We investigate the effectiveness of CLIP
visual backbones for Embodied AI tasks. We build incredibly simple baselines,
named EmbCLIP, with no task specific architectures, inductive biases (such as
the use of semantic maps), auxiliary tasks during training, or depth maps --
yet we find that our improved baselines perform very well across a range of
tasks and simulators. EmbCLIP tops the RoboTHOR ObjectNav leaderboard by a huge
margin of 20 pts (Success Rate). It tops the iTHOR 1-Phase Rearrangement
leaderboard, beating the next best submission, which employs Active Neural
Mapping, and more than doubling the % Fixed Strict metric (0.08 to 0.17). It
also beats the winners of the 2021 Habitat ObjectNav Challenge, which employ
auxiliary tasks, depth maps, and human demonstrations, and those of the 2019
Habitat PointNav Challenge. We evaluate the ability of CLIP's visual
representations at capturing semantic information about input observations --
primitives that are useful for navigation-heavy embodied tasks -- and find that
CLIP's representations encode these primitives more effectively than
ImageNet-pretrained backbones. Finally, we extend one of our baselines,
producing an agent capable of zero-shot object navigation that can navigate to
objects that were not used as targets during training. Our code and models are
available at https://github.com/allenai/embodied-clip
|
[
{
"created": "Thu, 18 Nov 2021 18:59:59 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Apr 2022 02:12:26 GMT",
"version": "v2"
}
] |
2022-04-18
|
[
[
"Khandelwal",
"Apoorv",
""
],
[
"Weihs",
"Luca",
""
],
[
"Mottaghi",
"Roozbeh",
""
],
[
"Kembhavi",
"Aniruddha",
""
]
] |
Contrastive language image pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks from classification and detection to captioning and image manipulation. We investigate the effectiveness of CLIP visual backbones for Embodied AI tasks. We build incredibly simple baselines, named EmbCLIP, with no task specific architectures, inductive biases (such as the use of semantic maps), auxiliary tasks during training, or depth maps -- yet we find that our improved baselines perform very well across a range of tasks and simulators. EmbCLIP tops the RoboTHOR ObjectNav leaderboard by a huge margin of 20 pts (Success Rate). It tops the iTHOR 1-Phase Rearrangement leaderboard, beating the next best submission, which employs Active Neural Mapping, and more than doubling the % Fixed Strict metric (0.08 to 0.17). It also beats the winners of the 2021 Habitat ObjectNav Challenge, which employ auxiliary tasks, depth maps, and human demonstrations, and those of the 2019 Habitat PointNav Challenge. We evaluate the ability of CLIP's visual representations at capturing semantic information about input observations -- primitives that are useful for navigation-heavy embodied tasks -- and find that CLIP's representations encode these primitives more effectively than ImageNet-pretrained backbones. Finally, we extend one of our baselines, producing an agent capable of zero-shot object navigation that can navigate to objects that were not used as targets during training. Our code and models are available at https://github.com/allenai/embodied-clip
|
1611.01195
|
Shusil Dangi
|
Shusil Dangi, Nathan Cahill, Cristian A. Linte
|
Integrating Atlas and Graph Cut Methods for LV Segmentation from Cardiac
Cine MRI
|
Statistical Atlases and Computational Modelling of Heart workshop
2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Magnetic Resonance Imaging (MRI) has evolved as a clinical standard-of-care
imaging modality for cardiac morphology, function assessment, and guidance of
cardiac interventions. All these applications rely on accurate extraction of
the myocardial tissue and blood pool from the imaging data. Here we propose a
framework for left ventricle (LV) segmentation from cardiac cine-MRI. First, we
segment the LV blood pool using iterative graph cuts, and subsequently use this
information to segment the myocardium. We formulate the segmentation procedure
as an energy minimization problem in a graph subject to the shape prior
obtained by label propagation from an average atlas using affine registration.
The proposed framework has been validated on 30 patient cardiac cine-MRI
datasets available through the STACOM LV segmentation challenge and yielded
fast, robust, and accurate segmentation results.
|
[
{
"created": "Thu, 3 Nov 2016 21:12:55 GMT",
"version": "v1"
}
] |
2016-11-07
|
[
[
"Dangi",
"Shusil",
""
],
[
"Cahill",
"Nathan",
""
],
[
"Linte",
"Cristian A.",
""
]
] |
Magnetic Resonance Imaging (MRI) has evolved as a clinical standard-of-care imaging modality for cardiac morphology, function assessment, and guidance of cardiac interventions. All these applications rely on accurate extraction of the myocardial tissue and blood pool from the imaging data. Here we propose a framework for left ventricle (LV) segmentation from cardiac cine-MRI. First, we segment the LV blood pool using iterative graph cuts, and subsequently use this information to segment the myocardium. We formulate the segmentation procedure as an energy minimization problem in a graph subject to the shape prior obtained by label propagation from an average atlas using affine registration. The proposed framework has been validated on 30 patient cardiac cine-MRI datasets available through the STACOM LV segmentation challenge and yielded fast, robust, and accurate segmentation results.
|
2311.07634
|
Wenshuai Xu
|
Wenshuai Xu, Zhenghui Hu, Yu Lu, Jinzhou Meng, Qingjie Liu, Yunhong
Wang
|
ActiveDC: Distribution Calibration for Active Finetuning
|
CVPR 2024 Accept
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The pretraining-finetuning paradigm has gained popularity in various computer
vision tasks. In this paradigm, the emergence of active finetuning arises due
to the abundance of large-scale data and costly annotation requirements. Active
finetuning involves selecting a subset of data from an unlabeled pool for
annotation, facilitating subsequent finetuning. However, the use of a limited
number of training samples can lead to a biased distribution, potentially
resulting in model overfitting. In this paper, we propose a new method called
ActiveDC for the active finetuning tasks. Firstly, we select samples for
annotation by optimizing the distribution similarity between the subset to be
selected and the entire unlabeled pool in continuous space. Secondly, we
calibrate the distribution of the selected samples by exploiting implicit
category information in the unlabeled pool. The feature visualization provides
an intuitive sense of the effectiveness of our approach to distribution
calibration. We conducted extensive experiments on three image classification
datasets with different sampling ratios. The results indicate that ActiveDC
consistently outperforms the baseline performance in all image classification
tasks. The improvement is particularly significant when the sampling ratio is
low, with performance gains of up to 10%. Our code will be released.
|
[
{
"created": "Mon, 13 Nov 2023 14:35:18 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Nov 2023 09:32:05 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Feb 2024 07:52:16 GMT",
"version": "v3"
}
] |
2024-02-28
|
[
[
"Xu",
"Wenshuai",
""
],
[
"Hu",
"Zhenghui",
""
],
[
"Lu",
"Yu",
""
],
[
"Meng",
"Jinzhou",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Wang",
"Yunhong",
""
]
] |
The pretraining-finetuning paradigm has gained popularity in various computer vision tasks. In this paradigm, the emergence of active finetuning arises due to the abundance of large-scale data and costly annotation requirements. Active finetuning involves selecting a subset of data from an unlabeled pool for annotation, facilitating subsequent finetuning. However, the use of a limited number of training samples can lead to a biased distribution, potentially resulting in model overfitting. In this paper, we propose a new method called ActiveDC for the active finetuning tasks. Firstly, we select samples for annotation by optimizing the distribution similarity between the subset to be selected and the entire unlabeled pool in continuous space. Secondly, we calibrate the distribution of the selected samples by exploiting implicit category information in the unlabeled pool. The feature visualization provides an intuitive sense of the effectiveness of our approach to distribution calibration. We conducted extensive experiments on three image classification datasets with different sampling ratios. The results indicate that ActiveDC consistently outperforms the baseline performance in all image classification tasks. The improvement is particularly significant when the sampling ratio is low, with performance gains of up to 10%. Our code will be released.
|
0803.2812
|
Mikhail Konnik
|
Mikhail V. Konnik
|
Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors
for High Dynamic Range Imaging
|
Typos corrected
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A method of linear high dynamic range imaging using solid-state
photosensors with Bayer colour filters array is provided in this paper. Using
information from neighbour pixels, it is possible to reconstruct linear images
with wide dynamic range from the oversaturated images. Bayer colour filters
array is considered as an array of neutral filters in a quasimonochromatic
light. If the camera's response function to the desirable light source is known
then one can calculate correction coefficients to reconstruct oversaturated
images. Reconstructed images are linearized in order to provide a linear high
dynamic range images for optical-digital imaging systems. The calibration
procedure for obtaining the camera's response function to the desired light
source is described. Experimental results of the reconstruction of the images
from the oversaturated images are presented for red, green, and blue
quasimonochromatic light sources. Quantitative analysis of the accuracy of the
reconstructed images is provided.
|
[
{
"created": "Wed, 19 Mar 2008 14:55:15 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Mar 2008 07:04:39 GMT",
"version": "v2"
}
] |
2008-03-24
|
[
[
"Konnik",
"Mikhail V.",
""
]
] |
A method of linear high dynamic range imaging using solid-state photosensors with Bayer colour filters array is provided in this paper. Using information from neighbour pixels, it is possible to reconstruct linear images with wide dynamic range from the oversaturated images. Bayer colour filters array is considered as an array of neutral filters in a quasimonochromatic light. If the camera's response function to the desirable light source is known then one can calculate correction coefficients to reconstruct oversaturated images. Reconstructed images are linearized in order to provide linear high dynamic range images for optical-digital imaging systems. The calibration procedure for obtaining the camera's response function to the desired light source is described. Experimental results of the reconstruction of the images from the oversaturated images are presented for red, green, and blue quasimonochromatic light sources. Quantitative analysis of the accuracy of the reconstructed images is provided.
|
2306.14209
|
Fabio Merizzi
|
Fabio Merizzi, Perrine Saillard, Oceane Acquier, Elena Morotti, Elena
Loli Piccolomini, Luca Calatroni and Rosa Maria Dess\`i
|
Deep image prior inpainting of ancient frescoes in the Mediterranean
Alpine arc
|
26 pages
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The unprecedented success of image reconstruction approaches based on deep
neural networks has revolutionised both the processing and the analysis
paradigms in several applied disciplines. In the field of digital humanities,
the task of digital reconstruction of ancient frescoes is particularly
challenging due to the scarce amount of available training data caused by
ageing, wear, tear and retouching over time. To overcome these difficulties, we
consider the Deep Image Prior (DIP) inpainting approach which computes
appropriate reconstructions by relying on the progressive updating of an
untrained convolutional neural network so as to match the reliable piece of
information in the image at hand while promoting regularisation elsewhere. In
comparison with state-of-the-art approaches (based on variational/PDEs and
patch-based methods), DIP-based inpainting reduces artefacts and better adapts
to contextual/non-local information, thus providing a valuable and effective
tool for art historians. As a case study, we apply such an approach to
reconstruct missing image contents in a dataset of highly damaged digital
images of medieval paintings located in several chapels in the Mediterranean
Alpine Arc
and provide a detailed description on how visible and invisible (e.g.,
infrared) information can be integrated for identifying and reconstructing
damaged image regions.
|
[
{
"created": "Sun, 25 Jun 2023 11:19:47 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Dec 2023 15:30:01 GMT",
"version": "v2"
}
] |
2023-12-14
|
[
[
"Merizzi",
"Fabio",
""
],
[
"Saillard",
"Perrine",
""
],
[
"Acquier",
"Oceane",
""
],
[
"Morotti",
"Elena",
""
],
[
"Piccolomini",
"Elena Loli",
""
],
[
"Calatroni",
"Luca",
""
],
[
"Dessì",
"Rosa Maria",
""
]
] |
The unprecedented success of image reconstruction approaches based on deep neural networks has revolutionised both the processing and the analysis paradigms in several applied disciplines. In the field of digital humanities, the task of digital reconstruction of ancient frescoes is particularly challenging due to the scarce amount of available training data caused by ageing, wear, tear and retouching over time. To overcome these difficulties, we consider the Deep Image Prior (DIP) inpainting approach which computes appropriate reconstructions by relying on the progressive updating of an untrained convolutional neural network so as to match the reliable piece of information in the image at hand while promoting regularisation elsewhere. In comparison with state-of-the-art approaches (based on variational/PDEs and patch-based methods), DIP-based inpainting reduces artefacts and better adapts to contextual/non-local information, thus providing a valuable and effective tool for art historians. As a case study, we apply such an approach to reconstruct missing image contents in a dataset of highly damaged digital images of medieval paintings located in several chapels in the Mediterranean Alpine Arc and provide a detailed description on how visible and invisible (e.g., infrared) information can be integrated for identifying and reconstructing damaged image regions.
|
cs/0610151
|
Anant Sahai
|
Anant Sahai
|
Anytime coding on the infinite bandwidth AWGN channel: A sequential
semi-orthogonal optimal code
|
12 pages, 6 figures, submitted to IT Transactions
| null | null | null |
cs.IT math.IT
| null |
It is well known that orthogonal coding can be used to approach the Shannon
capacity of the power-constrained AWGN channel without a bandwidth constraint.
This correspondence describes a semi-orthogonal variation of pulse position
modulation that is sequential in nature -- bits can be ``streamed across''
without having to buffer up blocks of bits at the transmitter. ML decoding
results in an exponentially small probability of error as a function of
tolerated receiver delay and thus eventually a zero probability of error on
every transmitted bit. In the high-rate regime, a matching upper bound is given
on the delay error exponent. We close with some comments on the case with
feedback and the connections to the capacity per unit cost problem.
|
[
{
"created": "Thu, 26 Oct 2006 10:01:38 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Sahai",
"Anant",
""
]
] |
It is well known that orthogonal coding can be used to approach the Shannon capacity of the power-constrained AWGN channel without a bandwidth constraint. This correspondence describes a semi-orthogonal variation of pulse position modulation that is sequential in nature -- bits can be ``streamed across'' without having to buffer up blocks of bits at the transmitter. ML decoding results in an exponentially small probability of error as a function of tolerated receiver delay and thus eventually a zero probability of error on every transmitted bit. In the high-rate regime, a matching upper bound is given on the delay error exponent. We close with some comments on the case with feedback and the connections to the capacity per unit cost problem.
|
2103.09265
|
Harshitha Machiraju
|
Harshitha Machiraju, Oh-Hyeon Choung, Pascal Frossard, Michael. H
Herzog
|
Bio-inspired Robustness: A Review
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep convolutional neural networks (DCNNs) have revolutionized computer
vision and are often advocated as good models of the human visual system.
However, there are currently many shortcomings of DCNNs, which preclude them as
a model of human vision. For example, in the case of adversarial attacks,
adding small amounts of noise to an image containing an object can lead to
strong misclassification of that object, while for humans the noise is often
invisible. If vulnerability to adversarial noise cannot be fixed, DCNNs cannot
be taken as serious models of human vision. Many studies have tried to add
features of the human visual system to DCNNs to make them robust against
adversarial attacks. However, it is not fully clear whether human vision
inspired components increase robustness because performance evaluations of
these novel components in DCNNs are often inconclusive. We propose a set of
criteria for proper evaluation and analyze different models according to these
criteria. We finally sketch future efforts to make DCNNs one step closer to
the model of human vision.
|
[
{
"created": "Tue, 16 Mar 2021 18:20:29 GMT",
"version": "v1"
}
] |
2021-03-18
|
[
[
"Machiraju",
"Harshitha",
""
],
[
"Choung",
"Oh-Hyeon",
""
],
[
"Frossard",
"Pascal",
""
],
[
"Herzog",
"Michael. H",
""
]
] |
Deep convolutional neural networks (DCNNs) have revolutionized computer vision and are often advocated as good models of the human visual system. However, there are currently many shortcomings of DCNNs, which preclude them as a model of human vision. For example, in the case of adversarial attacks, adding small amounts of noise to an image containing an object can lead to strong misclassification of that object, while for humans the noise is often invisible. If vulnerability to adversarial noise cannot be fixed, DCNNs cannot be taken as serious models of human vision. Many studies have tried to add features of the human visual system to DCNNs to make them robust against adversarial attacks. However, it is not fully clear whether human vision inspired components increase robustness because performance evaluations of these novel components in DCNNs are often inconclusive. We propose a set of criteria for proper evaluation and analyze different models according to these criteria. We finally sketch future efforts to make DCNNs one step closer to the model of human vision.
|
2304.13693
|
Thalia Santos De Santana
|
Thalia S. Santana, Taciana N. Kudo, Renato F. Bulc\~ao-Neto
|
Requirements Engineering, Software Testing and Education: A Systematic
Mapping
|
20 pages, in Portuguese language
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The activities of requirements engineering and software testing are
intrinsically related to each other, as both areas come together when
specifying and ensuring that the expectations for a software product are met,
with quality and on time. This systematic mapping study aims to verify how
requirements and
testing are being addressed together in the educational context.
|
[
{
"created": "Wed, 26 Apr 2023 17:18:34 GMT",
"version": "v1"
}
] |
2023-04-27
|
[
[
"Santana",
"Thalia S.",
""
],
[
"Kudo",
"Taciana N.",
""
],
[
"Bulcão-Neto",
"Renato F.",
""
]
] |
The activities of requirements engineering and software testing are intrinsically related, as both areas are involved in specifying and also ensuring the expectations of a software product, with quality and on time. This systematic mapping study aims to verify how requirements and testing are being addressed together in the educational context.
|
2003.01149
|
Piotr Franciszek Orzechowski
|
Piotr Franciszek Orzechowski, Christoph Burger and Martin Lauer
|
Decision-Making for Automated Vehicles Using a Hierarchical
Behavior-Based Arbitration Scheme
| null | null |
10.1109/IV47402.2020.9304723
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Behavior planning and decision-making are some of the biggest challenges for
highly automated systems. A fully automated vehicle (AV) is confronted with
numerous tactical and strategical choices. Most state-of-the-art AV platforms
implement tactical and strategical behavior generation using finite state
machines. However, these usually result in poor explainability, maintainability
and scalability. Research in robotics has produced many architectures to mitigate
these problems, most interestingly behavior-based systems and hybrid
derivatives. Inspired by these approaches, we propose a hierarchical
behavior-based architecture for tactical and strategical behavior generation in
automated driving. It is a generalizing and scalable decision-making framework,
utilizing modular behavior blocks to compose more complex behaviors in a
bottom-up approach. The system is capable of combining a variety of scenario-
and methodology-specific solutions, like POMDPs, RRT* or learning-based
behavior, into one understandable and traceable architecture. We extend the
hierarchical behavior-based arbitration concept to address scenarios where
multiple behavior options are applicable but have no clear priority against
each other. Then, we formulate the behavior generation stack for automated
driving in urban and highway environments, incorporating parking and emergency
behaviors as well. Finally, we illustrate our design in an explanatory
evaluation.
|
[
{
"created": "Mon, 2 Mar 2020 19:21:18 GMT",
"version": "v1"
},
{
"created": "Mon, 11 May 2020 18:57:05 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Sep 2020 11:11:32 GMT",
"version": "v3"
},
{
"created": "Fri, 5 Feb 2021 11:15:42 GMT",
"version": "v4"
}
] |
2021-02-08
|
[
[
"Orzechowski",
"Piotr Franciszek",
""
],
[
"Burger",
"Christoph",
""
],
[
"Lauer",
"Martin",
""
]
] |
Behavior planning and decision-making are some of the biggest challenges for highly automated systems. A fully automated vehicle (AV) is confronted with numerous tactical and strategical choices. Most state-of-the-art AV platforms implement tactical and strategical behavior generation using finite state machines. However, these usually result in poor explainability, maintainability and scalability. Research in robotics has produced many architectures to mitigate these problems, most interestingly behavior-based systems and hybrid derivatives. Inspired by these approaches, we propose a hierarchical behavior-based architecture for tactical and strategical behavior generation in automated driving. It is a generalizing and scalable decision-making framework, utilizing modular behavior blocks to compose more complex behaviors in a bottom-up approach. The system is capable of combining a variety of scenario- and methodology-specific solutions, like POMDPs, RRT* or learning-based behavior, into one understandable and traceable architecture. We extend the hierarchical behavior-based arbitration concept to address scenarios where multiple behavior options are applicable but have no clear priority against each other. Then, we formulate the behavior generation stack for automated driving in urban and highway environments, incorporating parking and emergency behaviors as well. Finally, we illustrate our design in an explanatory evaluation.
|
2306.11767
|
Marvin Schieseck
|
Marvin Schieseck, Philip Topalis, Alexander Fay
|
A Graphical Modeling Language for Artificial Intelligence Applications
in Automation Systems
| null | null | null | null |
cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial Intelligence (AI) applications in automation systems are usually
distributed systems whose development and integration involve several experts.
Each expert uses its own domain-specific modeling language and tools to model
the system elements. An interdisciplinary graphical modeling language that
enables the modeling of an AI application as an overall system comprehensible
to all disciplines does not yet exist. As a result, there is often a lack of
interdisciplinary system understanding, leading to increased development,
integration, and maintenance efforts. This paper therefore presents a graphical
modeling language that enables consistent and understandable modeling of AI
applications in automation systems at system level. This makes it possible to
subdivide individual subareas into domain-specific subsystems and thus reduce
the existing efforts.
|
[
{
"created": "Tue, 20 Jun 2023 12:06:41 GMT",
"version": "v1"
}
] |
2023-06-22
|
[
[
"Schieseck",
"Marvin",
""
],
[
"Topalis",
"Philip",
""
],
[
"Fay",
"Alexander",
""
]
] |
Artificial Intelligence (AI) applications in automation systems are usually distributed systems whose development and integration involve several experts. Each expert uses their own domain-specific modeling language and tools to model the system elements. An interdisciplinary graphical modeling language that enables the modeling of an AI application as an overall system comprehensible to all disciplines does not yet exist. As a result, there is often a lack of interdisciplinary system understanding, leading to increased development, integration, and maintenance efforts. This paper therefore presents a graphical modeling language that enables consistent and understandable modeling of AI applications in automation systems at system level. This makes it possible to subdivide individual subareas into domain-specific subsystems and thus reduce the existing efforts.
|
2211.02283
|
Jincheng Dai
|
Zixuan Xiao, Shengshi Yao, Jincheng Dai, Sixian Wang, Kai Niu, Ping
Zhang
|
Wireless Deep Speech Semantic Transmission
| null | null | null | null |
cs.SD cs.IT eess.AS math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new class of high-efficiency semantic coded
transmission methods for end-to-end speech transmission over wireless channels.
We name the whole system as deep speech semantic transmission (DSST).
Specifically, we introduce a nonlinear transform to map the speech source to a
semantic latent space and feed semantic features into the source-channel
encoder to
generate the channel-input sequence. Guided by the variational modeling idea,
we build an entropy model on the latent space to estimate the importance
diversity among semantic feature embeddings. Accordingly, these semantic
features of different importance can be allocated with different coding rates
reasonably, which maximizes the system coding gain. Furthermore, we introduce a
channel signal-to-noise ratio (SNR) adaptation mechanism such that a single
model can be applied over various channel states. The end-to-end optimization
of our model leads to a flexible rate-distortion (RD) trade-off, supporting
versatile wireless speech semantic transmission. Experimental results verify
that our DSST system clearly outperforms current engineered speech transmission
systems on both objective and subjective metrics. Compared with existing neural
speech semantic transmission methods, our model saves up to 75% of channel
bandwidth costs when achieving the same quality. An intuitive comparison of
audio demos can be found at https://ximoo123.github.io/DSST.
|
[
{
"created": "Fri, 4 Nov 2022 06:49:42 GMT",
"version": "v1"
}
] |
2022-11-07
|
[
[
"Xiao",
"Zixuan",
""
],
[
"Yao",
"Shengshi",
""
],
[
"Dai",
"Jincheng",
""
],
[
"Wang",
"Sixian",
""
],
[
"Niu",
"Kai",
""
],
[
"Zhang",
"Ping",
""
]
] |
In this paper, we propose a new class of high-efficiency semantic coded transmission methods for end-to-end speech transmission over wireless channels. We name the whole system as deep speech semantic transmission (DSST). Specifically, we introduce a nonlinear transform to map the speech source to a semantic latent space and feed semantic features into the source-channel encoder to generate the channel-input sequence. Guided by the variational modeling idea, we build an entropy model on the latent space to estimate the importance diversity among semantic feature embeddings. Accordingly, these semantic features of different importance can be allocated with different coding rates reasonably, which maximizes the system coding gain. Furthermore, we introduce a channel signal-to-noise ratio (SNR) adaptation mechanism such that a single model can be applied over various channel states. The end-to-end optimization of our model leads to a flexible rate-distortion (RD) trade-off, supporting versatile wireless speech semantic transmission. Experimental results verify that our DSST system clearly outperforms current engineered speech transmission systems on both objective and subjective metrics. Compared with existing neural speech semantic transmission methods, our model saves up to 75% of channel bandwidth costs when achieving the same quality. An intuitive comparison of audio demos can be found at https://ximoo123.github.io/DSST.
|
1609.00686
|
Makoto Naruse
|
Makoto Naruse, Martin Berthel, Aur\'elien Drezet, Serge Huant,
Hirokazu Hori, Song-Ju Kim
|
Single photon in hierarchical architecture for physical reinforcement
learning: Photon intelligence
| null | null | null | null |
cs.LG physics.optics quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding and using natural processes for intelligent functionalities,
referred to as natural intelligence, has recently attracted interest from a
variety of fields, including post-silicon computing for artificial intelligence
and decision making in the behavioural sciences. In a past study, we
successfully used the wave-particle duality of single photons to solve the
two-armed bandit problem, which constitutes the foundation of reinforcement
learning and decision making. In this study, we propose and confirm a
hierarchical architecture for single-photon-based reinforcement learning and
decision making that verifies the scalability of the principle. Specifically,
the four-armed bandit problem is solved given zero prior knowledge in a
two-layer hierarchical architecture, where polarization is autonomously adapted
in order to effect adequate decision making using single-photon measurements.
In the hierarchical structure, the notion of layer-dependent decisions emerges.
The optimal solutions in the coarse layer and in the fine layer, however,
conflict with each other in some contradictive problems. We show that while
what we call a tournament strategy resolves such contradictions, the
probabilistic nature of single photons allows for the direct location of the
optimal solution even for contradictive problems, hence manifesting the
exploration ability of single photons. This study provides insights into photon
intelligence in hierarchical architectures for future artificial intelligence
as well as the potential of natural processes for intelligent functionalities.
|
[
{
"created": "Thu, 1 Sep 2016 09:32:29 GMT",
"version": "v1"
}
] |
2016-09-05
|
[
[
"Naruse",
"Makoto",
""
],
[
"Berthel",
"Martin",
""
],
[
"Drezet",
"Aurélien",
""
],
[
"Huant",
"Serge",
""
],
[
"Hori",
"Hirokazu",
""
],
[
"Kim",
"Song-Ju",
""
]
] |
Understanding and using natural processes for intelligent functionalities, referred to as natural intelligence, has recently attracted interest from a variety of fields, including post-silicon computing for artificial intelligence and decision making in the behavioural sciences. In a past study, we successfully used the wave-particle duality of single photons to solve the two-armed bandit problem, which constitutes the foundation of reinforcement learning and decision making. In this study, we propose and confirm a hierarchical architecture for single-photon-based reinforcement learning and decision making that verifies the scalability of the principle. Specifically, the four-armed bandit problem is solved given zero prior knowledge in a two-layer hierarchical architecture, where polarization is autonomously adapted in order to effect adequate decision making using single-photon measurements. In the hierarchical structure, the notion of layer-dependent decisions emerges. The optimal solutions in the coarse layer and in the fine layer, however, conflict with each other in some contradictive problems. We show that while what we call a tournament strategy resolves such contradictions, the probabilistic nature of single photons allows for the direct location of the optimal solution even for contradictive problems, hence manifesting the exploration ability of single photons. This study provides insights into photon intelligence in hierarchical architectures for future artificial intelligence as well as the potential of natural processes for intelligent functionalities.
|
2107.02467
|
Hui Liu
|
J. Wang, X. Liu, S. Shen, L. Deng, H. Liu*
|
DeepDDS: deep graph neural network with attention mechanism to predict
synergistic drug combinations
| null | null | null | null |
cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Drug combination therapy has become an increasingly promising method in the
treatment of cancer. However, the number of possible drug combinations is so
huge that it is hard to screen synergistic drug combinations through wet-lab
experiments. Therefore, computational screening has become an important way to
prioritize drug combinations. Graph neural networks have recently shown
remarkable performance in the prediction of compound-protein interactions, but
they have not been applied to the screening of drug combinations. In this
paper, we propose a deep learning model based on graph neural networks and an
attention mechanism to identify drug combinations that can effectively inhibit
the viability of specific cancer cells. The feature embeddings of drug
molecule structure and gene expression profiles were taken as input to a
multi-layer feedforward neural network to identify the synergistic drug
combinations. We compared DeepDDS with classical machine learning methods and
other deep learning-based methods on a benchmark data set, and the
leave-one-out experimental results showed that DeepDDS achieved better
performance than competitive methods. Also, on an independent test set
released by the well-known pharmaceutical enterprise AstraZeneca, DeepDDS was
superior to competitive methods by more than 16\% predictive precision.
Furthermore, we explored the interpretability of the graph attention network,
and found the correlation matrix of atomic features revealed important
chemical substructures of drugs. We believe that DeepDDS is an effective tool
that prioritizes synergistic drug combinations for further wet-lab experiment
validation.
|
[
{
"created": "Tue, 6 Jul 2021 08:25:43 GMT",
"version": "v1"
}
] |
2021-07-07
|
[
[
"Wang",
"J.",
""
],
[
"Liu",
"X.",
""
],
[
"Shen",
"S.",
""
],
[
"Deng",
"L.",
""
],
[
"Liu*",
"H.",
""
]
] |
Drug combination therapy has become an increasingly promising method in the treatment of cancer. However, the number of possible drug combinations is so huge that it is hard to screen synergistic drug combinations through wet-lab experiments. Therefore, computational screening has become an important way to prioritize drug combinations. Graph neural networks have recently shown remarkable performance in the prediction of compound-protein interactions, but they have not been applied to the screening of drug combinations. In this paper, we propose a deep learning model based on graph neural networks and an attention mechanism to identify drug combinations that can effectively inhibit the viability of specific cancer cells. The feature embeddings of drug molecule structure and gene expression profiles were taken as input to a multi-layer feedforward neural network to identify the synergistic drug combinations. We compared DeepDDS with classical machine learning methods and other deep learning-based methods on a benchmark data set, and the leave-one-out experimental results showed that DeepDDS achieved better performance than competitive methods. Also, on an independent test set released by the well-known pharmaceutical enterprise AstraZeneca, DeepDDS was superior to competitive methods by more than 16\% predictive precision. Furthermore, we explored the interpretability of the graph attention network, and found the correlation matrix of atomic features revealed important chemical substructures of drugs. We believe that DeepDDS is an effective tool that prioritizes synergistic drug combinations for further wet-lab experiment validation.
|
1603.06756
|
Wayes Tushar
|
Wayes Tushar, Chau Yuen, Bo Chai, Shisheng Huang, Kristin L. Wood, See
Gim Kerk and Zaiyue Yang
|
Smart Grid Testbed for Demand Focused Energy Management in End User
Environments
|
2016
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Successful deployment of smart grids necessitates experimental validation of
their state-of-the-art designs in two-way communications, real-time demand
response and monitoring of consumers' energy usage behavior. The objective is
to observe consumers' energy usage patterns and exploit this information to
assist the grid in designing incentives, energy management mechanisms, and
real-time demand response protocols, so as to help the grid achieve lower
costs and improve energy supply stability. Further, by feeding the observed
information back to the consumers instantaneously, it is also possible to
promote energy efficient behavior among the users. To this end, this paper
performs a literature survey on smart grid testbeds around the world, and
presents the main accomplishments towards realizing a smart grid testbed at the
Singapore University of Technology and Design (SUTD). The testbed is able to
monitor, analyze and evaluate smart grid communication network design and
control mechanisms, and test the suitability of various communications networks
for both residential and commercial buildings. The testbeds are deployed within
the SUTD student dormitories and the main university campus to monitor and
record end-user energy consumption in real-time, which will enable us to design
incentives, control algorithms and real-time demand response schemes. The
testbed also provides an effective channel to evaluate the needs on
communication networks to support various smart grid applications. In addition,
our initial results demonstrate that our testbed can provide an effective
platform to identify energy wastage, and highlight the need for a secure
communications channel, as the energy usage pattern can provide
privacy-related information on individual users.
|
[
{
"created": "Tue, 22 Mar 2016 12:26:48 GMT",
"version": "v1"
}
] |
2016-03-23
|
[
[
"Tushar",
"Wayes",
""
],
[
"Yuen",
"Chau",
""
],
[
"Chai",
"Bo",
""
],
[
"Huang",
"Shisheng",
""
],
[
"Wood",
"Kristin L.",
""
],
[
"Kerk",
"See Gim",
""
],
[
"Yang",
"Zaiyue",
""
]
] |
Successful deployment of smart grids necessitates experimental validation of their state-of-the-art designs in two-way communications, real-time demand response and monitoring of consumers' energy usage behavior. The objective is to observe consumers' energy usage patterns and exploit this information to assist the grid in designing incentives, energy management mechanisms, and real-time demand response protocols, so as to help the grid achieve lower costs and improve energy supply stability. Further, by feeding the observed information back to the consumers instantaneously, it is also possible to promote energy efficient behavior among the users. To this end, this paper performs a literature survey on smart grid testbeds around the world, and presents the main accomplishments towards realizing a smart grid testbed at the Singapore University of Technology and Design (SUTD). The testbed is able to monitor, analyze and evaluate smart grid communication network design and control mechanisms, and test the suitability of various communications networks for both residential and commercial buildings. The testbeds are deployed within the SUTD student dormitories and the main university campus to monitor and record end-user energy consumption in real-time, which will enable us to design incentives, control algorithms and real-time demand response schemes. The testbed also provides an effective channel to evaluate the needs on communication networks to support various smart grid applications. In addition, our initial results demonstrate that our testbed can provide an effective platform to identify energy wastage, and highlight the need for a secure communications channel, as the energy usage pattern can provide privacy-related information on individual users.
|
1506.00572
|
Scott A. Hale
|
Han-Teng Liao, King-wa Fu, Scott A. Hale
|
How much is said in a microblog? A multilingual inquiry based on Weibo
and Twitter
|
9 pages, 4 figures WebSci 2015
| null |
10.1145/2786451.2786486
| null |
cs.SI cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a multilingual study on, per single post of microblog
text, (a) how much can be said, (b) how much is written in terms of characters
and bytes, and (c) how much is said in terms of information content in posts by
different organizations in different languages. Focusing on three different
languages (English, Chinese, and Japanese), this research analyses Weibo and
Twitter accounts of major embassies and news agencies. We first establish our
criterion for quantifying "how much can be said" in a digital text based on the
openly available Universal Declaration of Human Rights and the translated
subtitles from TED talks. These parallel corpora allow us to determine the
number of characters and bits needed to represent the same content in different
languages and character encodings. We then derive the amount of information
that is actually contained in microblog posts authored by selected accounts on
Weibo and Twitter. Our results confirm that languages with larger character
sets such as Chinese and Japanese contain more information per character than
English, but the actual information content contained within a microblog text
varies depending on both the type of organization and the language of the post.
We conclude with a discussion on the design implications of microblog text
limits for different languages.
|
[
{
"created": "Mon, 1 Jun 2015 17:06:00 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Jun 2015 14:37:25 GMT",
"version": "v2"
}
] |
2015-06-16
|
[
[
"Liao",
"Han-Teng",
""
],
[
"Fu",
"King-wa",
""
],
[
"Hale",
"Scott A.",
""
]
] |
This paper presents a multilingual study on, per single post of microblog text, (a) how much can be said, (b) how much is written in terms of characters and bytes, and (c) how much is said in terms of information content in posts by different organizations in different languages. Focusing on three different languages (English, Chinese, and Japanese), this research analyses Weibo and Twitter accounts of major embassies and news agencies. We first establish our criterion for quantifying "how much can be said" in a digital text based on the openly available Universal Declaration of Human Rights and the translated subtitles from TED talks. These parallel corpora allow us to determine the number of characters and bits needed to represent the same content in different languages and character encodings. We then derive the amount of information that is actually contained in microblog posts authored by selected accounts on Weibo and Twitter. Our results confirm that languages with larger character sets such as Chinese and Japanese contain more information per character than English, but the actual information content contained within a microblog text varies depending on both the type of organization and the language of the post. We conclude with a discussion on the design implications of microblog text limits for different languages.
|
2306.07282
|
Karsten Roth
|
Karsten Roth, Jae Myung Kim, A. Sophia Koepke, Oriol Vinyals, Cordelia
Schmid, Zeynep Akata
|
Waffling around for Performance: Visual Classification with Random Words
and Broad Concepts
|
Accepted to ICCV 2023. Main paper with 9 pages
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The visual classification performance of vision-language models such as CLIP
has been shown to benefit from additional semantic knowledge from large
language models (LLMs) such as GPT-3. In particular, averaging over
LLM-generated class descriptors, e.g. "waffle, which has a round shape", can
notably improve generalization performance. In this work, we critically study
this behavior and propose WaffleCLIP, a framework for zero-shot visual
classification which simply replaces LLM-generated descriptors with random
character and word descriptors. Without querying external models, we achieve
comparable performance gains on a large number of visual classification tasks.
This allows WaffleCLIP to both serve as a low-cost alternative, as well as a
sanity check for any future LLM-based vision-language model extensions. We
conduct an extensive experimental study on the impact and shortcomings of
additional semantics introduced with LLM-generated descriptors, and showcase
how - if available - semantic context is better leveraged by querying LLMs for
high-level concepts, which we show can be done to jointly resolve potential
class name ambiguities. Code is available here:
https://github.com/ExplainableML/WaffleCLIP.
|
[
{
"created": "Mon, 12 Jun 2023 17:59:48 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Aug 2023 02:27:32 GMT",
"version": "v2"
}
] |
2023-08-21
|
[
[
"Roth",
"Karsten",
""
],
[
"Kim",
"Jae Myung",
""
],
[
"Koepke",
"A. Sophia",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Akata",
"Zeynep",
""
]
] |
The visual classification performance of vision-language models such as CLIP has been shown to benefit from additional semantic knowledge from large language models (LLMs) such as GPT-3. In particular, averaging over LLM-generated class descriptors, e.g. "waffle, which has a round shape", can notably improve generalization performance. In this work, we critically study this behavior and propose WaffleCLIP, a framework for zero-shot visual classification which simply replaces LLM-generated descriptors with random character and word descriptors. Without querying external models, we achieve comparable performance gains on a large number of visual classification tasks. This allows WaffleCLIP to both serve as a low-cost alternative, as well as a sanity check for any future LLM-based vision-language model extensions. We conduct an extensive experimental study on the impact and shortcomings of additional semantics introduced with LLM-generated descriptors, and showcase how - if available - semantic context is better leveraged by querying LLMs for high-level concepts, which we show can be done to jointly resolve potential class name ambiguities. Code is available here: https://github.com/ExplainableML/WaffleCLIP.
|
2203.07463
|
Ramin Raziperchikolaei
|
Ramin Raziperchikolaei and Young-joo Chung
|
Simultaneous Learning of the Inputs and Parameters in Neural
Collaborative Filtering
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neural network-based collaborative filtering systems focus on designing
network architectures to learn better representations while fixing the input to
the user/item interaction vectors and/or ID. In this paper, we first show that
the non-zero elements of the inputs are learnable parameters that determine the
weights in combining the user/item embeddings, and fixing them limits the power
of the models in learning the representations. Then, we propose to learn the
value of the non-zero elements of the inputs jointly with the neural network
parameters. We analyze the model complexity and the empirical risk of our
approach and prove that learning the input leads to a better generalization
bound. Our experiments on several real-world datasets show that our method
outperforms the state-of-the-art methods, even using shallow network structures
with a smaller number of layers and parameters.
|
[
{
"created": "Mon, 14 Mar 2022 19:47:38 GMT",
"version": "v1"
}
] |
2022-03-16
|
[
[
"Raziperchikolaei",
"Ramin",
""
],
[
"Chung",
"Young-joo",
""
]
] |
Neural network-based collaborative filtering systems focus on designing network architectures to learn better representations while fixing the input to the user/item interaction vectors and/or ID. In this paper, we first show that the non-zero elements of the inputs are learnable parameters that determine the weights in combining the user/item embeddings, and fixing them limits the power of the models in learning the representations. Then, we propose to learn the value of the non-zero elements of the inputs jointly with the neural network parameters. We analyze the model complexity and the empirical risk of our approach and prove that learning the input leads to a better generalization bound. Our experiments on several real-world datasets show that our method outperforms the state-of-the-art methods, even using shallow network structures with a smaller number of layers and parameters.
|
1911.09548
|
Martin Schwalsberger
|
Martin Neum\"uller, Martin Schwalsberger
|
A parallel space-time multigrid method for the eddy-current equation
| null | null | null | null |
cs.CE cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We expand the applicability and capabilities of an already existing
space-time parallel method based on a block Jacobi smoother. First we formulate
a more detailed criterion for spatial coarsening, which enables the method to
deal with unstructured meshes and varying material parameters. Further we
investigate the application to the eddy-current equation, where the non-trivial
kernel of the curl operator causes severe problems. This is remedied with a new
nodal auxiliary space correction. We proceed to identify convergence rates by
local Fourier analysis and numerical experiments. Finally, we present a
numerical experiment which demonstrates its excellent scaling properties.
|
[
{
"created": "Thu, 21 Nov 2019 15:43:40 GMT",
"version": "v1"
}
] |
2019-11-22
|
[
[
"Neumüller",
"Martin",
""
],
[
"Schwalsberger",
"Martin",
""
]
] |
We expand the applicability and capabilities of an already existing space-time parallel method based on a block Jacobi smoother. First we formulate a more detailed criterion for spatial coarsening, which enables the method to deal with unstructured meshes and varying material parameters. Further we investigate the application to the eddy-current equation, where the non-trivial kernel of the curl operator causes severe problems. This is remedied with a new nodal auxiliary space correction. We proceed to identify convergence rates by local Fourier analysis and numerical experiments. Finally, we present a numerical experiment which demonstrates its excellent scaling properties.
|
2208.13137
|
Priyabarta Karmakar PhD
|
Priyabrata Karmakar, Manzur Murshed, Manoranjan Paul, David Taubman
|
Efficient Motion Modelling with Variable-sized blocks from Hierarchical
Cuboidal Partitioning
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Motion modelling with block-based architecture has been widely used in video
coding where a frame is divided into fixed-sized blocks that are motion
compensated independently. This often leads to coding inefficiency as
fixed-sized blocks hardly align with the object boundaries. Although
hierarchical block-partitioning has been introduced to address this, the
increased number of motion vectors limits the benefit. Recently, approximate
segmentation of images with cuboidal partitioning has gained popularity. Not
only are the variable-sized rectangular segments (cuboids) readily amenable to
block-based image/video coding techniques, but they are also capable of
aligning well with the object boundaries. This is because cuboidal partitioning
is based on a homogeneity constraint, minimising the sum of squared errors
(SSE). In this paper, we have investigated the potential of cuboids in motion
modelling against the fixed-sized blocks used in scalable video coding.
Specifically, we have constructed the motion-compensated current frame using the
cuboidal partitioning information of the anchor frame in a group-of-pictures
(GOP). The predicted current frame has then been used as the base layer while
encoding the current frame as an enhancement layer using the scalable HEVC
encoder. Experimental results confirm 6.71%-10.90% bitrate savings on 4K video
sequences.
|
[
{
"created": "Sun, 28 Aug 2022 04:13:58 GMT",
"version": "v1"
}
] |
2022-08-30
|
[
[
"Karmakar",
"Priyabrata",
""
],
[
"Murshed",
"Manzur",
""
],
[
"Paul",
"Manoranjan",
""
],
[
"Taubman",
"David",
""
]
] |
Motion modelling with block-based architecture has been widely used in video coding where a frame is divided into fixed-sized blocks that are motion compensated independently. This often leads to coding inefficiency as fixed-sized blocks hardly align with the object boundaries. Although hierarchical block-partitioning has been introduced to address this, the increased number of motion vectors limits the benefit. Recently, approximate segmentation of images with cuboidal partitioning has gained popularity. Not only are the variable-sized rectangular segments (cuboids) readily amenable to block-based image/video coding techniques, but they are also capable of aligning well with the object boundaries. This is because cuboidal partitioning is based on a homogeneity constraint, minimising the sum of squared errors (SSE). In this paper, we have investigated the potential of cuboids in motion modelling against the fixed-sized blocks used in scalable video coding. Specifically, we have constructed the motion-compensated current frame using the cuboidal partitioning information of the anchor frame in a group-of-pictures (GOP). The predicted current frame has then been used as the base layer while encoding the current frame as an enhancement layer using the scalable HEVC encoder. Experimental results confirm 6.71%-10.90% bitrate savings on 4K video sequences.
|
1007.0918
|
Jonathan Heusser
|
Jonathan Heusser and Pasquale Malacaria
|
Quantifying Information Leak Vulnerabilities
|
submitted, under review
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Leakage of confidential information represents a serious security risk.
Despite a number of novel, theoretical advances, it has been unclear if and how
quantitative approaches to measuring leakage of confidential information could
be applied to substantial, real-world programs. This is mostly due to the high
complexity of computing precise leakage quantities. In this paper, we introduce
a technique which makes it possible to decide if a program conforms to a
quantitative policy which scales to large state-spaces with the help of bounded
model checking.
Our technique is applied to a number of officially reported information leak
vulnerabilities in the Linux Kernel. Additionally, we also analysed
authentication routines in the Secure Remote Password suite and of an Internet
Message Support Protocol implementation. Our technique shows when there is
unacceptable leakage; the same technique is also used to verify, for the first
time, that the applied software patches indeed plug the information leaks.
This is the first demonstration of quantitative information flow addressing
security concerns of real-world industrial programs.
|
[
{
"created": "Tue, 6 Jul 2010 15:12:12 GMT",
"version": "v1"
}
] |
2010-07-07
|
[
[
"Heusser",
"Jonathan",
""
],
[
"Malacaria",
"Pasquale",
""
]
] |
Leakage of confidential information represents a serious security risk. Despite a number of novel, theoretical advances, it has been unclear if and how quantitative approaches to measuring leakage of confidential information could be applied to substantial, real-world programs. This is mostly due to the high complexity of computing precise leakage quantities. In this paper, we introduce a technique which makes it possible to decide if a program conforms to a quantitative policy which scales to large state-spaces with the help of bounded model checking. Our technique is applied to a number of officially reported information leak vulnerabilities in the Linux Kernel. Additionally, we also analysed authentication routines in the Secure Remote Password suite and of an Internet Message Support Protocol implementation. Our technique shows when there is unacceptable leakage; the same technique is also used to verify, for the first time, that the applied software patches indeed plug the information leaks. This is the first demonstration of quantitative information flow addressing security concerns of real-world industrial programs.
|
1910.05798
|
Jefferson Silva
|
Jefferson O. Silva, Igor Wiese, Daniel M. German, Christoph Treude,
Marco A. Gerosa, Igor Steinmacher
|
Google Summer of Code: Student Motivations and Contributions
|
30 pages
|
Journal of Systems and Software (JSS), V. 162, April 2020, 110487
|
10.1016/j.jss.2019.110487
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Several open source software (OSS) projects expect to foster newcomers'
onboarding and to receive contributions by participating in engagement
programs, like Summers of Code. However, there is little empirical evidence
showing why students join such programs. In this paper, we study the
well-established Google Summer of Code (GSoC), which is a 3-month OSS
engagement program that offers stipends and mentors to students willing to
contribute to OSS projects. We combined a survey (students and mentors) and
interviews (students) to understand what motivates students to enter GSoC. Our
results show that students enter GSoC for an enriching experience, not
necessarily to become frequent contributors. Our data suggest that, while the
stipends are an important motivator, the students participate for work
experience and the ability to attach the name of the supporting organization to
their resum\'es. We also discuss practical implications for students, mentors,
OSS projects, and Summer of Code programs.
|
[
{
"created": "Sun, 13 Oct 2019 18:07:24 GMT",
"version": "v1"
}
] |
2024-01-24
|
[
[
"Silva",
"Jefferson O.",
""
],
[
"Wiese",
"Igor",
""
],
[
"German",
"Daniel M.",
""
],
[
"Treude",
"Christoph",
""
],
[
"Gerosa",
"Marco A.",
""
],
[
"Steinmacher",
"Igor",
""
]
] |
Several open source software (OSS) projects expect to foster newcomers' onboarding and to receive contributions by participating in engagement programs, like Summers of Code. However, there is little empirical evidence showing why students join such programs. In this paper, we study the well-established Google Summer of Code (GSoC), which is a 3-month OSS engagement program that offers stipends and mentors to students willing to contribute to OSS projects. We combined a survey (students and mentors) and interviews (students) to understand what motivates students to enter GSoC. Our results show that students enter GSoC for an enriching experience, not necessarily to become frequent contributors. Our data suggest that, while the stipends are an important motivator, the students participate for work experience and the ability to attach the name of the supporting organization to their resum\'es. We also discuss practical implications for students, mentors, OSS projects, and Summer of Code programs.
|
2210.10040
|
Nikil Selvam
|
Nikil Roashan Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot,
Kai-Wei Chang
|
The Tail Wagging the Dog: Dataset Construction Biases of Social Bias
Benchmarks
|
ACL 2023
| null | null | null |
cs.CL cs.CY cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How reliably can we trust the scores obtained from social bias benchmarks as
faithful indicators of problematic social biases in a given language model? In
this work, we study this question by contrasting social biases with non-social
biases stemming from choices made during dataset construction that might not
even be discernible to the human eye. To do so, we empirically simulate various
alternative constructions for a given benchmark based on innocuous
modifications (such as paraphrasing or random-sampling) that maintain the
essence of their social bias. On two well-known social bias benchmarks
(Winogender and BiasNLI) we observe that these shallow modifications have a
surprising effect on the resulting degree of bias across various models. We
hope these troubling observations motivate more robust measures of social
biases.
|
[
{
"created": "Tue, 18 Oct 2022 17:58:39 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Jun 2023 18:35:13 GMT",
"version": "v2"
}
] |
2023-06-21
|
[
[
"Selvam",
"Nikil Roashan",
""
],
[
"Dev",
"Sunipa",
""
],
[
"Khashabi",
"Daniel",
""
],
[
"Khot",
"Tushar",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
How reliably can we trust the scores obtained from social bias benchmarks as faithful indicators of problematic social biases in a given language model? In this work, we study this question by contrasting social biases with non-social biases stemming from choices made during dataset construction that might not even be discernible to the human eye. To do so, we empirically simulate various alternative constructions for a given benchmark based on innocuous modifications (such as paraphrasing or random-sampling) that maintain the essence of their social bias. On two well-known social bias benchmarks (Winogender and BiasNLI) we observe that these shallow modifications have a surprising effect on the resulting degree of bias across various models. We hope these troubling observations motivate more robust measures of social biases.
|
2110.10246
|
Nicole Forsgren
|
Nicole Forsgren, Bas Alberts, Kevin Backhouse, Grey Baker, Greg
Cecarelli, Derek Jedamski, Scot Kelly, Clair Sullivan
|
2020 State of the Octoverse: Securing the World's Software
|
published by GitHub
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Open source is the connective tissue for much of the information economy. You
would be hard-pressed to find a scenario where your data does not pass through
at least one open source component. Many of the services and technology we all
rely on, from banking to healthcare, also rely on open source software. The
artifacts of open source code serve as critical infrastructure for much of
the global economy, making the security of open source software
mission-critical to the world.
|
[
{
"created": "Tue, 19 Oct 2021 20:32:32 GMT",
"version": "v1"
}
] |
2021-10-22
|
[
[
"Forsgren",
"Nicole",
""
],
[
"Alberts",
"Bas",
""
],
[
"Backhouse",
"Kevin",
""
],
[
"Baker",
"Grey",
""
],
[
"Cecarelli",
"Greg",
""
],
[
"Jedamski",
"Derek",
""
],
[
"Kelly",
"Scot",
""
],
[
"Sullivan",
"Clair",
""
]
] |
Open source is the connective tissue for much of the information economy. You would be hard-pressed to find a scenario where your data does not pass through at least one open source component. Many of the services and technology we all rely on, from banking to healthcare, also rely on open source software. The artifacts of open source code serve as critical infrastructure for much of the global economy, making the security of open source software mission-critical to the world.
|
2107.12562
|
Shifeng Pan
|
Shifeng Pan and Lei He
|
Cross-speaker Style Transfer with Prosody Bottleneck in Neural Speech
Synthesis
|
in Proceedings of INTERSPEECH 2021
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-speaker style transfer is crucial to the applications of multi-style
and expressive speech synthesis at scale. It does not require the target
speakers to be experts in expressing all styles and to collect corresponding
recordings for model training. However, the performances of existing style
transfer methods are still far behind real application needs. The root causes
are mainly twofold. Firstly, the style embedding extracted from single
reference speech can hardly provide fine-grained and appropriate prosody
information for arbitrary text to synthesize. Secondly, in these models the
content/text, prosody, and speaker timbre are usually highly entangled; it is
therefore not realistic to expect a satisfactory result when freely combining
these components, such as to transfer speaking style between speakers. In this
paper, we propose a cross-speaker style transfer text-to-speech (TTS) model
with explicit prosody bottleneck. The prosody bottleneck builds up the kernels
accounting for speaking style robustly, and disentangles the prosody from
content and speaker timbre, therefore guarantees high quality cross-speaker
style transfer. Evaluation results show that the proposed method even achieves
on-par performance with the source speaker's speaker-dependent (SD) model in
objective measurement of prosody, and significantly outperforms the cycle
consistency and GMVAE-based baselines in objective and subjective evaluations.
|
[
{
"created": "Tue, 27 Jul 2021 02:43:57 GMT",
"version": "v1"
}
] |
2021-07-28
|
[
[
"Pan",
"Shifeng",
""
],
[
"He",
"Lei",
""
]
] |
Cross-speaker style transfer is crucial to the applications of multi-style and expressive speech synthesis at scale. It does not require the target speakers to be experts in expressing all styles and to collect corresponding recordings for model training. However, the performances of existing style transfer methods are still far behind real application needs. The root causes are mainly twofold. Firstly, the style embedding extracted from single reference speech can hardly provide fine-grained and appropriate prosody information for arbitrary text to synthesize. Secondly, in these models the content/text, prosody, and speaker timbre are usually highly entangled; it is therefore not realistic to expect a satisfactory result when freely combining these components, such as to transfer speaking style between speakers. In this paper, we propose a cross-speaker style transfer text-to-speech (TTS) model with explicit prosody bottleneck. The prosody bottleneck builds up the kernels accounting for speaking style robustly, and disentangles the prosody from content and speaker timbre, therefore guarantees high quality cross-speaker style transfer. Evaluation results show that the proposed method even achieves on-par performance with the source speaker's speaker-dependent (SD) model in objective measurement of prosody, and significantly outperforms the cycle consistency and GMVAE-based baselines in objective and subjective evaluations.
|
1204.2989
|
Vijay Ganesh
|
Vijay Ganesh
|
STP/HAMPI and Computer Security
| null | null | null | null |
cs.CR cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past several years I have written two SMT solvers called STP and HAMPI
that have found widespread use in computer security research by leading groups
in academia, industry and the government. In this brief note I summarize the
features of STP/HAMPI that make them particularly suited for computer security
research, and a listing of some of the more important projects that use them.
|
[
{
"created": "Thu, 12 Apr 2012 16:46:21 GMT",
"version": "v1"
}
] |
2012-04-16
|
[
[
"Ganesh",
"Vijay",
""
]
] |
In the past several years I have written two SMT solvers called STP and HAMPI that have found widespread use in computer security research by leading groups in academia, industry and the government. In this brief note I summarize the features of STP/HAMPI that make them particularly suited for computer security research, and a listing of some of the more important projects that use them.
|
2303.03583
|
Siqi Fan
|
Siqi Fan, Zhe Wang, Xiaoliang Huo, Yan Wang, Jingjing Liu
|
Calibration-free BEV Representation for Infrastructure Perception
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective BEV object detection on infrastructure can greatly improve traffic
scene understanding and vehicle-to-infrastructure (V2I) cooperative perception.
However, cameras installed on infrastructure have various postures, and
previous BEV detection methods rely on accurate calibration, which is difficult
for practical applications due to inevitable natural factors (e.g., wind and
snow). In this paper, we propose a Calibration-free BEV Representation (CBR)
network, which achieves 3D detection based on BEV representation without
calibration parameters and additional depth supervision. Specifically, we
utilize two multi-layer perceptrons for decoupling the features from
perspective view to front view and bird's-eye view under boxes-induced foreground
supervision. Then, a cross-view feature fusion module matches features from
orthogonal views according to similarity and conducts BEV feature enhancement
with front view features. Experimental results on DAIR-V2X demonstrate that CBR
achieves acceptable performance without any camera parameters and is naturally
not affected by calibration noises. We hope CBR can serve as a baseline for
future research addressing practical challenges of infrastructure perception.
|
[
{
"created": "Tue, 7 Mar 2023 01:31:06 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Apr 2023 02:45:05 GMT",
"version": "v2"
}
] |
2023-04-17
|
[
[
"Fan",
"Siqi",
""
],
[
"Wang",
"Zhe",
""
],
[
"Huo",
"Xiaoliang",
""
],
[
"Wang",
"Yan",
""
],
[
"Liu",
"Jingjing",
""
]
] |
Effective BEV object detection on infrastructure can greatly improve traffic scene understanding and vehicle-to-infrastructure (V2I) cooperative perception. However, cameras installed on infrastructure have various postures, and previous BEV detection methods rely on accurate calibration, which is difficult for practical applications due to inevitable natural factors (e.g., wind and snow). In this paper, we propose a Calibration-free BEV Representation (CBR) network, which achieves 3D detection based on BEV representation without calibration parameters and additional depth supervision. Specifically, we utilize two multi-layer perceptrons for decoupling the features from perspective view to front view and bird's-eye view under boxes-induced foreground supervision. Then, a cross-view feature fusion module matches features from orthogonal views according to similarity and conducts BEV feature enhancement with front view features. Experimental results on DAIR-V2X demonstrate that CBR achieves acceptable performance without any camera parameters and is naturally not affected by calibration noises. We hope CBR can serve as a baseline for future research addressing practical challenges of infrastructure perception.
|
cs/0205080
|
Matus Marko
|
M. Marko, M.A. Porter, A. Probst, C. Gershenson, A. Das
|
Transforming the World Wide Web into a Complexity-Based Semantic Network
|
6 pages, a manuscript for the ICCS 2002
| null | null | null |
cs.NI cs.IR
| null |
The aim of this paper is to introduce the idea of the Semantic Web to the
Complexity community and set a basic ground for a project resulting in the
creation of an Internet-based semantic network of Complexity-related
information providers.
Implementation of the Semantic Web technology would be of mutual benefit to
both the participants and users and will confirm the self-referencing power of the
community to apply the products of its own research to itself. We first explain
the logic of the transition and discuss important notions associated with the
Semantic Web technology. We then present a brief outline of the project
milestones.
|
[
{
"created": "Fri, 31 May 2002 18:44:36 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Jun 2002 08:03:07 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Marko",
"M.",
""
],
[
"Porter",
"M. A.",
""
],
[
"Probst",
"A.",
""
],
[
"Gershenson",
"C.",
""
],
[
"Das",
"A.",
""
]
] |
The aim of this paper is to introduce the idea of the Semantic Web to the Complexity community and set a basic ground for a project resulting in the creation of an Internet-based semantic network of Complexity-related information providers. Implementation of the Semantic Web technology would be of mutual benefit to both the participants and users and will confirm the self-referencing power of the community to apply the products of its own research to itself. We first explain the logic of the transition and discuss important notions associated with the Semantic Web technology. We then present a brief outline of the project milestones.
|
1711.03396
|
Chihao Zhang
|
Heng Guo, Chao Liao, Pinyan Lu, Chihao Zhang
|
Counting hypergraph colorings in the local lemma regime
|
v3: Constants Changed. Accepted to SICOMP
| null | null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a fully polynomial-time approximation scheme (FPTAS) to count the
number of $q$-colorings for $k$-uniform hypergraphs with maximum degree
$\Delta$ if $k\ge 28$ and $q > 357\Delta^{\frac{14}{k-14}}$. We also obtain a
polynomial-time almost uniform sampler if $q>931\Delta^{\frac{16}{k-16/3}}$.
These are the first approximate counting and sampling algorithms in the regime
$q\ll\Delta$ (for large $\Delta$ and $k$) without any additional assumptions.
Our method is based on the recent work of Moitra (STOC, 2017). One important
contribution of ours is to remove the dependency of $k$ and $\Delta$ in
Moitra's approach.
|
[
{
"created": "Thu, 9 Nov 2017 14:49:59 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Nov 2017 08:52:45 GMT",
"version": "v2"
},
{
"created": "Fri, 31 May 2019 13:48:14 GMT",
"version": "v3"
}
] |
2019-06-03
|
[
[
"Guo",
"Heng",
""
],
[
"Liao",
"Chao",
""
],
[
"Lu",
"Pinyan",
""
],
[
"Zhang",
"Chihao",
""
]
] |
We give a fully polynomial-time approximation scheme (FPTAS) to count the number of $q$-colorings for $k$-uniform hypergraphs with maximum degree $\Delta$ if $k\ge 28$ and $q > 357\Delta^{\frac{14}{k-14}}$. We also obtain a polynomial-time almost uniform sampler if $q>931\Delta^{\frac{16}{k-16/3}}$. These are the first approximate counting and sampling algorithms in the regime $q\ll\Delta$ (for large $\Delta$ and $k$) without any additional assumptions. Our method is based on the recent work of Moitra (STOC, 2017). One important contribution of ours is to remove the dependency of $k$ and $\Delta$ in Moitra's approach.
|
2308.06197
|
Angus Maiden MAAI(Prof)
|
Angus Maiden (1), Bahareh Nakisa (1) ((1) Deakin University)
|
Complex Facial Expression Recognition Using Deep Knowledge Distillation
of Basic Features
|
13 pages, 9 figures, 6 tables, 3 algorithms. Code available at
https://github.com/AngusMaiden/complex-FER
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Complex emotion recognition is a cognitive task that has so far eluded the
same excellent performance of other tasks that are at or above the level of
human cognition. Emotion recognition through facial expressions is particularly
difficult due to the complexity of emotions expressed by the human face. For a
machine to approach the same level of performance in complex facial expression
recognition as a human, it may need to synthesise knowledge and understand new
concepts in real-time, as humans do. Humans are able to learn new concepts
using only few examples by distilling important information from memories.
Inspired by human cognition and learning, we propose a novel continual learning
method for complex facial expression recognition that can accurately recognise
new compound expression classes using few training samples, by building on and
retaining its knowledge of basic expression classes. In this work, we also use
GradCAM visualisations to demonstrate the relationship between basic and
compound facial expressions. Our method leverages this relationship through
knowledge distillation and a novel Predictive Sorting Memory Replay, to achieve
the current state-of-the-art in continual learning for complex facial
expression recognition, with 74.28% Overall Accuracy on new classes. We also
demonstrate that using continual learning for complex facial expression
recognition achieves far better performance than non-continual learning
methods, improving on state-of-the-art non-continual learning methods by
13.95%. Our work is also the first to apply few-shot learning to complex facial
expression recognition, achieving the state-of-the-art with 100% accuracy using
only a single training sample per class.
|
[
{
"created": "Fri, 11 Aug 2023 15:42:48 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Nov 2023 23:34:25 GMT",
"version": "v2"
}
] |
2023-11-07
|
[
[
"Maiden",
"Angus",
"",
"Deakin University"
],
[
"Nakisa",
"Bahareh",
"",
"Deakin University"
]
] |
Complex emotion recognition is a cognitive task that has so far eluded the same excellent performance of other tasks that are at or above the level of human cognition. Emotion recognition through facial expressions is particularly difficult due to the complexity of emotions expressed by the human face. For a machine to approach the same level of performance in complex facial expression recognition as a human, it may need to synthesise knowledge and understand new concepts in real-time, as humans do. Humans are able to learn new concepts using only few examples by distilling important information from memories. Inspired by human cognition and learning, we propose a novel continual learning method for complex facial expression recognition that can accurately recognise new compound expression classes using few training samples, by building on and retaining its knowledge of basic expression classes. In this work, we also use GradCAM visualisations to demonstrate the relationship between basic and compound facial expressions. Our method leverages this relationship through knowledge distillation and a novel Predictive Sorting Memory Replay, to achieve the current state-of-the-art in continual learning for complex facial expression recognition, with 74.28% Overall Accuracy on new classes. We also demonstrate that using continual learning for complex facial expression recognition achieves far better performance than non-continual learning methods, improving on state-of-the-art non-continual learning methods by 13.95%. Our work is also the first to apply few-shot learning to complex facial expression recognition, achieving the state-of-the-art with 100% accuracy using only a single training sample per class.
|
2402.01156
|
Yongkun Liu
|
Yongkun Liu, Jiachi Chen, Tingting Bi, John Grundy, Yanlin Wang,
Jianxing Yu, Ting Chen, Yutian Tang, Zibin Zheng
|
An Empirical Study on Low Code Programming using Traditional vs Large
Language Model Support
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-code programming (LCP) refers to programming using models at higher
levels of abstraction, resulting in less manual and more efficient programming,
and reduced learning effort for amateur developers. Many LCP tools have rapidly
evolved and have benefited from the concepts of visual programming languages
(VPLs) and programming by demonstration (PBD). With the huge increase in interest
in using large language models (LLMs) in software engineering, LLM-based LCP
has begun to become increasingly important. However, the technical principles
and application scenarios of traditional approaches to LCP and LLM-based LCP
are significantly different. Understanding these key differences and
characteristics in the application of the two approaches to LCP by users is
crucial for LCP providers in improving existing and developing new LCP tools,
and in better assisting users in choosing the appropriate LCP technology. We
conducted an empirical study of both traditional LCP and LLM-based LCP. We
analyzed developers' discussions on Stack Overflow (SO) over the past three
years and then explored the similarities and differences between traditional
LCP and LLM-based LCP features and developer feedback. Our findings reveal that
while traditional LCP and LLM-based LCP share common primary usage scenarios,
they significantly differ in scope, limitations and usage throughout the
software development lifecycle, particularly during the implementation phase.
We also examine how LLMs impact and integrate with LCP, discussing the latest
technological developments in LLM-based LCP, such as its integration with VPLs
and the application of LLM Agents in software engineering.
|
[
{
"created": "Fri, 2 Feb 2024 05:52:32 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 12:07:38 GMT",
"version": "v2"
}
] |
2024-06-07
|
[
[
"Liu",
"Yongkun",
""
],
[
"Chen",
"Jiachi",
""
],
[
"Bi",
"Tingting",
""
],
[
"Grundy",
"John",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Yu",
"Jianxing",
""
],
[
"Chen",
"Ting",
""
],
[
"Tang",
"Yutian",
""
],
[
"Zheng",
"Zibin",
""
]
] |
Low-code programming (LCP) refers to programming using models at higher levels of abstraction, resulting in less manual and more efficient programming, and reduced learning effort for amateur developers. Many LCP tools have rapidly evolved and have benefited from the concepts of visual programming languages (VPLs) and programming by demonstration (PBD). With the huge increase in interest in using large language models (LLMs) in software engineering, LLM-based LCP has begun to become increasingly important. However, the technical principles and application scenarios of traditional approaches to LCP and LLM-based LCP are significantly different. Understanding these key differences and characteristics in the application of the two approaches to LCP by users is crucial for LCP providers in improving existing and developing new LCP tools, and in better assisting users in choosing the appropriate LCP technology. We conducted an empirical study of both traditional LCP and LLM-based LCP. We analyzed developers' discussions on Stack Overflow (SO) over the past three years and then explored the similarities and differences between traditional LCP and LLM-based LCP features and developer feedback. Our findings reveal that while traditional LCP and LLM-based LCP share common primary usage scenarios, they significantly differ in scope, limitations and usage throughout the software development lifecycle, particularly during the implementation phase. We also examine how LLMs impact and integrate with LCP, discussing the latest technological developments in LLM-based LCP, such as its integration with VPLs and the application of LLM Agents in software engineering.
|
2406.14155
|
Dominik Stammbach
|
Dominik Stammbach and Philine Widmer and Eunjung Cho and Caglar
Gulcehre and Elliott Ash
|
Aligning Large Language Models with Diverse Political Viewpoints
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models such as ChatGPT often exhibit striking political
biases. If users query them about political information, they might take a
normative stance and reinforce such biases. To overcome this, we align LLMs
with diverse political viewpoints from 100,000 comments written by candidates
running for national parliament in Switzerland. Such aligned models are able to
generate more accurate political viewpoints from Swiss parties compared to
commercial models such as ChatGPT. We also propose a procedure to generate
balanced overviews from multiple viewpoints using such models.
|
[
{
"created": "Thu, 20 Jun 2024 09:53:23 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Stammbach",
"Dominik",
""
],
[
"Widmer",
"Philine",
""
],
[
"Cho",
"Eunjung",
""
],
[
"Gulcehre",
"Caglar",
""
],
[
"Ash",
"Elliott",
""
]
] |
Large language models such as ChatGPT often exhibit striking political biases. If users query them about political information, they might take a normative stance and reinforce such biases. To overcome this, we align LLMs with diverse political viewpoints from 100,000 comments written by candidates running for national parliament in Switzerland. Such aligned models are able to generate more accurate political viewpoints from Swiss parties compared to commercial models such as ChatGPT. We also propose a procedure to generate balanced overviews from multiple viewpoints using such models.
|
2101.07768
|
Ehsan Zabardast
|
Ehsan Zabardast, Julian Frattini, Javier Gonzalez-Huerta, Daniel
Mendez, Tony Gorschek, Krzysztof Wnuk
|
Assets in Software Engineering: What are they after all?
|
Manuscript submitted to the Journal of Systems and Software
| null |
10.1016/j.jss.2022.111485
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During the development and maintenance of software-intensive products or
services, we depend on various artefacts. Some of those artefacts, we deem
central to the feasibility of a project and the product's final quality.
Typically, these central artefacts are referred to as assets. However, despite
their central role in the software development process, little thought is yet
invested into what eventually characterises an asset, often resulting in
many terms and underlying concepts being mixed and used inconsistently. A
precise terminology of assets and related concepts, such as asset degradation,
is crucial for setting up a new generation of cost-effective software
engineering practices.
In this position paper, we critically reflect upon the notion of assets in
software engineering. As a starting point, we define the terminology and
concepts of assets and extend the reasoning behind them. We explore assets'
characteristics and discuss what asset degradation is as well as its various
types and the implications that asset degradation might bring for the planning,
realisation, and evolution of software-intensive products and services over
time.
We aspire to contribute to a more standardised definition of assets in
software engineering and foster research endeavours and their practical
dissemination in a common, more unified direction.
|
[
{
"created": "Tue, 19 Jan 2021 18:31:33 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Oct 2021 13:19:22 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Feb 2022 14:32:15 GMT",
"version": "v3"
},
{
"created": "Wed, 11 May 2022 21:31:28 GMT",
"version": "v4"
},
{
"created": "Mon, 11 Jul 2022 14:57:38 GMT",
"version": "v5"
}
] |
2022-08-29
|
[
[
"Zabardast",
"Ehsan",
""
],
[
"Frattini",
"Julian",
""
],
[
"Gonzalez-Huerta",
"Javier",
""
],
[
"Mendez",
"Daniel",
""
],
[
"Gorschek",
"Tony",
""
],
[
"Wnuk",
"Krzysztof",
""
]
] |
During the development and maintenance of software-intensive products or services, we depend on various artefacts. Some of those artefacts, we deem central to the feasibility of a project and the product's final quality. Typically, these central artefacts are referred to as assets. However, despite their central role in the software development process, little thought is yet invested into what eventually characterises an asset, often resulting in many terms and underlying concepts being mixed and used inconsistently. A precise terminology of assets and related concepts, such as asset degradation, is crucial for setting up a new generation of cost-effective software engineering practices. In this position paper, we critically reflect upon the notion of assets in software engineering. As a starting point, we define the terminology and concepts of assets and extend the reasoning behind them. We explore assets' characteristics and discuss what asset degradation is as well as its various types and the implications that asset degradation might bring for the planning, realisation, and evolution of software-intensive products and services over time. We aspire to contribute to a more standardised definition of assets in software engineering and foster research endeavours and their practical dissemination in a common, more unified direction.
|
2108.02290
|
Max Willsey
|
Yihong Zhang, Yisu Remy Wang, Max Willsey, Zachary Tatlock
|
Relational E-Matching
|
POPL 2022
| null | null | null |
cs.DB cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new approach to e-matching based on relational join; in
particular, we apply recent database query execution techniques to guarantee
worst-case optimal run time. Compared to the conventional backtracking approach
that always searches the e-graph "top down", our new relational e-matching
approach can better exploit pattern structure by searching the e-graph
according to an optimized query plan. We also establish the first data
complexity result for e-matching, bounding run time as a function of the
e-graph size and output size. We prototyped and evaluated our technique in the
state-of-the-art egg e-graph framework. Compared to a conventional baseline,
relational e-matching is simpler to implement and orders of magnitude faster in
practice.
|
[
{
"created": "Wed, 4 Aug 2021 21:23:28 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jan 2022 21:21:44 GMT",
"version": "v2"
}
] |
2022-01-07
|
[
[
"Zhang",
"Yihong",
""
],
[
"Wang",
"Yisu Remy",
""
],
[
"Willsey",
"Max",
""
],
[
"Tatlock",
"Zachary",
""
]
] |
We present a new approach to e-matching based on relational join; in particular, we apply recent database query execution techniques to guarantee worst-case optimal run time. Compared to the conventional backtracking approach that always searches the e-graph "top down", our new relational e-matching approach can better exploit pattern structure by searching the e-graph according to an optimized query plan. We also establish the first data complexity result for e-matching, bounding run time as a function of the e-graph size and output size. We prototyped and evaluated our technique in the state-of-the-art egg e-graph framework. Compared to a conventional baseline, relational e-matching is simpler to implement and orders of magnitude faster in practice.
|
1907.09247
|
Jorge Fandinno
|
Jorge Fandinno
|
Founded (Auto)Epistemic Equilibrium Logic Satisfies Epistemic Splitting
|
Paper presented at the 35th International Conference on Logic
Programming (ICLP 2019), Las Cruces, New Mexico, USA, 20-25 September 2019,
16 pages
|
Theory and Practice of Logic Programming 19 (2019) 671-687
|
10.1017/S1471068419000127
| null |
cs.LO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a recent line of research, two familiar concepts from logic programming
semantics (unfounded sets and splitting) were extrapolated to the case of
epistemic logic programs. The property of epistemic splitting provides a
natural and modular way to understand programs without epistemic cycles but,
surprisingly, was only fulfilled by Gelfond's original semantics (G91), among
the many proposals in the literature. On the other hand, G91 may suffer from a
kind of self-supported, unfounded derivations when epistemic cycles come into
play. Recently, the absence of these derivations was also formalised as a
property of epistemic semantics called foundedness. Moreover, a first semantics
proved to satisfy foundedness was also proposed, the so-called Founded
Autoepistemic Equilibrium Logic (FAEEL). In this paper, we prove that FAEEL
also satisfies the epistemic splitting property, something that, together with
foundedness, was not fulfilled by any other approach to date. To prove this
result, we provide an alternative characterisation of FAEEL as a combination of
G91 with a simpler logic we called Founded Epistemic Equilibrium Logic (FEEL),
which is somehow an extrapolation of the stable model semantics to the modal
logic S5. Under consideration for acceptance in TPLP.
|
[
{
"created": "Mon, 22 Jul 2019 11:48:15 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Feb 2020 15:33:26 GMT",
"version": "v2"
}
] |
2020-02-21
|
[
[
"Fandinno",
"Jorge",
""
]
] |
In a recent line of research, two familiar concepts from logic programming semantics (unfounded sets and splitting) were extrapolated to the case of epistemic logic programs. The property of epistemic splitting provides a natural and modular way to understand programs without epistemic cycles but, surprisingly, was only fulfilled by Gelfond's original semantics (G91), among the many proposals in the literature. On the other hand, G91 may suffer from a kind of self-supported, unfounded derivations when epistemic cycles come into play. Recently, the absence of these derivations was also formalised as a property of epistemic semantics called foundedness. Moreover, a first semantics proved to satisfy foundedness was also proposed, the so-called Founded Autoepistemic Equilibrium Logic (FAEEL). In this paper, we prove that FAEEL also satisfies the epistemic splitting property, something that, together with foundedness, was not fulfilled by any other approach to date. To prove this result, we provide an alternative characterisation of FAEEL as a combination of G91 with a simpler logic we called Founded Epistemic Equilibrium Logic (FEEL), which is somehow an extrapolation of the stable model semantics to the modal logic S5. Under consideration for acceptance in TPLP.
|
1902.05260
|
Peng Wang
|
Peng Wang, Hong Xu, Xin Jin, Tao Wang
|
Flash: Efficient Dynamic Routing for Offchain Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offchain networks emerge as a promising solution to address the scalability
challenge of blockchain. Participants directly make payments through a network
of payment channels without the overhead of committing onchain transactions.
Routing is critical to the performance of offchain networks. Existing solutions
use either static routing with poor performance or dynamic routing with high
overhead to obtain the dynamic channel balance information. In this paper, we
propose Flash, a new dynamic routing solution that leverages the unique
characteristics of transactions in offchain networks to strike a better
tradeoff between path optimality and probing overhead. By studying the traces
of real offchain networks, we find that the payment sizes are heavy-tailed, and
most payments are highly recurrent. Flash thus differentiates the treatment of
elephant payments from mice payments. It uses a modified max-flow algorithm for
elephant payments to find paths with sufficient capacity, and strategically
routes the payment across paths to minimize the transaction fees. Mice payments
are directly sent by looking up a routing table with a few precomputed paths to
reduce probing overhead. Testbed experiments and data-driven simulations show
that Flash improves the success volume of payments by up to 2.3x compared to
the state-of-the-art routing algorithm.
|
[
{
"created": "Thu, 14 Feb 2019 08:36:57 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2019 03:01:10 GMT",
"version": "v2"
}
] |
2019-06-12
|
[
[
"Wang",
"Peng",
""
],
[
"Xu",
"Hong",
""
],
[
"Jin",
"Xin",
""
],
[
"Wang",
"Tao",
""
]
] |
Offchain networks emerge as a promising solution to address the scalability challenge of blockchain. Participants directly make payments through a network of payment channels without the overhead of committing onchain transactions. Routing is critical to the performance of offchain networks. Existing solutions use either static routing with poor performance or dynamic routing with high overhead to obtain the dynamic channel balance information. In this paper, we propose Flash, a new dynamic routing solution that leverages the unique characteristics of transactions in offchain networks to strike a better tradeoff between path optimality and probing overhead. By studying the traces of real offchain networks, we find that the payment sizes are heavy-tailed, and most payments are highly recurrent. Flash thus differentiates the treatment of elephant payments from mice payments. It uses a modified max-flow algorithm for elephant payments to find paths with sufficient capacity, and strategically routes the payment across paths to minimize the transaction fees. Mice payments are directly sent by looking up a routing table with a few precomputed paths to reduce probing overhead. Testbed experiments and data-driven simulations show that Flash improves the success volume of payments by up to 2.3x compared to the state-of-the-art routing algorithm.
|
1811.05932
|
Xi Liu
|
Xi Liu, Ping-Chun Hsieh, Nick Duffield, Rui Chen, Muhe Xie, Xidao Wen
|
Streaming Network Embedding through Local Actions
| null | null | null | null |
cs.LG cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, considerable research attention has been paid to network embedding,
a popular approach to construct feature vectors of vertices. Due to the curse
of dimensionality and sparsity in graphical datasets, this approach has become
indispensable for machine learning tasks over large networks. The majority of
existing literature has considered this technique under the assumption that the
network is static. However, in many applications, nodes and edges
accrue to a growing network as a stream. A small number of very recent
results have addressed the problem of embedding for dynamic networks. However,
they either rely on knowledge of vertex attributes, suffer from high time
complexity, or need to be re-trained without a closed-form expression. Thus the approach of
adapting the existing methods to the streaming environment faces non-trivial
technical challenges.
These challenges motivate developing new approaches to the problems of
streaming network embedding. In this paper, we propose a new framework that is
able to generate latent features for new vertices with high efficiency and low
complexity under specified iteration rounds. We formulate a constrained
optimization problem for the modification of the representation resulting from
a stream arrival. We show this problem has no closed-form solution and instead
develop an online approximation solution. Our solution follows three steps: (1)
identify vertices affected by new vertices, (2) generate latent features for
new vertices, and (3) update the latent features of the most affected vertices.
The generated representations are provably feasible and not far from the
optimal ones in terms of expectation. Multi-class classification and clustering
on five real-world networks demonstrate that our model can efficiently update
vertex representations and simultaneously achieve comparable or even better
performance.
|
[
{
"created": "Wed, 14 Nov 2018 18:02:29 GMT",
"version": "v1"
}
] |
2018-11-15
|
[
[
"Liu",
"Xi",
""
],
[
"Hsieh",
"Ping-Chun",
""
],
[
"Duffield",
"Nick",
""
],
[
"Chen",
"Rui",
""
],
[
"Xie",
"Muhe",
""
],
[
"Wen",
"Xidao",
""
]
] |
Recently, considerable research attention has been paid to network embedding, a popular approach to construct feature vectors of vertices. Due to the curse of dimensionality and sparsity in graphical datasets, this approach has become indispensable for machine learning tasks over large networks. The majority of existing literature has considered this technique under the assumption that the network is static. However, in many applications, nodes and edges accrue to a growing network as a stream. A small number of very recent results have addressed the problem of embedding for dynamic networks. However, they either rely on knowledge of vertex attributes, suffer from high time complexity, or need to be re-trained without a closed-form expression. Thus the approach of adapting the existing methods to the streaming environment faces non-trivial technical challenges. These challenges motivate developing new approaches to the problems of streaming network embedding. In this paper, we propose a new framework that is able to generate latent features for new vertices with high efficiency and low complexity under specified iteration rounds. We formulate a constrained optimization problem for the modification of the representation resulting from a stream arrival. We show this problem has no closed-form solution and instead develop an online approximation solution. Our solution follows three steps: (1) identify vertices affected by new vertices, (2) generate latent features for new vertices, and (3) update the latent features of the most affected vertices. The generated representations are provably feasible and not far from the optimal ones in terms of expectation. Multi-class classification and clustering on five real-world networks demonstrate that our model can efficiently update vertex representations and simultaneously achieve comparable or even better performance.
|
2304.07954
|
Jihao Huang
|
Jihao Huang, Jun Zeng, Xuemin Chi, Koushil Sreenath, Zhitao Liu and
Hongye Su
|
Velocity Obstacle for Polytopic Collision Avoidance for Distributed
Multi-robot Systems
|
Accepted to IEEE Robotics and Automation Letters (RA-L) 2023, with
open source repository released
| null | null | null |
cs.RO cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obstacle avoidance for multi-robot navigation with polytopic shapes is
challenging. Existing works simplify the system dynamics or consider it as a
convex or non-convex optimization problem with positive distance constraints
between robots, which limits real-time performance and scalability.
Additionally, generating collision-free behavior for polytopic-shaped robots is
harder due to implicit and non-differentiable distance functions between
polytopes. In this paper, we extend the concept of velocity obstacle (VO)
principle for polytopic-shaped robots and propose a novel approach to construct
the VO as a function of vertex coordinates and the other robot's states. Compared
with existing work about obstacle avoidance between polytopic-shaped robots,
our approach is much more computationally efficient as the proposed approach
for construction of VO between polytopes is optimization-free. Based on VO
representation for polytopic shapes, we later propose a navigation approach for
distributed multi-robot systems. We validate our proposed VO representation and
navigation approach in multiple challenging scenarios including large-scale
randomized tests, and our approach outperforms the state of the art in many
evaluation metrics, including completion rate, deadlock rate, and the average
travel distance.
|
[
{
"created": "Mon, 17 Apr 2023 02:42:48 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 04:40:33 GMT",
"version": "v2"
}
] |
2024-06-11
|
[
[
"Huang",
"Jihao",
""
],
[
"Zeng",
"Jun",
""
],
[
"Chi",
"Xuemin",
""
],
[
"Sreenath",
"Koushil",
""
],
[
"Liu",
"Zhitao",
""
],
[
"Su",
"Hongye",
""
]
] |
Obstacle avoidance for multi-robot navigation with polytopic shapes is challenging. Existing works simplify the system dynamics or consider it as a convex or non-convex optimization problem with positive distance constraints between robots, which limits real-time performance and scalability. Additionally, generating collision-free behavior for polytopic-shaped robots is harder due to implicit and non-differentiable distance functions between polytopes. In this paper, we extend the concept of velocity obstacle (VO) principle for polytopic-shaped robots and propose a novel approach to construct the VO as a function of vertex coordinates and the other robot's states. Compared with existing work about obstacle avoidance between polytopic-shaped robots, our approach is much more computationally efficient as the proposed approach for construction of VO between polytopes is optimization-free. Based on VO representation for polytopic shapes, we later propose a navigation approach for distributed multi-robot systems. We validate our proposed VO representation and navigation approach in multiple challenging scenarios including large-scale randomized tests, and our approach outperforms the state of the art in many evaluation metrics, including completion rate, deadlock rate, and the average travel distance.
|
1907.13432
|
Dong Liu
|
Dong Liu, Minh Th\`anh Vu, Saikat Chatterjee, Lars K. Rasmussen
|
Neural Network based Explicit Mixture Models and
Expectation-maximization based Learning
|
IJCNN 2020
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose two neural network based mixture models in this article. The
proposed mixture models are explicit in nature. The explicit models have
analytical forms with the advantages of computing likelihood and efficiency of
generating samples. Computation of likelihood is an important aspect of our
models. Expectation-maximization based algorithms are developed for learning
parameters of the proposed models. We provide sufficient conditions to realize
the expectation-maximization based learning. The main requirements are
invertibility of neural networks that are used as generators and Jacobian
computation of the functional form of the neural networks. The requirements are
practically realized using a flow-based neural network. In our first mixture
model, we use multiple flow-based neural networks as generators. Naturally the
model is complex. A single latent variable is used as the common input to all
the neural networks. The second mixture model uses a single flow-based neural
network as a generator to reduce complexity. The single generator has a latent
variable input that follows a Gaussian mixture distribution. We demonstrate
efficiency of proposed mixture models through extensive experiments for
generating samples and maximum likelihood based classification.
|
[
{
"created": "Wed, 31 Jul 2019 11:57:17 GMT",
"version": "v1"
},
{
"created": "Sun, 24 May 2020 19:57:55 GMT",
"version": "v2"
}
] |
2020-05-26
|
[
[
"Liu",
"Dong",
""
],
[
"Vu",
"Minh Thành",
""
],
[
"Chatterjee",
"Saikat",
""
],
[
"Rasmussen",
"Lars K.",
""
]
] |
We propose two neural network based mixture models in this article. The proposed mixture models are explicit in nature. The explicit models have analytical forms with the advantages of computing likelihood and efficiency of generating samples. Computation of likelihood is an important aspect of our models. Expectation-maximization based algorithms are developed for learning parameters of the proposed models. We provide sufficient conditions to realize the expectation-maximization based learning. The main requirements are invertibility of neural networks that are used as generators and Jacobian computation of the functional form of the neural networks. The requirements are practically realized using a flow-based neural network. In our first mixture model, we use multiple flow-based neural networks as generators. Naturally the model is complex. A single latent variable is used as the common input to all the neural networks. The second mixture model uses a single flow-based neural network as a generator to reduce complexity. The single generator has a latent variable input that follows a Gaussian mixture distribution. We demonstrate efficiency of proposed mixture models through extensive experiments for generating samples and maximum likelihood based classification.
|
2009.10233
|
Boyuan Feng
|
Boyuan Feng, Yuke Wang, Xu Li, and Yufei Ding
|
Scalable Adversarial Attack on Graph Neural Networks with Alternating
Direction Method of Multipliers
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph neural networks (GNNs) have achieved high performance in analyzing
graph-structured data and have been widely deployed in safety-critical areas,
such as finance and autonomous driving. However, only a few works have explored
GNNs' robustness to adversarial attacks, and their designs are usually limited
by the scale of input datasets (i.e., focusing on small graphs with only
thousands of nodes). In this work, we propose SAG, the first scalable
adversarial attack method with Alternating Direction Method of Multipliers
(ADMM). We first decouple the large-scale graph into several smaller graph
partitions and cast the original problem into several subproblems. Then, we
propose to solve these subproblems using projected gradient descent on both the
graph topology and the node features that lead to considerably lower memory
consumption compared to the conventional attack methods. Rigorous experiments
further demonstrate that SAG can significantly reduce the computation and
memory overhead compared with the state-of-the-art approach, making SAG
applicable to graphs with a large number of nodes and edges.
|
[
{
"created": "Tue, 22 Sep 2020 00:33:36 GMT",
"version": "v1"
}
] |
2020-09-23
|
[
[
"Feng",
"Boyuan",
""
],
[
"Wang",
"Yuke",
""
],
[
"Li",
"Xu",
""
],
[
"Ding",
"Yufei",
""
]
] |
Graph neural networks (GNNs) have achieved high performance in analyzing graph-structured data and have been widely deployed in safety-critical areas, such as finance and autonomous driving. However, only a few works have explored GNNs' robustness to adversarial attacks, and their designs are usually limited by the scale of input datasets (i.e., focusing on small graphs with only thousands of nodes). In this work, we propose SAG, the first scalable adversarial attack method with Alternating Direction Method of Multipliers (ADMM). We first decouple the large-scale graph into several smaller graph partitions and cast the original problem into several subproblems. Then, we propose to solve these subproblems using projected gradient descent on both the graph topology and the node features that lead to considerably lower memory consumption compared to the conventional attack methods. Rigorous experiments further demonstrate that SAG can significantly reduce the computation and memory overhead compared with the state-of-the-art approach, making SAG applicable to graphs with a large number of nodes and edges.
|
2407.17483
|
Sherri WeitlHarms
|
Sherri Harms
|
Tackling CS education in K-12: Implementing a Google CS4HS Grant Program
in a Rural Underserved Area
|
Presented at and published in the proceedings of MICS 2017:
https://www.micsymposium.org/mics_2017_proceedings/docs/MICS_2017_paper_44.pdf;
8 pages, 2 figures
|
Harms, S. K. (2017). Tackling CS education in K-12: Implementing a
Google CS4HS Grant Program. La Crosse, WI: 2017 Midwest Instructional
Computing Symposium Proceedings
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Providing computer science (CS) offerings in the K-12 education system is
often limited by the lack of experienced teachers, especially in small or rural
underserved school districts. By helping teachers in underserved areas develop
CS curriculum and helping them become certified to teach CS courses, more young
people in underserved areas are aware of IT-career opportunities, and prepared
for CS education at the university level, which ultimately helps tackle the IT
workforce deficit in the United States.
This paper discusses a successful implementation of a Google CS4HS grant to a
rural underserved area, as well as lessons learned through the implementation
of the program. Key elements in the implementation included a face-to-face
hands-on workshop, followed by a seven-week graduate-level online summer course
for the teachers to learn and develop curriculum that covers the CS concepts
they will be teaching. The teachers were supported with an online community of
practice for the year as they implemented the curriculum.
|
[
{
"created": "Tue, 2 Jul 2024 18:36:06 GMT",
"version": "v1"
}
] |
2024-07-26
|
[
[
"Harms",
"Sherri",
""
]
] |
Providing computer science (CS) offerings in the K-12 education system is often limited by the lack of experienced teachers, especially in small or rural underserved school districts. By helping teachers in underserved areas develop CS curriculum and helping them become certified to teach CS courses, more young people in underserved areas are aware of IT-career opportunities, and prepared for CS education at the university level, which ultimately helps tackle the IT workforce deficit in the United States. This paper discusses a successful implementation of a Google CS4HS grant to a rural underserved area, as well as lessons learned through the implementation of the program. Key elements in the implementation included a face-to-face hands-on workshop, followed by a seven-week graduate-level online summer course for the teachers to learn and develop curriculum that covers the CS concepts they will be teaching. The teachers were supported with an online community of practice for the year as they implemented the curriculum.
|
2101.11946
|
Stefan Heidekr\"uger
|
Stefan Heidekr\"uger, Paul Sutterer, Nils Kohring, Maximilian Fichtl,
and Martin Bichler
|
Equilibrium Learning in Combinatorial Auctions: Computing Approximate
Bayesian Nash Equilibria via Pseudogradient Dynamics
|
This version includes the supplementary material with additional
proofs
| null | null | null |
cs.GT cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Applications of combinatorial auctions (CA) as market mechanisms are
prevalent in practice, yet their Bayesian Nash equilibria (BNE) remain poorly
understood. Analytical solutions are known only for a few cases where the
problem can be reformulated as a tractable partial differential equation (PDE).
In the general case, finding BNE is known to be computationally hard. Previous
work on numerical computation of BNE in auctions has relied either on solving
such PDEs explicitly, calculating pointwise best-responses in strategy space,
or iteratively solving restricted subgames. In this study, we present a generic
yet scalable alternative multi-agent equilibrium learning method that
represents strategies as neural networks and applies policy iteration based on
gradient dynamics in self-play. Most auctions are ex-post nondifferentiable, so
gradients may be unavailable or misleading, and we rely on suitable
pseudogradient estimates instead. Although it is well-known that gradient
dynamics cannot guarantee convergence to NE in general, we observe fast and
robust convergence to approximate BNE in a wide variety of auctions and present
a sufficient condition for convergence.
|
[
{
"created": "Thu, 28 Jan 2021 11:53:32 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Feb 2021 11:47:52 GMT",
"version": "v2"
}
] |
2021-02-09
|
[
[
"Heidekrüger",
"Stefan",
""
],
[
"Sutterer",
"Paul",
""
],
[
"Kohring",
"Nils",
""
],
[
"Fichtl",
"Maximilian",
""
],
[
"Bichler",
"Martin",
""
]
] |
Applications of combinatorial auctions (CA) as market mechanisms are prevalent in practice, yet their Bayesian Nash equilibria (BNE) remain poorly understood. Analytical solutions are known only for a few cases where the problem can be reformulated as a tractable partial differential equation (PDE). In the general case, finding BNE is known to be computationally hard. Previous work on numerical computation of BNE in auctions has relied either on solving such PDEs explicitly, calculating pointwise best-responses in strategy space, or iteratively solving restricted subgames. In this study, we present a generic yet scalable alternative multi-agent equilibrium learning method that represents strategies as neural networks and applies policy iteration based on gradient dynamics in self-play. Most auctions are ex-post nondifferentiable, so gradients may be unavailable or misleading, and we rely on suitable pseudogradient estimates instead. Although it is well-known that gradient dynamics cannot guarantee convergence to NE in general, we observe fast and robust convergence to approximate BNE in a wide variety of auctions and present a sufficient condition for convergence.
|
2205.11020
|
Rohitash Chandra
|
Rohitash Chandra, Mukul Ranjan
|
Artificial intelligence for topic modelling in Hindu philosophy: mapping
themes between the Upanishads and the Bhagavad Gita
| null | null |
10.1371/journal.pone.0273476
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A distinct feature of Hindu religious and philosophical texts is that they
come from a library of texts rather than a single source. The Upanishads is
known as one of the oldest philosophical texts in the world and forms the
foundation of Hindu philosophy. The Bhagavad Gita is a core text of Hindu
philosophy and is known as a text that summarises the key philosophies of the
Upanishads with a major focus on the philosophy of karma. These texts have
been translated into many languages and there exist studies about their
prominent themes and topics; however, there has been little study of topic
modelling using language models powered by deep learning. In this paper, we
use advanced language models such as BERT to provide topic modelling of the
key texts of
the Upanishads and the Bhagavad Gita. We analyse the distinct and overlapping
topics amongst the texts and visualise the link of selected texts of the
Upanishads with the Bhagavad Gita. Our results show a very high similarity
between the topics of these two texts, with a mean cosine similarity of 73%.
We find that, of the fourteen topics extracted from the Bhagavad Gita, nine
have a cosine similarity of more than 70% with the topics of the Upanishads.
We also find that topics generated by the BERT-based models show very high
coherence compared to that of conventional models. Our best performing model
gives a coherence score of 73% on the Bhagavad Gita and 69% on the Upanishads.
The visualization of the low-dimensional embeddings of these texts shows very
clear overlap among their topics, adding another level of validation to our
results.
|
[
{
"created": "Mon, 23 May 2022 03:39:00 GMT",
"version": "v1"
}
] |
2022-10-12
|
[
[
"Chandra",
"Rohitash",
""
],
[
"Ranjan",
"Mukul",
""
]
] |
A distinct feature of Hindu religious and philosophical texts is that they come from a library of texts rather than a single source. The Upanishads is known as one of the oldest philosophical texts in the world and forms the foundation of Hindu philosophy. The Bhagavad Gita is a core text of Hindu philosophy and is known as a text that summarises the key philosophies of the Upanishads with a major focus on the philosophy of karma. These texts have been translated into many languages and there exist studies about their prominent themes and topics; however, there has been little study of topic modelling using language models powered by deep learning. In this paper, we use advanced language models such as BERT to provide topic modelling of the key texts of the Upanishads and the Bhagavad Gita. We analyse the distinct and overlapping topics amongst the texts and visualise the link of selected texts of the Upanishads with the Bhagavad Gita. Our results show a very high similarity between the topics of these two texts, with a mean cosine similarity of 73%. We find that, of the fourteen topics extracted from the Bhagavad Gita, nine have a cosine similarity of more than 70% with the topics of the Upanishads. We also find that topics generated by the BERT-based models show very high coherence compared to that of conventional models. Our best performing model gives a coherence score of 73% on the Bhagavad Gita and 69% on the Upanishads. The visualization of the low-dimensional embeddings of these texts shows very clear overlap among their topics, adding another level of validation to our results.
|
1711.02447
|
Kaushik Sarker
|
Md. Maruf Hassan, Kaushik Sarker, Saikat Biswas, Md. Hasan Sharif
|
Detection of Wordpress Content Injection Vulnerability
| null | null |
10.5121/ijci.2017.6501
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The popularity of content management systems (CMS) is growing rapidly among
web developers and business people because of their easy accessibility,
manageability and usability for distributed website contents. As per the
statistics of BuiltWith, 32% of web applications are developed with WordPress
(WP), the largest share among all CMSs [1]. A considerable number of web
applications were built with WP versions 4.7.0 and 4.7.1. Recent research
reveals that a content injection vulnerability exists in these two versions of
WP [2]. Unauthorized content injection by an intruder into a CMS-managed
application is a serious problem for the business as well as for the web
owner. Therefore, detecting this vulnerability has become a critical issue. In
this paper, we discuss the root cause of WP content injection in the above
versions and propose a detection model for the given vulnerability. A tool,
SAISAN, has been implemented according to our proposed model and used to
examine 176 WP-developed web applications. SAISAN achieved 92% accuracy
compared with the outcome of manual black-box testing.
|
[
{
"created": "Tue, 7 Nov 2017 13:01:18 GMT",
"version": "v1"
}
] |
2017-11-08
|
[
[
"Hassan",
"Md. Maruf",
""
],
[
"Sarker",
"Kaushik",
""
],
[
"Biswas",
"Saikat",
""
],
[
"Sharif",
"Md. Hasan",
""
]
] |
The popularity of content management systems (CMS) is growing rapidly among web developers and business people because of their easy accessibility, manageability and usability for distributed website contents. As per the statistics of BuiltWith, 32% of web applications are developed with WordPress (WP), the largest share among all CMSs [1]. A considerable number of web applications were built with WP versions 4.7.0 and 4.7.1. Recent research reveals that a content injection vulnerability exists in these two versions of WP [2]. Unauthorized content injection by an intruder into a CMS-managed application is a serious problem for the business as well as for the web owner. Therefore, detecting this vulnerability has become a critical issue. In this paper, we discuss the root cause of WP content injection in the above versions and propose a detection model for the given vulnerability. A tool, SAISAN, has been implemented according to our proposed model and used to examine 176 WP-developed web applications. SAISAN achieved 92% accuracy compared with the outcome of manual black-box testing.
|
1709.00700
|
Sebastian Bre{\ss}
|
Sebastian Bre{\ss} and Bastian K\"ocher and Henning Funke and Tilmann
Rabl and Volker Markl
|
Generating Custom Code for Efficient Query Execution on Heterogeneous
Processors
|
22 pages
| null | null | null |
cs.DB cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Processor manufacturers build increasingly specialized processors to mitigate
the effects of the power wall to deliver improved performance. Currently,
database engines are manually optimized for each processor: A costly and error
prone process.
In this paper, we propose concepts to enable the database engine to perform
per-processor optimization automatically. Our core idea is to create variants
of generated code and to learn a fast variant for each processor. We create
variants by modifying parallelization strategies, specializing data structures,
and applying different code transformations.
Our experimental results show that the performance of variants may diverge up
to two orders of magnitude. Therefore, we need to generate custom code for each
processor to achieve peak performance. We show that our approach finds a fast
custom variant for multi-core CPUs, GPUs, and MICs.
|
[
{
"created": "Sun, 3 Sep 2017 11:16:31 GMT",
"version": "v1"
}
] |
2017-09-05
|
[
[
"Breß",
"Sebastian",
""
],
[
"Köcher",
"Bastian",
""
],
[
"Funke",
"Henning",
""
],
[
"Rabl",
"Tilmann",
""
],
[
"Markl",
"Volker",
""
]
] |
Processor manufacturers build increasingly specialized processors to mitigate the effects of the power wall to deliver improved performance. Currently, database engines are manually optimized for each processor: A costly and error prone process. In this paper, we propose concepts to enable the database engine to perform per-processor optimization automatically. Our core idea is to create variants of generated code and to learn a fast variant for each processor. We create variants by modifying parallelization strategies, specializing data structures, and applying different code transformations. Our experimental results show that the performance of variants may diverge up to two orders of magnitude. Therefore, we need to generate custom code for each processor to achieve peak performance. We show that our approach finds a fast custom variant for multi-core CPUs, GPUs, and MICs.
|
1604.08552
|
Sameh Sorour
|
Rabe Arshad, Hesham ElSawy, Sameh Sorour, Tareq Y. Al-Naffouri,
Mohamed-Slim Alouini
|
Handover Management in Dense Cellular Networks: A Stochastic Geometry
Approach
|
7 pages, 7 figures, ICC 2016
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cellular operators are continuously densifying their networks to cope with
the ever-increasing capacity demand. Furthermore, an extreme densification
phase for cellular networks is foreseen to fulfill the ambitious fifth
generation (5G) performance requirements. Network densification improves
spectrum utilization and network capacity by shrinking base stations' (BSs)
footprints and reusing the same spectrum more frequently over the spatial
domain. However, network densification also increases the handover (HO) rate,
which may diminish the capacity gains for mobile users due to HO delays. In
highly dense 5G cellular networks, HO delays may neutralize or even negate the
gains offered by network densification. In this paper, we present an analytical
paradigm, based on stochastic geometry, to quantify the effect of HO delay on
the average user rate in cellular networks. To this end, we propose a flexible
handover scheme to reduce HO delay in case of highly dense cellular networks.
This scheme allows skipping the HO procedure with some BSs along users'
trajectories. The performance evaluation and testing of this scheme for only
single HO skipping shows considerable gains in many practical scenarios.
|
[
{
"created": "Thu, 28 Apr 2016 18:44:12 GMT",
"version": "v1"
}
] |
2016-04-29
|
[
[
"Arshad",
"Rabe",
""
],
[
"ElSawy",
"Hesham",
""
],
[
"Sorour",
"Sameh",
""
],
[
"Al-Naffouri",
"Tareq Y.",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] |
Cellular operators are continuously densifying their networks to cope with the ever-increasing capacity demand. Furthermore, an extreme densification phase for cellular networks is foreseen to fulfill the ambitious fifth generation (5G) performance requirements. Network densification improves spectrum utilization and network capacity by shrinking base stations' (BSs) footprints and reusing the same spectrum more frequently over the spatial domain. However, network densification also increases the handover (HO) rate, which may diminish the capacity gains for mobile users due to HO delays. In highly dense 5G cellular networks, HO delays may neutralize or even negate the gains offered by network densification. In this paper, we present an analytical paradigm, based on stochastic geometry, to quantify the effect of HO delay on the average user rate in cellular networks. To this end, we propose a flexible handover scheme to reduce HO delay in case of highly dense cellular networks. This scheme allows skipping the HO procedure with some BSs along users' trajectories. The performance evaluation and testing of this scheme for only single HO skipping shows considerable gains in many practical scenarios.
|
1903.08314
|
Shigeru Furuichi Dr.
|
Shigeru Furuichi and Nicu\c{s}or Minculete
|
Inequalities related to some types of entropies and divergences
|
21 pages
| null |
10.1016/j.physa.2019.121907
| null |
cs.IT math.CA math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of this paper is to discuss new results concerning some kinds of
parametric extended entropies and divergences. As a result of our study of the
mathematical properties of entropy and divergence, we give new bounds for the
Tsallis quasilinear entropy and divergence by applying the Hermite-Hadamard
inequality. We also give bounds for biparametrical extended entropies and
divergences which have been given in \cite{7}. In addition, we study
$(r,q)$-quasilinear entropies and divergences as alternative biparametrical
extended entropy and divergence, and then we give bounds for them. Finally we
obtain inequalities for an extended Lin's divergence and some characterizations
of Fermi-Dirac entropy and Bose-Einstein entropy.
|
[
{
"created": "Wed, 20 Mar 2019 02:03:40 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Jul 2019 02:38:55 GMT",
"version": "v2"
}
] |
2019-07-24
|
[
[
"Furuichi",
"Shigeru",
""
],
[
"Minculete",
"Nicuşor",
""
]
] |
The aim of this paper is to discuss new results concerning some kinds of parametric extended entropies and divergences. As a result of our study of the mathematical properties of entropy and divergence, we give new bounds for the Tsallis quasilinear entropy and divergence by applying the Hermite-Hadamard inequality. We also give bounds for biparametrical extended entropies and divergences which have been given in \cite{7}. In addition, we study $(r,q)$-quasilinear entropies and divergences as alternative biparametrical extended entropy and divergence, and then we give bounds for them. Finally we obtain inequalities for an extended Lin's divergence and some characterizations of Fermi-Dirac entropy and Bose-Einstein entropy.
|
2012.01227
|
Taehyeong Kim
|
Taehyeong Kim, Injune Hwang, Hyundo Lee, Hyunseo Kim, Won-Seok Choi,
Joseph J. Lim, Byoung-Tak Zhang
|
Message Passing Adaptive Resonance Theory for Online Active
Semi-supervised Learning
|
Accepted to ICML 2021
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active learning is widely used to reduce labeling effort and training time by
repeatedly querying only the most beneficial samples from unlabeled data. In
real-world problems where data cannot be stored indefinitely due to limited
storage or privacy issues, the query selection and the model update should be
performed as soon as a new data sample is observed. Various online active
learning methods have been studied to deal with these challenges; however,
there are difficulties in selecting representative query samples and updating
the model efficiently without forgetting. In this study, we propose Message
Passing Adaptive Resonance Theory (MPART) that learns the distribution and
topology of input data online. Through message passing on the topological
graph, MPART actively queries informative and representative samples, and
continuously improves the classification performance using both labeled and
unlabeled data. We evaluate our model in stream-based selective sampling
scenarios with comparable query selection strategies, showing that MPART
significantly outperforms competitive models.
|
[
{
"created": "Wed, 2 Dec 2020 14:14:42 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 10:04:51 GMT",
"version": "v2"
},
{
"created": "Sat, 10 Jul 2021 05:58:30 GMT",
"version": "v3"
}
] |
2021-07-13
|
[
[
"Kim",
"Taehyeong",
""
],
[
"Hwang",
"Injune",
""
],
[
"Lee",
"Hyundo",
""
],
[
"Kim",
"Hyunseo",
""
],
[
"Choi",
"Won-Seok",
""
],
[
"Lim",
"Joseph J.",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] |
Active learning is widely used to reduce labeling effort and training time by repeatedly querying only the most beneficial samples from unlabeled data. In real-world problems where data cannot be stored indefinitely due to limited storage or privacy issues, the query selection and the model update should be performed as soon as a new data sample is observed. Various online active learning methods have been studied to deal with these challenges; however, there are difficulties in selecting representative query samples and updating the model efficiently without forgetting. In this study, we propose Message Passing Adaptive Resonance Theory (MPART) that learns the distribution and topology of input data online. Through message passing on the topological graph, MPART actively queries informative and representative samples, and continuously improves the classification performance using both labeled and unlabeled data. We evaluate our model in stream-based selective sampling scenarios with comparable query selection strategies, showing that MPART significantly outperforms competitive models.
|
cs/0103008
|
Ke Xu
|
Shilong Ma, Yuefei Sui, Ke Xu
|
The Limits of Horn Logic Programs
|
11 pages, added new results. Welcome any comments to
kexu@nlsde.buaa.edu.cn
|
In P. J. Stuckey (Ed.): Proc. of 18th ICLP (short paper), LNCS
2401, p. 467, Denmark, 2002.
| null | null |
cs.LO cs.PL
| null |
Given a sequence $\{\Pi_n\}$ of Horn logic programs, the limit $\Pi$ of
$\{\Pi_n\}$ is the set of the clauses such that every clause in $\Pi$ belongs
to almost every $\Pi_n$ and every clause in infinitely many $\Pi_n$'s belongs
to $\Pi$ also. The limit program $\Pi$ is still Horn but may be infinite. In
this paper, we consider if the least Herbrand model of the limit of a given
Horn logic program sequence $\{\Pi_n\}$ equals the limit of the least Herbrand
models of each logic program $\Pi_n$. It is proved that this property is not
true in general but holds if Horn logic programs satisfy an assumption which
can be syntactically checked and be satisfied by a class of Horn logic
programs. Thus, under this assumption we can approach the least Herbrand model
of the limit $\Pi$ by the sequence of the least Herbrand models of each finite
program $\Pi_n$. We also prove that if a finite Horn logic program satisfies
this assumption, then the least Herbrand model of this program is recursive.
Finally, by use of the concept of stability from dynamical systems, we prove
that this assumption is exactly a sufficient condition to guarantee the
stability of fixed points for Horn logic programs.
|
[
{
"created": "Thu, 8 Mar 2001 07:42:48 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Mar 2001 03:36:48 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Feb 2002 09:25:45 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Ma",
"Shilong",
""
],
[
"Sui",
"Yuefei",
""
],
[
"Xu",
"Ke",
""
]
] |
Given a sequence $\{\Pi_n\}$ of Horn logic programs, the limit $\Pi$ of $\{\Pi_n\}$ is the set of the clauses such that every clause in $\Pi$ belongs to almost every $\Pi_n$ and every clause in infinitely many $\Pi_n$'s belongs to $\Pi$ also. The limit program $\Pi$ is still Horn but may be infinite. In this paper, we consider if the least Herbrand model of the limit of a given Horn logic program sequence $\{\Pi_n\}$ equals the limit of the least Herbrand models of each logic program $\Pi_n$. It is proved that this property is not true in general but holds if Horn logic programs satisfy an assumption which can be syntactically checked and be satisfied by a class of Horn logic programs. Thus, under this assumption we can approach the least Herbrand model of the limit $\Pi$ by the sequence of the least Herbrand models of each finite program $\Pi_n$. We also prove that if a finite Horn logic program satisfies this assumption, then the least Herbrand model of this program is recursive. Finally, by use of the concept of stability from dynamical systems, we prove that this assumption is exactly a sufficient condition to guarantee the stability of fixed points for Horn logic programs.
|
2109.00632
|
Sriram Baireddy
|
Changye Yang, Sriram Baireddy, Enyu Cai, Melba Crawford, Edward J.
Delp
|
Field-Based Plot Extraction Using UAV RGB Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicles (UAVs) have become popular for use in plant
phenotyping of field-based crops, such as maize and sorghum, due to their
ability to acquire high-resolution data over field trials. Field experiments,
which may comprise thousands of plants, are planted according to experimental
designs to evaluate varieties or management practices. For many types of
phenotyping analysis, we examine smaller groups of plants known as "plots." In
this paper, we propose a new plot extraction method that will segment a UAV
image into plots. We will demonstrate that our method achieves higher plot
extraction accuracy than existing approaches.
|
[
{
"created": "Wed, 1 Sep 2021 22:04:59 GMT",
"version": "v1"
}
] |
2021-09-03
|
[
[
"Yang",
"Changye",
""
],
[
"Baireddy",
"Sriram",
""
],
[
"Cai",
"Enyu",
""
],
[
"Crawford",
"Melba",
""
],
[
"Delp",
"Edward J.",
""
]
] |
Unmanned Aerial Vehicles (UAVs) have become popular for use in plant phenotyping of field based crops, such as maize and sorghum, due to their ability to acquire high resolution data over field trials. Field experiments, which may comprise thousands of plants, are planted according to experimental designs to evaluate varieties or management practices. For many types of phenotyping analysis, we examine smaller groups of plants known as "plots." In this paper, we propose a new plot extraction method that will segment a UAV image into plots. We will demonstrate that our method achieves higher plot extraction accuracy than existing approaches.
|
2309.16632
|
Andrei Graur
|
Andrei Graur, Haotian Jiang, Aaron Sidford
|
Sparse Submodular Function Minimization
|
Accepted to FOCS 2023
| null | null | null |
cs.DS math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study the problem of minimizing a submodular function $f :
2^V \rightarrow \mathbb{R}$ that is guaranteed to have a $k$-sparse minimizer.
We give a deterministic algorithm that computes an additive
$\epsilon$-approximate minimizer of such $f$ in $\widetilde{O}(\mathsf{poly}(k)
\log(|f|/\epsilon))$ parallel depth using a polynomial number of queries to an
evaluation oracle of $f$, where $|f| = \max_{S \subseteq V} |f(S)|$. Further,
we give a randomized algorithm that computes an exact minimizer of $f$ with
high probability using $\widetilde{O}(|V| \cdot \mathsf{poly}(k))$ queries and
polynomial time. When $k = \widetilde{O}(1)$, our algorithms use either
nearly-constant parallel depth or a nearly-linear number of evaluation oracle
queries. All previous algorithms for this problem either use $\Omega(|V|)$
parallel depth or $\Omega(|V|^2)$ queries.
In contrast to state-of-the-art weakly-polynomial and strongly-polynomial
time algorithms for SFM, our algorithms use first-order optimization methods,
e.g., mirror descent and follow the regularized leader. We introduce what we
call {\em sparse dual certificates}, which encode information on the structure
of sparse minimizers, and both our parallel and sequential algorithms provide
new algorithmic tools for allowing first-order optimization methods to
efficiently compute them. Correspondingly, our algorithm does not invoke fast
matrix multiplication or general linear system solvers and in this sense is
more combinatorial than previous state-of-the-art methods.
|
[
{
"created": "Thu, 28 Sep 2023 17:38:13 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jul 2024 11:06:01 GMT",
"version": "v2"
}
] |
2024-07-09
|
[
[
"Graur",
"Andrei",
""
],
[
"Jiang",
"Haotian",
""
],
[
"Sidford",
"Aaron",
""
]
] |
In this paper we study the problem of minimizing a submodular function $f : 2^V \rightarrow \mathbb{R}$ that is guaranteed to have a $k$-sparse minimizer. We give a deterministic algorithm that computes an additive $\epsilon$-approximate minimizer of such $f$ in $\widetilde{O}(\mathsf{poly}(k) \log(|f|/\epsilon))$ parallel depth using a polynomial number of queries to an evaluation oracle of $f$, where $|f| = \max_{S \subseteq V} |f(S)|$. Further, we give a randomized algorithm that computes an exact minimizer of $f$ with high probability using $\widetilde{O}(|V| \cdot \mathsf{poly}(k))$ queries and polynomial time. When $k = \widetilde{O}(1)$, our algorithms use either nearly-constant parallel depth or a nearly-linear number of evaluation oracle queries. All previous algorithms for this problem either use $\Omega(|V|)$ parallel depth or $\Omega(|V|^2)$ queries. In contrast to state-of-the-art weakly-polynomial and strongly-polynomial time algorithms for SFM, our algorithms use first-order optimization methods, e.g., mirror descent and follow the regularized leader. We introduce what we call {\em sparse dual certificates}, which encode information on the structure of sparse minimizers, and both our parallel and sequential algorithms provide new algorithmic tools for allowing first-order optimization methods to efficiently compute them. Correspondingly, our algorithm does not invoke fast matrix multiplication or general linear system solvers and in this sense is more combinatorial than previous state-of-the-art methods.
|
2407.12254
|
Ching Chang
|
Ting-Yun Ou, Ching Chang, Wen-Chih Peng
|
COKE: Causal Discovery with Chronological Order and Expert Knowledge in
High Proportion of Missing Manufacturing Data
|
This paper has been accepted by the ACM International Conference on
Information and Knowledge Management (CIKM) 2024
| null | null | null |
cs.LG stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding causal relationships between machines is crucial for fault
diagnosis and optimization in manufacturing processes. Real-world datasets
frequently exhibit up to 90% missing data and high dimensionality from hundreds
of sensors. These datasets also include domain-specific expert knowledge and
chronological order information, reflecting the recording order across
different machines, which is pivotal for discerning causal relationships within
the manufacturing data. However, previous methods for handling missing data in
scenarios akin to real-world conditions have not been able to effectively
utilize expert knowledge. Conversely, prior methods that can incorporate expert
knowledge struggle with datasets that exhibit missing values. Therefore, we
propose COKE to construct causal graphs in manufacturing datasets by leveraging
expert knowledge and chronological order among sensors without imputing missing
data. Utilizing the characteristics of the recipe, we maximize the use of
samples with missing values, derive embeddings from intersections with an
initial graph that incorporates expert knowledge and chronological order, and
create a sensor ordering graph. The graph-generating process has been optimized
by an actor-critic architecture to obtain a final graph that has a maximum
reward. Experimental evaluations in diverse settings of sensor quantities and
missing proportions demonstrate that our approach compared with the benchmark
methods shows an average improvement of 39.9% in the F1-score. Moreover, the
F1-score improvement can reach 62.6% when considering the configuration similar
to real-world datasets, and 85.0% in real-world semiconductor datasets. The
source code is available at https://github.com/OuTingYun/COKE.
|
[
{
"created": "Wed, 17 Jul 2024 01:51:27 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2024 01:21:57 GMT",
"version": "v2"
}
] |
2024-08-02
|
[
[
"Ou",
"Ting-Yun",
""
],
[
"Chang",
"Ching",
""
],
[
"Peng",
"Wen-Chih",
""
]
] |
Understanding causal relationships between machines is crucial for fault diagnosis and optimization in manufacturing processes. Real-world datasets frequently exhibit up to 90% missing data and high dimensionality from hundreds of sensors. These datasets also include domain-specific expert knowledge and chronological order information, reflecting the recording order across different machines, which is pivotal for discerning causal relationships within the manufacturing data. However, previous methods for handling missing data in scenarios akin to real-world conditions have not been able to effectively utilize expert knowledge. Conversely, prior methods that can incorporate expert knowledge struggle with datasets that exhibit missing values. Therefore, we propose COKE to construct causal graphs in manufacturing datasets by leveraging expert knowledge and chronological order among sensors without imputing missing data. Utilizing the characteristics of the recipe, we maximize the use of samples with missing values, derive embeddings from intersections with an initial graph that incorporates expert knowledge and chronological order, and create a sensor ordering graph. The graph-generating process has been optimized by an actor-critic architecture to obtain a final graph that has a maximum reward. Experimental evaluations in diverse settings of sensor quantities and missing proportions demonstrate that our approach compared with the benchmark methods shows an average improvement of 39.9% in the F1-score. Moreover, the F1-score improvement can reach 62.6% when considering the configuration similar to real-world datasets, and 85.0% in real-world semiconductor datasets. The source code is available at https://github.com/OuTingYun/COKE.
|
1311.1163
|
Zuoqiang Shi
|
Thomas Y. Hou, Zuoqiang Shi
|
Sparse Time-Frequency decomposition by dictionary learning
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a time-frequency analysis method to obtain
instantaneous frequencies and the corresponding decomposition by solving an
optimization problem. In this optimization problem, the basis to decompose the
signal is not known. Instead, it is adapted to the signal and is determined as
part of the optimization problem. In this sense, this optimization problem can
be seen as a dictionary learning problem. This dictionary learning problem is
solved by using the Augmented Lagrangian Multiplier method (ALM) iteratively.
We further accelerate the convergence of the ALM method in each iteration by
using the fast wavelet transform. We apply our method to decompose several
signals, including signals with poor scale separation, signals with outliers
and polluted by noise and a real signal. The results show that this method can
give accurate recovery of both the instantaneous frequencies and the intrinsic
mode functions.
|
[
{
"created": "Thu, 5 Sep 2013 12:55:41 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Oct 2014 08:47:30 GMT",
"version": "v2"
}
] |
2014-10-14
|
[
[
"Hou",
"Thomas Y.",
""
],
[
"Shi",
"Zuoqiang",
""
]
] |
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis to decompose the signal is not known. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary learning problem. This dictionary learning problem is solved by using the Augmented Lagrangian Multiplier method (ALM) iteratively. We further accelerate the convergence of the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions.
|
1711.09728
|
Nazmus Saquib
|
Md. Naimul Hoque, Rawshan E Fatima, Manash Kumar Mandal, Nazmus Saquib
|
Evaluating gender portrayal in Bangladeshi TV
|
Presented at NIPS 2017 Workshop on Machine Learning for the
Developing World. Corresponding author: Nazmus Saquib
| null | null | null |
cs.CY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer Vision and machine learning methods were previously used to reveal
screen presence of genders in TV and movies. In this work, using head pose,
gender detection, and skin color estimation techniques, we demonstrate that the
gender disparity in TV in a South Asian country such as Bangladesh exhibits
unique characteristics and is sometimes counter-intuitive to popular
perception. We demonstrate a noticeable discrepancy in female screen presence
in Bangladeshi TV advertisements and political talk shows. Further, contrary to
popular hypotheses, we demonstrate that lighter-toned skin colors are less
prevalent than darker complexions, and additionally, quantifiable body language
markers do not provide conclusive insights about gender dynamics. Overall,
these gender portrayal parameters reveal the different layers of onscreen
gender politics and can help direct incentives to address existing disparities
in a nuanced and targeted manner.
|
[
{
"created": "Tue, 14 Nov 2017 17:54:13 GMT",
"version": "v1"
}
] |
2017-11-29
|
[
[
"Hoque",
"Md. Naimul",
""
],
[
"Fatima",
"Rawshan E",
""
],
[
"Mandal",
"Manash Kumar",
""
],
[
"Saquib",
"Nazmus",
""
]
] |
Computer Vision and machine learning methods were previously used to reveal screen presence of genders in TV and movies. In this work, using head pose, gender detection, and skin color estimation techniques, we demonstrate that the gender disparity in TV in a South Asian country such as Bangladesh exhibits unique characteristics and is sometimes counter-intuitive to popular perception. We demonstrate a noticeable discrepancy in female screen presence in Bangladeshi TV advertisements and political talk shows. Further, contrary to popular hypotheses, we demonstrate that lighter-toned skin colors are less prevalent than darker complexions, and additionally, quantifiable body language markers do not provide conclusive insights about gender dynamics. Overall, these gender portrayal parameters reveal the different layers of onscreen gender politics and can help direct incentives to address existing disparities in a nuanced and targeted manner.
|
2207.07629
|
Zhiruo Zhou
|
Zhiruo Zhou, Hongyu Fu, Suya You, C.-C. Jay Kuo
|
GUSOT: Green and Unsupervised Single Object Tracking for Long Video
Sequences
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised and unsupervised deep trackers that rely on deep learning
technologies have become popular in recent years. Yet, they incur high
computational complexity and a high memory cost. A green unsupervised single-object tracker,
called GUSOT, that aims at object tracking for long videos under a
resource-constrained environment is proposed in this work. Built upon a
baseline tracker, UHP-SOT++, which works well for short-term tracking, GUSOT
contains two additional new modules: 1) lost object recovery, and 2)
color-saliency-based shape proposal. They help resolve the tracking loss
problem and offer a more flexible object proposal, respectively. Thus, they
enable GUSOT to achieve higher tracking accuracy in the long run. We conduct
experiments on the large-scale dataset LaSOT with long video sequences, and
show that GUSOT offers a lightweight high-performance tracking solution that
finds applications in mobile and edge computing platforms.
|
[
{
"created": "Fri, 15 Jul 2022 17:42:49 GMT",
"version": "v1"
}
] |
2022-07-18
|
[
[
"Zhou",
"Zhiruo",
""
],
[
"Fu",
"Hongyu",
""
],
[
"You",
"Suya",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
Supervised and unsupervised deep trackers that rely on deep learning technologies have become popular in recent years. Yet, they incur high computational complexity and a high memory cost. A green unsupervised single-object tracker, called GUSOT, that aims at object tracking for long videos under a resource-constrained environment is proposed in this work. Built upon a baseline tracker, UHP-SOT++, which works well for short-term tracking, GUSOT contains two additional new modules: 1) lost object recovery, and 2) color-saliency-based shape proposal. They help resolve the tracking loss problem and offer a more flexible object proposal, respectively. Thus, they enable GUSOT to achieve higher tracking accuracy in the long run. We conduct experiments on the large-scale dataset LaSOT with long video sequences, and show that GUSOT offers a lightweight high-performance tracking solution that finds applications in mobile and edge computing platforms.
|
2404.14739
|
Utkarsh Gupta
|
Utkarsh Gupta, Emmanouil Nikolakakis, Moritz Zaiss, Razvan Marinescu
|
BMapEst: Estimation of Brain Tissue Probability Maps using a
Differentiable MRI Simulator
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconstructing digital brain phantoms in the form of voxel-based,
multi-channeled tissue probability maps for individual subjects is essential
for capturing brain anatomical variability, understanding neurological
diseases, as well as for testing image processing methods. We demonstrate the
first framework that estimates brain tissue probability maps (Grey Matter - GM,
White Matter - WM, and Cerebrospinal fluid - CSF) with the help of a
Physics-based differentiable MRI simulator that models the magnetization signal
at each voxel in the volume. Given an observed $T_1$/$T_2$-weighted MRI scan,
the corresponding clinical MRI sequence, and the MRI differentiable simulator,
we estimate the simulator's input probability maps by back-propagating the L2
loss between the simulator's output and the $T_1$/$T_2$-weighted scan. This
approach has the significant advantage of not relying on any training data and
instead uses the strong inductive bias of the MRI simulator. We tested the
model on 20 scans from the BrainWeb database and demonstrated a highly accurate
reconstruction of GM, WM, and CSF. Our source code is available online:
https://github.com/BioMedAI-UCSC/BMapEst.
|
[
{
"created": "Tue, 23 Apr 2024 04:45:23 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2024 04:00:09 GMT",
"version": "v2"
}
] |
2024-07-02
|
[
[
"Gupta",
"Utkarsh",
""
],
[
"Nikolakakis",
"Emmanouil",
""
],
[
"Zaiss",
"Moritz",
""
],
[
"Marinescu",
"Razvan",
""
]
] |
Reconstructing digital brain phantoms in the form of voxel-based, multi-channeled tissue probability maps for individual subjects is essential for capturing brain anatomical variability, understanding neurological diseases, as well as for testing image processing methods. We demonstrate the first framework that estimates brain tissue probability maps (Grey Matter - GM, White Matter - WM, and Cerebrospinal fluid - CSF) with the help of a Physics-based differentiable MRI simulator that models the magnetization signal at each voxel in the volume. Given an observed $T_1$/$T_2$-weighted MRI scan, the corresponding clinical MRI sequence, and the MRI differentiable simulator, we estimate the simulator's input probability maps by back-propagating the L2 loss between the simulator's output and the $T_1$/$T_2$-weighted scan. This approach has the significant advantage of not relying on any training data and instead uses the strong inductive bias of the MRI simulator. We tested the model on 20 scans from the BrainWeb database and demonstrated a highly accurate reconstruction of GM, WM, and CSF. Our source code is available online: https://github.com/BioMedAI-UCSC/BMapEst.
|
2011.10659
|
Mathilde Fekom
|
Mathilde Fekom and Argyris Kalogeratos
|
Efficient stream-based Max-Min diversification with minimal failure rate
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The stream-based Max-Min diversification problem concerns the task of
selecting a limited number of diverse instances from a data stream. The nature
of the problem demands immediate and irrevocable decisions. The set-wise
diversity to be maximized is the minimum distance among any pair of the
selected instances. Standard algorithmic approaches for sequential selection
disregard the possibility of selection failures, which is the situation where
the last instances of the stream are picked by default to prevent having an
incomplete selection. This defect can be catastrophic for the Max-Min
diversification objective. In this paper we present the Failure Rate
Minimization (FRM) algorithm that allows the selection of a set of disparate
instances while reducing significantly the probability of having failures. This
is achieved by means of both analytical and empirical techniques. FRM is put in
comparison with relevant algorithms from the literature through simulations on
real datasets, where we demonstrate its efficiency and low time complexity.
|
[
{
"created": "Tue, 17 Nov 2020 14:20:16 GMT",
"version": "v1"
}
] |
2020-11-24
|
[
[
"Fekom",
"Mathilde",
""
],
[
"Kalogeratos",
"Argyris",
""
]
] |
The stream-based Max-Min diversification problem concerns the task of selecting a limited number of diverse instances from a data stream. The nature of the problem demands immediate and irrevocable decisions. The set-wise diversity to be maximized is the minimum distance among any pair of the selected instances. Standard algorithmic approaches for sequential selection disregard the possibility of selection failures, which is the situation where the last instances of the stream are picked by default to prevent having an incomplete selection. This defect can be catastrophic for the Max-Min diversification objective. In this paper we present the Failure Rate Minimization (FRM) algorithm that allows the selection of a set of disparate instances while reducing significantly the probability of having failures. This is achieved by means of both analytical and empirical techniques. FRM is put in comparison with relevant algorithms from the literature through simulations on real datasets, where we demonstrate its efficiency and low time complexity.
|
2209.10477
|
Yan Liu
|
Yan Liu, Maria Laricheva, Chiyu Zhang, Patrick Boutet, Guanyu Chen,
Terence Tracey, Giuseppe Carenini, Richard Young
|
Transition to Adulthood for Young People with Intellectual or
Developmental Disabilities: Emotion Detection and Topic Modeling
|
Conference proceedings of 2022 SBP-BRiMS
|
In: Thomson, R., Dancy, C., Pyke, A. (eds) SBP-BRiMS 2022. Lecture
Notes in Computer Science, vol 13558. Springer, Cham (2022)
|
10.1007/978-3-031-17114-7_21
| null |
cs.CL stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transition to Adulthood is an essential life stage for many families. Prior
research has shown that young people with intellectual or developmental
disabilities (IDD) have more challenges than their peers. This study explores
how to use natural language processing (NLP) methods, especially unsupervised
machine learning, to assist psychologists in analyzing emotions and
sentiments and in using topic modeling to identify common issues and challenges
that young people with IDD and their families have. Additionally, the results
were compared to those obtained from young people without IDD who were in
transition to adulthood. The findings showed that NLP methods can be very
useful for psychologists to analyze emotions, conduct cross-case analysis, and
summarize key topics from conversational data. Our Python code is available at
https://github.com/mlaricheva/emotion_topic_modeling.
|
[
{
"created": "Wed, 21 Sep 2022 16:23:45 GMT",
"version": "v1"
}
] |
2022-09-22
|
[
[
"Liu",
"Yan",
""
],
[
"Laricheva",
"Maria",
""
],
[
"Zhang",
"Chiyu",
""
],
[
"Boutet",
"Patrick",
""
],
[
"Chen",
"Guanyu",
""
],
[
"Tracey",
"Terence",
""
],
[
"Carenini",
"Giuseppe",
""
],
[
"Young",
"Richard",
""
]
] |
Transition to Adulthood is an essential life stage for many families. Prior research has shown that young people with intellectual or developmental disabilities (IDD) have more challenges than their peers. This study explores how to use natural language processing (NLP) methods, especially unsupervised machine learning, to assist psychologists in analyzing emotions and sentiments and in using topic modeling to identify common issues and challenges that young people with IDD and their families have. Additionally, the results were compared to those obtained from young people without IDD who were in transition to adulthood. The findings showed that NLP methods can be very useful for psychologists to analyze emotions, conduct cross-case analysis, and summarize key topics from conversational data. Our Python code is available at https://github.com/mlaricheva/emotion_topic_modeling.
|
2303.05740
|
Furan Xie
|
Bing Liu, Furan Xie and Li Chai
|
A Novel Bilateral Energy Trading Mechanism for Electricity Markets with
Numerous Prosumers
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of distributed energy resources, an increasing
number of residential and commercial users have switched from being pure
electricity consumers to prosumers that can both consume and produce energy.
manage these emerging prosumers, a peer-to-peer electricity market has been
explored and extensively studied. In such an electricity market, each prosumer
trades energy directly with other prosumers, posing a serious challenge to the
scalability of the market. Therefore, a bilateral energy trading mechanism with
good scalability is proposed for electricity markets with numerous prosumers in
this paper. First, the multi-bilateral economic dispatch problem that maximizes
the social welfare is formulated, taking into account product differentiation
and network constraints. Then, an energy trading mechanism is devised to
improve the scalability from two aspects: (i) an accelerated distributed
clearing algorithm with less exchanged information and a faster convergence
rate; and (ii) a novel selection strategy to reduce the amount of computation and
communication per prosumer. Finally, the convergence proof of the proposed
accelerated algorithm is given, and the proposed selection strategy is
illustrated through a Monte Carlo simulation experiment.
|
[
{
"created": "Fri, 10 Mar 2023 06:49:53 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2023 12:39:52 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Sep 2023 03:36:37 GMT",
"version": "v3"
}
] |
2023-09-15
|
[
[
"Liu",
"Bing",
""
],
[
"Xie",
"Furan",
""
],
[
"Chai",
"Li",
""
]
] |
With the rapid development of distributed energy resources, an increasing number of residential and commercial users have switched from being pure electricity consumers to prosumers that can both consume and produce energy. To properly manage these emerging prosumers, a peer-to-peer electricity market has been explored and extensively studied. In such an electricity market, each prosumer trades energy directly with other prosumers, posing a serious challenge to the scalability of the market. Therefore, a bilateral energy trading mechanism with good scalability is proposed for electricity markets with numerous prosumers in this paper. First, the multi-bilateral economic dispatch problem that maximizes the social welfare is formulated, taking into account product differentiation and network constraints. Then, an energy trading mechanism is devised to improve the scalability from two aspects: (i) an accelerated distributed clearing algorithm with less exchanged information and a faster convergence rate; and (ii) a novel selection strategy to reduce the amount of computation and communication per prosumer. Finally, the convergence proof of the proposed accelerated algorithm is given, and the proposed selection strategy is illustrated through a Monte Carlo simulation experiment.
|
2002.03586
|
Thomas Sturm
|
Hamid Rahkooy and Thomas Sturm
|
First-Order Tests for Toricity
| null |
Proc. CASC 2020, LNCS 12291, pp.510-527, Springer 2020
|
10.1007/978-3-030-60026-6_30
| null |
cs.SC q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by problems arising with the symbolic analysis of steady state
ideals in Chemical Reaction Network Theory, we consider the problem of testing
whether the points in a complex or real variety with non-zero coordinates form
a coset of a multiplicative group. That property corresponds to Shifted
Toricity, a recent generalization of toricity of the corresponding polynomial
ideal. The key idea is to take a geometric view on varieties rather than an
algebraic view on ideals. Recently, corresponding coset tests have been
proposed for complex and for real varieties. The former combine numerous
techniques from commutative algorithmic algebra with Gr\"obner bases as the
central algorithmic tool. The latter are based on interpreted first-order logic
in real closed fields with real quantifier elimination techniques on the
algorithmic side. Here we take a new logic approach to both theories, complex
and real, and beyond. Besides alternative algorithms, our approach provides a
unified view on theories of fields and helps to understand the relevance and
interconnection of the rich existing literature in the area, which has been
focusing on complex numbers, while from a scientific point of view the
(positive) real numbers are clearly the relevant domain in chemical reaction
network theory. We apply prototypical implementations of our new approach to a
set of 129 models from the BioModels repository.
|
[
{
"created": "Mon, 10 Feb 2020 07:56:07 GMT",
"version": "v1"
}
] |
2020-10-22
|
[
[
"Rahkooy",
"Hamid",
""
],
[
"Sturm",
"Thomas",
""
]
] |
Motivated by problems arising with the symbolic analysis of steady state ideals in Chemical Reaction Network Theory, we consider the problem of testing whether the points in a complex or real variety with non-zero coordinates form a coset of a multiplicative group. That property corresponds to Shifted Toricity, a recent generalization of toricity of the corresponding polynomial ideal. The key idea is to take a geometric view on varieties rather than an algebraic view on ideals. Recently, corresponding coset tests have been proposed for complex and for real varieties. The former combine numerous techniques from commutative algorithmic algebra with Gr\"obner bases as the central algorithmic tool. The latter are based on interpreted first-order logic in real closed fields with real quantifier elimination techniques on the algorithmic side. Here we take a new logic approach to both theories, complex and real, and beyond. Besides alternative algorithms, our approach provides a unified view on theories of fields and helps to understand the relevance and interconnection of the rich existing literature in the area, which has been focusing on complex numbers, while from a scientific point of view the (positive) real numbers are clearly the relevant domain in chemical reaction network theory. We apply prototypical implementations of our new approach to a set of 129 models from the BioModels repository.
|
2006.01780
|
Rahat Yeasin Emon
|
Rahat Yeasin Emon
|
A Novel Nudity Detection Algorithm for Web and Mobile Application
Development
|
5 pages
| null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
In current web and mobile application development, runtime nude image content
detection is very important. This paper presents a runtime nudity detection
method for web and mobile application development. We use two parameters to
detect the nude content of an image: one is the number of skin pixels, the
other is the face region. A skin color model based on the RGB and HSV color
spaces is used to detect skin pixels in an image. The Google Vision API is
used to detect the face region. Based on the percentage of skin regions and
face regions, an image is identified as nude or not. The success of this
algorithm lies in detecting skin regions and face regions. The skin detection
algorithm can detect skin 95% accurately with a low false-positive rate, and
the Google Vision API for web and mobile applications can detect faces 99%
accurately in less than 1 second. From the experimental analysis, we have
seen that the proposed algorithm can detect the nudity of an image with 95%
accuracy.
|
[
{
"created": "Tue, 2 Jun 2020 17:00:47 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Jun 2020 15:29:09 GMT",
"version": "v2"
}
] |
2020-06-30
|
[
[
"Emon",
"Rahat Yeasin",
""
]
] |
In current web and mobile application development, runtime nude image content detection is very important. This paper presents a runtime nudity detection method for web and mobile application development. We use two parameters to detect the nude content of an image: one is the number of skin pixels, the other is the face region. A skin color model based on the RGB and HSV color spaces is used to detect skin pixels in an image. The Google Vision API is used to detect the face region. Based on the percentage of skin regions and face regions, an image is identified as nude or not. The success of this algorithm lies in detecting skin regions and face regions. The skin detection algorithm can detect skin 95% accurately with a low false-positive rate, and the Google Vision API for web and mobile applications can detect faces 99% accurately in less than 1 second. From the experimental analysis, we have seen that the proposed algorithm can detect the nudity of an image with 95% accuracy.
|
2008.08675
|
Anders Johan Andreassen
|
Anders Andreassen, Ethan Dyer
|
Asymptotics of Wide Convolutional Neural Networks
|
23 pages, 12 figures
| null | null | null |
cs.LG hep-th stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wide neural networks have proven to be a rich class of architectures for both
theory and practice. Motivated by the observation that finite width
convolutional networks appear to outperform infinite width networks, we study
scaling laws for wide CNNs and networks with skip connections. Following the
approach of (Dyer & Gur-Ari, 2019), we present a simple diagrammatic recipe to
derive the asymptotic width dependence for many quantities of interest. These
scaling relationships provide a solvable description for the training dynamics
of wide convolutional networks. We test these relations across a broad range of
architectures. In particular, we find that the difference in performance
between finite and infinite width models vanishes at a definite rate with
respect to model width. Nonetheless, this relation is consistent with finite
width models generalizing either better or worse than their infinite width
counterparts, and we provide examples where the relative performance depends on
the optimization details.
|
[
{
"created": "Wed, 19 Aug 2020 21:22:19 GMT",
"version": "v1"
}
] |
2020-08-21
|
[
[
"Andreassen",
"Anders",
""
],
[
"Dyer",
"Ethan",
""
]
] |
Wide neural networks have proven to be a rich class of architectures for both theory and practice. Motivated by the observation that finite width convolutional networks appear to outperform infinite width networks, we study scaling laws for wide CNNs and networks with skip connections. Following the approach of (Dyer & Gur-Ari, 2019), we present a simple diagrammatic recipe to derive the asymptotic width dependence for many quantities of interest. These scaling relationships provide a solvable description for the training dynamics of wide convolutional networks. We test these relations across a broad range of architectures. In particular, we find that the difference in performance between finite and infinite width models vanishes at a definite rate with respect to model width. Nonetheless, this relation is consistent with finite width models generalizing either better or worse than their infinite width counterparts, and we provide examples where the relative performance depends on the optimization details.
|
1702.03186
|
Matthieu Guillot
|
Matthieu Guillot and Gautier Stauffer
|
The Stochastic Shortest Path Problem : A polyhedral combinatorics
perspective
| null | null | null | null |
cs.DM math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we give a new framework for the stochastic shortest path
problem in finite state and action spaces. Our framework generalizes both the
frameworks proposed by Bertsekas and Tsitsiklis and by Bertsekas and Yu. We
prove that the problem is well-defined and (weakly) polynomial when (i) there
is a way to reach the target state from any initial state and (ii) there is no
transition cycle of negative costs (a generalization of negative cost cycles).
These assumptions generalize the standard assumptions for the deterministic
shortest path problem and our framework encapsulates the latter problem (in
contrast with prior works). In this new setting, we can show that (a) one can
restrict to deterministic and stationary policies, (b) the problem is still
(weakly) polynomial through linear programming, (c) Value Iteration and Policy
Iteration converge, and (d) we can extend Dijkstra's algorithm.
|
[
{
"created": "Fri, 10 Feb 2017 14:36:32 GMT",
"version": "v1"
}
] |
2017-02-13
|
[
[
"Guillot",
"Matthieu",
""
],
[
"Stauffer",
"Gautier",
""
]
] |
In this paper, we give a new framework for the stochastic shortest path problem in finite state and action spaces. Our framework generalizes both the frameworks proposed by Bertsekas and Tsitsiklis and by Bertsekas and Yu. We prove that the problem is well-defined and (weakly) polynomial when (i) there is a way to reach the target state from any initial state and (ii) there is no transition cycle of negative costs (a generalization of negative cost cycles). These assumptions generalize the standard assumptions for the deterministic shortest path problem and our framework encapsulates the latter problem (in contrast with prior works). In this new setting, we can show that (a) one can restrict to deterministic and stationary policies, (b) the problem is still (weakly) polynomial through linear programming, (c) Value Iteration and Policy Iteration converge, and (d) we can extend Dijkstra's algorithm.
|