id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2111.03278 | Eric Neyman | Rafael Frongillo, Eric Neyman, Bo Waggoner | Agreement Implies Accuracy for Substitutable Signals | 31 pages, 1 figure | null | null | null | cs.GT | http://creativecommons.org/licenses/by/4.0/ | Inspired by Aumann's agreement theorem, Scott Aaronson studied the amount of
communication necessary for two Bayesian experts to approximately agree on the
expectation of a random variable. Aaronson showed that, remarkably, the number
of bits does not depend on the amount of information available to each expert.
However, in general the agreed-upon estimate may be inaccurate: far from the
estimate they would settle on if they were to share all of their information.
We show that if the experts' signals are \emph{substitutes} -- meaning the
experts' information has diminishing marginal returns -- then it is the case
that if the experts are close to agreement then they are close to the truth. We
prove this result for a broad class of agreement and accuracy measures that
includes squared distance and KL divergence. Additionally, we show that
although these measures capture fundamentally different kinds of agreement,
Aaronson's agreement result generalizes to them as well.
| [
{
"created": "Fri, 5 Nov 2021 05:47:03 GMT",
"version": "v1"
}
] | 2021-11-08 | [
[
"Frongillo",
"Rafael",
""
],
[
"Neyman",
"Eric",
""
],
[
"Waggoner",
"Bo",
""
]
] | Inspired by Aumann's agreement theorem, Scott Aaronson studied the amount of communication necessary for two Bayesian experts to approximately agree on the expectation of a random variable. Aaronson showed that, remarkably, the number of bits does not depend on the amount of information available to each expert. However, in general the agreed-upon estimate may be inaccurate: far from the estimate they would settle on if they were to share all of their information. We show that if the experts' signals are \emph{substitutes} -- meaning the experts' information has diminishing marginal returns -- then it is the case that if the experts are close to agreement then they are close to the truth. We prove this result for a broad class of agreement and accuracy measures that includes squared distance and KL divergence. Additionally, we show that although these measures capture fundamentally different kinds of agreement, Aaronson's agreement result generalizes to them as well. |
2404.00466 | Heqiang Wang | Heqiang Wang, Jieming Bian, Lei Wang | Computation and Communication Efficient Lightweighting Vertical
Federated Learning | null | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | The exploration of computational and communication efficiency within
Federated Learning (FL) has emerged as a prominent and crucial field of study.
While most existing efforts to enhance these efficiencies have focused on
Horizontal FL, the distinct processes and model structures of Vertical FL
preclude the direct application of Horizontal FL-based techniques. In response,
we introduce the concept of Lightweight Vertical Federated Learning (LVFL),
targeting both computational and communication efficiencies. This approach
involves separate lightweighting strategies for the feature model, to improve
computational efficiency, and for feature embedding, to enhance communication
efficiency. Moreover, we establish a convergence bound for our LVFL algorithm,
which accounts for both communication and computational lightweighting ratios.
Our evaluation of the algorithm on an image classification dataset reveals that
LVFL significantly alleviates computational and communication demands while
preserving robust learning performance. This work effectively addresses the
gaps in communication and computational efficiency within Vertical FL.
| [
{
"created": "Sat, 30 Mar 2024 20:19:28 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Wang",
"Heqiang",
""
],
[
"Bian",
"Jieming",
""
],
[
"Wang",
"Lei",
""
]
] | The exploration of computational and communication efficiency within Federated Learning (FL) has emerged as a prominent and crucial field of study. While most existing efforts to enhance these efficiencies have focused on Horizontal FL, the distinct processes and model structures of Vertical FL preclude the direct application of Horizontal FL-based techniques. In response, we introduce the concept of Lightweight Vertical Federated Learning (LVFL), targeting both computational and communication efficiencies. This approach involves separate lightweighting strategies for the feature model, to improve computational efficiency, and for feature embedding, to enhance communication efficiency. Moreover, we establish a convergence bound for our LVFL algorithm, which accounts for both communication and computational lightweighting ratios. Our evaluation of the algorithm on an image classification dataset reveals that LVFL significantly alleviates computational and communication demands while preserving robust learning performance. This work effectively addresses the gaps in communication and computational efficiency within Vertical FL. |
1106.1910 | Vahid Majid Nezhad | Vahid Majid Nezhad, Habib Motee Gader and Evgueni Efimov | A Novel Hybrid Algorithm for Task Graph Scheduling | null | IJCSI, Vol 8, Issue 2, March 2011, p32-38 | null | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the important problems in multiprocessor systems is Task Graph
Scheduling. Task Graph Scheduling is an NP-Hard problem. Both learning automata
and genetic algorithms are search tools which are used for solving many NP-Hard
problems. In this paper a new hybrid method based on Genetic Algorithm and
Learning Automata is proposed. The proposed algorithm begins with an initial
population of randomly generated chromosomes and after some stages, each
chromosome maps to an automaton. Experimental results show the superiority of
the proposed algorithm over the current approaches.
| [
{
"created": "Thu, 9 Jun 2011 20:26:22 GMT",
"version": "v1"
}
] | 2011-06-13 | [
[
"Nezhad",
"Vahid Majid",
""
],
[
"Gader",
"Habib Motee",
""
],
[
"Efimov",
"Evgueni",
""
]
] | One of the important problems in multiprocessor systems is Task Graph Scheduling. Task Graph Scheduling is an NP-Hard problem. Both learning automata and genetic algorithms are search tools which are used for solving many NP-Hard problems. In this paper a new hybrid method based on Genetic Algorithm and Learning Automata is proposed. The proposed algorithm begins with an initial population of randomly generated chromosomes and after some stages, each chromosome maps to an automaton. Experimental results show the superiority of the proposed algorithm over the current approaches. |
1909.02273 | Chengyi Wang | Chengyi Wang, Shuangzhi Wu, Shujie Liu | Source Dependency-Aware Transformer with Supervised Self-Attention | 6 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the Transformer has achieved state-of-the-art performance on many
machine translation tasks. However, without syntax knowledge explicitly
considered in the encoder, incorrect context information that violates the
syntax structure may be integrated into source hidden states, leading to
erroneous translations. In this paper, we propose a novel method to incorporate
source dependencies into the Transformer. Specifically, we adopt the source
dependency tree and define two matrices to represent the dependency relations.
Based on the matrices, two heads in the multi-head self-attention module are
trained in a supervised manner and two extra cross entropy losses are
introduced into the training objective function. Under this training objective,
the model is trained to learn the source dependency relations directly. Without
requiring pre-parsed input during inference, our model can generate better
translations with the dependency-aware context information. Experiments on
bi-directional Chinese-to-English, English-to-Japanese and English-to-German
translation tasks show that our proposed method can significantly improve the
Transformer baseline.
| [
{
"created": "Thu, 5 Sep 2019 09:17:37 GMT",
"version": "v1"
}
] | 2019-09-06 | [
[
"Wang",
"Chengyi",
""
],
[
"Wu",
"Shuangzhi",
""
],
[
"Liu",
"Shujie",
""
]
] | Recently, the Transformer has achieved state-of-the-art performance on many machine translation tasks. However, without syntax knowledge explicitly considered in the encoder, incorrect context information that violates the syntax structure may be integrated into source hidden states, leading to erroneous translations. In this paper, we propose a novel method to incorporate source dependencies into the Transformer. Specifically, we adopt the source dependency tree and define two matrices to represent the dependency relations. Based on the matrices, two heads in the multi-head self-attention module are trained in a supervised manner and two extra cross entropy losses are introduced into the training objective function. Under this training objective, the model is trained to learn the source dependency relations directly. Without requiring pre-parsed input during inference, our model can generate better translations with the dependency-aware context information. Experiments on bi-directional Chinese-to-English, English-to-Japanese and English-to-German translation tasks show that our proposed method can significantly improve the Transformer baseline. |
1607.01284 | Ruifeng Duan | Ruifeng Duan, Riku J\"antti, H\"useyin Yi\u{g}itler, and Kalle Ruttik | On the Achievable Rate of Bi-Static Modulated Re-Scatter Systems | 5 pages, 3 figures, accepted | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In ambient re-scatter communications, devices convey information by
modulating and re-scattering the radio frequency signals impinging on their
antennas. In this correspondence, we consider a system consisting of a legacy
modulated continuous carrier multiple-input-multiple-output (MIMO) link and a
multi-antenna modulated re-scatter (MRS) node, where the MRS node modulates and
re-scatters the signal generated by the legacy transmitter. The receiver seeks
to decode both the original message and the information added by the MRS. We
show that the achievable sum rate of this system exceeds that which the legacy
system could achieve alone. We further consider the impact of channel
estimation errors under the least squares channel estimation and study the
achievable rate of the legacy and MRS systems, where a linear minimum mean
square error receiver with successive interference cancellation is utilized for
joint decoding.
| [
{
"created": "Tue, 5 Jul 2016 14:54:14 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jun 2017 13:19:10 GMT",
"version": "v2"
}
] | 2017-06-13 | [
[
"Duan",
"Ruifeng",
""
],
[
"Jäntti",
"Riku",
""
],
[
"Yiğitler",
"Hüseyin",
""
],
[
"Ruttik",
"Kalle",
""
]
] | In ambient re-scatter communications, devices convey information by modulating and re-scattering the radio frequency signals impinging on their antennas. In this correspondence, we consider a system consisting of a legacy modulated continuous carrier multiple-input-multiple-output (MIMO) link and a multi-antenna modulated re-scatter (MRS) node, where the MRS node modulates and re-scatters the signal generated by the legacy transmitter. The receiver seeks to decode both the original message and the information added by the MRS. We show that the achievable sum rate of this system exceeds that which the legacy system could achieve alone. We further consider the impact of channel estimation errors under the least squares channel estimation and study the achievable rate of the legacy and MRS systems, where a linear minimum mean square error receiver with successive interference cancellation is utilized for joint decoding. |
1405.5202 | Altaf Rahman | Altaf Rahman, Vincent Ng | Narrowing the Modeling Gap: A Cluster-Ranking Approach to Coreference
Resolution | null | Journal Of Artificial Intelligence Research, Volume 40, pages
469-521, 2011 | 10.1613/jair.3120 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional learning-based coreference resolvers operate by training the
mention-pair model for determining whether two mentions are coreferent or not.
Though conceptually simple and easy to understand, the mention-pair model is
linguistically rather unappealing and lags far behind the heuristic-based
coreference models proposed in the pre-statistical NLP era in terms of
sophistication. Two independent lines of recent research have attempted to
improve the mention-pair model, one by acquiring the mention-ranking model to
rank preceding mentions for a given anaphor, and the other by training the
entity-mention model to determine whether a preceding cluster is coreferent
with a given mention. We propose a cluster-ranking approach to coreference
resolution, which combines the strengths of the mention-ranking model and the
entity-mention model, and is therefore theoretically more appealing than both
of these models. In addition, we seek to improve cluster rankers via two
extensions: (1) lexicalization and (2) incorporating knowledge of anaphoricity
by jointly modeling anaphoricity determination and coreference resolution.
Experimental results on the ACE data sets demonstrate the superior performance
of cluster rankers to competing approaches as well as the effectiveness of our
two extensions.
| [
{
"created": "Thu, 16 Jan 2014 05:06:09 GMT",
"version": "v1"
}
] | 2014-05-21 | [
[
"Rahman",
"Altaf",
""
],
[
"Ng",
"Vincent",
""
]
] | Traditional learning-based coreference resolvers operate by training the mention-pair model for determining whether two mentions are coreferent or not. Though conceptually simple and easy to understand, the mention-pair model is linguistically rather unappealing and lags far behind the heuristic-based coreference models proposed in the pre-statistical NLP era in terms of sophistication. Two independent lines of recent research have attempted to improve the mention-pair model, one by acquiring the mention-ranking model to rank preceding mentions for a given anaphor, and the other by training the entity-mention model to determine whether a preceding cluster is coreferent with a given mention. We propose a cluster-ranking approach to coreference resolution, which combines the strengths of the mention-ranking model and the entity-mention model, and is therefore theoretically more appealing than both of these models. In addition, we seek to improve cluster rankers via two extensions: (1) lexicalization and (2) incorporating knowledge of anaphoricity by jointly modeling anaphoricity determination and coreference resolution. Experimental results on the ACE data sets demonstrate the superior performance of cluster rankers to competing approaches as well as the effectiveness of our two extensions. |
1412.2716 | Paul Ginsparg | Daniel T. Citron and Paul Ginsparg | Patterns of Text Reuse in a Scientific Corpus | 6 pages, plus 10 pages of supplementary material. To appear in PNAS
(online 8 Dec 2014) | null | 10.1073/pnas.1415135111 | null | cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the incidence of text "reuse" by researchers, via a systematic
pairwise comparison of the text content of all articles deposited to arXiv.org
from 1991--2012. We measure the global frequencies of three classes of text
reuse, and measure how chronic text reuse is distributed among authors in the
dataset. We infer a baseline for accepted practice, perhaps surprisingly
permissive compared with other societal contexts, and a clearly delineated set
of aberrant authors. We find a negative correlation between the amount of
reused text in an article and its influence, as measured by subsequent
citations. Finally, we consider the distribution of countries of origin of
articles containing large amounts of reused text.
| [
{
"created": "Mon, 8 Dec 2014 20:01:17 GMT",
"version": "v1"
}
] | 2014-12-09 | [
[
"Citron",
"Daniel T.",
""
],
[
"Ginsparg",
"Paul",
""
]
] | We consider the incidence of text "reuse" by researchers, via a systematic pairwise comparison of the text content of all articles deposited to arXiv.org from 1991--2012. We measure the global frequencies of three classes of text reuse, and measure how chronic text reuse is distributed among authors in the dataset. We infer a baseline for accepted practice, perhaps surprisingly permissive compared with other societal contexts, and a clearly delineated set of aberrant authors. We find a negative correlation between the amount of reused text in an article and its influence, as measured by subsequent citations. Finally, we consider the distribution of countries of origin of articles containing large amounts of reused text. |
2205.11107 | Lara Scavuzzo | Lara Scavuzzo and Feng Yang Chen and Didier Ch\'etelat and Maxime
Gasse and Andrea Lodi and Neil Yorke-Smith and Karen Aardal | Learning to branch with Tree MDPs | 10 pages, 2 figures, plus supplementary material | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | State-of-the-art Mixed Integer Linear Program (MILP) solvers combine
systematic tree search with a plethora of hard-coded heuristics, such as the
branching rule. The idea of learning branching rules from data has received
increasing attention recently, and promising results have been obtained by
learning fast approximations of the strong branching expert. In this work, we
instead propose to learn branching rules from scratch via Reinforcement
Learning (RL). We revisit the work of Etheve et al. (2020) and propose tree
Markov Decision Processes, or tree MDPs, a generalization of temporal MDPs that
provides a more suitable framework for learning to branch. We derive a tree
policy gradient theorem, which exhibits a better credit assignment compared to
its temporal counterpart. We demonstrate through computational experiments that
tree MDPs improve the learning convergence, and offer a promising framework for
tackling the learning-to-branch problem in MILPs.
| [
{
"created": "Mon, 23 May 2022 07:57:32 GMT",
"version": "v1"
},
{
"created": "Tue, 31 May 2022 11:05:56 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Oct 2022 13:37:42 GMT",
"version": "v3"
}
] | 2022-10-14 | [
[
"Scavuzzo",
"Lara",
""
],
[
"Chen",
"Feng Yang",
""
],
[
"Chételat",
"Didier",
""
],
[
"Gasse",
"Maxime",
""
],
[
"Lodi",
"Andrea",
""
],
[
"Yorke-Smith",
"Neil",
""
],
[
"Aardal",
"Karen",
""
]
] | State-of-the-art Mixed Integer Linear Program (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as the branching rule. The idea of learning branching rules from data has received increasing attention recently, and promising results have been obtained by learning fast approximations of the strong branching expert. In this work, we instead propose to learn branching rules from scratch via Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose tree Markov Decision Processes, or tree MDPs, a generalization of temporal MDPs that provides a more suitable framework for learning to branch. We derive a tree policy gradient theorem, which exhibits a better credit assignment compared to its temporal counterpart. We demonstrate through computational experiments that tree MDPs improve the learning convergence, and offer a promising framework for tackling the learning-to-branch problem in MILPs. |
2209.12413 | P B Sujit Dr | Kasi Vishwanath, P.B. Sujit and Srikanth Saripalli | CAMEL: Learning Cost-maps Made Easy for Off-road Driving | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cost-maps are used by robotic vehicles to plan collision-free paths. The cost
associated with each cell in the map represents the sensed environment
information which is often determined manually after several trial-and-error
efforts. In off-road environments, due to the presence of several types of
features, it is challenging to handcraft the cost values associated with each
feature. Moreover, different handcrafted cost values can lead to different
paths for the same environment, which is not desirable. In this paper, we
address the problem of learning the cost-map values from the sensed environment
for robust vehicle path planning. We propose a novel framework called CAMEL,
which uses a deep learning approach to learn the parameters through
demonstrations, yielding an adaptive and robust cost-map for path planning.
CAMEL has been trained on multi-modal datasets such as RELLIS-3D. The
evaluation of CAMEL is carried out on an off-road scene simulator (MAVS) and on
field data from the IISER-B campus. We also perform a real-world implementation
of CAMEL on a ground rover. The results show flexible and robust motion of the
vehicle without collisions in unstructured terrains.
| [
{
"created": "Mon, 26 Sep 2022 04:37:03 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Oct 2022 08:07:59 GMT",
"version": "v2"
}
] | 2022-10-19 | [
[
"Vishwanath",
"Kasi",
""
],
[
"Sujit",
"P. B.",
""
],
[
"Saripalli",
"Srikanth",
""
]
] | Cost-maps are used by robotic vehicles to plan collision-free paths. The cost associated with each cell in the map represents the sensed environment information which is often determined manually after several trial-and-error efforts. In off-road environments, due to the presence of several types of features, it is challenging to handcraft the cost values associated with each feature. Moreover, different handcrafted cost values can lead to different paths for the same environment, which is not desirable. In this paper, we address the problem of learning the cost-map values from the sensed environment for robust vehicle path planning. We propose a novel framework called CAMEL, which uses a deep learning approach to learn the parameters through demonstrations, yielding an adaptive and robust cost-map for path planning. CAMEL has been trained on multi-modal datasets such as RELLIS-3D. The evaluation of CAMEL is carried out on an off-road scene simulator (MAVS) and on field data from the IISER-B campus. We also perform a real-world implementation of CAMEL on a ground rover. The results show flexible and robust motion of the vehicle without collisions in unstructured terrains. |
1303.2764 | Lin Na | Na Lin, Hong-Dong Liu, Chang-Qing Gong | Research and Simulation on Drivers' Route Choice Behavior Cognition
Model | 6 pages, 8 figures, a table | null | null | null | cs.NI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper studied the behavior-cognitive model of drivers during their
travel based on the current research on driver behavior. Firstly, a route
choice behavior-cognitive model was proposed for describing the decision-making
mechanism of drivers during their travel; then, simulation experiments were
carried out on the cosimulation VBc-vissim platform. From the experimental
results, dynamic behavior features of drivers during their travel can be
properly explained by the behavior-cognitive model; thus, the optimal path can
be obtained from this model.
| [
{
"created": "Tue, 12 Mar 2013 02:59:15 GMT",
"version": "v1"
}
] | 2013-03-13 | [
[
"Lin",
"Na",
""
],
[
"Liu",
"Hong-Dong",
""
],
[
"Gong",
"Chang-Qing",
""
]
] | This paper studied the behavior-cognitive model of drivers during their travel based on the current research on driver behavior. Firstly, a route choice behavior-cognitive model was proposed for describing the decision-making mechanism of drivers during their travel; then, simulation experiments were carried out on the cosimulation VBc-vissim platform. From the experimental results, dynamic behavior features of drivers during their travel can be properly explained by the behavior-cognitive model; thus, the optimal path can be obtained from this model. |
1609.00559 | Ted Pedersen | Bridget T. McInnes and Ted Pedersen | Improving Correlation with Human Judgments by Integrating Semantic
Similarity with Second--Order Vectors | 10 pages, Appears in the Proceedings of the 16th Workshop on
Biomedical Natural Language Processing (BioNLP-2017), August 2017, Vancouver,
BC | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector space methods that measure semantic similarity and relatedness often
rely on distributional information such as co--occurrence frequencies or
statistical measures of association to weight the importance of particular
co--occurrences. In this paper, we extend these methods by incorporating a
measure of semantic similarity based on a human curated taxonomy into a
second--order vector representation. This results in a measure of semantic
relatedness that combines both the contextual information available in a
corpus--based vector space representation with the semantic knowledge found in
a biomedical ontology. Our results show that incorporating semantic similarity
into second--order co--occurrence matrices improves correlation with human
judgments for both similarity and relatedness, and that our method compares
favorably to various different word embedding methods that have recently been
evaluated on the same reference standards we have used.
| [
{
"created": "Fri, 2 Sep 2016 11:44:17 GMT",
"version": "v1"
},
{
"created": "Sat, 27 May 2017 00:23:06 GMT",
"version": "v2"
}
] | 2017-05-30 | [
[
"McInnes",
"Bridget T.",
""
],
[
"Pedersen",
"Ted",
""
]
] | Vector space methods that measure semantic similarity and relatedness often rely on distributional information such as co--occurrence frequencies or statistical measures of association to weight the importance of particular co--occurrences. In this paper, we extend these methods by incorporating a measure of semantic similarity based on a human curated taxonomy into a second--order vector representation. This results in a measure of semantic relatedness that combines both the contextual information available in a corpus--based vector space representation with the semantic knowledge found in a biomedical ontology. Our results show that incorporating semantic similarity into second--order co--occurrence matrices improves correlation with human judgments for both similarity and relatedness, and that our method compares favorably to various different word embedding methods that have recently been evaluated on the same reference standards we have used. |
2301.11716 | Phuong-Hang Le | Phuong-Hang Le, Hongyu Gong, Changhan Wang, Juan Pino, Benjamin
Lecouteux, Didier Schwab | Pre-training for Speech Translation: CTC Meets Optimal Transport | ICML 2023 (oral presentation). This version fixed URLs, updated
affiliations & acknowledgements, and improved formatting | null | null | null | cs.CL cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | The gap between speech and text modalities is a major challenge in
speech-to-text translation (ST). Different methods have been proposed to reduce
this gap, but most of them require architectural changes in ST training. In
this work, we propose to mitigate this issue at the pre-training stage,
requiring no change in the ST model. First, we show that the connectionist
temporal classification (CTC) loss can reduce the modality gap by design. We
provide a quantitative comparison with the more common cross-entropy loss,
showing that pre-training with CTC consistently achieves better final ST
accuracy. Nevertheless, CTC is only a partial solution and thus, in our second
contribution, we propose a novel pre-training method combining CTC and optimal
transport to further reduce this gap. Our method pre-trains a Siamese-like
model composed of two encoders, one for acoustic inputs and the other for
textual inputs, such that they produce representations that are close to each
other in the Wasserstein space. Extensive experiments on the standard CoVoST-2
and MuST-C datasets show that our pre-training method applied to the vanilla
encoder-decoder Transformer achieves state-of-the-art performance under the
no-external-data setting, and performs on par with recent strong multi-task
learning systems trained with external data. Finally, our method can also be
applied on top of these multi-task systems, leading to further improvements for
these models. Code and pre-trained models are available at
https://github.com/formiel/fairseq.
| [
{
"created": "Fri, 27 Jan 2023 14:03:09 GMT",
"version": "v1"
},
{
"created": "Tue, 30 May 2023 09:06:22 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Jun 2023 11:44:02 GMT",
"version": "v3"
}
] | 2023-06-06 | [
[
"Le",
"Phuong-Hang",
""
],
[
"Gong",
"Hongyu",
""
],
[
"Wang",
"Changhan",
""
],
[
"Pino",
"Juan",
""
],
[
"Lecouteux",
"Benjamin",
""
],
[
"Schwab",
"Didier",
""
]
] | The gap between speech and text modalities is a major challenge in speech-to-text translation (ST). Different methods have been proposed to reduce this gap, but most of them require architectural changes in ST training. In this work, we propose to mitigate this issue at the pre-training stage, requiring no change in the ST model. First, we show that the connectionist temporal classification (CTC) loss can reduce the modality gap by design. We provide a quantitative comparison with the more common cross-entropy loss, showing that pre-training with CTC consistently achieves better final ST accuracy. Nevertheless, CTC is only a partial solution and thus, in our second contribution, we propose a novel pre-training method combining CTC and optimal transport to further reduce this gap. Our method pre-trains a Siamese-like model composed of two encoders, one for acoustic inputs and the other for textual inputs, such that they produce representations that are close to each other in the Wasserstein space. Extensive experiments on the standard CoVoST-2 and MuST-C datasets show that our pre-training method applied to the vanilla encoder-decoder Transformer achieves state-of-the-art performance under the no-external-data setting, and performs on par with recent strong multi-task learning systems trained with external data. Finally, our method can also be applied on top of these multi-task systems, leading to further improvements for these models. Code and pre-trained models are available at https://github.com/formiel/fairseq. |
1804.07663 | Andreas Steyven | Andreas Steyven, Emma Hart, Ben Paechter | An Investigation of Environmental Influence on the Benefits of
Adaptation Mechanisms in Evolutionary Swarm Robotics | In GECCO 2017 | null | 10.1145/3071178.3071232 | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A robotic swarm that is required to operate for long periods in a potentially
unknown environment can use both evolution and individual learning methods in
order to adapt. However, the role played by the environment in influencing the
effectiveness of each type of learning is not well understood. In this paper,
we address this question by analysing the performance of a swarm in a range of
simulated, dynamic environments where a distributed evolutionary algorithm for
evolving a controller is augmented with a number of different individual
learning mechanisms. The learning mechanisms themselves are defined by
parameters which can be either fixed or inherited. We conduct experiments in a
range of dynamic environments whose characteristics are varied so as to present
different opportunities for learning. Results enable us to map environmental
characteristics to the most effective learning algorithm.
| [
{
"created": "Fri, 20 Apr 2018 15:13:47 GMT",
"version": "v1"
}
] | 2018-04-23 | [
[
"Steyven",
"Andreas",
""
],
[
"Hart",
"Emma",
""
],
[
"Paechter",
"Ben",
""
]
] | A robotic swarm that is required to operate for long periods in a potentially unknown environment can use both evolution and individual learning methods in order to adapt. However, the role played by the environment in influencing the effectiveness of each type of learning is not well understood. In this paper, we address this question by analysing the performance of a swarm in a range of simulated, dynamic environments where a distributed evolutionary algorithm for evolving a controller is augmented with a number of different individual learning mechanisms. The learning mechanisms themselves are defined by parameters which can be either fixed or inherited. We conduct experiments in a range of dynamic environments whose characteristics are varied so as to present different opportunities for learning. Results enable us to map environmental characteristics to the most effective learning algorithm. |
1306.0322 | Hector Zenil | Hector Zenil, Fernando Soler-Toscano, Kamaludin Dingle and Ard A.
Louis | Correlation of Automorphism Group Size and Topological Properties with
Program-size Complexity Evaluations of Graphs and Complex Networks | 15 2-column pages, 20 figures. Forthcoming in Physica A: Statistical
Mechanics and its Applications | null | 10.1016/j.physa.2014.02.060 | null | cs.IT cs.CC cs.CG math.IT q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that numerical approximations of Kolmogorov complexity (K) applied to
graph adjacency matrices capture some group-theoretic and topological
properties of graphs and empirical networks ranging from metabolic to social
networks. That K and the size of the group of automorphisms of a graph are
correlated opens up interesting connections to problems in computational
geometry, and thus connects several measures and concepts from complexity
science. We show that approximations of K characterise synthetic and natural
networks by their generating mechanisms, assigning lower algorithmic randomness
to complex network models (Watts-Strogatz and Barabasi-Albert networks) and
high Kolmogorov complexity to (random) Erdos-Renyi graphs. We derive these
results via two different Kolmogorov complexity approximation methods applied
to the adjacency matrices of the graphs and networks. The methods used are the
traditional lossless compression approach to Kolmogorov complexity, and a
normalised version of a Block Decomposition Method (BDM) measure, based on
algorithmic probability theory.
| [
{
"created": "Mon, 3 Jun 2013 08:36:11 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jun 2013 11:32:00 GMT",
"version": "v2"
},
{
"created": "Sun, 23 Feb 2014 01:42:27 GMT",
"version": "v3"
}
] | 2015-06-16 | [
[
"Zenil",
"Hector",
""
],
[
"Soler-Toscano",
"Fernando",
""
],
[
"Dingle",
"Kamaludin",
""
],
[
"Louis",
"Ard A.",
""
]
] | We show that numerical approximations of Kolmogorov complexity (K) applied to graph adjacency matrices capture some group-theoretic and topological properties of graphs and empirical networks ranging from metabolic to social networks. That K and the size of the group of automorphisms of a graph are correlated opens up interesting connections to problems in computational geometry, and thus connects several measures and concepts from complexity science. We show that approximations of K characterise synthetic and natural networks by their generating mechanisms, assigning lower algorithmic randomness to complex network models (Watts-Strogatz and Barabasi-Albert networks) and high Kolmogorov complexity to (random) Erdos-Renyi graphs. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless compression approach to Kolmogorov complexity, and a normalised version of a Block Decomposition Method (BDM) measure, based on algorithmic probability theory. |
2307.01831 | Shentong Mo | Shentong Mo, Enze Xie, Ruihang Chu, Lewei Yao, Lanqing Hong, Matthias
Nie{\ss}ner, Zhenguo Li | DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation | Project Page: https://dit-3d.github.io/ | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent Diffusion Transformers (e.g., DiT) have demonstrated their powerful
effectiveness in generating high-quality 2D images. However, it is still being
determined whether the Transformer architecture performs equally well in 3D
shape generation, as previous 3D diffusion methods mostly adopted the U-Net
architecture. To bridge this gap, we propose a novel Diffusion Transformer for
3D shape generation, namely DiT-3D, which can directly operate the denoising
process on voxelized point clouds using plain Transformers. Compared to
existing U-Net approaches, our DiT-3D is more scalable in model size and
produces much higher quality generations. Specifically, the DiT-3D adopts the
design philosophy of DiT but modifies it by incorporating 3D positional and
patch embeddings to adaptively aggregate input from voxelized point clouds. To
reduce the computational cost of self-attention in 3D shape generation, we
incorporate 3D window attention into Transformer blocks, as the increased 3D
token length resulting from the additional dimension of voxels can lead to high
computation. Finally, linear and devoxelization layers are used to predict the
denoised point clouds. In addition, our transformer architecture supports
efficient fine-tuning from 2D to 3D, where the pre-trained DiT-2D checkpoint on
ImageNet can significantly improve DiT-3D on ShapeNet. Experimental results on
the ShapeNet dataset demonstrate that the proposed DiT-3D achieves
state-of-the-art performance in high-fidelity and diverse 3D point cloud
generation. In particular, our DiT-3D decreases the 1-Nearest Neighbor Accuracy
of the state-of-the-art method by 4.59 and increases the Coverage metric by
3.51 when evaluated on Chamfer Distance.
| [
{
"created": "Tue, 4 Jul 2023 17:15:46 GMT",
"version": "v1"
}
] | 2023-07-06 | [
[
"Mo",
"Shentong",
""
],
[
"Xie",
"Enze",
""
],
[
"Chu",
"Ruihang",
""
],
[
"Yao",
"Lewei",
""
],
[
"Hong",
"Lanqing",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Li",
"Zhenguo",
""
]
] | Recent Diffusion Transformers (e.g., DiT) have demonstrated their powerful effectiveness in generating high-quality 2D images. However, it is still being determined whether the Transformer architecture performs equally well in 3D shape generation, as previous 3D diffusion methods mostly adopted the U-Net architecture. To bridge this gap, we propose a novel Diffusion Transformer for 3D shape generation, namely DiT-3D, which can directly operate the denoising process on voxelized point clouds using plain Transformers. Compared to existing U-Net approaches, our DiT-3D is more scalable in model size and produces much higher quality generations. Specifically, the DiT-3D adopts the design philosophy of DiT but modifies it by incorporating 3D positional and patch embeddings to adaptively aggregate input from voxelized point clouds. To reduce the computational cost of self-attention in 3D shape generation, we incorporate 3D window attention into Transformer blocks, as the increased 3D token length resulting from the additional dimension of voxels can lead to high computation. Finally, linear and devoxelization layers are used to predict the denoised point clouds. In addition, our transformer architecture supports efficient fine-tuning from 2D to 3D, where the pre-trained DiT-2D checkpoint on ImageNet can significantly improve DiT-3D on ShapeNet. Experimental results on the ShapeNet dataset demonstrate that the proposed DiT-3D achieves state-of-the-art performance in high-fidelity and diverse 3D point cloud generation. In particular, our DiT-3D decreases the 1-Nearest Neighbor Accuracy of the state-of-the-art method by 4.59 and increases the Coverage metric by 3.51 when evaluated on Chamfer Distance. |
2206.00564 | Laurie Burchell | Laurie Burchell, Alexandra Birch, Kenneth Heafield | Exploring Diversity in Back Translation for Low-Resource Machine
Translation | null | null | 10.18653/v1/2022.deeplo-1.8 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Back translation is one of the most widely used methods for improving the
performance of neural machine translation systems. Recent research has sought
to enhance the effectiveness of this method by increasing the 'diversity' of
the generated translations. We argue that the definitions and metrics used to
quantify 'diversity' in previous work have been insufficient. This work puts
forward a more nuanced framework for understanding diversity in training data,
splitting it into lexical diversity and syntactic diversity. We present novel
metrics for measuring these different aspects of diversity and carry out
empirical analysis into the effect of these types of diversity on final neural
machine translation model performance for low-resource
English$\leftrightarrow$Turkish and mid-resource
English$\leftrightarrow$Icelandic. Our findings show that generating back
translation using nucleus sampling results in higher final model performance,
and that this method of generation has high levels of both lexical and
syntactic diversity. We also find evidence that lexical diversity is more
important than syntactic for back translation performance.
| [
{
"created": "Wed, 1 Jun 2022 15:21:16 GMT",
"version": "v1"
}
] | 2023-09-01 | [
[
"Burchell",
"Laurie",
""
],
[
"Birch",
"Alexandra",
""
],
[
"Heafield",
"Kenneth",
""
]
] | Back translation is one of the most widely used methods for improving the performance of neural machine translation systems. Recent research has sought to enhance the effectiveness of this method by increasing the 'diversity' of the generated translations. We argue that the definitions and metrics used to quantify 'diversity' in previous work have been insufficient. This work puts forward a more nuanced framework for understanding diversity in training data, splitting it into lexical diversity and syntactic diversity. We present novel metrics for measuring these different aspects of diversity and carry out empirical analysis into the effect of these types of diversity on final neural machine translation model performance for low-resource English$\leftrightarrow$Turkish and mid-resource English$\leftrightarrow$Icelandic. Our findings show that generating back translation using nucleus sampling results in higher final model performance, and that this method of generation has high levels of both lexical and syntactic diversity. We also find evidence that lexical diversity is more important than syntactic for back translation performance. |
2207.01171 | Alessandra Carneiro | Alessandra Carneiro and Lorena Nascimento and Mauricio Noernberg and
Carmem Hara and Aurora Pozo | Portuguese Man-of-War Image Classification with Convolutional Neural
Networks | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Portuguese man-of-war (PMW) is a gelatinous organism with long tentacles
capable of causing severe burns, thus leading to negative impacts on human
activities, such as tourism and fishing. There is a lack of information about
the spatio-temporal dynamics of this species. Therefore, the use of alternative
methods for collecting data can contribute to their monitoring. Given the
widespread use of social networks and the eye-catching look of PMW, Instagram
posts can be a promising data source for monitoring. The first task to follow
this approach is to identify posts that refer to PMW. This paper reports on the
use of convolutional neural networks for PMW images classification, in order to
automate the recognition of Instagram posts. We created a suitable dataset, and
trained three different neural networks: VGG-16, ResNet50, and InceptionV3,
with and without a pre-trained step with the ImageNet dataset. We analyzed
their results using accuracy, precision, recall, and F1 score metrics. The
pre-trained ResNet50 network presented the best results, obtaining 94% of
accuracy and 95% of precision, recall, and F1 score. These results show that
convolutional neural networks can be very effective for recognizing PMW images
from the Instagram social media.
| [
{
"created": "Mon, 4 Jul 2022 03:06:45 GMT",
"version": "v1"
}
] | 2022-07-05 | [
[
"Carneiro",
"Alessandra",
""
],
[
"Nascimento",
"Lorena",
""
],
[
"Noernberg",
"Mauricio",
""
],
[
"Hara",
"Carmem",
""
],
[
"Pozo",
"Aurora",
""
]
] | Portuguese man-of-war (PMW) is a gelatinous organism with long tentacles capable of causing severe burns, thus leading to negative impacts on human activities, such as tourism and fishing. There is a lack of information about the spatio-temporal dynamics of this species. Therefore, the use of alternative methods for collecting data can contribute to their monitoring. Given the widespread use of social networks and the eye-catching look of PMW, Instagram posts can be a promising data source for monitoring. The first task to follow this approach is to identify posts that refer to PMW. This paper reports on the use of convolutional neural networks for PMW image classification, in order to automate the recognition of Instagram posts. We created a suitable dataset, and trained three different neural networks: VGG-16, ResNet50, and InceptionV3, with and without a pre-training step on the ImageNet dataset. We analyzed their results using accuracy, precision, recall, and F1 score metrics. The pre-trained ResNet50 network presented the best results, obtaining 94% accuracy and 95% precision, recall, and F1 score. These results show that convolutional neural networks can be very effective for recognizing PMW images from the Instagram social network. |
2107.13165 | Kushal Chawla | Kushal Chawla, Rene Clever, Jaysa Ramirez, Gale Lucas, Jonathan Gratch | Towards Emotion-Aware Agents For Negotiation Dialogues | Accepted at 9th International Conference on Affective Computing &
Intelligent Interaction (ACII 2021) | null | null | null | cs.HC cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Negotiation is a complex social interaction that encapsulates emotional
encounters in human decision-making. Virtual agents that can negotiate with
humans are useful in pedagogy and conversational AI. To advance the development
of such agents, we explore the prediction of two important subjective goals in
a negotiation - outcome satisfaction and partner perception. Specifically, we
analyze the extent to which emotion attributes extracted from the negotiation
help in the prediction, above and beyond the individual difference variables.
We focus on a recent dataset in chat-based negotiations, grounded in a
realistic camping scenario. We study three degrees of emotion dimensions -
emoticons, lexical, and contextual - by leveraging affective lexicons and a
state-of-the-art deep learning architecture. Our insights will be helpful in
designing adaptive negotiation agents that interact through realistic
communication interfaces.
| [
{
"created": "Wed, 28 Jul 2021 04:42:36 GMT",
"version": "v1"
}
] | 2021-07-29 | [
[
"Chawla",
"Kushal",
""
],
[
"Clever",
"Rene",
""
],
[
"Ramirez",
"Jaysa",
""
],
[
"Lucas",
"Gale",
""
],
[
"Gratch",
"Jonathan",
""
]
] | Negotiation is a complex social interaction that encapsulates emotional encounters in human decision-making. Virtual agents that can negotiate with humans are useful in pedagogy and conversational AI. To advance the development of such agents, we explore the prediction of two important subjective goals in a negotiation - outcome satisfaction and partner perception. Specifically, we analyze the extent to which emotion attributes extracted from the negotiation help in the prediction, above and beyond the individual difference variables. We focus on a recent dataset in chat-based negotiations, grounded in a realistic camping scenario. We study three degrees of emotion dimensions - emoticons, lexical, and contextual - by leveraging affective lexicons and a state-of-the-art deep learning architecture. Our insights will be helpful in designing adaptive negotiation agents that interact through realistic communication interfaces. |
1410.1237 | Mahantesh Halappanavar | Hao Lu and Mahantesh Halappanavar and Ananth Kalyanaraman | Parallel Heuristics for Scalable Community Detection | Submitted to a journal | null | null | null | cs.SI cs.DC physics.soc-ph | http://creativecommons.org/licenses/publicdomain/ | Community detection has become a fundamental operation in numerous
graph-theoretic applications. It is used to reveal natural divisions that exist
within real world networks without imposing prior size or cardinality
constraints on the set of communities. Despite its potential for application,
there is only limited support for community detection on large-scale parallel
computers, largely owing to the irregular and inherently sequential nature of
the underlying heuristics. In this paper, we present parallelization heuristics
for fast community detection using the Louvain method as the serial template.
The Louvain method is an iterative heuristic for modularity optimization.
Originally developed by Blondel et al. in 2008, the method has become
increasingly popular owing to its ability to detect high modularity community
partitions in a fast and memory-efficient manner. However, the method is also
inherently sequential, thereby limiting its scalability. Here, we observe
certain key properties of this method that present challenges for its
parallelization, and consequently propose heuristics that are designed to break
the sequential barrier. For evaluation purposes, we implemented our heuristics
using OpenMP multithreading, and tested them over real world graphs derived
from multiple application domains (e.g., internet, citation, biological).
Compared to the serial Louvain implementation, our parallel implementation is
able to produce community outputs with a higher modularity for most of the
inputs tested, in a comparable number of iterations or fewer, while providing
absolute speedups of up to 16x using 32 threads.
| [
{
"created": "Mon, 6 Oct 2014 01:54:15 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Oct 2014 01:01:12 GMT",
"version": "v2"
}
] | 2014-10-08 | [
[
"Lu",
"Hao",
""
],
[
"Halappanavar",
"Mahantesh",
""
],
[
"Kalyanaraman",
"Ananth",
""
]
] | Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real world graphs derived from multiple application domains (e.g., internet, citation, biological). Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in a comparable number of iterations or fewer, while providing absolute speedups of up to 16x using 32 threads. |
1907.05016 | Jing Li | Jing Li and Dongning Guo | On Analysis of the Bitcoin and Prism Backbone Protocols | null | null | null | null | cs.CR cs.DC cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bitcoin is a peer-to-peer payment system proposed by Nakamoto in 2008.
Properties of the bitcoin backbone protocol have been investigated in some
depth: the blockchain growth property quantifies the number of blocks added to
the blockchain during any time intervals; the blockchain quality property
ensures the honest miners always contribute at least a certain fraction of the
blockchain; the common prefix property ensures if a block is deep enough, it
will eventually be adopted by all honest miners with high probability.
Following the spirit of decoupling various functionalities of the blockchain,
the Prism protocol is proposed to dramatically improve the throughput while
maintaining the same level of security. Prior analyses of the bitcoin and Prism
backbone protocols assume the lifespan of blockchain is finite. This paper
presents a streamlined and strengthened analysis without the finite horizon
assumption. Specifically, the results include a blockchain growth property, a
blockchain quality property, and a common prefix property of the bitcoin
backbone protocol, as well as the liveness and persistence of the Prism
backbone protocol, regardless of whether the blockchains have an infinite
lifespan. We also express the properties of bitcoin and Prism backbone
protocols in explicit expressions rather than order optimal results, which lead
to tighter bounds and practical references for public transaction ledger
protocol design.
| [
{
"created": "Thu, 11 Jul 2019 06:35:05 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Oct 2019 01:10:42 GMT",
"version": "v2"
}
] | 2019-10-22 | [
[
"Li",
"Jing",
""
],
[
"Guo",
"Dongning",
""
]
] | Bitcoin is a peer-to-peer payment system proposed by Nakamoto in 2008. Properties of the bitcoin backbone protocol have been investigated in some depth: the blockchain growth property quantifies the number of blocks added to the blockchain during any time intervals; the blockchain quality property ensures the honest miners always contribute at least a certain fraction of the blockchain; the common prefix property ensures if a block is deep enough, it will eventually be adopted by all honest miners with high probability. Following the spirit of decoupling various functionalities of the blockchain, the Prism protocol is proposed to dramatically improve the throughput while maintaining the same level of security. Prior analyses of the bitcoin and Prism backbone protocols assume the lifespan of blockchain is finite. This paper presents a streamlined and strengthened analysis without the finite horizon assumption. Specifically, the results include a blockchain growth property, a blockchain quality property, and a common prefix property of the bitcoin backbone protocol, as well as the liveness and persistence of the Prism backbone protocol, regardless of whether the blockchains have an infinite lifespan. We also express the properties of bitcoin and Prism backbone protocols in explicit expressions rather than order optimal results, which lead to tighter bounds and practical references for public transaction ledger protocol design. |
1512.01872 | Pranav Rajpurkar | Pranav Rajpurkar, Toki Migimatsu, Jeff Kiske, Royce Cheng-Yue, Sameep
Tandon, Tao Wang, Andrew Ng | Driverseat: Crowdstrapping Learning Tasks for Autonomous Driving | null | null | null | null | cs.HC cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While emerging deep-learning systems have outclassed knowledge-based
approaches in many tasks, their application to detection tasks for autonomous
technologies remains an open field for scientific exploration. Broadly, there
are two major developmental bottlenecks: the unavailability of comprehensively
labeled datasets and of expressive evaluation strategies. Approaches for
labeling datasets have relied on intensive hand-engineering, and strategies for
evaluating learning systems have been unable to identify failure-case
scenarios. Human intelligence offers an untapped approach for breaking through
these bottlenecks. This paper introduces Driverseat, a technology for embedding
crowds around learning systems for autonomous driving. Driverseat utilizes
crowd contributions for (a) collecting complex 3D labels and (b) tagging
diverse scenarios for ready evaluation of learning systems. We demonstrate how
Driverseat can crowdstrap a convolutional neural network on the lane-detection
task. More generally, crowdstrapping introduces a valuable paradigm for any
technology that can benefit from leveraging the powerful combination of human
and computer intelligence.
| [
{
"created": "Mon, 7 Dec 2015 01:34:23 GMT",
"version": "v1"
}
] | 2015-12-08 | [
[
"Rajpurkar",
"Pranav",
""
],
[
"Migimatsu",
"Toki",
""
],
[
"Kiske",
"Jeff",
""
],
[
"Cheng-Yue",
"Royce",
""
],
[
"Tandon",
"Sameep",
""
],
[
"Wang",
"Tao",
""
],
[
"Ng",
"Andrew",
""
]
] | While emerging deep-learning systems have outclassed knowledge-based approaches in many tasks, their application to detection tasks for autonomous technologies remains an open field for scientific exploration. Broadly, there are two major developmental bottlenecks: the unavailability of comprehensively labeled datasets and of expressive evaluation strategies. Approaches for labeling datasets have relied on intensive hand-engineering, and strategies for evaluating learning systems have been unable to identify failure-case scenarios. Human intelligence offers an untapped approach for breaking through these bottlenecks. This paper introduces Driverseat, a technology for embedding crowds around learning systems for autonomous driving. Driverseat utilizes crowd contributions for (a) collecting complex 3D labels and (b) tagging diverse scenarios for ready evaluation of learning systems. We demonstrate how Driverseat can crowdstrap a convolutional neural network on the lane-detection task. More generally, crowdstrapping introduces a valuable paradigm for any technology that can benefit from leveraging the powerful combination of human and computer intelligence. |
1803.09413 | Sumita Mishra | Sachin Kumar, Sumita Mishra, Pooja Khanna, Pragya | Precision Sugarcane Monitoring Using SVM Classifier | This is a pre-print of an article published in [Procedia Computer
Science 2017] | Procedia Computer Science,2017,vol.122,pp. 881-887 | 10.1016/j.procs.2017.11.450 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | India is an agriculture-based economy, and sugarcane is one of the major crops
produced in northern India. Productivity of sugarcane decreases due to
inappropriate soil conditions and infections caused by various types of
diseases; timely and accurate disease diagnosis plays an important role
in optimizing crop yield. This paper presents a system model for
monitoring of the sugarcane crop: the proposed model continuously monitors
parameters (temperature, humidity and moisture) responsible for healthy growth
of the crop; in addition, KNN clustering along with an SVM classifier is utilized
to identify infection, if any, through images obtained at regular
intervals. The data is transmitted wirelessly from the site to the
control unit. The model achieves an accuracy of 96% on a sample of 200 images; the
model was tested at Lolai, near Malhaur, Gomti Nagar Extension.
| [
{
"created": "Mon, 26 Mar 2018 05:05:25 GMT",
"version": "v1"
}
] | 2018-03-28 | [
[
"Kumar",
"Sachin",
""
],
[
"Mishra",
"Sumita",
""
],
[
"Khanna",
"Pooja",
""
],
[
"Pragya",
"",
""
]
] ] | India is an agriculture-based economy, and sugarcane is one of the major crops produced in northern India. Productivity of sugarcane decreases due to inappropriate soil conditions and infections caused by various types of diseases; timely and accurate disease diagnosis plays an important role in optimizing crop yield. This paper presents a system model for monitoring of the sugarcane crop: the proposed model continuously monitors parameters (temperature, humidity and moisture) responsible for healthy growth of the crop; in addition, KNN clustering along with an SVM classifier is utilized to identify infection, if any, through images obtained at regular intervals. The data is transmitted wirelessly from the site to the control unit. The model achieves an accuracy of 96% on a sample of 200 images; the model was tested at Lolai, near Malhaur, Gomti Nagar Extension. |
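A minimal sketch of the clustering-plus-classification pipeline described in the abstract above, using k-means as a stand-in for the KNN clustering step and synthetic leaf images in place of the field data; the colour statistics, thresholds, and class setup are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def leaf_features(image):
    """Cluster pixels into 2 groups (leaf vs. candidate lesion) and return colour stats."""
    pixels = image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=2, n_init=5, random_state=0).fit_predict(pixels)
    # Treat the darker cluster as the candidate lesion region (an assumption).
    dark = np.argmin([pixels[labels == k].mean() for k in (0, 1)])
    lesion = pixels[labels == dark]
    return np.concatenate([lesion.mean(axis=0), lesion.std(axis=0)])

# Synthetic "healthy" (greenish) and "infected" (brownish, noisy) 16x16 images.
healthy = [rng.normal([60, 160, 60], 10, (16, 16, 3)) for _ in range(40)]
infected = [rng.normal([120, 90, 40], 25, (16, 16, 3)) for _ in range(40)]
X = np.array([leaf_features(im) for im in healthy + infected])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # train on every other image
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))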
2402.14162 | Minh-Hao Van | Minh-Hao Van, Prateek Verma, Xintao Wu | On Large Visual Language Models for Medical Imaging Analysis: An
Empirical Study | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, large language models (LLMs) have taken the spotlight in natural
language processing. Further, integrating LLMs with vision enables users to
explore emergent abilities with multimodal data. Visual language models (VLMs),
such as LLaVA, Flamingo, or CLIP, have demonstrated impressive performance on
various visio-linguistic tasks. Consequently, there are enormous applications
of large models that could be potentially used in the biomedical imaging field.
Along this direction, there is a lack of related work demonstrating the ability
of large models to diagnose diseases. In this work, we study the zero-shot and
few-shot robustness of VLMs on medical imaging analysis tasks. Our
comprehensive experiments demonstrate the effectiveness of VLMs in analyzing
biomedical images such as brain MRIs, microscopic images of blood cells, and
chest X-rays.
| [
{
"created": "Wed, 21 Feb 2024 23:01:38 GMT",
"version": "v1"
}
] | 2024-02-23 | [
[
"Van",
"Minh-Hao",
""
],
[
"Verma",
"Prateek",
""
],
[
"Wu",
"Xintao",
""
]
] ] | Recently, large language models (LLMs) have taken the spotlight in natural language processing. Further, integrating LLMs with vision enables users to explore emergent abilities with multimodal data. Visual language models (VLMs), such as LLaVA, Flamingo, or CLIP, have demonstrated impressive performance on various visio-linguistic tasks. Consequently, there are enormous applications of large models that could be potentially used in the biomedical imaging field. Along this direction, there is a lack of related work demonstrating the ability of large models to diagnose diseases. In this work, we study the zero-shot and few-shot robustness of VLMs on medical imaging analysis tasks. Our comprehensive experiments demonstrate the effectiveness of VLMs in analyzing biomedical images such as brain MRIs, microscopic images of blood cells, and chest X-rays. |
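A minimal sketch of zero-shot medical image classification with an off-the-shelf VLM, in the spirit of the study above; the CLIP checkpoint, the dummy image, and the candidate labels are assumptions rather than the paper's exact setup.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), "gray")        # stand-in for a real scan
labels = ["a chest X-ray with pneumonia", "a normal chest X-ray"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image       # image-text similarity
probs = logits.softmax(dim=-1)[0]
for label, p in zip(labels, probs):
    print(f"{label}: {p:.3f}")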
2312.07381 | Hallee Wong | Hallee E. Wong, Marianne Rakic, John Guttag, Adrian V. Dalca | ScribblePrompt: Fast and Flexible Interactive Segmentation for Any
Biomedical Image | Accepted by ECCV 2024. Project Website:
https://scribbleprompt.csail.mit.edu Keywords: Interactive Segmentation,
Medical Imaging, Segment Anything Model, SAM, Scribble Annotations, Prompt | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biomedical image segmentation is a crucial part of both scientific research
and clinical care. With enough labelled data, deep learning models can be
trained to accurately automate specific biomedical image segmentation tasks.
However, manually segmenting images to create training data is highly labor
intensive and requires domain expertise. We present \emph{ScribblePrompt}, a
flexible neural network based interactive segmentation tool for biomedical
imaging that enables human annotators to segment previously unseen structures
using scribbles, clicks, and bounding boxes. Through rigorous quantitative
experiments, we demonstrate that given comparable amounts of interaction,
ScribblePrompt produces more accurate segmentations than previous methods on
datasets unseen during training. In a user study with domain experts,
ScribblePrompt reduced annotation time by 28% while improving Dice by 15%
compared to the next best method. ScribblePrompt's success rests on a set of
careful design decisions. These include a training strategy that incorporates
both a highly diverse set of images and tasks, novel algorithms for simulated
user interactions and labels, and a network that enables fast inference. We
showcase ScribblePrompt in an interactive demo, provide code, and release a
dataset of scribble annotations at https://scribbleprompt.csail.mit.edu
| [
{
"created": "Tue, 12 Dec 2023 15:57:03 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 20:41:14 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Jul 2024 21:21:42 GMT",
"version": "v3"
}
] | 2024-07-18 | [
[
"Wong",
"Hallee E.",
""
],
[
"Rakic",
"Marianne",
""
],
[
"Guttag",
"John",
""
],
[
"Dalca",
"Adrian V.",
""
]
] | Biomedical image segmentation is a crucial part of both scientific research and clinical care. With enough labelled data, deep learning models can be trained to accurately automate specific biomedical image segmentation tasks. However, manually segmenting images to create training data is highly labor intensive and requires domain expertise. We present \emph{ScribblePrompt}, a flexible neural network based interactive segmentation tool for biomedical imaging that enables human annotators to segment previously unseen structures using scribbles, clicks, and bounding boxes. Through rigorous quantitative experiments, we demonstrate that given comparable amounts of interaction, ScribblePrompt produces more accurate segmentations than previous methods on datasets unseen during training. In a user study with domain experts, ScribblePrompt reduced annotation time by 28% while improving Dice by 15% compared to the next best method. ScribblePrompt's success rests on a set of careful design decisions. These include a training strategy that incorporates both a highly diverse set of images and tasks, novel algorithms for simulated user interactions and labels, and a network that enables fast inference. We showcase ScribblePrompt in an interactive demo, provide code, and release a dataset of scribble annotations at https://scribbleprompt.csail.mit.edu |
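A minimal sketch of prompt-conditioned segmentation in the spirit of the tool above: user scribbles are rasterised into extra input channels alongside the image. The tiny convolutional network and the channel layout are assumptions, not the published architecture.

import torch
import torch.nn as nn

class ScribbleSeg(nn.Module):
    def __init__(self):
        super().__init__()
        # input channels: image (1) + positive scribbles (1) + negative scribbles (1)
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                      # per-pixel logit
        )

    def forward(self, image, pos_scribble, neg_scribble):
        x = torch.cat([image, pos_scribble, neg_scribble], dim=1)
        return self.net(x)

model = ScribbleSeg()
image = torch.rand(1, 1, 64, 64)                      # grayscale biomedical image
pos = torch.zeros(1, 1, 64, 64); pos[..., 30:34, 20:40] = 1.0   # a user scribble
neg = torch.zeros(1, 1, 64, 64)
mask = torch.sigmoid(model(image, pos, neg))
print(mask.shape)                                     # (1, 1, 64, 64) soft segmentation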
1712.06897 | Jie Lyu | Jie Lyu, Zejian Yuan, Dapeng Chen | Learning Fixation Point Strategy for Object Detection and Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel recurrent attentional structure to localize and recognize
objects jointly. The network can learn to extract a sequence of local
observations with detailed appearance and rough context, instead of sliding
windows or convolutions on the entire image. Meanwhile, those observations are
fused to complete detection and classification tasks. For training, we present a
hybrid loss function to learn the parameters of the multi-task network
end-to-end. In particular, the combination of the stochastic and object-awareness
strategies, named SA, can select more abundant context and ensure that the last
fixation is close to the object. In addition, we build a real-world dataset to
verify the capacity of our method in detecting the object of interest, including
small ones. Our method can predict a precise bounding box on an image,
and achieve high speed on large images without pooling operations. Experimental
results indicate that the proposed method can mine effective context by several
local observations. Moreover, the precision and speed are easily improved by
changing the number of recurrent steps. Finally, we will open-source the code
of our proposed approach.
| [
{
"created": "Tue, 19 Dec 2017 12:28:01 GMT",
"version": "v1"
}
] | 2017-12-20 | [
[
"Lyu",
"Jie",
""
],
[
"Yuan",
"Zejian",
""
],
[
"Chen",
"Dapeng",
""
]
] ] | We propose a novel recurrent attentional structure to localize and recognize objects jointly. The network can learn to extract a sequence of local observations with detailed appearance and rough context, instead of sliding windows or convolutions on the entire image. Meanwhile, those observations are fused to complete detection and classification tasks. For training, we present a hybrid loss function to learn the parameters of the multi-task network end-to-end. In particular, the combination of the stochastic and object-awareness strategies, named SA, can select more abundant context and ensure that the last fixation is close to the object. In addition, we build a real-world dataset to verify the capacity of our method in detecting the object of interest, including small ones. Our method can predict a precise bounding box on an image, and achieve high speed on large images without pooling operations. Experimental results indicate that the proposed method can mine effective context by several local observations. Moreover, the precision and speed are easily improved by changing the number of recurrent steps. Finally, we will open-source the code of our proposed approach. |
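A minimal sketch of the recurrent fixation idea described above: at each step a local glimpse is cropped around the current fixation point, a recurrent state is updated, and the next fixation is emitted; the final state predicts a bounding box. All module sizes and the grayscale-input assumption are illustrative, not the paper's design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FixationNet(nn.Module):
    def __init__(self, glimpse=16, hidden=64):
        super().__init__()
        self.glimpse = glimpse
        self.encode = nn.Linear(glimpse * glimpse, hidden)   # assumes 1-channel input
        self.rnn = nn.GRUCell(hidden, hidden)
        self.next_fix = nn.Linear(hidden, 2)                 # (x, y) in [-1, 1]
        self.box = nn.Linear(hidden, 4)                      # final bounding box

    def crop(self, img, fix):
        # Differentiable local crop around the fixation via grid_sample.
        n = self.glimpse
        lin = torch.linspace(-0.2, 0.2, n)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        grid = torch.stack([gx + fix[:, 0, None, None],
                            gy + fix[:, 1, None, None]], dim=-1)
        return F.grid_sample(img, grid, align_corners=False)

    def forward(self, img, steps=4):
        h = img.new_zeros(img.size(0), self.rnn.hidden_size)
        fix = img.new_zeros(img.size(0), 2)                  # start at the centre
        for _ in range(steps):
            g = self.crop(img, fix).flatten(1)
            h = self.rnn(torch.tanh(self.encode(g)), h)
            fix = torch.tanh(self.next_fix(h))               # next fixation point
        return self.box(h), fix

net = FixationNet()
box, last_fix = net(torch.rand(2, 1, 128, 128))
print(box.shape, last_fix.shape)                             # (2, 4) (2, 2)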
2310.14162 | Rohan Gupta | Rohan Gupta | Augmenting End-to-End Steering Angle Prediction with CAN Bus Data | 5 pages | null | null | null | cs.CV cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In recent years, end-to-end steering prediction for autonomous vehicles has
become a major area of research. The primary method for achieving end-to-end
steering has been to use computer vision models on a live feed of video data.
To further increase accuracy, many companies have added data from
light detection and ranging (LiDAR) and/or radar sensors through sensor fusion.
However, the addition of lasers and sensors comes at a high financial cost. In
this paper, I address both of these issues by increasing the accuracy of the
computer vision models without the added cost of LiDAR and/or radar sensors.
I achieve this by fusing data from the CAN bus, a vehicle communication
protocol, with video data. CAN bus data
is a rich source of information about the vehicle's state, including its speed,
steering angle, and acceleration. By fusing this data with video data, the
accuracy of the computer vision model's predictions can be improved. When I
trained the model without CAN bus data, I obtained an RMSE of 0.02492, while
the model trained with the CAN bus data achieved an RMSE of 0.01970. This
finding indicates that fusing CAN bus data with video data can reduce the
computer vision model's prediction error by 20%, with some models decreasing the
error by 80%.
| [
{
"created": "Sun, 22 Oct 2023 03:24:53 GMT",
"version": "v1"
}
] | 2023-10-24 | [
[
"Gupta",
"Rohan",
""
]
] ] | In recent years, end-to-end steering prediction for autonomous vehicles has become a major area of research. The primary method for achieving end-to-end steering has been to use computer vision models on a live feed of video data. To further increase accuracy, many companies have added data from light detection and ranging (LiDAR) and/or radar sensors through sensor fusion. However, the addition of lasers and sensors comes at a high financial cost. In this paper, I address both of these issues by increasing the accuracy of the computer vision models without the added cost of LiDAR and/or radar sensors. I achieve this by fusing data from the CAN bus, a vehicle communication protocol, with video data. CAN bus data is a rich source of information about the vehicle's state, including its speed, steering angle, and acceleration. By fusing this data with video data, the accuracy of the computer vision model's predictions can be improved. When I trained the model without CAN bus data, I obtained an RMSE of 0.02492, while the model trained with the CAN bus data achieved an RMSE of 0.01970. This finding indicates that fusing CAN bus data with video data can reduce the computer vision model's prediction error by 20%, with some models decreasing the error by 80%. |
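A minimal sketch of the sensor-fusion idea described above: features from a small CNN over the video frame are concatenated with CAN bus signals before a steering-angle regression head. The network shapes and the choice of three CAN signals are assumptions.

import torch
import torch.nn as nn

class FusionSteering(nn.Module):
    def __init__(self, can_dim=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (N, 16) image features
        )
        self.head = nn.Sequential(
            nn.Linear(16 + can_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),                          # predicted steering angle
        )

    def forward(self, frame, can):
        # Late fusion: concatenate visual features with CAN bus signals.
        return self.head(torch.cat([self.cnn(frame), can], dim=1))

model = FusionSteering()
frame = torch.rand(4, 3, 66, 200)                      # batch of video frames
can = torch.rand(4, 3)                                 # speed, prior angle, accel
print(model(frame, can).shape)                         # (4, 1)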
1309.6455 | Gwen Spencer PhD | Gwen Spencer and Richard Howarth | Maximizing the Spread of Stable Influence: Leveraging Norm-driven
Moral-Motivation for Green Behavior Change in Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an effort to understand why individuals choose to participate in
personally-expensive pro-environmental behaviors, environmental and behavioral
economists have examined a moral-motivation model in which the decision to
adopt a pro-environmental behavior depends on the society-wide market share of
that behavior. An increasing body of practical research on adoption of
pro-environmental behavior emphasizes the importance of encouragement from
local social contacts and messaging about locally-embraced norms: we respond by
extending the moral-motivation model to a social networks setting. We obtain a
new decision rule: an individual adopts a pro-environmental behavior if he or
she observes a certain threshold of adoption within their local social
neighborhood. This gives rise to a concurrent update process which describes
adoption of a pro-environmental behavior spreading through a network. The
original moral-motivation model corresponds to the special case of our network
version in a complete graph.
By improving convergence results, we formulate modest-size Integer Programs
that accurately (but not efficiently) find minimum-size sets of nodes that
convert the entire network, or alternately that maximize long-term adoption in
the network given a limited number of nodes which may be temporarily converted.
Issues of stability in determining long-term adoption are key. We give hardness
of approximation results for these optimization problems. We demonstrate that
there exist classes of networks which qualitatively have severely different
behavior than the non-networked version, and provide preliminary computational
results in modestly-sized, highly-clustered small-world networks related to
the famous small-world networks of Watts and Strogatz.
| [
{
"created": "Wed, 25 Sep 2013 10:30:57 GMT",
"version": "v1"
}
] | 2013-09-26 | [
[
"Spencer",
"Gwen",
""
],
[
"Howarth",
"Richard",
""
]
] ] | In an effort to understand why individuals choose to participate in personally-expensive pro-environmental behaviors, environmental and behavioral economists have examined a moral-motivation model in which the decision to adopt a pro-environmental behavior depends on the society-wide market share of that behavior. An increasing body of practical research on adoption of pro-environmental behavior emphasizes the importance of encouragement from local social contacts and messaging about locally-embraced norms: we respond by extending the moral-motivation model to a social networks setting. We obtain a new decision rule: an individual adopts a pro-environmental behavior if he or she observes a certain threshold of adoption within their local social neighborhood. This gives rise to a concurrent update process which describes adoption of a pro-environmental behavior spreading through a network. The original moral-motivation model corresponds to the special case of our network version in a complete graph. By improving convergence results, we formulate modest-size Integer Programs that accurately (but not efficiently) find minimum-size sets of nodes that convert the entire network, or alternately that maximize long-term adoption in the network given a limited number of nodes which may be temporarily converted. Issues of stability in determining long-term adoption are key. We give hardness of approximation results for these optimization problems. We demonstrate that there exist classes of networks which qualitatively have severely different behavior than the non-networked version, and provide preliminary computational results in modestly-sized, highly-clustered small-world networks related to the famous small-world networks of Watts and Strogatz. |
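A minimal sketch of the networked decision rule described above: a node adopts the behavior once the adopting fraction of its neighborhood reaches a threshold, with concurrent updates. The graph model, threshold value, and seed set are assumptions.

import networkx as nx

def spread(G, seeds, threshold=0.3, max_rounds=50):
    adopted = set(seeds)
    for _ in range(max_rounds):
        # Concurrent update: every node evaluates the *current* state at once.
        new = {
            v for v in G
            if v not in adopted and G.degree(v) > 0
            and sum(u in adopted for u in G.neighbors(v)) / G.degree(v) >= threshold
        }
        if not new:
            break
        adopted |= new
    return adopted

G = nx.watts_strogatz_graph(200, k=6, p=0.05, seed=1)   # small-world network
final = spread(G, seeds=range(10))
print(f"{len(final)} of {G.number_of_nodes()} nodes adopted")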
2305.02309 | Erik Nijkamp Dr. | Erik Nijkamp, Hiroaki Hayashi, Caiming Xiong, Silvio Savarese, Yingbo
Zhou | CodeGen2: Lessons for Training LLMs on Programming and Natural Languages | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated remarkable abilities in
representation learning for program synthesis and understanding tasks. The
quality of the learned representations appears to be dictated by the neural
scaling laws as a function of the number of model parameters and observations,
while imposing upper bounds on the model performance by the amount of available
data and compute, which is costly.
In this study, we attempt to render the training of LLMs for program
synthesis more efficient by unifying four key components: (1) model
architectures, (2) learning methods, (3) infill sampling, and (4) data
distributions. Specifically, for the model architecture, we attempt to unify
encoder and decoder-based models into a single prefix-LM. For learning methods,
(i) causal language modeling, (ii) span corruption, and (iii) infilling are unified
into a simple learning algorithm. For infill sampling, we explore the claim of
a "free lunch" hypothesis. For data distributions, the effect of a mixture
distribution and multi-epoch training of programming and natural languages on
model performance is explored.
We conduct a comprehensive series of empirical experiments on 1B LLMs, for
which failures and successes of this exploration are distilled into five
lessons. We will provide a final recipe for training and release CodeGen2
models in sizes 1B, 3.7B, 7B, and 16B parameters, along with the training
framework as open-source: https://github.com/salesforce/CodeGen.
| [
{
"created": "Wed, 3 May 2023 17:55:25 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jul 2023 21:11:23 GMT",
"version": "v2"
}
] | 2023-07-13 | [
[
"Nijkamp",
"Erik",
""
],
[
"Hayashi",
"Hiroaki",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Zhou",
"Yingbo",
""
]
] ] | Large language models (LLMs) have demonstrated remarkable abilities in representation learning for program synthesis and understanding tasks. The quality of the learned representations appears to be dictated by the neural scaling laws as a function of the number of model parameters and observations, while imposing upper bounds on the model performance by the amount of available data and compute, which is costly. In this study, we attempt to render the training of LLMs for program synthesis more efficient by unifying four key components: (1) model architectures, (2) learning methods, (3) infill sampling, and (4) data distributions. Specifically, for the model architecture, we attempt to unify encoder and decoder-based models into a single prefix-LM. For learning methods, (i) causal language modeling, (ii) span corruption, and (iii) infilling are unified into a simple learning algorithm. For infill sampling, we explore the claim of a "free lunch" hypothesis. For data distributions, the effect of a mixture distribution and multi-epoch training of programming and natural languages on model performance is explored. We conduct a comprehensive series of empirical experiments on 1B LLMs, for which failures and successes of this exploration are distilled into five lessons. We will provide a final recipe for training and release CodeGen2 models in sizes 1B, 3.7B, 7B, and 16B parameters, along with the training framework as open-source: https://github.com/salesforce/CodeGen. |
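A minimal sketch of the infilling formulation referenced above: a span is cut out of a token sequence and moved behind sentinel tokens so that a left-to-right model can be trained to fill in the middle. The sentinel strings and span-length bounds are assumptions, not the paper's recipe.

import random

def make_infill_example(tokens, min_span=2, max_span=8, seed=None):
    rng = random.Random(seed)
    span = rng.randint(min_span, min(max_span, max(min_span, len(tokens) - 1)))
    start = rng.randint(0, len(tokens) - span)
    prefix, middle, suffix = tokens[:start], tokens[start:start + span], tokens[start + span:]
    # Model input: prefix <mask> suffix <sep> middle <eom>
    return prefix + ["<mask>"] + suffix + ["<sep>"] + middle + ["<eom>"]

code = "def add ( a , b ) : return a + b".split()
print(" ".join(make_infill_example(code, seed=0)))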
2007.03338 | Mehdi Ghatee Dr. | Marzieh Heidari, Mehdi Ghatee, Ahmad Nickabadi, Arash Pourhasan Nezhad | Diverse and Styled Image Captioning Using SVD-Based Mixture of Recurrent
Experts | 13 pages, 4 figures and 5 tables, extracted from an MSc thesis in the
Amirkabir University of Technology, Tehran, Iran | Concurrency and Computation: Practice and Experience, 2022 | 10.1002/cpe.6866 | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With great advances in vision and natural language processing, the generation
of image captions has become a need. In a recent paper, Mathews, Xie and He [1]
extended a model to generate styled captions by separating semantics and
style. In continuation of this work, here a new captioning model is developed,
including an image encoder to extract the features, a mixture of recurrent
networks to embed the set of extracted features into a set of words, and a
sentence generator that combines the obtained words into a stylized sentence. The
resulting system, entitled Mixture of Recurrent Experts (MoRE), uses a
new training algorithm that applies singular value decomposition (SVD) to the
weight matrices of Recurrent Neural Networks (RNNs) to increase the
diversity of captions. Each decomposition step depends on a distinctive factor
based on the number of RNNs in MoRE. Since the sentence generator used yields a
stylized language corpus without paired images, our captioning model can do the
same. Moreover, the styled and diverse captions are extracted without training
on a densely labeled or styled dataset. To validate this captioning model, we
use Microsoft COCO, a standard factual image caption dataset. We show
that the proposed captioning model can generate diverse and stylized image
captions without the necessity of extra labeling. The results also show better
descriptions in terms of content accuracy.
| [
{
"created": "Tue, 7 Jul 2020 11:00:27 GMT",
"version": "v1"
}
] | 2022-02-03 | [
[
"Heidari",
"Marzieh",
""
],
[
"Ghatee",
"Mehdi",
""
],
[
"Nickabadi",
"Ahmad",
""
],
[
"Nezhad",
"Arash Pourhasan",
""
]
] ] | With great advances in vision and natural language processing, the generation of image captions has become a need. In a recent paper, Mathews, Xie and He [1] extended a model to generate styled captions by separating semantics and style. In continuation of this work, here a new captioning model is developed, including an image encoder to extract the features, a mixture of recurrent networks to embed the set of extracted features into a set of words, and a sentence generator that combines the obtained words into a stylized sentence. The resulting system, entitled Mixture of Recurrent Experts (MoRE), uses a new training algorithm that applies singular value decomposition (SVD) to the weight matrices of Recurrent Neural Networks (RNNs) to increase the diversity of captions. Each decomposition step depends on a distinctive factor based on the number of RNNs in MoRE. Since the sentence generator used yields a stylized language corpus without paired images, our captioning model can do the same. Moreover, the styled and diverse captions are extracted without training on a densely labeled or styled dataset. To validate this captioning model, we use Microsoft COCO, a standard factual image caption dataset. We show that the proposed captioning model can generate diverse and stylized image captions without the necessity of extra labeling. The results also show better descriptions in terms of content accuracy. |
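A minimal sketch of how SVD applied to recurrent weight matrices can differentiate experts, in the spirit of the model above: one shared weight matrix is reconstructed at a different rank for each expert. The matrix size and the choice of ranks as the distinguishing factor are assumptions.

import numpy as np

def truncate_rank(W, rank):
    # Rank-truncated SVD reconstruction of a weight matrix.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))                 # shared recurrent weights
ranks = (8, 16, 32, 64)
experts = [truncate_rank(W, r) for r in ranks]
for r, We in zip(ranks, experts):
    print(f"rank {r:2d}: reconstruction error {np.linalg.norm(W - We):.2f}")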
1905.03919 | Filippo Menczer | Kazutoshi Sasahara, Wen Chen, Hao Peng, Giovanni Luca Ciampaglia,
Alessandro Flammini, Filippo Menczer | Social Influence and Unfollowing Accelerate the Emergence of Echo
Chambers | 28 pages, 11 figures. Forthcoming in Journal of Computational Social
Science | J Comput Soc Sc (2020) | 10.1007/s42001-020-00084-7 | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While social media make it easy to connect with and access information from
anyone, they also facilitate basic influence and unfriending mechanisms that
may lead to segregated and polarized clusters known as "echo chambers." Here we
study the conditions in which such echo chambers emerge by introducing a simple
model of information sharing in online social networks with the two ingredients
of influence and unfriending. Users can change both their opinions and social
connections based on the information to which they are exposed through sharing.
The model dynamics show that even with minimal amounts of influence and
unfriending, the social network rapidly devolves into segregated, homogeneous
communities. These predictions are consistent with empirical data from Twitter.
Although our findings suggest that echo chambers are somewhat inevitable given
the mechanisms at play in online social media, they also provide insights into
possible mitigation strategies.
| [
{
"created": "Fri, 10 May 2019 03:08:23 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2019 16:40:50 GMT",
"version": "v2"
},
{
"created": "Tue, 25 Aug 2020 02:27:40 GMT",
"version": "v3"
}
] | 2020-09-15 | [
[
"Sasahara",
"Kazutoshi",
""
],
[
"Chen",
"Wen",
""
],
[
"Peng",
"Hao",
""
],
[
"Ciampaglia",
"Giovanni Luca",
""
],
[
"Flammini",
"Alessandro",
""
],
[
"Menczer",
"Filippo",
""
]
] | While social media make it easy to connect with and access information from anyone, they also facilitate basic influence and unfriending mechanisms that may lead to segregated and polarized clusters known as "echo chambers." Here we study the conditions in which such echo chambers emerge by introducing a simple model of information sharing in online social networks with the two ingredients of influence and unfriending. Users can change both their opinions and social connections based on the information to which they are exposed through sharing. The model dynamics show that even with minimal amounts of influence and unfriending, the social network rapidly devolves into segregated, homogeneous communities. These predictions are consistent with empirical data from Twitter. Although our findings suggest that echo chambers are somewhat inevitable given the mechanisms at play in online social media, they also provide insights into possible mitigation strategies. |
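A minimal sketch of the two-ingredient dynamics described above: an agent's opinion is pulled toward a sufficiently concordant neighbor (influence), while a discordant neighbor may be unfollowed and replaced by a random one (rewiring). All parameter values and the fully random rewiring are assumptions.

import random

def step(G_adj, opinions, mu=0.3, eps=0.5, p_unfollow=0.1):
    i = random.randrange(len(opinions))
    if not G_adj[i]:
        return
    j = random.choice(sorted(G_adj[i]))
    if abs(opinions[i] - opinions[j]) < eps:
        opinions[i] += mu * (opinions[j] - opinions[i])     # social influence
    elif random.random() < p_unfollow:                      # unfollow and rewire
        G_adj[i].discard(j)
        G_adj[i].add(random.randrange(len(opinions)))

random.seed(0)
n = 100
opinions = [random.uniform(-1, 1) for _ in range(n)]
G_adj = [set(random.sample(range(n), 5)) for _ in range(n)]
for _ in range(20000):
    step(G_adj, opinions)
print("distinct opinion clusters:", len({round(o, 1) for o in opinions}))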
1702.07478 | Igor Tarasyuk | Igor V. Tarasyuk, Hermenegilda Maci\`a, Valent\'in Valero | Stochastic equivalence for performance analysis of concurrent systems in
dtsiPBC | Prepared for submission to Discrete Mathematics and Theoretical
Computer Science | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an extension with immediate multiactions of discrete time
stochastic Petri Box Calculus (dtsPBC), presented by I.V. Tarasyuk. The
resulting algebra dtsiPBC is a discrete time analogue of stochastic Petri Box
Calculus (sPBC) with immediate multiactions, designed by H. Maci\`a, V. Valero
et al. within a continuous time domain. The step operational semantics is
constructed via labeled probabilistic transition systems. The denotational
semantics is based on labeled discrete time stochastic Petri nets with
immediate transitions. To evaluate performance, the corresponding semi-Markov
chains are analyzed. We define step stochastic bisimulation equivalence of
expressions that is applied to reduce their transition systems and underlying
semi-Markov chains while preserving the functionality and performance
characteristics. We explain how this equivalence can be used to simplify
performance analysis of the algebraic processes. In a case study, a method of
modeling, performance evaluation and behaviour reduction for concurrent systems
is outlined and applied to the shared memory system.
| [
{
"created": "Fri, 24 Feb 2017 07:07:24 GMT",
"version": "v1"
}
] | 2017-02-27 | [
[
"Tarasyuk",
"Igor V.",
""
],
[
"Macià",
"Hermenegilda",
""
],
[
"Valero",
"Valentín",
""
]
] | We propose an extension with immediate multiactions of discrete time stochastic Petri Box Calculus (dtsPBC), presented by I.V. Tarasyuk. The resulting algebra dtsiPBC is a discrete time analogue of stochastic Petri Box Calculus (sPBC) with immediate multiactions, designed by H. Maci\`a, V. Valero et al. within a continuous time domain. The step operational semantics is constructed via labeled probabilistic transition systems. The denotational semantics is based on labeled discrete time stochastic Petri nets with immediate transitions. To evaluate performance, the corresponding semi-Markov chains are analyzed. We define step stochastic bisimulation equivalence of expressions that is applied to reduce their transition systems and underlying semi-Markov chains while preserving the functionality and performance characteristics. We explain how this equivalence can be used to simplify performance analysis of the algebraic processes. In a case study, a method of modeling, performance evaluation and behaviour reduction for concurrent systems is outlined and applied to the shared memory system. |
1810.01966 | Ekram Hossain | Mohammad Salehi, Hina Tabassum, and Ekram Hossain | Accuracy of Distance-Based Ranking of Users in the Analysis of NOMA
Systems | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We characterize the accuracy of analyzing the performance of a NOMA system
where users are ranked according to their distances instead of instantaneous
channel gains, i.e., the product of distance-based path-loss and fading channel
gains. Distance-based ranking is analytically tractable and can lead to
important insights. However, it may not be appropriate in a multipath fading
environment where a near user suffers from severe fading while a far user
experiences weak fading. Since the ranking of users in a NOMA system has a
direct impact on coverage probability analysis, impact of the traditional
distance-based ranking, as opposed to instantaneous signal power-based ranking,
needs to be understood. This will enable us to identify scenarios where
distance-based ranking, which is easier to implement compared to instantaneous
signal power-based ranking, is acceptable for system performance analysis. To
this end, in this paper, we derive the probability of the event when
distance-based ranking yields the same results as instantaneous signal
power-based ranking, which is referred to as the accuracy probability. We
characterize the probability of accuracy considering Nakagami-m fading channels
and three different spatial distribution models of user locations in NOMA. We
illustrate the impact of accuracy probability on uplink and downlink coverage
probability.
| [
{
"created": "Wed, 3 Oct 2018 20:50:32 GMT",
"version": "v1"
}
] | 2018-10-05 | [
[
"Salehi",
"Mohammad",
""
],
[
"Tabassum",
"Hina",
""
],
[
"Hossain",
"Ekram",
""
]
] ] | We characterize the accuracy of analyzing the performance of a NOMA system where users are ranked according to their distances instead of instantaneous channel gains, i.e., the product of distance-based path-loss and fading channel gains. Distance-based ranking is analytically tractable and can lead to important insights. However, it may not be appropriate in a multipath fading environment where a near user suffers from severe fading while a far user experiences weak fading. Since the ranking of users in a NOMA system has a direct impact on coverage probability analysis, the impact of the traditional distance-based ranking, as opposed to instantaneous signal power-based ranking, needs to be understood. This will enable us to identify scenarios where distance-based ranking, which is easier to implement compared to instantaneous signal power-based ranking, is acceptable for system performance analysis. To this end, in this paper, we derive the probability of the event when distance-based ranking yields the same results as instantaneous signal power-based ranking, which is referred to as the accuracy probability. We characterize the probability of accuracy considering Nakagami-m fading channels and three different spatial distribution models of user locations in NOMA. We illustrate the impact of accuracy probability on uplink and downlink coverage probability. |
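A minimal Monte Carlo sketch of the accuracy probability studied above: the chance that ordering two users by distance agrees with ordering them by instantaneous channel gain under Nakagami-m fading. The path-loss exponent, m, and the uniform distance model are assumptions, not one of the paper's three spatial models.

import numpy as np

rng = np.random.default_rng(0)
n, alpha, m = 100_000, 4.0, 2.0

d = np.sort(rng.uniform(10, 100, size=(n, 2)), axis=1)   # d1 < d2 per trial
# Nakagami-m fading power is Gamma(m, 1/m) distributed (unit mean).
g = rng.gamma(shape=m, scale=1.0 / m, size=(n, 2))
power = d ** (-alpha) * g                                # instantaneous channel gain

accuracy = np.mean(power[:, 0] > power[:, 1])            # near user is also stronger
print(f"accuracy probability ~ {accuracy:.3f}")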
1203.0587 | Alex Brik | Alex Brik, Jeffrey B. Remmel | Expressing Preferences using Preference Set Constraint Atoms | 9 pages | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an extension of Answer Set Programming called
Preference Set Constraint Programming which is a convenient and general
formalism to reason with preferences. PSC programming extends Set Constraint
Programming introduced by Marek and Remmel (Marek and Remmel 2004) by
introducing two types of preference set constraint atoms, measure preference
set constraint atoms and pre-ordered preference set constraint atoms, which are
extensions of set constraint atoms. We show that the question of whether a PSC
program has a preferred stable model is CoNP-complete. We give examples of the
uses of the preference set constraint atoms and show that Answer Set
Optimization (Brewka, Niemel\"a, and Truszczynski 2003) and General Preference
(Son and Pontelli 2006) can be expressed using preference set constraint atoms.
| [
{
"created": "Fri, 2 Mar 2012 23:25:07 GMT",
"version": "v1"
}
] | 2012-03-06 | [
[
"Brik",
"Alex",
""
],
[
"Remmel",
"Jeffrey B.",
""
]
] | This paper introduces an extension of Answer Set Programming called Preference Set Constraint Programming which is a convenient and general formalism to reason with preferences. PSC programming extends Set Constraint Programming introduced by Marek and Remmel (Marek and Remmel 2004) by introducing two types of preference set constraint atoms, measure preference set constraint atoms and pre-ordered preference set constraint atoms, which are extensions of set constraint atoms. We show that the question of whether a PSC program has a preferred stable model is CoNP-complete. We give examples of the uses of the preference set constraint atoms and show that Answer Set Optimization (Brewka, Niemel\"a, and Truszczynski 2003) and General Preference (Son and Pontelli 2006) can be expressed using preference set constraint atoms. |
1711.09008 | Yuming Jiang | Atef Abdelkefi and Yuming Jiang and Sachin Sharma | SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause
Analysis | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel approach, called SENATUS, for joint traffic
anomaly detection and root-cause analysis. Inspired by the concept of a
senate, the key idea of the proposed approach is divided into three stages:
election, voting and decision. At the election stage, a small number of
senator flows are chosen to represent approximately the total (usually huge)
set of traffic flows. In the voting stage, anomaly detection is applied to the
senator flows and the detected anomalies are correlated to identify the most
likely anomalous time bins. Finally, in the decision stage, a machine learning
technique is applied to the senator flows of each anomalous time bin to find
the root cause of the anomalies. We evaluate SENATUS using traffic traces
collected from the Pan European network, GEANT, and compare against another
approach which detects anomalies using lossless compression of traffic
histograms. We show the effectiveness of SENATUS in diagnosing anomaly types:
network scans and DoS/DDoS attacks.
| [
{
"created": "Fri, 24 Nov 2017 15:14:50 GMT",
"version": "v1"
}
] | 2017-11-27 | [
[
"Abdelkefi",
"Atef",
""
],
[
"Jiang",
"Yuming",
""
],
[
"Sharma",
"Sachin",
""
]
] ] | In this paper, we propose a novel approach, called SENATUS, for joint traffic anomaly detection and root-cause analysis. Inspired by the concept of a senate, the key idea of the proposed approach is divided into three stages: election, voting and decision. At the election stage, a small number of senator flows are chosen to represent approximately the total (usually huge) set of traffic flows. In the voting stage, anomaly detection is applied to the senator flows and the detected anomalies are correlated to identify the most likely anomalous time bins. Finally, in the decision stage, a machine learning technique is applied to the senator flows of each anomalous time bin to find the root cause of the anomalies. We evaluate SENATUS using traffic traces collected from the Pan European network, GEANT, and compare against another approach which detects anomalies using lossless compression of traffic histograms. We show the effectiveness of SENATUS in diagnosing anomaly types: network scans and DoS/DDoS attacks. |
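A minimal sketch of the election/voting stages described above: a handful of senator series stand in for all flows, per-series z-score detection casts votes per time bin, and bins with enough votes are flagged. The synthetic traffic and all thresholds are assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, n_senators = 200, 12
traffic = rng.poisson(100, size=(n_senators, T)).astype(float)
traffic[:, 150] += rng.poisson(60, size=n_senators)       # injected anomaly at bin 150

# Voting stage: each senator flow votes for bins where its z-score is extreme.
z = (traffic - traffic.mean(axis=1, keepdims=True)) / traffic.std(axis=1, keepdims=True)
votes = (np.abs(z) > 3).sum(axis=0)
anomalous_bins = np.flatnonzero(votes >= n_senators // 2)
print("anomalous time bins:", anomalous_bins)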
1602.03599 | EPTCS | Juliana Franco (Imperial College London), Sophia Drossopoulou
(Imperial College London) | Behavioural types for non-uniform memory accesses | In Proceedings PLACES 2015, arXiv:1602.03254 | EPTCS 203, 2016, pp. 109-120 | 10.4204/EPTCS.203.9 | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concurrent programs executing on NUMA architectures consist of concurrent
entities (e.g. threads, actors) and data placed on different nodes. Execution
of these concurrent entities often reads or updates states from remote nodes.
The performance of such systems depends on the extent to which the concurrent
entities can be executing in parallel, and on the amount of the remote reads
and writes.
We consider an actor-based object oriented language, and propose a type
system which expresses the topology of the program (the placement of the actors
and data on the nodes), and an effect system which characterises remote reads
and writes (in terms of which node reads/writes from which other nodes). We use
a variant of ownership types for the topology, and a combination of behavioural
and ownership types for the effect system.
| [
{
"created": "Thu, 11 Feb 2016 01:21:09 GMT",
"version": "v1"
}
] | 2016-02-12 | [
[
"Franco",
"Juliana",
"",
"Imperial College London"
],
[
"Drossopoulou",
"Sophia",
"",
"Imperial College London"
]
] | Concurrent programs executing on NUMA architectures consist of concurrent entities (e.g. threads, actors) and data placed on different nodes. Execution of these concurrent entities often reads or updates states from remote nodes. The performance of such systems depends on the extent to which the concurrent entities can be executing in parallel, and on the amount of the remote reads and writes. We consider an actor-based object oriented language, and propose a type system which expresses the topology of the program (the placement of the actors and data on the nodes), and an effect system which characterises remote reads and writes (in terms of which node reads/writes from which other nodes). We use a variant of ownership types for the topology, and a combination of behavioural and ownership types for the effect system. |
2105.03655 | Huy Ha | Huy Ha, Shuran Song | FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for
Cloth Unfolding | 11 pages, 6 figures. Code, data, and simulation environment publicly
available at https://flingbot.cs.columbia.edu | Conference on Robot Learning (CoRL 2021) | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-velocity dynamic actions (e.g., fling or throw) play a crucial role in
our everyday interaction with deformable objects by improving our efficiency
and effectively expanding our physical reach range. Yet, most prior works have
tackled cloth manipulation using exclusively single-arm quasi-static actions,
which requires a large number of interactions for challenging initial cloth
configurations and strictly limits the maximum cloth size by the robot's reach
range. In this work, we demonstrate the effectiveness of dynamic flinging
actions for cloth unfolding with our proposed self-supervised learning
framework, FlingBot. Our approach learns how to unfold a piece of fabric from
arbitrary initial configurations using a pick, stretch, and fling primitive for
a dual-arm setup from visual observations. The final system achieves over 80%
coverage within 3 actions on novel cloths, can unfold cloths larger than the
system's reach range, and generalizes to T-shirts despite being trained on only
rectangular cloths. We also finetuned FlingBot on a real-world dual-arm robot
platform, where it increased the cloth coverage over 4 times more than the
quasi-static baseline did. The simplicity of FlingBot combined with its
superior performance over quasi-static baselines demonstrates the effectiveness
of dynamic actions for deformable object manipulation.
| [
{
"created": "Sat, 8 May 2021 09:48:15 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Jun 2021 05:47:27 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Oct 2021 19:03:27 GMT",
"version": "v3"
}
] | 2021-10-20 | [
[
"Ha",
"Huy",
""
],
[
"Song",
"Shuran",
""
]
] | High-velocity dynamic actions (e.g., fling or throw) play a crucial role in our everyday interaction with deformable objects by improving our efficiency and effectively expanding our physical reach range. Yet, most prior works have tackled cloth manipulation using exclusively single-arm quasi-static actions, which requires a large number of interactions for challenging initial cloth configurations and strictly limits the maximum cloth size by the robot's reach range. In this work, we demonstrate the effectiveness of dynamic flinging actions for cloth unfolding with our proposed self-supervised learning framework, FlingBot. Our approach learns how to unfold a piece of fabric from arbitrary initial configurations using a pick, stretch, and fling primitive for a dual-arm setup from visual observations. The final system achieves over 80% coverage within 3 actions on novel cloths, can unfold cloths larger than the system's reach range, and generalizes to T-shirts despite being trained on only rectangular cloths. We also finetuned FlingBot on a real-world dual-arm robot platform, where it increased the cloth coverage over 4 times more than the quasi-static baseline did. The simplicity of FlingBot combined with its superior performance over quasi-static baselines demonstrates the effectiveness of dynamic actions for deformable object manipulation. |
2405.16091 | Myong Chol Jung | Myong Chol Jung, He Zhao, Joanna Dipnall, Belinda Gabbe, Lan Du | Enhancing Near OOD Detection in Prompt Learning: Maximum Gains, Minimal
Costs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Prompt learning has been shown to be an efficient and effective fine-tuning method
for vision-language models like CLIP. While numerous studies have focused on
the generalisation of these models in few-shot classification, their capability
in near out-of-distribution (OOD) detection has been overlooked. A few recent
works have highlighted the promising performance of prompt learning in far OOD
detection. However, the more challenging task of few-shot near OOD detection
has not yet been addressed. In this study, we investigate the near OOD
detection capabilities of prompt learning models and observe that commonly used
OOD scores have limited performance in near OOD detection. To enhance the
performance, we propose a fast and simple post-hoc method that complements
existing logit-based scores, improving near OOD detection AUROC by up to 11.67%
with minimal computational cost. Our method can be easily applied to any prompt
learning model without changing the architecture or re-training the models.
Comprehensive empirical evaluations across 13 datasets and 8 models demonstrate
the effectiveness and adaptability of our method.
| [
{
"created": "Sat, 25 May 2024 06:46:16 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Jung",
"Myong Chol",
""
],
[
"Zhao",
"He",
""
],
[
"Dipnall",
"Joanna",
""
],
[
"Gabbe",
"Belinda",
""
],
[
"Du",
"Lan",
""
]
] ] | Prompt learning has been shown to be an efficient and effective fine-tuning method for vision-language models like CLIP. While numerous studies have focused on the generalisation of these models in few-shot classification, their capability in near out-of-distribution (OOD) detection has been overlooked. A few recent works have highlighted the promising performance of prompt learning in far OOD detection. However, the more challenging task of few-shot near OOD detection has not yet been addressed. In this study, we investigate the near OOD detection capabilities of prompt learning models and observe that commonly used OOD scores have limited performance in near OOD detection. To enhance the performance, we propose a fast and simple post-hoc method that complements existing logit-based scores, improving near OOD detection AUROC by up to 11.67% with minimal computational cost. Our method can be easily applied to any prompt learning model without changing the architecture or re-training the models. Comprehensive empirical evaluations across 13 datasets and 8 models demonstrate the effectiveness and adaptability of our method. |
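A minimal sketch of a post-hoc, logit-based OOD score of the kind the abstract builds on (the standard energy score; the paper's own complementary method is not reproduced here). The synthetic logits stand in for a prompt-learned classifier's outputs.

import numpy as np

def energy_score(logits, T=1.0):
    # Log-sum-exp of logits; higher score => more in-distribution here.
    return T * np.log(np.exp(logits / T).sum(axis=-1))

rng = np.random.default_rng(0)
id_logits = rng.normal(0, 1, size=(1000, 10)) + np.eye(10)[rng.integers(0, 10, 1000)] * 6
ood_logits = rng.normal(0, 1, size=(1000, 10))           # no confident class

s_id, s_ood = energy_score(id_logits), energy_score(ood_logits)
threshold = np.quantile(s_id, 0.05)                      # 95% TPR on ID data
print("fraction of OOD samples detected:", np.mean(s_ood < threshold))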
1401.3682 | Minglai Cai | Holger Boche, Minglai Cai, and Christian Deppe | Broadcast Classical-Quantum Capacity Region of Two-Phase Bidirectional
Relaying Channel | null | Quantum Information Processing: Volume 14, Issue 10 (2015), Page
3879-3897 | 10.1007/s11128-015-1065-2 | null | cs.IT math.IT math.QA quant-ph | http://creativecommons.org/licenses/by/4.0/ | We study a three-node quantum network which enables bidirectional
communication between two nodes with a half-duplex relay node. A
decode-and-forward protocol is used to perform the communication in two phases.
In the first phase, the messages of two nodes are transmitted to the relay
node. In the second phase, the relay node broadcasts a re-encoded composition
to the two nodes. We determine the capacity region of the broadcast phase.
| [
{
"created": "Wed, 15 Jan 2014 17:29:23 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jan 2014 10:27:07 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Jul 2015 13:02:18 GMT",
"version": "v3"
},
{
"created": "Mon, 28 Sep 2015 15:23:00 GMT",
"version": "v4"
}
] | 2015-09-29 | [
[
"Boche",
"Holger",
""
],
[
"Cai",
"Minglai",
""
],
[
"Deppe",
"Christian",
""
]
] | We study a three-node quantum network which enables bidirectional communication between two nodes with a half-duplex relay node. A decode-and-forward protocol is used to perform the communication in two phases. In the first phase, the messages of two nodes are transmitted to the relay node. In the second phase, the relay node broadcasts a re-encoded composition to the two nodes. We determine the capacity region of the broadcast phase. |
2306.01996 | Han Wang | Han Wang, Ming Tang, Ke Xu, Quancheng Wang | BandwidthBreach: Unleashing Covert and Side Channels through Cache
Bandwidth Exploitation | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the modern CPU architecture, enhancements such as the Line Fill Buffer
(LFB) and Super Queue (SQ), which are designed to track pending cache requests,
have significantly boosted performance. To exploit these structures, we
deliberately engineered blockages in the L2 to L1d route by controlling LFB
conflict and triggering prefetch prediction failures, while consciously
dismissing other plausible influencing factors. This approach was subsequently
extended to the L3 to L2 and L2 to L1i pathways, resulting in three potent
covert channels, termed L2CC, L3CC, and LiCC, with capacities of 10.02 Mbps,
10.37 Mbps, and 1.83 Mbps, respectively. Strikingly, the capacities of L2CC and
L3CC surpass those of earlier non-shared-memory-based covert channels, reaching
a level comparable to their shared memory-dependent equivalents. Leveraging
this congestion further facilitated the extraction of key bits from RSA and
EdDSA implementations. Coupled with SpectreV1 and V2, our covert channels
effectively evade the majority of traditional Spectre defenses. Their
confluence with Branch Prediction (BP) Timing assaults additionally undercuts
balanced branch protections, hence broadening their capability to infiltrate a
wide range of cryptography libraries.
| [
{
"created": "Sat, 3 Jun 2023 04:09:07 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Wang",
"Han",
""
],
[
"Tang",
"Ming",
""
],
[
"Xu",
"Ke",
""
],
[
"Wang",
"Quancheng",
""
]
] ] | In the modern CPU architecture, enhancements such as the Line Fill Buffer (LFB) and Super Queue (SQ), which are designed to track pending cache requests, have significantly boosted performance. To exploit these structures, we deliberately engineered blockages in the L2 to L1d route by controlling LFB conflict and triggering prefetch prediction failures, while consciously dismissing other plausible influencing factors. This approach was subsequently extended to the L3 to L2 and L2 to L1i pathways, resulting in three potent covert channels, termed L2CC, L3CC, and LiCC, with capacities of 10.02 Mbps, 10.37 Mbps, and 1.83 Mbps, respectively. Strikingly, the capacities of L2CC and L3CC surpass those of earlier non-shared-memory-based covert channels, reaching a level comparable to their shared memory-dependent equivalents. Leveraging this congestion further facilitated the extraction of key bits from RSA and EdDSA implementations. Coupled with SpectreV1 and V2, our covert channels effectively evade the majority of traditional Spectre defenses. Their confluence with Branch Prediction (BP) Timing assaults additionally undercuts balanced branch protections, hence broadening their capability to infiltrate a wide range of cryptography libraries. |
2108.06832 | Sanjiang Li | Xueqing Yan, Yongming Li, Sanjiang Li | A Fast Algorithm for Computing the Deficiency Number of a Mahjong Hand | 32 pages, 3 figures | null | null | null | cs.AI cs.DM cs.MA | http://creativecommons.org/licenses/by/4.0/ | The tile-based multiplayer game Mahjong is widely played in Asia and has also
become increasingly popular worldwide. Face-to-face or online, each player
begins with a hand of 13 tiles and players draw and discard tiles in turn until
they complete a winning hand. An important notion in Mahjong is the deficiency
number (a.k.a. shanten number in Japanese Mahjong) of a hand, which estimates
how many tile changes are necessary to complete the hand into a winning hand.
The deficiency number plays an essential role in major decision-making tasks
such as selecting a tile to discard. This paper proposes a fast algorithm for
computing the deficiency number of a Mahjong hand. Compared with the baseline
algorithm, the new algorithm is usually 100 times faster and, more importantly,
respects the agent's knowledge about available tiles. The algorithm can be used
as a basic procedure in all Mahjong variants by both rule-based and machine
learning-based Mahjong AI.
| [
{
"created": "Sun, 15 Aug 2021 22:44:14 GMT",
"version": "v1"
}
] | 2021-08-17 | [
[
"Yan",
"Xueqing",
""
],
[
"Li",
"Yongming",
""
],
[
"Li",
"Sanjiang",
""
]
] | The tile-based multiplayer game Mahjong is widely played in Asia and has also become increasingly popular worldwide. Face-to-face or online, each player begins with a hand of 13 tiles and players draw and discard tiles in turn until they complete a winning hand. An important notion in Mahjong is the deficiency number (a.k.a. shanten number in Japanese Mahjong) of a hand, which estimates how many tile changes are necessary to complete the hand into a winning hand. The deficiency number plays an essential role in major decision-making tasks such as selecting a tile to discard. This paper proposes a fast algorithm for computing the deficiency number of a Mahjong hand. Compared with the baseline algorithm, the new algorithm is usually 100 times faster and, more importantly, respects the agent's knowledge about available tiles. The algorithm can be used as a basic procedure in all Mahjong variants by both rule-based and machine learning-based Mahjong AI. |
1601.04952 | Andrea Baronchelli | Vito Trianni, Daniele De Simone, Andreagiovanni Reina, Andrea
Baronchelli | Emergence of Consensus in a Multi-Robot Network: from Abstract Models to
Empirical Validation | A supporting video is available here:
https://mail.google.com/mail/u/0/#search/vito.trianni%40istc.cnr.it/15244cd6f27f0e99?projector=1 | Robotics and Automation Letters, IEEE , vol.PP, no.99, pp.1 (2016) | 10.1109/LRA.2016.2519537 | null | cs.MA cs.RO cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consensus dynamics in decentralised multiagent systems are subject to intense
studies, and several different models have been proposed and analysed. Among
these, the naming game stands out for its simplicity and applicability to a
wide range of phenomena and applications, from semiotics to engineering.
Despite the wide range of studies available, the implementation of theoretical
models in real distributed systems is not always straightforward, as the
physical platform imposes several constraints that may have a bearing on the
consensus dynamics. In this paper, we investigate the effects of an
implementation of the naming game for the kilobot robotic platform, in which we
consider concurrent execution of games and physical interferences. Consensus
dynamics are analysed in the light of the continuously evolving communication
network created by the robots, highlighting how the different regimes crucially
depend on the robot density and on their ability to spread widely in the
experimental arena. We find that physical interferences reduce the benefits
resulting from robot mobility in terms of consensus time, but also result in
lower cognitive load for individual agents.
| [
{
"created": "Tue, 19 Jan 2016 15:29:52 GMT",
"version": "v1"
}
] | 2016-01-21 | [
[
"Trianni",
"Vito",
""
],
[
"De Simone",
"Daniele",
""
],
[
"Reina",
"Andreagiovanni",
""
],
[
"Baronchelli",
"Andrea",
""
]
] | Consensus dynamics in decentralised multiagent systems are subject to intense studies, and several different models have been proposed and analysed. Among these, the naming game stands out for its simplicity and applicability to a wide range of phenomena and applications, from semiotics to engineering. Despite the wide range of studies available, the implementation of theoretical models in real distributed systems is not always straightforward, as the physical platform imposes several constraints that may have a bearing on the consensus dynamics. In this paper, we investigate the effects of an implementation of the naming game for the kilobot robotic platform, in which we consider concurrent execution of games and physical interferences. Consensus dynamics are analysed in the light of the continuously evolving communication network created by the robots, highlighting how the different regimes crucially depend on the robot density and on their ability to spread widely in the experimental arena. We find that physical interferences reduce the benefits resulting from robot mobility in terms of consensus time, but also result in lower cognitive load for individual agents. |
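A minimal sketch of the naming game dynamics studied above: a speaker utters a name from its inventory (inventing one if empty); on success both agents collapse to that name, otherwise the hearer adds it. The fully mixed pairing is an assumption and ignores the kilobots' physical contact network.

import random

random.seed(0)
n, words = 50, 0
inventories = [set() for _ in range(n)]

for step in range(20000):
    s, h = random.sample(range(n), 2)           # speaker and hearer
    if not inventories[s]:
        words += 1
        inventories[s].add(words)               # invent a new name
    name = random.choice(sorted(inventories[s]))
    if name in inventories[h]:                  # success: both collapse to the name
        inventories[s] = {name}
        inventories[h] = {name}
    else:                                       # failure: hearer learns the name
        inventories[h].add(name)
    if all(len(inv) == 1 for inv in inventories) and \
       len({next(iter(inv)) for inv in inventories}) == 1:
        print(f"consensus reached after {step + 1} games")
        break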
1804.07419 | Rafael Menelau Oliveira E Cruz | Felipe N. Walmsley, George D. C. Cavalcanti, Dayvid V. R. Oliveira,
Rafael M. O. Cruz and Robert Sabourin | An Ensemble Generation Method Based on Instance Hardness | Paper accepted for publication on IJCNN 2018 | null | 10.1109/IJCNN.2018.8489269 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Machine Learning, ensemble methods have been receiving a great deal of
attention. Techniques such as Bagging and Boosting have been successfully
applied to a variety of problems. Nevertheless, such techniques are still
susceptible to the effects of noise and outliers in the training data. We
propose a new method for the generation of pools of classifiers based on
Bagging, in which the probability of an instance being selected during the
resampling process is inversely proportional to its instance hardness, which
can be understood as the likelihood of an instance being misclassified,
regardless of the choice of classifier. The goal of the proposed method is to
remove noisy data without sacrificing the hard instances which are likely to be
found on class boundaries. We evaluate the performance of the method in
nineteen public data sets, and compare it to the performance of the Bagging and
Random Subspace algorithms. Our experiments show that in high noise scenarios
the accuracy of our method is significantly better than that of Bagging.
| [
{
"created": "Fri, 20 Apr 2018 01:29:47 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Apr 2018 07:18:12 GMT",
"version": "v2"
}
] | 2018-11-01 | [
[
"Walmsley",
"Felipe N.",
""
],
[
"Cavalcanti",
"George D. C.",
""
],
[
"Oliveira",
"Dayvid V. R.",
""
],
[
"Cruz",
"Rafael M. O.",
""
],
[
"Sabourin",
"Robert",
""
]
] | In Machine Learning, ensemble methods have been receiving a great deal of attention. Techniques such as Bagging and Boosting have been successfully applied to a variety of problems. Nevertheless, such techniques are still susceptible to the effects of noise and outliers in the training data. We propose a new method for the generation of pools of classifiers based on Bagging, in which the probability of an instance being selected during the resampling process is inversely proportional to its instance hardness, which can be understood as the likelihood of an instance being misclassified, regardless of the choice of classifier. The goal of the proposed method is to remove noisy data without sacrificing the hard instances which are likely to be found on class boundaries. We evaluate the performance of the method in nineteen public data sets, and compare it to the performance of the Bagging and Random Subspace algorithms. Our experiments show that in high noise scenarios the accuracy of our method is significantly better than that of Bagging. |
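The resampling scheme this abstract describes can be sketched roughly as follows; k-Disagreeing Neighbors (kDN) is assumed here as the hardness measure, and the weighting `(1 - hardness)` is a stand-in chosen to avoid division by zero, so neither is necessarily the paper's exact choice. `X` and `y` are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def kdn_hardness(X, y, k=5):
    """k-Disagreeing Neighbors: fraction of the k nearest neighbors whose
    label differs from the instance's own label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neigh = idx[:, 1:]                      # drop the point itself
    return (y[neigh] != y[:, None]).mean(axis=1)

def hardness_bagging(X, y, n_estimators=10, k=5, seed=0):
    """Bagging where easier instances are resampled more often."""
    rng = np.random.default_rng(seed)
    p = 1.0 - kdn_hardness(X, y, k)         # lower hardness -> higher prob.
    p = p / p.sum()
    n = len(X)
    pool = []
    for _ in range(n_estimators):
        sample = rng.choice(n, size=n, replace=True, p=p)
        pool.append(DecisionTreeClassifier().fit(X[sample], y[sample]))
    return pool
```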
2202.11423 | Kailun Yang | Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer
Stiefelhagen | Delving Deep into One-Shot Skeleton-based Action Recognition with
Diverse Occlusions | Accepted to IEEE Transactions on Multimedia (TMM). Code is publicly
available at https://github.com/KPeng9510/Trans4SOAR | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Occlusions are universal disruptions constantly present in the real world.
Especially for sparse representations, such as human skeletons, a few occluded
points might destroy the geometrical and temporal continuity critically
affecting the results. Yet, the research of data-scarce recognition from
skeleton sequences, such as one-shot action recognition, does not explicitly
consider occlusions despite their everyday pervasiveness. In this work, we
explicitly tackle body occlusions for Skeleton-based One-shot Action
Recognition (SOAR). We mainly consider two occlusion variants: 1) random
occlusions and 2) more realistic occlusions caused by diverse everyday objects,
which we generate by projecting the existing IKEA 3D furniture models into the
camera coordinate system of the 3D skeletons with different geometric
parameters. We leverage the proposed pipeline to blend out portions of skeleton
sequences of the three popular action recognition datasets and formalize the
first benchmark for SOAR from partially occluded body poses. Another key
property of our benchmark are the more realistic occlusions generated by
everyday objects, as even in standard recognition from 3D skeletons, only
randomly missing joints were considered. We re-evaluate existing
state-of-the-art frameworks for SOAR in the light of this new task and further
introduce Trans4SOAR - a new transformer-based model which leverages three data
streams and a mixed attention fusion mechanism to alleviate the adverse effects
caused by occlusions. While our experiments demonstrate a clear decline in
accuracy with missing skeleton portions, this effect is smaller with
Trans4SOAR, which outperforms other architectures on all datasets. Although we
specifically focus on occlusions, Trans4SOAR additionally yields
state-of-the-art in the standard SOAR without occlusion, surpassing the best
published approach by 2.85% on NTU-120.
| [
{
"created": "Wed, 23 Feb 2022 11:11:54 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Jul 2022 17:34:01 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Jan 2023 20:55:30 GMT",
"version": "v3"
}
] | 2023-01-11 | [
[
"Peng",
"Kunyu",
""
],
[
"Roitberg",
"Alina",
""
],
[
"Yang",
"Kailun",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] | Occlusions are universal disruptions constantly present in the real world. Especially for sparse representations, such as human skeletons, a few occluded points might destroy the geometrical and temporal continuity critically affecting the results. Yet, the research of data-scarce recognition from skeleton sequences, such as one-shot action recognition, does not explicitly consider occlusions despite their everyday pervasiveness. In this work, we explicitly tackle body occlusions for Skeleton-based One-shot Action Recognition (SOAR). We mainly consider two occlusion variants: 1) random occlusions and 2) more realistic occlusions caused by diverse everyday objects, which we generate by projecting the existing IKEA 3D furniture models into the camera coordinate system of the 3D skeletons with different geometric parameters. We leverage the proposed pipeline to blend out portions of skeleton sequences of the three popular action recognition datasets and formalize the first benchmark for SOAR from partially occluded body poses. Another key property of our benchmark is the more realistic occlusions generated by everyday objects, as even in standard recognition from 3D skeletons, only randomly missing joints were considered. We re-evaluate existing state-of-the-art frameworks for SOAR in the light of this new task and further introduce Trans4SOAR - a new transformer-based model which leverages three data streams and a mixed attention fusion mechanism to alleviate the adverse effects caused by occlusions. While our experiments demonstrate a clear decline in accuracy with missing skeleton portions, this effect is smaller with Trans4SOAR, which outperforms other architectures on all datasets. Although we specifically focus on occlusions, Trans4SOAR additionally yields state-of-the-art in the standard SOAR without occlusion, surpassing the best published approach by 2.85% on NTU-120.
1210.6192 | Kasturika B Ray | Rachita Misra, Kasturika B Ray | Textural Approach to Palmprint Identification | 9 pages | http://www.ijascse.in/publications-2012--2 | null | null | cs.CV cs.CR cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biometrics, the use of human physiological characteristics to identify
an individual, is now a widespread method of identification and authentication.
Biometric identification is a technology that uses several image processing
techniques; the general procedure for identification and verification involves
feature extraction, storage and matching from the digitized image of biometric
characteristics such as fingerprint, face, iris or palm print.
The current paper uses palm print biometrics. Here we have presented an
identification approach using textural properties of palm print images. The
elegance of the method is that the conventional edge detection technique is
extended to suitably describe the texture features. In this technique all the
characteristics of the palm such as principal lines, edges and wrinkles are
considered with equal importance.
| [
{
"created": "Tue, 23 Oct 2012 10:52:31 GMT",
"version": "v1"
}
] | 2012-10-24 | [
[
"Misra",
"Rachita",
""
],
[
"ray",
"Kasturika B",
""
]
] | Biometrics, the use of human physiological characteristics to identify an individual, is now a widespread method of identification and authentication. Biometric identification is a technology that uses several image processing techniques; the general procedure for identification and verification involves feature extraction, storage and matching from the digitized image of biometric characteristics such as fingerprint, face, iris or palm print. The current paper uses palm print biometrics. Here we have presented an identification approach using textural properties of palm print images. The elegance of the method is that the conventional edge detection technique is extended to suitably describe the texture features. In this technique all the characteristics of the palm such as principal lines, edges and wrinkles are considered with equal importance.
2102.07312 | Shoya Ishimaru | Shoya Ishimaru, Takanori Maruichi, Andreas Dengel and Koichi Kise | Confidence-Aware Learning Assistant | 9 pages, 11 figures | null | null | null | cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Not only correctness but also self-confidence plays an important role in
improving the quality of knowledge. Undesirable situations such as confident
incorrect and unconfident correct knowledge prevent learners from revising
their knowledge because it is not always easy for them to perceive the
situations. To solve this problem, we propose a system that estimates
self-confidence while solving multiple-choice questions by eye tracking and
gives feedback about which question should be reviewed carefully. We report the
results of three studies measuring its effectiveness. (1) On a well-controlled
dataset with 10 participants, our approach detected confidence and unconfidence
with 81% and 79% average precision. (2) With the help of 20 participants, we
observed that correct answer rates of questions were increased by 14% and 17%
by giving feedback about correct answers without confidence and incorrect
answers with confidence, respectively. (3) We conducted a large-scale data
recording in a private school (72 high school students solved 14,302 questions)
to investigate effective features and the number of required training samples.
| [
{
"created": "Mon, 15 Feb 2021 02:47:11 GMT",
"version": "v1"
}
] | 2021-02-16 | [
[
"Ishimaru",
"Shoya",
""
],
[
"Maruichi",
"Takanori",
""
],
[
"Dengel",
"Andreas",
""
],
[
"Kise",
"Koichi",
""
]
] | Not only correctness but also self-confidence plays an important role in improving the quality of knowledge. Undesirable situations such as confident incorrect and unconfident correct knowledge prevent learners from revising their knowledge because it is not always easy for them to perceive the situations. To solve this problem, we propose a system that estimates self-confidence while solving multiple-choice questions by eye tracking and gives feedback about which question should be reviewed carefully. We report the results of three studies measuring its effectiveness. (1) On a well-controlled dataset with 10 participants, our approach detected confidence and unconfidence with 81% and 79% average precision. (2) With the help of 20 participants, we observed that correct answer rates of questions were increased by 14% and 17% by giving feedback about correct answers without confidence and incorrect answers with confidence, respectively. (3) We conducted a large-scale data recording in a private school (72 high school students solved 14,302 questions) to investigate effective features and the number of required training samples.
1808.06075 | Gehui Shen | Gehui Shen, Zhi-Hong Deng, Ting Huang and Xi Chen | Learning to Compose over Tree Structures via POS Tags | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recursive Neural Network (RecNN), a type of model that composes words or
phrases recursively over syntactic tree structures, has proven to have a
superior ability to obtain sentence representations for a variety of NLP tasks.
However, RecNN suffers from a thorny problem: a single compositional function
shared across all tree nodes cannot capture complex semantic compositionality,
which limits the expressive power of the model. In this
paper, in order to address this problem, we propose Tag-Guided
HyperRecNN/TreeLSTM (TG-HRecNN/TreeLSTM), which introduces a hypernetwork into
RecNNs that takes as input the Part-of-Speech (POS) tags of words/phrases and
generates the semantic composition parameters dynamically. Experimental results
on five datasets for two typical NLP tasks show that both proposed models
consistently obtain significant improvements over RecNN and TreeLSTM. Our TG-HTreeLSTM
outperforms all existing RecNN-based models and achieves or is competitive with
state-of-the-art on four sentence classification benchmarks. The effectiveness
of our models is also demonstrated by qualitative analysis.
| [
{
"created": "Sat, 18 Aug 2018 11:53:24 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Aug 2018 01:57:49 GMT",
"version": "v2"
}
] | 2018-08-22 | [
[
"Shen",
"Gehui",
""
],
[
"Deng",
"Zhi-Hong",
""
],
[
"Huang",
"Ting",
""
],
[
"Chen",
"Xi",
""
]
] | Recursive Neural Network (RecNN), a type of model that composes words or phrases recursively over syntactic tree structures, has proven to have a superior ability to obtain sentence representations for a variety of NLP tasks. However, RecNN suffers from a thorny problem: a single compositional function shared across all tree nodes cannot capture complex semantic compositionality, which limits the expressive power of the model. In this paper, in order to address this problem, we propose Tag-Guided HyperRecNN/TreeLSTM (TG-HRecNN/TreeLSTM), which introduces a hypernetwork into RecNNs that takes as input the Part-of-Speech (POS) tags of words/phrases and generates the semantic composition parameters dynamically. Experimental results on five datasets for two typical NLP tasks show that both proposed models consistently obtain significant improvements over RecNN and TreeLSTM. Our TG-HTreeLSTM outperforms all existing RecNN-based models and achieves or is competitive with state-of-the-art on four sentence classification benchmarks. The effectiveness of our models is also demonstrated by qualitative analysis.
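A minimal sketch of the tag-guided hypernetwork idea: a POS-tag embedding is mapped to node-specific composition parameters. The dimensions, initialization, and function names below are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, n_tags = 8, 4, 10            # hidden size, tag-embedding size, tag count

tag_emb = rng.normal(size=(n_tags, t))
# Hypernetwork: linearly maps a tag embedding to a (d x 2d) composition matrix.
H = 0.1 * rng.normal(size=(t, d * 2 * d))

def compose(left, right, tag_id):
    """Compose two child vectors with parameters generated from the POS tag."""
    W = (tag_emb[tag_id] @ H).reshape(d, 2 * d)
    return np.tanh(W @ np.concatenate([left, right]))

# A parent node with tag 3 composes its two children:
parent = compose(rng.normal(size=d), rng.normal(size=d), tag_id=3)
```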
2402.17766 | Runpei Dong | Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng
Ge, Li Yi, Kaisheng Ma | ShapeLLM: Universal 3D Object Understanding for Embodied Interaction | Accepted at ECCV 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents ShapeLLM, the first 3D Multimodal Large Language Model
(LLM) designed for embodied interaction, exploring a universal 3D object
understanding with 3D point clouds and languages. ShapeLLM is built upon an
improved 3D encoder by extending ReCon to ReCon++ that benefits from multi-view
image distillation for enhanced geometry understanding. By utilizing ReCon++ as
the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed
instruction-following data and tested on our newly human-curated benchmark, 3D
MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D
geometry understanding and language-unified 3D interaction tasks, such as
embodied visual grounding. Project page: https://qizekun.github.io/shapellm/
| [
{
"created": "Tue, 27 Feb 2024 18:57:12 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2024 15:11:37 GMT",
"version": "v2"
},
{
"created": "Fri, 12 Jul 2024 15:36:15 GMT",
"version": "v3"
}
] | 2024-07-15 | [
[
"Qi",
"Zekun",
""
],
[
"Dong",
"Runpei",
""
],
[
"Zhang",
"Shaochen",
""
],
[
"Geng",
"Haoran",
""
],
[
"Han",
"Chunrui",
""
],
[
"Ge",
"Zheng",
""
],
[
"Yi",
"Li",
""
],
[
"Ma",
"Kaisheng",
""
]
] | This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring a universal 3D object understanding with 3D point clouds and languages. ShapeLLM is built upon an improved 3D encoder by extending ReCon to ReCon++ that benefits from multi-view image distillation for enhanced geometry understanding. By utilizing ReCon++ as the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed instruction-following data and tested on our newly human-curated benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D geometry understanding and language-unified 3D interaction tasks, such as embodied visual grounding. Project page: https://qizekun.github.io/shapellm/ |
2308.03774 | Nick Zhang | Nick Zhang | Knowledge Consilience: One Culture, Two Cultures or Many Cultures? | null | null | null | null | cs.DL | http://creativecommons.org/licenses/by/4.0/ | The hostility between the two cultures, scientific and literary, was framed
by C.P. Snow in 1959 and later by others. The scientific culture is nowadays
often identified with STEM (Science, Technology, Engineering and Mathematics)
whereas the literary culture generally refers to humanities and social
sciences. Wilson expressed the wish for the unity of knowledge. We put forward
the notions of knowledge distance and knowledge consilience threshold to
quantitatively measure the distance and coupling process between different branches
of knowledge. Our findings suggest that the gulf between the two cultures is
widening.
| [
{
"created": "Sun, 30 Jul 2023 11:26:32 GMT",
"version": "v1"
}
] | 2023-08-09 | [
[
"Zhang",
"Nick",
""
]
] | The hostility between the two cultures, scientific and literary, was framed by C.P. Snow in 1959 and later by others. The scientific culture is nowadays often identified with STEM (Science, Technology, Engineering and Mathematics) whereas the literary culture generally refers to humanities and social sciences. Wilson expressed the wish for the unity of knowledge. We put forward the notions of knowledge distance and knowledge consilience threshold to quantitatively measure the distance and coupling process between different branches of knowledge. Our findings suggest that the gulf between the two cultures is widening.
2404.16587 | Zhihao Zhu | Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang,
Enhong Chen | Understanding Privacy Risks of Embeddings Induced by Large Language
Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) show early signs of artificial general
intelligence but struggle with hallucinations. One promising solution to
mitigate these hallucinations is to store external knowledge as embeddings,
aiding LLMs in retrieval-augmented generation. However, such a solution risks
compromising privacy, as recent studies experimentally showed that the original
text can be partially reconstructed from text embeddings by pre-trained
language models. The significant advantage of LLMs over traditional pre-trained
models may exacerbate these concerns. To this end, we investigate the
effectiveness of reconstructing original knowledge and predicting entity
attributes from these embeddings when LLMs are employed. Empirical findings
indicate that LLMs significantly improve the accuracy of two evaluated tasks
over those from pre-trained models, regardless of whether the texts are
in-distribution or out-of-distribution. This underscores a heightened potential
for LLMs to jeopardize user privacy, highlighting the negative consequences of
their widespread use. We further discuss preliminary strategies to mitigate
this risk.
| [
{
"created": "Thu, 25 Apr 2024 13:10:48 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Zhu",
"Zhihao",
""
],
[
"Shao",
"Ninglu",
""
],
[
"Lian",
"Defu",
""
],
[
"Wu",
"Chenwang",
""
],
[
"Liu",
"Zheng",
""
],
[
"Yang",
"Yi",
""
],
[
"Chen",
"Enhong",
""
]
] | Large language models (LLMs) show early signs of artificial general intelligence but struggle with hallucinations. One promising solution to mitigate these hallucinations is to store external knowledge as embeddings, aiding LLMs in retrieval-augmented generation. However, such a solution risks compromising privacy, as recent studies experimentally showed that the original text can be partially reconstructed from text embeddings by pre-trained language models. The significant advantage of LLMs over traditional pre-trained models may exacerbate these concerns. To this end, we investigate the effectiveness of reconstructing original knowledge and predicting entity attributes from these embeddings when LLMs are employed. Empirical findings indicate that LLMs significantly improve the accuracy of two evaluated tasks over those from pre-trained models, regardless of whether the texts are in-distribution or out-of-distribution. This underscores a heightened potential for LLMs to jeopardize user privacy, highlighting the negative consequences of their widespread use. We further discuss preliminary strategies to mitigate this risk. |
2406.13474 | Junhan Kim | Junhan Kim, Ho-young Kim, Eulrang Cho, Chungman Lee, Joonyoung Kim,
Yongkweon Jeon | Attention-aware Post-training Quantization without Backpropagation | 20 pages, under review | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Quantization is a promising solution for deploying large-scale language
models (LLMs) on resource-constrained devices. Existing quantization
approaches, however, rely on gradient-based optimization, whether they perform
post-training quantization (PTQ) or quantization-aware training (QAT),
which becomes problematic for hyper-scale LLMs with billions of parameters.
This overhead can be alleviated via recently proposed backpropagation-free PTQ
methods; however, their performance is somewhat limited by their lack of
consideration of inter-layer dependencies. In this paper, we thus propose a
novel PTQ algorithm that considers inter-layer dependencies without relying on
backpropagation. The fundamental concept involved is the development of
attention-aware Hessian matrices, which facilitates the consideration of
inter-layer dependencies within the attention module. Extensive experiments
demonstrate that the proposed algorithm significantly outperforms conventional
PTQ methods, particularly for low bit-widths.
| [
{
"created": "Wed, 19 Jun 2024 11:53:21 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Kim",
"Junhan",
""
],
[
"Kim",
"Ho-young",
""
],
[
"Cho",
"Eulrang",
""
],
[
"Lee",
"Chungman",
""
],
[
"Kim",
"Joonyoung",
""
],
[
"Jeon",
"Yongkweon",
""
]
] | Quantization is a promising solution for deploying large-scale language models (LLMs) on resource-constrained devices. Existing quantization approaches, however, rely on gradient-based optimization, whether they perform post-training quantization (PTQ) or quantization-aware training (QAT), which becomes problematic for hyper-scale LLMs with billions of parameters. This overhead can be alleviated via recently proposed backpropagation-free PTQ methods; however, their performance is somewhat limited by their lack of consideration of inter-layer dependencies. In this paper, we thus propose a novel PTQ algorithm that considers inter-layer dependencies without relying on backpropagation. The fundamental concept involved is the development of attention-aware Hessian matrices, which facilitates the consideration of inter-layer dependencies within the attention module. Extensive experiments demonstrate that the proposed algorithm significantly outperforms conventional PTQ methods, particularly for low bit-widths.
2405.11471 | Kento Uchida | Kento Uchida, Kenta Nishihara, Shinichi Shirakawa | CMA-ES with Adaptive Reevaluation for Multiplicative Noise | This paper has been accepted as a full paper at GECCO2024 | null | null | null | cs.NE | http://creativecommons.org/licenses/by-sa/4.0/ | The covariance matrix adaptation evolution strategy (CMA-ES) is a powerful
optimization method for continuous black-box optimization problems. Several
noise-handling methods have been proposed to bring out the optimization
performance of the CMA-ES on noisy objective functions. The adaptations of the
population size and the learning rate are two major approaches that perform
well under additive Gaussian noise. The reevaluation technique is another
technique that evaluates each solution multiple times. In this paper, we
discuss the difference between those methods from the perspective of stochastic
relaxation that considers the maximization of the expected utility function. We
derive that the set of maximizers of the noise-independent utility, which is
used in the reevaluation technique, certainly contains the optimal solution,
while the noise-dependent utility, which is used in the population size and
learning rate adaptations, does not satisfy it under multiplicative noise. Based
on the discussion, we develop the reevaluation adaptation CMA-ES (RA-CMA-ES),
which computes two update directions using half of the evaluations and adapts
the number of reevaluations based on the estimated correlation of those two
update directions. The numerical simulation shows that the RA-CMA-ES
outperforms the comparative method under multiplicative noise, maintaining
competitive performance under additive noise.
| [
{
"created": "Sun, 19 May 2024 07:42:10 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Uchida",
"Kento",
""
],
[
"Nishihara",
"Kenta",
""
],
[
"Shirakawa",
"Shinichi",
""
]
] | The covariance matrix adaptation evolution strategy (CMA-ES) is a powerful optimization method for continuous black-box optimization problems. Several noise-handling methods have been proposed to bring out the optimization performance of the CMA-ES on noisy objective functions. The adaptations of the population size and the learning rate are two major approaches that perform well under additive Gaussian noise. The reevaluation technique is another technique that evaluates each solution multiple times. In this paper, we discuss the difference between those methods from the perspective of stochastic relaxation that considers the maximization of the expected utility function. We derive that the set of maximizers of the noise-independent utility, which is used in the reevaluation technique, certainly contains the optimal solution, while the noise-dependent utility, which is used in the population size and learning rate adaptations, does not satisfy it under multiplicative noise. Based on the discussion, we develop the reevaluation adaptation CMA-ES (RA-CMA-ES), which computes two update directions using half of the evaluations and adapts the number of reevaluations based on the estimated correlation of those two update directions. The numerical simulation shows that the RA-CMA-ES outperforms the comparative method under multiplicative noise, maintaining competitive performance under additive noise.
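The mechanism this abstract sketches (two update directions from halved evaluations, with the reevaluation count adapted by their correlation) might look roughly like the following; the mu-best recombination rule and the thresholds are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

def two_update_directions(f_noisy, X, mean, k):
    """Average k noisy evaluations per candidate for each of two halves,
    then form a simple mu-best recombination direction under each ranking."""
    fA = np.array([np.mean([f_noisy(x) for _ in range(k)]) for x in X])
    fB = np.array([np.mean([f_noisy(x) for _ in range(k)]) for x in X])
    mu = len(X) // 2
    dA = X[np.argsort(fA)[:mu]].mean(axis=0) - mean
    dB = X[np.argsort(fB)[:mu]].mean(axis=0) - mean
    return dA, dB

def adapt_reevaluations(dA, dB, k, low=0.2, high=0.6):
    """Increase k when the two directions disagree (noise dominates),
    decrease it when they agree; the thresholds here are illustrative."""
    c = dA @ dB / (np.linalg.norm(dA) * np.linalg.norm(dB) + 1e-12)
    if c < low:
        return min(2 * k, 1024)
    if c > high:
        return max(k // 2, 1)
    return k
```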
2106.06272 | Stefano Maria Nicoletti | Stefano M. Nicoletti, Marijn Peppelman, Christina Kolb, Mari\"elle
Stoelinga | Model-based Joint Analysis of Safety and Security: Survey and
Identification of Gaps | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We survey the state-of-the-art on model-based formalisms for safety and
security joint analysis, where safety refers to the absence of unintended
failures, and security to the absence of malicious attacks. We conduct a thorough
literature review and - as a result - we consider fourteen model-based
formalisms and compare them with respect to several criteria: (1) Modelling
capabilities and Expressiveness: which phenomena can be expressed in these
formalisms? To which extent can they capture safety-security interactions? (2)
Analytical capabilities: which analysis types are supported? (3) Practical
applicability: to what extent have the formalisms been used to analyze small or
larger case studies? Furthermore, (1) we present more precise definitions for
safety-security dependencies in tree-like formalisms; (2) we showcase the
potential of each formalism by modelling the same toy example from the
literature and (3) we present our findings and reflect on possible ways to
narrow highlighted gaps. In summary, our key findings are the following: (1)
the majority of approaches combine tree-like formal models; (2) the exact
nature of safety-security interaction is still ill-understood and (3) diverse
formalisms can capture different interactions; (4) analyzed formalisms merge
modelling constructs from existing safety- and security-specific formalisms,
without introducing ad hoc constructs to model safety-security interactions, or
(5) metrics to analyze trade-offs. Moreover, (6) large case studies
representing safety-security interactions are still missing.
| [
{
"created": "Fri, 11 Jun 2021 09:38:23 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Jan 2022 11:02:54 GMT",
"version": "v2"
},
{
"created": "Mon, 23 Oct 2023 09:34:05 GMT",
"version": "v3"
}
] | 2023-10-24 | [
[
"Nicoletti",
"Stefano M.",
""
],
[
"Peppelman",
"Marijn",
""
],
[
"Kolb",
"Christina",
""
],
[
"Stoelinga",
"Mariëlle",
""
]
] | We survey the state-of-the-art on model-based formalisms for safety and security joint analysis, where safety refers to the absence of unintended failures, and security to the absence of malicious attacks. We conduct a thorough literature review and - as a result - we consider fourteen model-based formalisms and compare them with respect to several criteria: (1) Modelling capabilities and Expressiveness: which phenomena can be expressed in these formalisms? To which extent can they capture safety-security interactions? (2) Analytical capabilities: which analysis types are supported? (3) Practical applicability: to what extent have the formalisms been used to analyze small or larger case studies? Furthermore, (1) we present more precise definitions for safety-security dependencies in tree-like formalisms; (2) we showcase the potential of each formalism by modelling the same toy example from the literature and (3) we present our findings and reflect on possible ways to narrow highlighted gaps. In summary, our key findings are the following: (1) the majority of approaches combine tree-like formal models; (2) the exact nature of safety-security interaction is still ill-understood and (3) diverse formalisms can capture different interactions; (4) analyzed formalisms merge modelling constructs from existing safety- and security-specific formalisms, without introducing ad hoc constructs to model safety-security interactions, or (5) metrics to analyze trade-offs. Moreover, (6) large case studies representing safety-security interactions are still missing.
2402.01296 | Man-Jie Yuan | Man-Jie Yuan, Zheng Zou, Wei Gao | Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted
Inference | null | null | null | null | cs.LG cs.CR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Privacy-preserving neural networks have attracted increasing attention in
recent years, and various algorithms have been developed to keep the balance
between accuracy, computational complexity and information security from the
cryptographic view. This work takes a different view from the input data and
structure of neural networks. We decompose the input data (e.g., some images)
into sensitive and insensitive segments according to importance and privacy.
The sensitive segment includes some important and private information such as
human faces and we take strong homomorphic encryption to keep security, whereas
the insensitive one contains some background and we add perturbations. We
propose the bi-CryptoNets, i.e., plaintext and ciphertext branches, to deal
with the two segments, respectively; the ciphertext branch can utilize
information from the plaintext branch through unidirectional connections. We adopt
knowledge distillation for our bi-CryptoNets by transferring representations
from a well-trained teacher neural network. Empirical studies show the
effectiveness and decrease of inference latency for our bi-CryptoNets.
| [
{
"created": "Fri, 2 Feb 2024 10:35:05 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Yuan",
"Man-Jie",
""
],
[
"Zou",
"Zheng",
""
],
[
"Gao",
"Wei",
""
]
] | Privacy-preserving neural networks have attracted increasing attention in recent years, and various algorithms have been developed to keep the balance between accuracy, computational complexity and information security from the cryptographic view. This work takes a different view from the input data and structure of neural networks. We decompose the input data (e.g., some images) into sensitive and insensitive segments according to importance and privacy. The sensitive segment includes some important and private information such as human faces and we take strong homomorphic encryption to keep security, whereas the insensitive one contains some background and we add perturbations. We propose the bi-CryptoNets, i.e., plaintext and ciphertext branches, to deal with the two segments, respectively; the ciphertext branch can utilize information from the plaintext branch through unidirectional connections. We adopt knowledge distillation for our bi-CryptoNets by transferring representations from a well-trained teacher neural network. Empirical studies show the effectiveness and decrease of inference latency for our bi-CryptoNets.
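The branch structure described above can be sketched as pure information flow; squaring stands in for an HE-friendly activation, and the actual homomorphic arithmetic is omitted, so this is a structural sketch only.

```python
import numpy as np

def bi_branch_layer(x_sensitive, x_insensitive, Wc, Wp, U):
    """One layer of the two-branch structure. The plaintext branch sees the
    insensitive segment; the ciphertext branch sees the sensitive segment and
    receives plaintext activations through a one-way connection U, so no
    information flows from ciphertext back to plaintext. In a real system
    h_cipher would be computed under homomorphic encryption."""
    h_plain = np.square(Wp @ x_insensitive)
    h_cipher = np.square(Wc @ x_sensitive + U @ h_plain)
    return h_plain, h_cipher
```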
0805.4219 | Grenville Croll | Andrew Hawker | Building Financial Accuracy into Spreadsheets | 6 Pages | Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000 35-40
ISBN:1 86166 158 4 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Students learning how to apply spreadsheets to accounting problems are not
always well served by the built-in financial functions. Problems can arise
because of differences between UK and US practice, through anomalies in the
functions themselves, and because the promptings of 'Wizards' engender an
attitude of filling in the blanks on the screen, and hoping for the best. Some
examples of these problems are described, and suggestions are presented for
ways of improving the situation. Principally, it is suggested that spreadsheet
prompts and 'Help' screens should offer integrated guidance, covering some
aspects of financial practice, as well as matters of spreadsheet technique.
| [
{
"created": "Tue, 27 May 2008 21:11:48 GMT",
"version": "v1"
}
] | 2008-05-29 | [
[
"Hawker",
"Andrew",
""
]
] | Students learning how to apply spreadsheets to accounting problems are not always well served by the built-in financial functions. Problems can arise because of differences between UK and US practice, through anomalies in the functions themselves, and because the promptings of 'Wizards' engender an attitude of filling in the blanks on the screen, and hoping for the best. Some examples of these problems are described, and suggestions are presented for ways of improving the situation. Principally, it is suggested that spreadsheet prompts and 'Help' screens should offer integrated guidance, covering some aspects of financial practice, as well as matters of spreadsheet technique.
2209.07749 | Diana Negoescu | Diana M. Negoescu, Pasha Khosravi, Shadow Zhao, Nanyu Chen, Parvez
Ahammad, Humberto Gonzalez | Sales Channel Optimization via Simulations Based on Observational Data
with Delayed Rewards: A Case Study at LinkedIn | Accepted at REVEAL'22 Workshop (16th ACM Conference on Recommender
Systems - RecSys 2022) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Training models on data obtained from randomized experiments is ideal for
making good decisions. However, randomized experiments are often
time-consuming, expensive, risky, infeasible or unethical to perform, leaving
decision makers little choice but to rely on observational data collected under
historical policies when training models. This opens questions regarding not
only which decision-making policies would perform best in practice, but also
regarding the impact of different data collection protocols on the performance
of various policies trained on the data, or the robustness of policy
performance with respect to changes in problem characteristics such as action-
or reward- specific delays in observing outcomes. We aim to answer such
questions for the problem of optimizing sales channel allocations at LinkedIn,
where sales accounts (leads) need to be allocated to one of three channels,
with the goal of maximizing the number of successful conversions over a period
of time. A key problem feature is the presence of stochastic delays in
observing allocation outcomes, whose distribution is both channel- and outcome-
dependent. We built a discrete-time simulation that can handle our problem
features and used it to evaluate: a) a historical rule-based policy; b) a
supervised machine learning policy (XGBoost); and c) multi-armed bandit (MAB)
policies, under different scenarios involving: i) data collection used for
training (observational vs randomized); ii) lead conversion scenarios; iii)
delay distributions. Our simulation results indicate that LinUCB, a simple MAB
policy, consistently outperforms the other policies, achieving an 18-47% lift
relative to a rule-based policy.
| [
{
"created": "Fri, 16 Sep 2022 07:08:37 GMT",
"version": "v1"
}
] | 2022-09-19 | [
[
"Negoescu",
"Diana M.",
""
],
[
"Khosravi",
"Pasha",
""
],
[
"Zhao",
"Shadow",
""
],
[
"Chen",
"Nanyu",
""
],
[
"Ahammad",
"Parvez",
""
],
[
"Gonzalez",
"Humberto",
""
]
] | Training models on data obtained from randomized experiments is ideal for making good decisions. However, randomized experiments are often time-consuming, expensive, risky, infeasible or unethical to perform, leaving decision makers little choice but to rely on observational data collected under historical policies when training models. This opens questions regarding not only which decision-making policies would perform best in practice, but also regarding the impact of different data collection protocols on the performance of various policies trained on the data, or the robustness of policy performance with respect to changes in problem characteristics such as action- or reward- specific delays in observing outcomes. We aim to answer such questions for the problem of optimizing sales channel allocations at LinkedIn, where sales accounts (leads) need to be allocated to one of three channels, with the goal of maximizing the number of successful conversions over a period of time. A key problem feature is the presence of stochastic delays in observing allocation outcomes, whose distribution is both channel- and outcome- dependent. We built a discrete-time simulation that can handle our problem features and used it to evaluate: a) a historical rule-based policy; b) a supervised machine learning policy (XGBoost); and c) multi-armed bandit (MAB) policies, under different scenarios involving: i) data collection used for training (observational vs randomized); ii) lead conversion scenarios; iii) delay distributions. Our simulation results indicate that LinUCB, a simple MAB policy, consistently outperforms the other policies, achieving an 18-47% lift relative to a rule-based policy.
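LinUCB, the best performer in the simulations above, is a published algorithm (Li et al., 2010); a minimal disjoint-arm sketch follows, where `update` would be called once a (possibly delayed) conversion outcome arrives. Names and defaults are illustrative.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression model per arm (sales channel)."""
    def __init__(self, n_arms, d, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(d) for _ in range(n_arms)]    # per-arm design matrix
        self.b = [np.zeros(d) for _ in range(n_arms)]

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold in an observed outcome, whenever it arrives."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```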
2210.05791 | Shalaleh Rismani | Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar
Rostamzadeh, Paul Nicholas, N'Mah Yilla, Jess Gallegos, Andrew Smart, Emilio
Garcia, Gurleen Virk | Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm
Reduction | null | null | null | null | cs.HC cs.GL | http://creativecommons.org/licenses/by/4.0/ | Understanding the landscape of potential harms from algorithmic systems
enables practitioners to better anticipate consequences of the systems they
build. It also supports the prospect of incorporating controls to help minimize
harms that emerge from the interplay of technologies and social and cultural
dynamics. A growing body of scholarship has identified a wide range of harms
across different algorithmic technologies. However, computing researchers and
practitioners lack a high-level, synthesized overview of harms from
algorithmic systems. Based on a scoping review of computing research $(n=172)$,
we present an applied taxonomy of sociotechnical harms to support a more
systematic surfacing of potential harms in algorithmic systems. The final
taxonomy builds on and refers to existing taxonomies, classifications, and
terminologies. Five major themes related to sociotechnical harms -
representational, allocative, quality-of-service, interpersonal harms, and
social system/societal harms - and sub-themes are presented along with a
description of these categories. We conclude with a discussion of challenges
and opportunities for future research.
| [
{
"created": "Tue, 11 Oct 2022 21:22:30 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Feb 2023 03:31:48 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jul 2023 02:56:32 GMT",
"version": "v3"
}
] | 2023-07-21 | [
[
"Shelby",
"Renee",
""
],
[
"Rismani",
"Shalaleh",
""
],
[
"Henne",
"Kathryn",
""
],
[
"Moon",
"AJung",
""
],
[
"Rostamzadeh",
"Negar",
""
],
[
"Nicholas",
"Paul",
""
],
[
"Yilla",
"N'Mah",
""
],
[
"Gallegos",
"Jess",
""
],
[
"Smart",
"Andrew",
""
],
[
"Garcia",
"Emilio",
""
],
[
"Virk",
"Gurleen",
""
]
] | Understanding the landscape of potential harms from algorithmic systems enables practitioners to better anticipate consequences of the systems they build. It also supports the prospect of incorporating controls to help minimize harms that emerge from the interplay of technologies and social and cultural dynamics. A growing body of scholarship has identified a wide range of harms across different algorithmic technologies. However, computing researchers and practitioners lack a high-level, synthesized overview of harms from algorithmic systems. Based on a scoping review of computing research $(n=172)$, we present an applied taxonomy of sociotechnical harms to support a more systematic surfacing of potential harms in algorithmic systems. The final taxonomy builds on and refers to existing taxonomies, classifications, and terminologies. Five major themes related to sociotechnical harms - representational, allocative, quality-of-service, interpersonal harms, and social system/societal harms - and sub-themes are presented along with a description of these categories. We conclude with a discussion of challenges and opportunities for future research.
2112.00270 | Jamison Ebert | Vamsi K. Amalladinne, Jamison R. Ebert, Jean-Francois Chamberland, and
Krishna R. Narayanan | An Enhanced Decoding Algorithm for Coded Compressed Sensing with
Applications to Unsourced Random Access | Submitted to MDPI Sensors | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Unsourced random access (URA) has emerged as a pragmatic framework for
next-generation distributed sensor networks. Within URA, concatenated coding
structures are often employed to ensure that the central base station can
accurately recover the set of sent codewords during a given transmission
period. Many URA algorithms employ independent inner and outer decoders, which
can help reduce computational complexity at the expense of a decay in
performance. In this article, an enhanced decoding algorithm is presented for a
concatenated coding structure consisting of a wide range of inner codes and an
outer tree-based code. It is shown that this algorithmic enhancement has the
potential to simultaneously improve error performance and decrease the
computational complexity of the decoder. This enhanced decoding algorithm is
applied to two existing URA algorithms and the performance benefits of the
algorithm are characterized. Findings are supported by numerical simulations.
| [
{
"created": "Wed, 1 Dec 2021 04:30:30 GMT",
"version": "v1"
}
] | 2021-12-02 | [
[
"Amalladinne",
"Vamsi K.",
""
],
[
"Ebert",
"Jamison R.",
""
],
[
"Chamberland",
"Jean-Francois",
""
],
[
"Narayanan",
"Krishna R.",
""
]
] | Unsourced random access (URA) has emerged as a pragmatic framework for next-generation distributed sensor networks. Within URA, concatenated coding structures are often employed to ensure that the central base station can accurately recover the set of sent codewords during a given transmission period. Many URA algorithms employ independent inner and outer decoders, which can help reduce computational complexity at the expense of a decay in performance. In this article, an enhanced decoding algorithm is presented for a concatenated coding structure consisting of a wide range of inner codes and an outer tree-based code. It is shown that this algorithmic enhancement has the potential to simultaneously improve error performance and decrease the computational complexity of the decoder. This enhanced decoding algorithm is applied to two existing URA algorithms and the performance benefits of the algorithm are characterized. Findings are supported by numerical simulations. |
1102.4121 | EPTCS | Laurent Doyen (LSV, ENS Cachan & CNRS, France), Thierry Massart
(Universit\'e Libre de Bruxelles, Belgium), Mahsa Shirmohammadi (Universit\'e
Libre de Bruxelles, Belgium) | Synchronizing Objectives for Markov Decision Processes | In Proceedings iWIGP 2011, arXiv:1102.3741 | EPTCS 50, 2011, pp. 61-75 | 10.4204/EPTCS.50.5 | null | cs.LO cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce synchronizing objectives for Markov decision processes (MDP).
Intuitively, a synchronizing objective requires that eventually, at every step
there is a state which concentrates almost all the probability mass. In
particular, it implies that the probabilistic system behaves in the long run
like a deterministic system: eventually, the current state of the MDP can be
identified with almost certainty.
We study the problem of deciding the existence of a strategy to enforce a
synchronizing objective in MDPs. We show that the problem is decidable for
general strategies, as well as for blind strategies where the player cannot
observe the current state of the MDP. We also show that pure strategies are
sufficient, but memory may be necessary.
| [
{
"created": "Mon, 21 Feb 2011 02:30:55 GMT",
"version": "v1"
}
] | 2011-02-22 | [
[
"Doyen",
"Laurent",
"",
"LSV, ENS Cachan & CNRS, France"
],
[
"Massart",
"Thierry",
"",
"Université Libre de Bruxelles, Belgium"
],
[
"Shirmohammadi",
"Mahsa",
"",
"Université\n Libre de Bruxelles, Belgium"
]
] | We introduce synchronizing objectives for Markov decision processes (MDP). Intuitively, a synchronizing objective requires that eventually, at every step there is a state which concentrates almost all the probability mass. In particular, it implies that the probabilistic system behaves in the long run like a deterministic system: eventually, the current state of the MDP can be identified with almost certainty. We study the problem of deciding the existence of a strategy to enforce a synchronizing objective in MDPs. We show that the problem is decidable for general strategies, as well as for blind strategies where the player cannot observe the current state of the MDP. We also show that pure strategies are sufficient, but memory may be necessary. |
cs/0510044 | Andrea Montanari | Andrea Montanari, Balaji Prabhakar, David Tse | Belief Propagation Based Multi-User Detection | 9 pages, 4 eps figures. Forty-third Allerton Conference on
Communications, Control and Computing, invited paper | null | null | null | cs.IT math.IT | null | We apply belief propagation (BP) to multi-user detection in a spread
spectrum system, under the assumption of Gaussian symbols. We prove that BP
converges and allows one to estimate the correct conditional expectation of
the input symbols. It is therefore an optimal (minimum mean square error)
detection algorithm. This suggests the possibility of designing BP detection
algorithms for more general systems. As a byproduct we rederive the Tse-Hanly
formula for minimum mean square error without any recourse to random matrix
theory.
| [
{
"created": "Sun, 16 Oct 2005 16:05:31 GMT",
"version": "v1"
},
{
"created": "Mon, 22 May 2006 10:56:18 GMT",
"version": "v2"
}
] | 2007-07-13 | [
[
"Montanari",
"Andrea",
""
],
[
"Prabhakar",
"Balaji",
""
],
[
"Tse",
"David",
""
]
] | We apply belief propagation (BP) to multi-user detection in a spread spectrum system, under the assumption of Gaussian symbols. We prove that BP converges and allows one to estimate the correct conditional expectation of the input symbols. It is therefore an optimal (minimum mean square error) detection algorithm. This suggests the possibility of designing BP detection algorithms for more general systems. As a byproduct we rederive the Tse-Hanly formula for minimum mean square error without any recourse to random matrix theory.
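For Gaussian symbols the conditional expectation computed by BP has a closed form, so the MMSE detector it converges to is easy to check directly; the dimensions and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, sigma2 = 4, 16, 0.1                    # users, spreading length, noise

S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # signature matrix
x = rng.normal(size=K)                       # Gaussian symbols, unit power
y = S @ x + np.sqrt(sigma2) * rng.normal(size=N)

# Posterior mean E[x | y] for the jointly Gaussian model, i.e. the MMSE
# estimate that Gaussian BP converges to on this factor graph.
x_hat = np.linalg.solve(S.T @ S + sigma2 * np.eye(K), S.T @ y)
print(np.round(x_hat - x, 2))                # small residual estimation error
```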
2003.12299 | Youngjae Yu | Youngjae Yu, Seunghwan Lee, Yuncheol Choi, Gunhee Kim | CurlingNet: Compositional Learning between Images and Text for Fashion
IQ Data | 4 pages, 4 figures, ICCV 2019 Linguistics Meets image and video
retrieval workshop, Fashion IQ challenge | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach named CurlingNet that can measure the semantic
distance of composition of image-text embedding. In order to learn an effective
image-text composition for the data in the fashion domain, our model comprises
two key components as follows. First, the Delivery makes the transition of a
source image in an embedding space. Second, the Sweeping emphasizes
query-related components of fashion images in the embedding space. We utilize a
channel-wise gating mechanism to make it possible. Our single model outperforms
previous state-of-the-art image-text composition models including TIRG and
FiLM. We participate in the first fashion-IQ challenge in ICCV 2019, for which
an ensemble of our models achieves one of the best performances.
| [
{
"created": "Fri, 27 Mar 2020 09:36:32 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Mar 2020 04:35:16 GMT",
"version": "v2"
}
] | 2020-03-31 | [
[
"Yu",
"Youngjae",
""
],
[
"Lee",
"Seunghwan",
""
],
[
"Choi",
"Yuncheol",
""
],
[
"Kim",
"Gunhee",
""
]
] | We present an approach named CurlingNet that can measure the semantic distance of composition of image-text embedding. In order to learn an effective image-text composition for the data in the fashion domain, our model comprises two key components as follows. First, the Delivery makes the transition of a source image in an embedding space. Second, the Sweeping emphasizes query-related components of fashion images in the embedding space. We utilize a channel-wise gating mechanism to make it possible. Our single model outperforms previous state-of-the-art image-text composition models including TIRG and FiLM. We participate in the first fashion-IQ challenge in ICCV 2019, for which an ensemble of our models achieves one of the best performances.
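A channel-wise gating mechanism of the kind mentioned above can be sketched in a few lines; the name `sweeping_gate` and the single linear map are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def sweeping_gate(image_feat, text_feat, Wg):
    """Channel-wise gating: the text query yields one sigmoid gate per
    channel, emphasizing query-relevant channels of the image embedding.
    Wg maps the text embedding to one logit per image channel."""
    gate = 1.0 / (1.0 + np.exp(-(Wg @ text_feat)))
    return gate * image_feat
```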
1309.4035 | Peter Turney | Peter D. Turney | Domain and Function: A Dual-Space Model of Semantic Relations and
Compositions | null | Journal of Artificial Intelligence Research (JAIR), (2012), 44,
533-585 | 10.1613/jair.3640 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given appropriate representations of the semantic relations between carpenter
and wood and between mason and stone (for example, vectors in a vector space
model), a suitable algorithm should be able to recognize that these relations
are highly similar (carpenter is to wood as mason is to stone; the relations
are analogous). Likewise, with representations of dog, house, and kennel, an
algorithm should be able to recognize that the semantic composition of dog and
house, dog house, is highly similar to kennel (dog house and kennel are
synonymous). It seems that these two tasks, recognizing relations and
compositions, are closely connected. However, up to now, the best models for
relations are significantly different from the best models for compositions. In
this paper, we introduce a dual-space model that unifies these two tasks. This
model matches the performance of the best previous models for relations and
compositions. The dual-space model consists of a space for measuring domain
similarity and a space for measuring function similarity. Carpenter and wood
share the same domain, the domain of carpentry. Mason and stone share the same
domain, the domain of masonry. Carpenter and mason share the same function, the
function of artisans. Wood and stone share the same function, the function of
materials. In the composition dog house, kennel has some domain overlap with
both dog and house (the domains of pets and buildings). The function of kennel
is similar to the function of house (the function of shelters). By combining
domain and function similarities in various ways, we can model relations,
compositions, and other aspects of semantics.
| [
{
"created": "Mon, 16 Sep 2013 16:51:02 GMT",
"version": "v1"
}
] | 2013-09-17 | [
[
"Turney",
"Peter D.",
""
]
] | Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics. |
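One plausible way to combine the two spaces for the analogy task described in this abstract (carpenter:wood :: mason:stone); the exact combination used in the paper may differ, and the vector lookup tables are placeholders.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def analogy_score(a, b, c, d, domain, function):
    """Score a:b :: c:d. Domain space: each pair shares a topical domain
    (carpenter~wood, mason~stone). Function space: corresponding terms share
    a role (carpenter~mason as artisans, wood~stone as materials).
    `domain` and `function` map a word to its vector in each space."""
    dom = cosine(domain[a], domain[b]) + cosine(domain[c], domain[d])
    fun = cosine(function[a], function[c]) + cosine(function[b], function[d])
    return (dom + fun) / 4.0
```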
2004.00794 | Zhonghao Wang | Zhonghao Wang, Yunchao Wei, Rogerior Feris, Jinjun Xiong, Wen-Mei Hwu,
Thomas S. Huang, Humphrey Shi | Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation
Method for Semantic Segmentation | CVPRW 2020 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning segmentation from synthetic data and adapting to real data can
significantly relieve human efforts in labelling pixel-level masks. A key
challenge of this task is how to alleviate the data distribution discrepancy
between the source and target domains, i.e. reducing domain shift. The common
approach to this problem is to minimize the discrepancy between feature
distributions from different domains through adversarial training. However,
directly aligning the feature distribution globally cannot guarantee
consistency from a local view (i.e. semantic-level), which prevents certain
semantic knowledge learned on the source domain from being applied to the
target domain. To tackle this issue, we propose a semi-supervised approach
named Alleviating Semantic-level Shift (ASS), which can successfully promote
the distribution consistency from both global and local views. Specifically,
leveraging a small number of labeled data from the target domain, we directly
extract semantic-level feature representations from both the source and the
target domains by averaging the features corresponding to same categories
advised by pixel-level masks. We then feed the produced features to the
discriminator to conduct semantic-level adversarial learning, which
collaborates with the adversarial learning from the global view to better
alleviate the domain shift. We apply our ASS to two domain adaptation tasks,
from GTA5 to Cityscapes and from Synthia to Cityscapes. Extensive experiments
demonstrate that: (1) ASS can significantly outperform the current unsupervised
state of the art by employing a small number of annotated samples from the
target domain; (2) ASS can beat the oracle model trained on the whole target
dataset by over 3 points by augmenting the synthetic source data with annotated
samples from the target domain without suffering from the prevalent problem of
overfitting to the source domain.
| [
{
"created": "Thu, 2 Apr 2020 03:25:05 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jun 2020 22:38:27 GMT",
"version": "v2"
}
] | 2020-06-11 | [
[
"Wang",
"Zhonghao",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Feris",
"Rogerior",
""
],
[
"Xiong",
"Jinjun",
""
],
[
"Hwu",
"Wen-Mei",
""
],
[
"Huang",
"Thomas S.",
""
],
[
"Shi",
"Humphrey",
""
]
] | Learning segmentation from synthetic data and adapting to real data can significantly relieve human efforts in labelling pixel-level masks. A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains, i.e. reducing domain shift. The common approach to this problem is to minimize the discrepancy between feature distributions from different domains through adversarial training. However, directly aligning the feature distribution globally cannot guarantee consistency from a local view (i.e. semantic-level), which prevents certain semantic knowledge learned on the source domain from being applied to the target domain. To tackle this issue, we propose a semi-supervised approach named Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views. Specifically, leveraging a small number of labeled data from the target domain, we directly extract semantic-level feature representations from both the source and the target domains by averaging the features corresponding to the same categories advised by pixel-level masks. We then feed the produced features to the discriminator to conduct semantic-level adversarial learning, which collaborates with the adversarial learning from the global view to better alleviate the domain shift. We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes. Extensive experiments demonstrate that: (1) ASS can significantly outperform the current unsupervised state of the art by employing a small number of annotated samples from the target domain; (2) ASS can beat the oracle model trained on the whole target dataset by over 3 points by augmenting the synthetic source data with annotated samples from the target domain without suffering from the prevalent problem of overfitting to the source domain.
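
The semantic-level feature extraction described above reduces to masked class-wise average pooling. Below is a minimal PyTorch sketch, assuming the pixel-level masks have already been resized to the feature-map resolution; the paper's exact pooling details are not given in the abstract, so treat this as an illustration.

```python
import torch

def semantic_level_features(features, masks, num_classes):
    """Class-wise average pooling: mean feature over the pixels each
    ground-truth mask assigns to a class.

    features: (B, C, H, W) backbone feature maps
    masks:    (B, H, W) integer class labels at feature resolution
    returns:  (num_classes, C) mean feature per class (zeros if absent)
    """
    B, C, H, W = features.shape
    flat_feats = features.permute(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)
    flat_masks = masks.reshape(-1)                            # (B*H*W,)
    out = torch.zeros(num_classes, C)
    for k in range(num_classes):
        sel = flat_masks == k
        if sel.any():
            out[k] = flat_feats[sel].mean(dim=0)
    return out

# Toy usage: 19 Cityscapes-style classes on a small feature map.
feats = torch.randn(2, 64, 8, 8)
labels = torch.randint(0, 19, (2, 8, 8))
print(semantic_level_features(feats, labels, 19).shape)  # torch.Size([19, 64])
```
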
1707.03642 | Longfei Zhou | L. Zhou, L. Zheng, X. Wang, W. Jiang, and W. Luo | Coordinated Multicell Multicast Beamforming Based on Manifold
Optimization | This paper is already available in IEEE Communications Letters. See
http://ieeexplore.ieee.org/document/7898517/ | null | 10.1109/LCOMM.2017.2693374 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multicast beamforming is a key technology for next-generation wireless
cellular networks to support high-rate content distribution services. In this
paper, the coordinated downlink multicast beamforming design in multicell
networks is considered. The goal is to maximize the minimum
signal-to-interference-plus-noise ratio of all users under individual base
station power constraints. We exploit the fractional form of the objective
function and geometric properties of the constraints to reformulate the
problem as a parametric manifold optimization program. Afterwards we propose a
low-complexity Dinkelbach-type algorithm combined with adaptive exponential
smoothing and Riemannian conjugate gradient iteration, which is guaranteed to
converge. Numerical experiments show that the proposed algorithm outperforms
the existing SDP-based method and DC-programming-based method and achieves
near-optimal performance.
| [
{
"created": "Wed, 12 Jul 2017 10:56:50 GMT",
"version": "v1"
}
] | 2017-07-13 | [
[
"Zhou",
"L.",
""
],
[
"Zheng",
"L.",
""
],
[
"Wang",
"X.",
""
],
[
"Jiang",
"W.",
""
],
[
"Luo",
"W.",
""
]
] | Multicast beamforming is a key technology for next-generation wireless cellular networks to support high-rate content distribution services. In this paper, the coordinated downlink multicast beamforming design in multicell networks is considered. The goal is to maximize the minimum signal-to-interference-plus-noise ratio of all users under individual base station power constraints. We exploit the fractional form of the objective function and geometric properties of the constraints to reformulate the problem as a parametric manifold optimization program. Afterwards we propose a low-complexity Dinkelbach-type algorithm combined with adaptive exponential smoothing and Riemannian conjugate gradient iteration, which is guaranteed to converge. Numerical experiments show that the proposed algorithm outperforms the existing SDP-based method and DC-programming-based method and achieves near-optimal performance.
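
The Dinkelbach-type outer loop mentioned in this abstract can be sketched generically: the inner parametric subproblem (in the letter, a Riemannian conjugate-gradient solve on the constraint manifold) is abstracted behind a callback, and the smoothing and manifold machinery are omitted. A toy scalar instance follows.

```python
import numpy as np

def dinkelbach(solve_parametric, f, g, x0, tol=1e-6, max_iter=50):
    """Generic Dinkelbach iteration for maximizing the ratio f(x)/g(x),
    with g > 0. solve_parametric(lam) should (approximately) solve
        max_x f(x) - lam * g(x)
    and return the maximizer."""
    x, lam = x0, f(x0) / g(x0)
    for _ in range(max_iter):
        x = solve_parametric(lam)
        if abs(f(x) - lam * g(x)) < tol:  # residual is 0 at the optimum
            break
        lam = f(x) / g(x)                 # update the ratio parameter
    return x, lam

# Toy instance: maximize (x + 1) / (x^2 + 1) over a grid on [0, 2];
# the true maximizer is sqrt(2) - 1 with ratio about 1.207.
grid = np.linspace(0.0, 2.0, 2001)
f = lambda x: x + 1.0
g = lambda x: x ** 2 + 1.0
inner = lambda lam: grid[np.argmax(f(grid) - lam * g(grid))]
print(dinkelbach(inner, f, g, x0=0.0))
```
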
1703.07384 | Vishal Jain | Vishal Jain and Dr. Mayank Singh | Ontology Based Pivoted normalization using Vector Based Approach for
information Retrieval | null | 7th International Conference on Advanced Computing and
Communication Technologies, 16th November, 2013 | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proposed methodology is procedural, i.e., it follows a finite number of
steps that extract relevant documents according to the user's query. It is
based on principles of Data Mining for analyzing web data. Data Mining first
performs integration of data to generate a warehouse. Then, it extracts useful
information with the help of an algorithm. The task of representing extracted
documents is done by using a Vector Based Statistical Approach that represents
each document as a set of Terms.
| [
{
"created": "Tue, 21 Mar 2017 18:34:34 GMT",
"version": "v1"
}
] | 2017-03-23 | [
[
"Jain",
"Vishal",
""
],
[
"Singh",
"Dr. Mayank",
""
]
] | The proposed methodology is procedural, i.e., it follows a finite number of steps that extract relevant documents according to the user's query. It is based on principles of Data Mining for analyzing web data. Data Mining first performs integration of data to generate a warehouse. Then, it extracts useful information with the help of an algorithm. The task of representing extracted documents is done by using a Vector Based Statistical Approach that represents each document as a set of Terms.
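
As a worked illustration of the pivoted normalization named in this title, here is the classic pivoted document-length normalization of Singhal et al. inside a tiny vector-space scorer. The abstract does not specify the paper's exact weighting, so this is a standard instantiation rather than the authors' formula.

```python
import math
from collections import Counter

def pivoted_norm_score(query_terms, doc_terms, df, n_docs, avg_len, slope=0.2):
    """Score a document against a query with pivoted document length
    normalization (Singhal et al., 1996): the length normalizer is pivoted
    around the average document length instead of the raw length."""
    tf = Counter(doc_terms)
    norm = (1.0 - slope) + slope * (len(doc_terms) / avg_len)  # pivoted norm
    score = 0.0
    for t in query_terms:
        if t in tf:
            weight = 1.0 + math.log(1.0 + math.log(tf[t]))     # dampened tf
            idf = math.log((n_docs + 1) / df[t])
            score += weight / norm * idf
    return score

# Toy corpus of term lists.
docs = [["ontology", "web", "data"], ["data", "mining", "data", "warehouse"]]
df = Counter(t for d in docs for t in set(d))
avg = sum(len(d) for d in docs) / len(docs)
for d in docs:
    print(round(pivoted_norm_score(["data", "warehouse"], d, df, len(docs), avg), 3))
```
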
2312.10273 | Rui Jin | Rui Jin, Yong Liao, and Pengyuan Zhou | User Authentication and Identity Inconsistency Detection via
Mouse-trajectory Similarity Measurement | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Completely Automated Public Turing Test To Tell Computers and Humans Apart
(CAPTCHA) is a type of challenge-response test widely used in authentication
systems. A well-known challenge it faces is the CAPTCHA farm, where workers are
hired to solve CAPTCHAs manually. In this work, we propose to tackle this
challenge from a novel perspective, converting CAPTCHA farm detection to
identity inconsistency detection, which essentially becomes an authentication
process. Specifically, we develop a novel embedding model, which measures the
similarity between mouse trajectories collected during the session and when
registering/solving CAPTCHA, to authenticate and detect identity inconsistency.
Moreover, unlike most existing works that employ a separate mouse movement
classifier for each individual user, which brings in considerable costs when
serving a large number of users, our model performs detection tasks using only
one classifier for all users, significantly reducing the cost. Experimental
results validate the superiority of our method over the state-of-the-art time
series classification methods, achieving AUCs of 94.3% and 97.7% in identity
and authentication inconsistency detection, respectively.
| [
{
"created": "Sat, 16 Dec 2023 00:28:10 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Jin",
"Rui",
""
],
[
"Liao",
"Yong",
""
],
[
"Zhou",
"Pengyuan",
""
]
] | Completely Automated Public Turing Test To Tell Computers and Humans Apart (CAPTCHA) is a type of challenge-response test widely used in authentication systems. A well-known challenge it faces is the CAPTCHA farm, where workers are hired to solve CAPTCHAs manually. In this work, we propose to tackle this challenge from a novel perspective, converting CAPTCHA farm detection to identity inconsistency detection, which essentially becomes an authentication process. Specifically, we develop a novel embedding model, which measures the similarity between mouse trajectories collected during the session and when registering/solving CAPTCHA, to authenticate and detect identity inconsistency. Moreover, unlike most existing works that employ a separate mouse movement classifier for each individual user, which brings in considerable costs when serving a large number of users, our model performs detection tasks using only one classifier for all users, significantly reducing the cost. Experimental results validate the superiority of our method over the state-of-the-art time series classification methods, achieving AUCs of 94.3% and 97.7% in identity and authentication inconsistency detection, respectively.
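
A minimal sketch of the similarity-based decision rule implied by this abstract: embed the session trajectory and the enrolled (registration/CAPTCHA-time) trajectory and threshold their similarity. The hand-crafted toy encoder and the threshold value are placeholders for the learned embedding model.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy encoder: summary statistics of step lengths. A real system would use
# the learned trajectory embedding model instead (hypothetical stand-in).
def embed(traj):
    steps = np.linalg.norm(np.diff(np.asarray(traj, dtype=float), axis=0), axis=1)
    return np.array([steps.mean(), steps.std() + 1e-9])

def authenticate(session_traj, enrolled_traj, threshold=0.8):
    """Flag an identity inconsistency when the similarity between the two
    trajectories falls below a threshold; the threshold is illustrative."""
    sim = cosine(embed(session_traj), embed(enrolled_traj))
    return sim >= threshold, sim

same_user, score = authenticate([(0, 0), (1, 2), (3, 3)], [(0, 0), (1, 1), (2, 3)])
print(same_user, round(score, 3))
```
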
1410.8808 | Paolo Pareti Mr. | Paolo Pareti, Ewan Klein and Adam Barker | A Semantic Web of Know-How: Linked Data for Community-Centric Tasks | 6th International Workshop on Web Intelligence & Communities (WIC14),
Proceedings of the companion publication of the 23rd International Conference
on World Wide Web (WWW 2014) | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper proposes a novel framework for representing community know-how on
the Semantic Web. Procedural knowledge generated by web communities typically
takes the form of natural language instructions or videos and is largely
unstructured. The absence of semantic structure impedes the deployment of many
useful applications, in particular the ability to discover and integrate
know-how automatically. We discuss the characteristics of community know-how
and argue that existing knowledge representation frameworks fail to represent
it adequately. We present a novel framework for representing the semantic
structure of community know-how and demonstrate the feasibility of our approach
by providing a concrete implementation which includes a method for
automatically acquiring procedural knowledge for real-world tasks.
| [
{
"created": "Wed, 29 Oct 2014 15:48:40 GMT",
"version": "v1"
}
] | 2014-11-03 | [
[
"Pareti",
"Paolo",
""
],
[
"Klein",
"Ewan",
""
],
[
"Barker",
"Adam",
""
]
] | This paper proposes a novel framework for representing community know-how on the Semantic Web. Procedural knowledge generated by web communities typically takes the form of natural language instructions or videos and is largely unstructured. The absence of semantic structure impedes the deployment of many useful applications, in particular the ability to discover and integrate know-how automatically. We discuss the characteristics of community know-how and argue that existing knowledge representation frameworks fail to represent it adequately. We present a novel framework for representing the semantic structure of community know-how and demonstrate the feasibility of our approach by providing a concrete implementation which includes a method for automatically acquiring procedural knowledge for real-world tasks. |
2210.02594 | Jeongyeol Kwon | Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor | Reward-Mixing MDPs with a Few Latent Contexts are Learnable | null | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider episodic reinforcement learning in reward-mixing Markov decision
processes (RMMDPs): at the beginning of every episode nature randomly picks a
latent reward model among $M$ candidates and an agent interacts with the MDP
throughout the episode for $H$ time steps. Our goal is to learn a near-optimal
policy that nearly maximizes the $H$ time-step cumulative rewards in such a
model. Previous work established an upper bound for RMMDPs for $M=2$. In this
work, we resolve several open questions that remained for the RMMDP model. For an
arbitrary $M\ge2$, we provide a sample-efficient
algorithm--$\texttt{EM}^2$--that outputs an $\epsilon$-optimal policy using
$\tilde{O} \left(\epsilon^{-2} \cdot S^d A^d \cdot \texttt{poly}(H, Z)^d
\right)$ episodes, where $S, A$ are the number of states and actions
respectively, $H$ is the time-horizon, $Z$ is the support size of reward
distributions and $d=\min(2M-1,H)$. Our technique is a higher-order extension
of the method-of-moments based approach; nevertheless, the design and analysis
of the $\texttt{EM}^2$ algorithm require several new ideas beyond existing
techniques. We also provide a lower bound of $(SA)^{\Omega(\sqrt{M})} /
\epsilon^{2}$ for a general instance of RMMDP, supporting that super-polynomial
sample complexity in $M$ is necessary.
| [
{
"created": "Wed, 5 Oct 2022 22:52:00 GMT",
"version": "v1"
}
] | 2022-10-07 | [
[
"Kwon",
"Jeongyeol",
""
],
[
"Efroni",
"Yonathan",
""
],
[
"Caramanis",
"Constantine",
""
],
[
"Mannor",
"Shie",
""
]
] | We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs): at the beginning of every episode nature randomly picks a latent reward model among $M$ candidates and an agent interacts with the MDP throughout the episode for $H$ time steps. Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model. Previous work established an upper bound for RMMDPs for $M=2$. In this work, we resolve several open questions that remained for the RMMDP model. For an arbitrary $M\ge2$, we provide a sample-efficient algorithm--$\texttt{EM}^2$--that outputs an $\epsilon$-optimal policy using $\tilde{O} \left(\epsilon^{-2} \cdot S^d A^d \cdot \texttt{poly}(H, Z)^d \right)$ episodes, where $S, A$ are the number of states and actions respectively, $H$ is the time-horizon, $Z$ is the support size of reward distributions and $d=\min(2M-1,H)$. Our technique is a higher-order extension of the method-of-moments based approach; nevertheless, the design and analysis of the $\texttt{EM}^2$ algorithm require several new ideas beyond existing techniques. We also provide a lower bound of $(SA)^{\Omega(\sqrt{M})} / \epsilon^{2}$ for a general instance of RMMDP, supporting that super-polynomial sample complexity in $M$ is necessary.
2206.03761 | Yifan Wang | Yifan Wang, Weizhi Ma, Min Zhang, Yiqun Liu, and Shaoping Ma | A Survey on the Fairness of Recommender Systems | Submitted to the Special Section on Trustworthy Recommendation and
Search of ACM TOIS on March 27, 2022 and accepted on June 6 | null | 10.1145/3547333 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems are an essential tool to relieve the information overload
challenge and play an important role in people's daily lives. Since
recommendations involve allocations of social resources (e.g., job
recommendation), an important issue is whether recommendations are fair. Unfair
recommendations are not only unethical but also harm the long-term interests of
the recommender system itself. As a result, fairness issues in recommender
systems have recently attracted increasing attention. However, due to multiple
complex resource allocation processes and various fairness definitions, the
research on fairness in recommendation is scattered. To fill this gap, we
review over 60 papers published in top conferences/journals, including TOIS,
SIGIR, and WWW. First, we summarize fairness definitions in recommendation
and provide several views to classify fairness issues. Then, we review
recommendation datasets and measurements in fairness studies and provide an
elaborate taxonomy of fairness methods in recommendation. Finally, we
conclude this survey by outlining some promising future directions.
| [
{
"created": "Wed, 8 Jun 2022 09:15:08 GMT",
"version": "v1"
},
{
"created": "Sun, 19 Jun 2022 16:14:06 GMT",
"version": "v2"
}
] | 2022-07-12 | [
[
"Wang",
"Yifan",
""
],
[
"Ma",
"Weizhi",
""
],
[
"Zhang",
"Min",
""
],
[
"Liu",
"Yiqun",
""
],
[
"Ma",
"Shaoping",
""
]
] | Recommender systems are an essential tool to relieve the information overload challenge and play an important role in people's daily lives. Since recommendations involve allocations of social resources (e.g., job recommendation), an important issue is whether recommendations are fair. Unfair recommendations are not only unethical but also harm the long-term interests of the recommender system itself. As a result, fairness issues in recommender systems have recently attracted increasing attention. However, due to multiple complex resource allocation processes and various fairness definitions, the research on fairness in recommendation is scattered. To fill this gap, we review over 60 papers published in top conferences/journals, including TOIS, SIGIR, and WWW. First, we summarize fairness definitions in recommendation and provide several views to classify fairness issues. Then, we review recommendation datasets and measurements in fairness studies and provide an elaborate taxonomy of fairness methods in recommendation. Finally, we conclude this survey by outlining some promising future directions.
1402.5078 | Kri\v{s}j\=anis Pr\=usis | Andris Ambainis and Kri\v{s}j\=anis Pr\=usis | A Tight Lower Bound on Certificate Complexity in Terms of Block
Sensitivity and Sensitivity | 12 pages | null | 10.1007/978-3-662-44465-8_4 | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensitivity, certificate complexity and block sensitivity are widely used
Boolean function complexity measures. A longstanding open problem, proposed by
Nisan and Szegedy, is whether sensitivity and block sensitivity are
polynomially related. Motivated by the constructions of functions which achieve
the largest known separations, we study the relation between 1-certificate
complexity and 0-sensitivity and 0-block sensitivity.
Previously the best known lower bound was $C_1(f)\geq \frac{bs_0(f)}{2
s_0(f)}$, achieved by Kenyon and Kutin. We improve this to $C_1(f)\geq \frac{3
bs_0(f)}{2 s_0(f)}$. While this improvement is only by a constant factor, this
is quite important, as it precludes achieving a superquadratic separation
between $bs(f)$ and $s(f)$ by iterating functions which reach this bound. In
addition, this bound is tight, as it matches the construction of Ambainis and
Sun up to an additive constant.
| [
{
"created": "Thu, 20 Feb 2014 17:16:23 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Jul 2014 11:55:55 GMT",
"version": "v2"
}
] | 2015-03-27 | [
[
"Ambainis",
"Andris",
""
],
[
"Prūsis",
"Krišjānis",
""
]
] | Sensitivity, certificate complexity and block sensitivity are widely used Boolean function complexity measures. A longstanding open problem, proposed by Nisan and Szegedy, is whether sensitivity and block sensitivity are polynomially related. Motivated by the constructions of functions which achieve the largest known separations, we study the relation between 1-certificate complexity and 0-sensitivity and 0-block sensitivity. Previously the best known lower bound was $C_1(f)\geq \frac{bs_0(f)}{2 s_0(f)}$, achieved by Kenyon and Kutin. We improve this to $C_1(f)\geq \frac{3 bs_0(f)}{2 s_0(f)}$. While this improvement is only by a constant factor, this is quite important, as it precludes achieving a superquadratic separation between $bs(f)$ and $s(f)$ by iterating functions which reach this bound. In addition, this bound is tight, as it matches the construction of Ambainis and Sun up to an additive constant. |
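Brute-force reference implementations of the three measures discussed in this abstract (sensitivity, block sensitivity, and certificate complexity at an input) are handy for experimenting with small Boolean functions. The definitions below are the standard ones; the search is exponential-time and only meant for tiny n.

```python
from itertools import combinations

def sensitivity_at(f, x, n):
    """Number of single-bit flips of x that change f (inputs are ints)."""
    return sum(f(x ^ (1 << i)) != f(x) for i in range(n))

def block_sensitivity_at(f, x, n):
    """Max number of pairwise disjoint blocks whose flips each change f(x),
    by exhaustive search over the sensitive blocks (tiny n only)."""
    blocks = [frozenset(b) for r in range(1, n + 1)
              for b in combinations(range(n), r)
              if f(x ^ sum(1 << i for i in b)) != f(x)]
    best = 0
    def extend(count, used, rest):
        nonlocal best
        best = max(best, count)
        for j, b in enumerate(rest):
            if not (b & used):
                extend(count + 1, used | b, rest[j + 1:])
    extend(0, frozenset(), blocks)
    return best

def certificate_at(f, x, n):
    """Smallest set of coordinates that, once fixed to x's values, forces
    f to equal f(x) on every consistent input."""
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if all(f(y) == f(x) for y in range(2 ** n)
                   if all(((y >> i) & 1) == ((x >> i) & 1) for i in S)):
                return r

n = 3
f = lambda x: int(x != 0)  # OR on 3 bits
s0 = max(sensitivity_at(f, x, n) for x in range(2 ** n) if f(x) == 0)
bs0 = max(block_sensitivity_at(f, x, n) for x in range(2 ** n) if f(x) == 0)
C1 = max(certificate_at(f, x, n) for x in range(2 ** n) if f(x) == 1)
print(s0, bs0, C1)  # 3 3 1 for OR
```
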
2109.06853 | Naoya Inoue | Naoya Inoue, Harsh Trivedi, Steven Sinha, Niranjan Balasubramanian and
Kentaro Inui | Summarize-then-Answer: Generating Concise Explanations for Multi-hop
Reading Comprehension | Accepted to EMNLP2021 Long Paper (Main Track) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | How can we generate concise explanations for multi-hop Reading Comprehension
(RC)? The current strategies of identifying supporting sentences can be seen as
an extractive question-focused summarization of the input text. However, these
extractive explanations are not necessarily concise, i.e., not minimally
sufficient for answering a question. Instead, we advocate for an abstractive
approach, where we propose to generate a question-focused, abstractive summary
of input paragraphs and then feed it to an RC system. Given a limited amount of
human-annotated abstractive explanations, we train the abstractive explainer in
a semi-supervised manner, where we start from the supervised model and then
train it further through trial and error maximizing a conciseness-promoted
reward function. Our experiments demonstrate that the proposed abstractive
explainer can generate more compact explanations than an extractive explainer
with limited supervision (only 2k instances) while maintaining sufficiency.
| [
{
"created": "Tue, 14 Sep 2021 17:44:34 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Inoue",
"Naoya",
""
],
[
"Trivedi",
"Harsh",
""
],
[
"Sinha",
"Steven",
""
],
[
"Balasubramanian",
"Niranjan",
""
],
[
"Inui",
"Kentaro",
""
]
] | How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategies of identifying supporting sentences can be seen as an extractive question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., not minimally sufficient for answering a question. Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner, where we start from the supervised model and then train it further through trial and error maximizing a conciseness-promoted reward function. Our experiments demonstrate that the proposed abstractive explainer can generate more compact explanations than an extractive explainer with limited supervision (only 2k instances) while maintaining sufficiency.
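
One plausible shape for the conciseness-promoted reward mentioned above: reward the RC model's success on the generated summary and penalize summary length relative to the source. The functional form and coefficients here are illustrative assumptions, not the paper's exact reward.

```python
def conciseness_promoted_reward(answer_correct, summary_len, source_len,
                                alpha=1.0, beta=0.5):
    """Trade answer sufficiency against explanation length: the RC model's
    success on the summary rewards sufficiency, and a length-ratio bonus
    promotes conciseness. alpha and beta are hypothetical coefficients."""
    sufficiency = alpha * float(answer_correct)  # 1 if RC answers correctly
    brevity = beta * (1.0 - summary_len / max(source_len, 1))
    return sufficiency + brevity

# Toy usage: a correct answer from a summary one tenth the source length.
print(conciseness_promoted_reward(True, summary_len=40, source_len=400))
```
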
2102.00321 | Orestis Papadigenopoulos | Orestis Papadigenopoulos and Constantine Caramanis | Recurrent Submodular Welfare and Matroid Blocking Bandits | Corrected Remark 3.2 | null | null | null | cs.LG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent line of research focuses on the study of the stochastic multi-armed
bandits problem (MAB), in the case where temporal correlations of specific
structure are imposed between the player's actions and the reward distributions
of the arms (Kleinberg and Immorlica [FOCS18], Basu et al. [NeurIPS19]). As
opposed to the standard MAB setting, where the optimal solution in hindsight
can be trivially characterized, these correlations lead to (sub-)optimal
solutions that exhibit interesting dynamical patterns -- a phenomenon that
yields new challenges both from an algorithmic as well as a learning
perspective. In this work, we extend the above direction to a combinatorial
bandit setting and study a variant of stochastic MAB, where arms are subject to
matroid constraints and each arm becomes unavailable (blocked) for a fixed
number of rounds after each play. A natural common generalization of the
state-of-the-art for blocking bandits, and that for matroid bandits, yields a
$(1-\frac{1}{e})$-approximation for partition matroids, yet it only guarantees
a $\frac{1}{2}$-approximation for general matroids. In this paper we develop
new algorithmic ideas that allow us to obtain a polynomial-time $(1 -
\frac{1}{e})$-approximation algorithm (asymptotically and in expectation) for
any matroid, and thus to control the $(1-\frac{1}{e})$-approximate regret. A
key ingredient is the technique of correlated (interleaved) scheduling. Along
the way, we discover an interesting connection to a variant of Submodular
Welfare Maximization, for which we provide (asymptotically) matching upper and
lower approximability bounds.
| [
{
"created": "Sat, 30 Jan 2021 21:51:47 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Feb 2021 06:30:35 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Feb 2021 03:34:19 GMT",
"version": "v3"
}
] | 2021-03-02 | [
[
"Papadigenopoulos",
"Orestis",
""
],
[
"Caramanis",
"Constantine",
""
]
] | A recent line of research focuses on the study of the stochastic multi-armed bandits problem (MAB), in the case where temporal correlations of specific structure are imposed between the player's actions and the reward distributions of the arms (Kleinberg and Immorlica [FOCS18], Basu et al. [NeurIPS19]). As opposed to the standard MAB setting, where the optimal solution in hindsight can be trivially characterized, these correlations lead to (sub-)optimal solutions that exhibit interesting dynamical patterns -- a phenomenon that yields new challenges both from an algorithmic as well as a learning perspective. In this work, we extend the above direction to a combinatorial bandit setting and study a variant of stochastic MAB, where arms are subject to matroid constraints and each arm becomes unavailable (blocked) for a fixed number of rounds after each play. A natural common generalization of the state-of-the-art for blocking bandits, and that for matroid bandits, yields a $(1-\frac{1}{e})$-approximation for partition matroids, yet it only guarantees a $\frac{1}{2}$-approximation for general matroids. In this paper we develop new algorithmic ideas that allow us to obtain a polynomial-time $(1 - \frac{1}{e})$-approximation algorithm (asymptotically and in expectation) for any matroid, and thus to control the $(1-\frac{1}{e})$-approximate regret. A key ingredient is the technique of correlated (interleaved) scheduling. Along the way, we discover an interesting connection to a variant of Submodular Welfare Maximization, for which we provide (asymptotically) matching upper and lower approximability bounds. |
2404.05587 | Wolfgang Otto | Wolfgang Otto, Sharmila Upadhyaya, Stefan Dietze | Enhancing Software-Related Information Extraction via Single-Choice
Question Answering with Large Language Models | Accepted at: 1st Workshop on Natural Scientific Language Processing
and Research Knowledge Graphs (NSLP 2024) Co-located with Extended Semantic
Web Conference (ESWC 2024) | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | This paper describes our participation in the Shared Task on Software
Mentions Disambiguation (SOMD), with a focus on improving relation extraction
in scholarly texts through generative Large Language Models (LLMs) using
single-choice question-answering. The methodology prioritises the use of
in-context learning capabilities of LLMs to extract software-related entities
and their descriptive attributes, such as distributive information. Our
approach uses Retrieval-Augmented Generation (RAG) techniques and LLMs for
Named Entity Recognition (NER) and Attributive NER to identify relationships
between extracted software entities, providing a structured solution for
analysing software citations in academic literature. The paper provides a
detailed description of our approach, demonstrating how using LLMs in a
single-choice QA paradigm can greatly enhance IE methodologies. Our
participation in the SOMD shared task highlights the importance of precise
software citation practices and showcases our system's ability to overcome the
challenges of disambiguating and extracting relationships between software
mentions. This sets the groundwork for future research and development in this
field.
| [
{
"created": "Mon, 8 Apr 2024 15:00:36 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Apr 2024 23:19:17 GMT",
"version": "v2"
}
] | 2024-04-23 | [
[
"Otto",
"Wolfgang",
""
],
[
"Upadhyaya",
"Sharmila",
""
],
[
"Dietze",
"Stefan",
""
]
] | This paper describes our participation in the Shared Task on Software Mentions Disambiguation (SOMD), with a focus on improving relation extraction in scholarly texts through generative Large Language Models (LLMs) using single-choice question-answering. The methodology prioritises the use of in-context learning capabilities of LLMs to extract software-related entities and their descriptive attributes, such as distributive information. Our approach uses Retrieval-Augmented Generation (RAG) techniques and LLMs for Named Entity Recognition (NER) and Attributive NER to identify relationships between extracted software entities, providing a structured solution for analysing software citations in academic literature. The paper provides a detailed description of our approach, demonstrating how using LLMs in a single-choice QA paradigm can greatly enhance IE methodologies. Our participation in the SOMD shared task highlights the importance of precise software citation practices and showcases our system's ability to overcome the challenges of disambiguating and extracting relationships between software mentions. This sets the groundwork for future research and development in this field.
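
A hedged sketch of how a single-choice QA prompt for software-attribute extraction might be assembled; the wording, option format, and example question are our own illustration of the general recipe, not the authors' template.

```python
def single_choice_prompt(sentence, entity, candidates):
    """Format a software-mention attribute question as a single-choice QA
    prompt for a generative model (illustrative template)."""
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(candidates))
    return (
        f"Sentence: {sentence}\n"
        f"Question: Which phrase states the version of the software "
        f"'{entity}' mentioned in the sentence?\n"
        f"{options}\n"
        f"Answer with a single letter."
    )

print(single_choice_prompt(
    "We analysed the data with SPSS 25 on Windows.",
    "SPSS",
    ["25", "Windows", "no version is given"],
))
```
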
1909.00440 | Abir De | Abir De, Adish Singla, Utkarsh Upadhyay, Manuel Gomez-Rodriguez | Can A User Anticipate What Her Followers Want? | Fixed some typos | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Whenever a social media user decides to share a story, she is typically
pleased to receive likes, comments, shares, or, more generally, feedback from
her followers. As a result, she may feel compelled to use the feedback she
receives to (re-)estimate her followers' preferences and decide which stories
to share next to receive more (positive) feedback. Under which conditions can
she succeed? In this work, we first look into this problem from a theoretical
perspective and then provide a set of practical algorithms to identify and
characterize such behavior in social media. More specifically, we address the
above problem from the viewpoint of sequential decision making and utility
maximization. For a wide variety of utility functions, we first show that, to
succeed, a user needs to actively trade off exploitation -- sharing stories
which lead to more (positive) feedback -- and exploration -- sharing stories to
learn about her followers' preferences. However, exploration is not necessary
if a user utilizes the feedback her followers provide to other users in
addition to the feedback she receives. Then, we develop a utility estimation
framework for observational data, which relies on statistical hypothesis testing
to determine whether a user utilizes the feedback she receives from each of her
followers to decide what to post next. Experiments on synthetic data illustrate
our theoretical findings and show that our estimation framework is able to
accurately recover users' underlying utility functions. Experiments on several
real datasets gathered from Twitter and Reddit reveal that up to 82% (43%) of
the Twitter (Reddit) users in our datasets do use the feedback they receive to
decide what to post next.
| [
{
"created": "Sun, 1 Sep 2019 18:19:51 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Sep 2019 12:28:39 GMT",
"version": "v2"
}
] | 2019-09-20 | [
[
"De",
"Abir",
""
],
[
"Singla",
"Adish",
""
],
[
"Upadhyay",
"Utkarsh",
""
],
[
"Gomez-Rodriguez",
"Manuel",
""
]
] | Whenever a social media user decides to share a story, she is typically pleased to receive likes, comments, shares, or, more generally, feedback from her followers. As a result, she may feel compelled to use the feedback she receives to (re-)estimate her followers' preferences and decide which stories to share next to receive more (positive) feedback. Under which conditions can she succeed? In this work, we first look into this problem from a theoretical perspective and then provide a set of practical algorithms to identify and characterize such behavior in social media. More specifically, we address the above problem from the viewpoint of sequential decision making and utility maximization. For a wide variety of utility functions, we first show that, to succeed, a user needs to actively trade off exploitation -- sharing stories which lead to more (positive) feedback -- and exploration -- sharing stories to learn about her followers' preferences. However, exploration is not necessary if a user utilizes the feedback her followers provide to other users in addition to the feedback she receives. Then, we develop a utility estimation framework for observational data, which relies on statistical hypothesis testing to determine whether a user utilizes the feedback she receives from each of her followers to decide what to post next. Experiments on synthetic data illustrate our theoretical findings and show that our estimation framework is able to accurately recover users' underlying utility functions. Experiments on several real datasets gathered from Twitter and Reddit reveal that up to 82% (43%) of the Twitter (Reddit) users in our datasets do use the feedback they receive to decide what to post next.
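
In its simplest form, the statistical hypothesis testing step could be a permutation test of whether received feedback is associated with what the user posts next; this stand-in ignores the paper's utility-model specifics.

```python
import numpy as np

def feedback_dependence_test(feedback, next_post_similarity, n_perm=1000, seed=0):
    """Permutation test: under the null that the user ignores feedback, the
    correlation between feedback on post t and a property of post t+1 (here,
    a similarity score) should look like that of shuffled pairings. The
    statistic is a simplification of the paper's utility-based test."""
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(feedback, next_post_similarity)[0, 1]
    perm = [np.corrcoef(rng.permutation(feedback), next_post_similarity)[0, 1]
            for _ in range(n_perm)]
    p_value = float(np.mean(np.abs(perm) >= abs(obs)))
    return obs, p_value
```
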
2011.03659 | Jingnan Shi | Jingnan Shi, Heng Yang, Luca Carlone | ROBIN: a Graph-Theoretic Approach to Reject Outliers in Robust
Estimation using Invariants | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many estimation problems in robotics, computer vision, and learning require
estimating unknown quantities in the face of outliers. Outliers are typically
the result of incorrect data association or feature matching, and it is common
to have problems where more than 90% of the measurements used for estimation
are outliers. While current approaches for robust estimation are able to deal
with moderate amounts of outliers, they fail to produce accurate estimates in
the presence of many outliers. This paper develops an approach to prune
outliers. First, we develop a theory of invariance that allows us to quickly
check if a subset of measurements are mutually compatible without explicitly
solving the estimation problem. Second, we develop a graph-theoretic framework,
where measurements are modeled as vertices and mutual compatibility is captured
by edges. We generalize existing results showing that the inliers form a clique
in this graph and typically belong to the maximum clique. We also show that in
practice the maximum k-core of the compatibility graph provides an
approximation of the maximum clique, while being faster to compute in large
problems. These two contributions lead to ROBIN, our approach to Reject
Outliers Based on INvariants, which allows us to quickly prune outliers in
generic estimation problems. We demonstrate ROBIN in four geometric perception
problems and show it boosts robustness of existing solvers while running in
milliseconds in large problems.
| [
{
"created": "Sat, 7 Nov 2020 02:09:33 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Mar 2021 20:02:00 GMT",
"version": "v2"
}
] | 2021-03-25 | [
[
"Shi",
"Jingnan",
""
],
[
"Yang",
"Heng",
""
],
[
"Carlone",
"Luca",
""
]
] | Many estimation problems in robotics, computer vision, and learning require estimating unknown quantities in the face of outliers. Outliers are typically the result of incorrect data association or feature matching, and it is common to have problems where more than 90% of the measurements used for estimation are outliers. While current approaches for robust estimation are able to deal with moderate amounts of outliers, they fail to produce accurate estimates in the presence of many outliers. This paper develops an approach to prune outliers. First, we develop a theory of invariance that allows us to quickly check if a subset of measurements are mutually compatible without explicitly solving the estimation problem. Second, we develop a graph-theoretic framework, where measurements are modeled as vertices and mutual compatibility is captured by edges. We generalize existing results showing that the inliers form a clique in this graph and typically belong to the maximum clique. We also show that in practice the maximum k-core of the compatibility graph provides an approximation of the maximum clique, while being faster to compute in large problems. These two contributions lead to ROBIN, our approach to Reject Outliers Based on INvariants, which allows us to quickly prune outliers in generic estimation problems. We demonstrate ROBIN in four geometric perception problems and show it boosts robustness of existing solvers while running in milliseconds in large problems.
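
The maximum k-core surrogate described in this abstract is straightforward to compute; here is a sketch with networkx, where the invariant-based pairwise check is passed in as a predicate.

```python
import networkx as nx

def robin_style_prune(measurements, compatible):
    """Keep the maximum k-core of the pairwise-compatibility graph as a
    fast surrogate for the maximum clique of inliers.

    measurements: list of measurement ids
    compatible(i, j): True if the pair passes the invariant check
    """
    G = nx.Graph()
    G.add_nodes_from(measurements)
    G.add_edges_from((i, j) for a, i in enumerate(measurements)
                     for j in measurements[a + 1:] if compatible(i, j))
    core = nx.core_number(G)          # largest k-core each node belongs to
    k = max(core.values(), default=0)
    return [v for v, c in core.items() if c == k]

# Toy usage: measurements 0-3 are mutually compatible inliers, 4-5 are not.
ok = {0, 1, 2, 3}
print(robin_style_prune(list(range(6)), lambda i, j: i in ok and j in ok))
```
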
1609.01755 | Ulf R\"uegg | Adalat Jabrayilov, Sven Mallach, Petra Mutzel, Ulf R\"uegg, and
Reinhard von Hanxleden | Compact Layered Drawings of General Directed Graphs | Appears in the Proceedings of the 24th International Symposium on
Graph Drawing and Network Visualization (GD 2016) | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of layering general directed graphs under height and
possibly also width constraints. Given a directed graph G = (V,A) and a maximal
height, we propose a layering approach that minimizes a weighted sum of the
number of reversed arcs, the arc lengths, and the width of the drawing. We call
this the Compact Generalized Layering Problem (CGLP). Here, the width of a
drawing is defined as the maximum sum of the number of vertices placed on a
layer and the number of dummy vertices caused by arcs traversing the layer. The
CGLP is NP-hard. We present two MIP models for this problem. The first one
(EXT) is our extension of a natural formulation for directed acyclic graphs as
suggested by Healy and Nikolov. The second one (CGL) is a new formulation based
on partial orderings. Our computational experiments on two benchmark sets show
that the CGL formulation can be solved much faster than EXT using standard
commercial MIP solvers. Moreover, we suggest a variant of CGL, called MML, that
can be seen as a heuristic approach. In our experiments, MML clearly improves
on CGL in terms of running time, while it does not considerably increase the
average arc lengths and widths of the layouts, although it solves a slightly
different problem where the dummy vertices are not taken into account.
| [
{
"created": "Mon, 29 Aug 2016 22:09:37 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Jabrayilov",
"Adalat",
""
],
[
"Mallach",
"Sven",
""
],
[
"Mutzel",
"Petra",
""
],
[
"Rüegg",
"Ulf",
""
],
[
"von Hanxleden",
"Reinhard",
""
]
] | We consider the problem of layering general directed graphs under height and possibly also width constraints. Given a directed graph G = (V,A) and a maximal height, we propose a layering approach that minimizes a weighted sum of the number of reversed arcs, the arc lengths, and the width of the drawing. We call this the Compact Generalized Layering Problem (CGLP). Here, the width of a drawing is defined as the maximum sum of the number of vertices placed on a layer and the number of dummy vertices caused by arcs traversing the layer. The CGLP is NP-hard. We present two MIP models for this problem. The first one (EXT) is our extension of a natural formulation for directed acyclic graphs as suggested by Healy and Nikolov. The second one (CGL) is a new formulation based on partial orderings. Our computational experiments on two benchmark sets show that the CGL formulation can be solved much faster than EXT using standard commercial MIP solvers. Moreover, we suggest a variant of CGL, called MML, that can be seen as a heuristic approach. In our experiments, MML clearly improves on CGL in terms of running time, while it does not considerably increase the average arc lengths and widths of the layouts, although it solves a slightly different problem where the dummy vertices are not taken into account.
2204.09009 | Ishay Haviv | Ishay Haviv | Fixed-Parameter Algorithms for the Kneser and Schrijver Problems | 31 pages. This paper includes and extends the content of
arXiv:2204.06761 | null | null | null | cs.DS math.CO | http://creativecommons.org/licenses/by/4.0/ | The Kneser graph $K(n,k)$ is defined for integers $n$ and $k$ with $n \geq
2k$ as the graph whose vertices are all the $k$-subsets of
$[n]=\{1,2,\ldots,n\}$ where two such sets are adjacent if they are disjoint.
The Schrijver graph $S(n,k)$ is defined as the subgraph of $K(n,k)$ induced by
the collection of all $k$-subsets of $[n]$ that do not include two consecutive
elements modulo $n$. It is known that the chromatic number of both $K(n,k)$ and
$S(n,k)$ is $n-2k+2$.
In the computational Kneser and Schrijver problems, we are given access to
a coloring with $n-2k+1$ colors of the vertices of $K(n,k)$ and $S(n,k)$
respectively, and the goal is to find a monochromatic edge. We prove that the
problems admit randomized algorithms with running time $n^{O(1)} \cdot
k^{O(k)}$, hence they are fixed-parameter tractable with respect to the
parameter $k$. The analysis involves structural results on intersecting
families and on induced subgraphs of Kneser and Schrijver graphs.
We also study the Agreeable-Set problem of assigning a small subset of a set
of $m$ items to a group of $\ell$ agents, so that all agents value the subset
at least as much as its complement. As an application of our algorithm for the
Kneser problem, we obtain a randomized polynomial-time algorithm for the
Agreeable-Set problem for instances with $\ell \geq m - O(\frac{\log m}{\log
\log m})$. We further show that the Agreeable-Set problem is at least as hard
as a variant of the Kneser problem with extended access to the input
coloring.
| [
{
"created": "Tue, 19 Apr 2022 17:09:01 GMT",
"version": "v1"
},
{
"created": "Sun, 1 May 2022 06:49:00 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Feb 2024 08:11:56 GMT",
"version": "v3"
}
] | 2024-02-14 | [
[
"Haviv",
"Ishay",
""
]
] | The Kneser graph $K(n,k)$ is defined for integers $n$ and $k$ with $n \geq 2k$ as the graph whose vertices are all the $k$-subsets of $[n]=\{1,2,\ldots,n\}$ where two such sets are adjacent if they are disjoint. The Schrijver graph $S(n,k)$ is defined as the subgraph of $K(n,k)$ induced by the collection of all $k$-subsets of $[n]$ that do not include two consecutive elements modulo $n$. It is known that the chromatic number of both $K(n,k)$ and $S(n,k)$ is $n-2k+2$. In the computational Kneser and Schrijver problems, we are given access to a coloring with $n-2k+1$ colors of the vertices of $K(n,k)$ and $S(n,k)$ respectively, and the goal is to find a monochromatic edge. We prove that the problems admit randomized algorithms with running time $n^{O(1)} \cdot k^{O(k)}$, hence they are fixed-parameter tractable with respect to the parameter $k$. The analysis involves structural results on intersecting families and on induced subgraphs of Kneser and Schrijver graphs. We also study the Agreeable-Set problem of assigning a small subset of a set of $m$ items to a group of $\ell$ agents, so that all agents value the subset at least as much as its complement. As an application of our algorithm for the Kneser problem, we obtain a randomized polynomial-time algorithm for the Agreeable-Set problem for instances with $\ell \geq m - O(\frac{\log m}{\log \log m})$. We further show that the Agreeable-Set problem is at least as hard as a variant of the Kneser problem with extended access to the input coloring.
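
For intuition about the computational Kneser problem defined above, here is a brute-force search for a monochromatic edge on a tiny instance; the paper's contribution is an $n^{O(1)} \cdot k^{O(k)}$ algorithm, exponentially faster than this exhaustive search in general.

```python
from itertools import combinations

def monochromatic_edge(n, k, coloring):
    """Exhaustively find two disjoint k-subsets of {1, ..., n} with the
    same color. `coloring` maps a frozenset of size k to one of
    n - 2k + 1 colors."""
    subsets = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    for A, B in combinations(subsets, 2):
        if A.isdisjoint(B) and coloring(A) == coloring(B):
            return A, B
    return None  # impossible when only n - 2k + 1 colors are used

# Toy coloring of K(5, 2) with n - 2k + 1 = 2 colors: color by the minimum
# element, capped at 2. A monochromatic edge must exist since the chromatic
# number is n - 2k + 2 = 3.
print(monochromatic_edge(5, 2, lambda S: min(min(S), 2)))
```
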
2402.09872 | Arman Isajanyan | Arman Isajanyan, Artur Shatveryan, David Kocharyan, Zhangyang Wang,
Humphrey Shi | Social Reward: Evaluating and Enhancing Generative AI through
Million-User Feedback from an Online Creative Community | 16 pages with 10 figures, accepted at ICLR 2024 as a spotlight, codes
can be accessed at https://github.com/Picsart-AI-Research/Social-Reward | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social reward as a form of community recognition provides a strong source of
motivation for users of online platforms to engage and contribute with content.
The recent progress of text-conditioned image synthesis has ushered in a
collaborative era where AI empowers users to craft original visual artworks
seeking community validation. Nevertheless, assessing these models in the
context of collective community preference introduces distinct challenges.
Existing evaluation methods predominantly center on limited-size user studies
guided by image quality and prompt alignment. This work pioneers a paradigm
shift, unveiling Social Reward - an innovative reward modeling framework that
leverages implicit feedback from social network users engaged in creative
editing of generated images. We embark on an extensive journey of dataset
curation and refinement, drawing from Picsart: an online visual creation and
editing platform, yielding a first million-user-scale dataset of implicit human
preferences for user-generated visual art named Picsart Image-Social. Our
analysis exposes the shortcomings of current metrics in modeling community
creative preference of text-to-image models' outputs, compelling us to
introduce a novel predictive model explicitly tailored to address these
limitations. Rigorous quantitative experiments and a user study show that our
Social Reward model aligns better with social popularity than existing metrics.
Furthermore, we utilize Social Reward to fine-tune text-to-image models,
yielding images that are more favored by not only Social Reward, but also other
established metrics. These findings highlight the relevance and effectiveness
of Social Reward in assessing community appreciation for AI-generated artworks,
establishing a closer alignment with users' creative goals: creating popular
visual art. Codes can be accessed at
https://github.com/Picsart-AI-Research/Social-Reward
| [
{
"created": "Thu, 15 Feb 2024 10:56:31 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Isajanyan",
"Arman",
""
],
[
"Shatveryan",
"Artur",
""
],
[
"Kocharyan",
"David",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Shi",
"Humphrey",
""
]
] | Social reward as a form of community recognition provides a strong source of motivation for users of online platforms to engage and contribute with content. The recent progress of text-conditioned image synthesis has ushered in a collaborative era where AI empowers users to craft original visual artworks seeking community validation. Nevertheless, assessing these models in the context of collective community preference introduces distinct challenges. Existing evaluation methods predominantly center on limited-size user studies guided by image quality and prompt alignment. This work pioneers a paradigm shift, unveiling Social Reward - an innovative reward modeling framework that leverages implicit feedback from social network users engaged in creative editing of generated images. We embark on an extensive journey of dataset curation and refinement, drawing from Picsart: an online visual creation and editing platform, yielding a first million-user-scale dataset of implicit human preferences for user-generated visual art named Picsart Image-Social. Our analysis exposes the shortcomings of current metrics in modeling community creative preference of text-to-image models' outputs, compelling us to introduce a novel predictive model explicitly tailored to address these limitations. Rigorous quantitative experiments and a user study show that our Social Reward model aligns better with social popularity than existing metrics. Furthermore, we utilize Social Reward to fine-tune text-to-image models, yielding images that are more favored by not only Social Reward, but also other established metrics. These findings highlight the relevance and effectiveness of Social Reward in assessing community appreciation for AI-generated artworks, establishing a closer alignment with users' creative goals: creating popular visual art. Codes can be accessed at https://github.com/Picsart-AI-Research/Social-Reward
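
Reward models of this kind are typically fit with a pairwise (Bradley-Terry-style) preference loss; whether Social Reward uses exactly this objective is our assumption, so treat the sketch below as illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(reward_model, img_feats_win, img_feats_lose):
    """Bradley-Terry-style objective for fitting a reward model from
    pairwise preferences; here the 'preferred' image would be the one the
    community selected for creative editing (our assumption)."""
    r_win = reward_model(img_feats_win)    # (B, 1) scalar rewards
    r_lose = reward_model(img_feats_lose)
    return -F.logsigmoid(r_win - r_lose).mean()

# Toy usage with a linear reward head over precomputed image features.
head = torch.nn.Linear(512, 1)
loss = pairwise_preference_loss(head, torch.randn(8, 512), torch.randn(8, 512))
loss.backward()
print(float(loss))
```
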
2108.11903 | Grischa Liebel | Rodi Jolak and Andreas Wortmann and Grischa Liebel and Eric Umuhoza
and Michel R.V. Chaudron | Design Thinking and Creativity of Co-located vs. Globally Distributed
Software Developers | This is a pre-peer-review version of an article published in Wiley
Journal of Software: Evolution and Process. The final version is available
via https://dx.doi.org/10.1002/smr.2377 | null | 10.1002/smr.2377 | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Context: Designing software is an activity in which software developers think
and make design decisions that shape the structure and behavior of software
products. Designing software is one of the least understood software
engineering activities. In a collaborative design setting, various types of
distances can lead to challenges and effects that potentially affect how
software is designed. Objective: To contribute to a better understanding of
collaborative software design, we investigate how geographic distance affects
its design thinking and the creativity of its discussions. Method: To this end,
we conducted a multiple-case study exploring the design thinking and creativity
of co-located and distributed software developers in a collaborative design
setting. Results: Compared to co-located developers, distributed developers
spend less time on exploring the problem space, which could be related to
different socio-technical challenges, such as lack of awareness and common
understanding. Distributed development does not seem to affect the creativity
of their activities. Conclusion: Developers engaging in collaborative design
need to be aware that problem space exploration is reduced in a distributed
setting. Unless distributed teams take compensatory measures, this could
adversely affect the development. Regarding the effect distance has on
creativity, our results are inconclusive and further studies are needed.
| [
{
"created": "Thu, 26 Aug 2021 16:50:31 GMT",
"version": "v1"
}
] | 2021-08-27 | [
[
"Jolak",
"Rodi",
""
],
[
"Wortmann",
"Andreas",
""
],
[
"Liebel",
"Grischa",
""
],
[
"Umuhoza",
"Eric",
""
],
[
"Chaudron",
"Michel R. V.",
""
]
] | Context: Designing software is an activity in which software developers think and make design decisions that shape the structure and behavior of software products. Designing software is one of the least understood software engineering activities. In a collaborative design setting, various types of distances can lead to challenges and effects that potentially affect how software is designed. Objective: To contribute to a better understanding of collaborative software design, we investigate how geographic distance affects its design thinking and the creativity of its discussions. Method: To this end, we conducted a multiple-case study exploring the design thinking and creativity of co-located and distributed software developers in a collaborative design setting. Results: Compared to co-located developers, distributed developers spend less time on exploring the problem space, which could be related to different socio-technical challenges, such as lack of awareness and common understanding. Distributed development does not seem to affect the creativity of their activities. Conclusion: Developers engaging in collaborative design need to be aware that problem space exploration is reduced in a distributed setting. Unless distributed teams take compensatory measures, this could adversely affect the development. Regarding the effect distance has on creativity, our results are inconclusive and further studies are needed. |
2401.13150 | Onur Cankur | Onur Cankur, Aditya Tomar, Daniel Nichols, Connor Scully-Allison,
Katherine E. Isaacs, Abhinav Bhatele | Automated Programmatic Performance Analysis of Parallel Programs | null | null | null | null | cs.DC cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing efficient parallel applications is critical to advancing
scientific development but requires significant performance analysis and
optimization. Performance analysis tools help developers manage the increasing
complexity and scale of performance data, but often rely on the user to
manually explore low-level data and are rigid in how the data can be
manipulated. We propose a Python-based API, Chopper, which provides high-level
and flexible performance analysis for both single and multiple executions of
parallel applications. Chopper facilitates performance analysis and reduces
developer effort by providing configurable high-level methods for common
performance analysis tasks such as calculating load imbalance, hot paths,
scalability bottlenecks, correlation between metrics and CCT nodes, and causes
of performance variability within a robust and mature Python environment that
provides fluid access to lower-level data manipulations. We demonstrate how
Chopper allows developers to quickly and succinctly explore performance and
identify issues across applications such as AMG, Laghos, LULESH, Quicksilver
and Tortuga.
| [
{
"created": "Tue, 23 Jan 2024 23:52:48 GMT",
"version": "v1"
}
] | 2024-01-25 | [
[
"Cankur",
"Onur",
""
],
[
"Tomar",
"Aditya",
""
],
[
"Nichols",
"Daniel",
""
],
[
"Scully-Allison",
"Connor",
""
],
[
"Isaacs",
"Katherine E.",
""
],
[
"Bhatele",
"Abhinav",
""
]
] | Developing efficient parallel applications is critical to advancing scientific development but requires significant performance analysis and optimization. Performance analysis tools help developers manage the increasing complexity and scale of performance data, but often rely on the user to manually explore low-level data and are rigid in how the data can be manipulated. We propose a Python-based API, Chopper, which provides high-level and flexible performance analysis for both single and multiple executions of parallel applications. Chopper facilitates performance analysis and reduces developer effort by providing configurable high-level methods for common performance analysis tasks such as calculating load imbalance, hot paths, scalability bottlenecks, correlation between metrics and CCT nodes, and causes of performance variability within a robust and mature Python environment that provides fluid access to lower-level data manipulations. We demonstrate how Chopper allows developers to quickly and succinctly explore performance and identify issues across applications such as AMG, Laghos, LULESH, Quicksilver and Tortuga. |
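
As an example of the kind of metric such an API computes, here is the common max-over-mean load-imbalance measure on per-rank times; the abstract does not give Chopper's exact definition or call signature, so this is a generic stand-in rather than Chopper's API.

```python
import numpy as np

def load_imbalance(per_rank_time):
    """Percent load imbalance as the usual max/mean metric:
    0.0 means perfectly balanced; 0.6 means the slowest rank
    takes 60% longer than the average rank."""
    t = np.asarray(per_rank_time, dtype=float)
    return t.max() / t.mean() - 1.0

print(load_imbalance([1.0, 1.0, 1.0, 2.0]))  # 0.6
```
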
2207.12909 | Zerui Chen | Zerui Chen, Yana Hasson, Cordelia Schmid, Ivan Laptev | AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object
Reconstruction | Accepted by ECCV 2022. Project Page:
https://zerchen.github.io/projects/alignsdf.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work achieved impressive progress towards joint reconstruction of
hands and manipulated objects from monocular color images. Existing methods
focus on two alternative representations in terms of either parametric meshes
or signed distance fields (SDFs). On one side, parametric models can benefit
from prior knowledge at the cost of limited shape deformations and mesh
resolutions. Mesh models, hence, may fail to precisely reconstruct details such
as contact surfaces of hands and objects. SDF-based methods, on the other side,
can represent arbitrary details but lack explicit priors. In this work
we aim to improve SDF models using priors provided by parametric
representations. In particular, we propose a joint learning framework that
disentangles the pose and the shape. We obtain hand and object poses from
parametric models and use them to align SDFs in 3D space. We show that such
aligned SDFs better focus on reconstructing shape details and improve
reconstruction accuracy both for hands and objects. We evaluate our method and
demonstrate significant improvements over the state of the art on the
challenging ObMan and DexYCB benchmarks.
| [
{
"created": "Tue, 26 Jul 2022 13:58:59 GMT",
"version": "v1"
}
] | 2022-07-27 | [
[
"Chen",
"Zerui",
""
],
[
"Hasson",
"Yana",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Laptev",
"Ivan",
""
]
] | Recent work achieved impressive progress towards joint reconstruction of hands and manipulated objects from monocular color images. Existing methods focus on two alternative representations in terms of either parametric meshes or signed distance fields (SDFs). On one side, parametric models can benefit from prior knowledge at the cost of limited shape deformations and mesh resolutions. Mesh models, hence, may fail to precisely reconstruct details such as contact surfaces of hands and objects. SDF-based methods, on the other side, can represent arbitrary details but lack explicit priors. In this work we aim to improve SDF models using priors provided by parametric representations. In particular, we propose a joint learning framework that disentangles the pose and the shape. We obtain hand and object poses from parametric models and use them to align SDFs in 3D space. We show that such aligned SDFs better focus on reconstructing shape details and improve reconstruction accuracy both for hands and objects. We evaluate our method and demonstrate significant improvements over the state of the art on the challenging ObMan and DexYCB benchmarks.
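
The pose-alignment idea in this abstract amounts to querying the SDF network in a pose-normalized frame. A generic sketch follows, assuming an estimated rotation R and translation t from the parametric model; the paper's exact architecture may differ.

```python
import torch

def pose_aligned_sdf(sdf_net, points_world, R, t):
    """Map world-space query points into the canonical frame given by an
    estimated rotation R (3x3) and translation t (3,) before querying the
    SDF, so the network only has to model shape, not pose."""
    canonical = (points_world - t) @ R  # inverse rigid transform: R^T (p - t)
    return sdf_net(canonical)           # signed distances per point

# Toy usage with a dummy SDF (unit sphere at the canonical origin).
sphere = lambda p: p.norm(dim=-1) - 1.0
R = torch.eye(3)
t = torch.tensor([0.0, 0.0, 2.0])
print(pose_aligned_sdf(sphere, torch.tensor([[0.0, 0.0, 2.0]]), R, t))  # ~ -1
```
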
2403.07376 | Bingqian Lin | Bingqian Lin, Yunshuang Nie, Ziming Wei, Jiaqi Chen, Shikui Ma,
Jianhua Han, Hang Xu, Xiaojun Chang, Xiaodan Liang | NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning
Disentangled Reasoning | null | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-and-Language Navigation (VLN), as a crucial research problem of
Embodied AI, requires an embodied agent to navigate through complex 3D
environments following natural language instructions. Recent research has
highlighted the promising capacity of large language models (LLMs) in VLN by
improving navigational reasoning accuracy and interpretability. However, their
predominant use in an offline manner usually suffers from a substantial domain
gap between the VLN task and the LLM training corpus. This paper introduces a
novel strategy called Navigational Chain-of-Thought (NavCoT), where we perform
parameter-efficient in-domain training to enable self-guided navigational
decision-making, leading to a significant mitigation of the domain gap in a
cost-effective manner. Specifically, at each timestep, the LLM is prompted to
forecast the navigational chain-of-thought by: 1) acting as a world model to
imagine the next observation according to the instruction, 2) selecting the
candidate observation that best aligns with the imagination, and 3) determining
the action based on the reasoning from the prior steps. Through constructing
formalized labels for training, the LLM can learn to generate desired and
reasonable chain-of-thought outputs for improving the action decision.
Experimental results across various training settings and popular VLN
benchmarks (e.g., Room-to-Room (R2R), Room-across-Room (RxR), Room-for-Room
(R4R)) show the significant superiority of NavCoT over the direct action
prediction variants. Through simple parameter-efficient finetuning, our NavCoT
outperforms a recent GPT4-based approach with ~7% relative improvement on the
R2R dataset. We believe that NavCoT will help unlock more task-adaptive and
scalable LLM-based embodied agents, which are helpful for developing real-world
robotics applications. Code is available at
https://github.com/expectorlin/NavCoT.
| [
{
"created": "Tue, 12 Mar 2024 07:27:02 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Lin",
"Bingqian",
""
],
[
"Nie",
"Yunshuang",
""
],
[
"Wei",
"Ziming",
""
],
[
"Chen",
"Jiaqi",
""
],
[
"Ma",
"Shikui",
""
],
[
"Han",
"Jianhua",
""
],
[
"Xu",
"Hang",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Liang",
"Xiaodan",
""
]
] | Vision-and-Language Navigation (VLN), as a crucial research problem of Embodied AI, requires an embodied agent to navigate through complex 3D environments following natural language instructions. Recent research has highlighted the promising capacity of large language models (LLMs) in VLN by improving navigational reasoning accuracy and interpretability. However, their predominant use in an offline manner usually suffers from a substantial domain gap between the VLN task and the LLM training corpus. This paper introduces a novel strategy called Navigational Chain-of-Thought (NavCoT), where we perform parameter-efficient in-domain training to enable self-guided navigational decision-making, leading to a significant mitigation of the domain gap in a cost-effective manner. Specifically, at each timestep, the LLM is prompted to forecast the navigational chain-of-thought by: 1) acting as a world model to imagine the next observation according to the instruction, 2) selecting the candidate observation that best aligns with the imagination, and 3) determining the action based on the reasoning from the prior steps. Through constructing formalized labels for training, the LLM can learn to generate desired and reasonable chain-of-thought outputs for improving the action decision. Experimental results across various training settings and popular VLN benchmarks (e.g., Room-to-Room (R2R), Room-across-Room (RxR), Room-for-Room (R4R)) show the significant superiority of NavCoT over the direct action prediction variants. Through simple parameter-efficient finetuning, our NavCoT outperforms a recent GPT4-based approach with ~7% relative improvement on the R2R dataset. We believe that NavCoT will help unlock more task-adaptive and scalable LLM-based embodied agents, which are helpful for developing real-world robotics applications. Code is available at https://github.com/expectorlin/NavCoT. |
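The three-step navigational chain-of-thought described above can be pictured as a prompt template. A hedged sketch (the wording and format below are hypothetical, not the paper's prompts):

```python
# Hypothetical helper that formats the three-step navigational chain-of-thought
# described above into a prompt; all wording below is illustrative.
def navcot_prompt(instruction, candidates):
    options = "\n".join(f"  ({i}) {c}" for i, c in enumerate(candidates))
    return (
        f"Instruction: {instruction}\n"
        "Step 1 (world model): imagine the next observation implied by the instruction.\n"
        "Step 2 (matching): pick the candidate observation that best fits the imagination:\n"
        f"{options}\n"
        "Step 3 (decision): output the index of the chosen action.\n"
    )

print(navcot_prompt("Walk past the sofa and stop at the stairs.",
                    ["a sofa on the left", "a staircase ahead", "a kitchen counter"]))
```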
2309.08638 | Rajan Vivek | Rajan Vivek, Kawin Ethayarajh, Diyi Yang, Douwe Kiela | Anchor Points: Benchmarking Models with Much Fewer Examples | Accepted to EACL 2024 Main Conference. Code will be released at:
https://github.com/rvivek3/AnchorPoints | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern language models often exhibit powerful but brittle behavior, leading
to the development of larger and more diverse benchmarks to reliably assess
their behavior. Here, we suggest that model performance can be benchmarked and
elucidated with much smaller evaluation sets. We first show that in six popular
language classification benchmarks, model confidence in the correct class on
many pairs of points is strongly correlated across models. We build upon this
phenomenon to propose Anchor Point Selection, a technique to select small
subsets of datasets that capture model behavior across the entire dataset.
Anchor points reliably rank models: across 87 diverse language model-prompt
pairs, evaluating models using 1-30 anchor points outperforms uniform sampling
and other baselines at accurately ranking models. Moreover, just several anchor
points can be used to estimate model per-class predictions on all other points
in a dataset with low mean absolute error, sufficient for gauging where the
model is likely to fail. Lastly, we present Anchor Point Maps for visualizing
these insights and facilitating comparisons of the performance of different
models on various regions within the dataset distribution.
| [
{
"created": "Thu, 14 Sep 2023 17:45:51 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Feb 2024 21:37:47 GMT",
"version": "v2"
}
] | 2024-02-20 | [
[
"Vivek",
"Rajan",
""
],
[
"Ethayarajh",
"Kawin",
""
],
[
"Yang",
"Diyi",
""
],
[
"Kiela",
"Douwe",
""
]
] | Modern language models often exhibit powerful but brittle behavior, leading to the development of larger and more diverse benchmarks to reliably assess their behavior. Here, we suggest that model performance can be benchmarked and elucidated with much smaller evaluation sets. We first show that in six popular language classification benchmarks, model confidence in the correct class on many pairs of points is strongly correlated across models. We build upon this phenomenon to propose Anchor Point Selection, a technique to select small subsets of datasets that capture model behavior across the entire dataset. Anchor points reliably rank models: across 87 diverse language model-prompt pairs, evaluating models using 1-30 anchor points outperforms uniform sampling and other baselines at accurately ranking models. Moreover, just several anchor points can be used to estimate model per-class predictions on all other points in a dataset with low mean absolute error, sufficient for gauging where the model is likely to fail. Lastly, we present Anchor Point Maps for visualizing these insights and facilitating comparisons of the performance of different models on various regions within the dataset distribution. |
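The anchor-point idea above, picking a few points whose confidences correlate strongly with the rest, resembles facility-location selection on a point-point correlation matrix. A toy numpy sketch with random stand-in confidences (a greedy heuristic, not the paper's exact selection procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical matrix of model confidences: rows = 20 models, cols = 200 points.
conf = rng.random((20, 200))

# Correlation between points across models, as in the phenomenon described above.
corr = np.corrcoef(conf.T)                        # (200, 200)

def greedy_anchor_points(corr, k):
    """Toy stand-in for anchor selection: greedily pick points so that every
    point is highly correlated with (i.e., represented by) some anchor."""
    anchors, best = [], np.full(corr.shape[0], -np.inf)
    for _ in range(k):
        gain = np.maximum(corr, best).sum(axis=1)  # coverage if each point is added
        a = int(np.argmax(gain))
        anchors.append(a)
        best = np.maximum(best, corr[a])           # each point's best anchor so far
    return anchors

print(greedy_anchor_points(corr, k=5))
```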
2112.13974 | Akansha Singh Bansal | Akansha Singh Bansal, Trapit Bansal, David Irwin | A Moment in the Sun: Solar Nowcasting from Multispectral Satellite Data
using Self-Supervised Learning | 18 pages | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Solar energy is now the cheapest form of electricity in history.
Unfortunately, significantly increasing the grid's fraction of solar energy
remains challenging due to its variability, which makes balancing electricity's
supply and demand more difficult. While thermal generators' ramp rate -- the
maximum rate that they can change their output -- is finite, solar's ramp rate
is essentially infinite. Thus, accurate near-term solar forecasting, or
nowcasting, is important to provide advance warning to adjust thermal generator
output in response to solar variations to ensure a balanced supply and demand.
To address the problem, this paper develops a general model for solar
nowcasting from abundant and readily available multispectral satellite data
using self-supervised learning. Specifically, we develop deep auto-regressive
models using convolutional neural networks (CNN) and long short-term memory
networks (LSTM) that are globally trained across multiple locations to predict
raw future observations of the spatio-temporal data collected by the recently
launched GOES-R series of satellites. Our model estimates a location's future
solar irradiance based on satellite observations, which we feed to a regression
model trained on smaller site-specific solar data to provide near-term solar
photovoltaic (PV) forecasts that account for site-specific characteristics. We
evaluate our approach for different coverage areas and forecast horizons across
25 solar sites and show that our approach yields errors close to that of a
model using ground-truth observations.
| [
{
"created": "Tue, 28 Dec 2021 03:13:44 GMT",
"version": "v1"
}
] | 2021-12-30 | [
[
"Bansal",
"Akansha Singh",
""
],
[
"Bansal",
"Trapit",
""
],
[
"Irwin",
"David",
""
]
] | Solar energy is now the cheapest form of electricity in history. Unfortunately, significantly increasing the grid's fraction of solar energy remains challenging due to its variability, which makes balancing electricity's supply and demand more difficult. While thermal generators' ramp rate -- the maximum rate that they can change their output -- is finite, solar's ramp rate is essentially infinite. Thus, accurate near-term solar forecasting, or nowcasting, is important to provide advance warning to adjust thermal generator output in response to solar variations to ensure a balanced supply and demand. To address the problem, this paper develops a general model for solar nowcasting from abundant and readily available multispectral satellite data using self-supervised learning. Specifically, we develop deep auto-regressive models using convolutional neural networks (CNN) and long short-term memory networks (LSTM) that are globally trained across multiple locations to predict raw future observations of the spatio-temporal data collected by the recently launched GOES-R series of satellites. Our model estimates a location's future solar irradiance based on satellite observations, which we feed to a regression model trained on smaller site-specific solar data to provide near-term solar photovoltaic (PV) forecasts that account for site-specific characteristics. We evaluate our approach for different coverage areas and forecast horizons across 25 solar sites and show that our approach yields errors close to that of a model using ground-truth observations. |
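As a rough sketch of the CNN plus LSTM auto-regressive design described above (layer sizes and structure are invented for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class NowcastNet(nn.Module):
    """Minimal sketch, not the paper's model: a CNN encodes each multispectral
    satellite frame, an LSTM models the temporal sequence, and a linear head
    predicts a next-step irradiance proxy for the location."""
    def __init__(self, channels=4, feat=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(feat, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):                  # x: (B, T, C, H, W)
        B, T = x.shape[:2]
        z = self.cnn(x.flatten(0, 1)).view(B, T, -1)  # per-frame embeddings
        h, _ = self.lstm(z)
        return self.head(h[:, -1])         # prediction from the last time step

y = NowcastNet()(torch.randn(2, 8, 4, 64, 64))
print(y.shape)  # torch.Size([2, 1])
```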
2101.08918 | Tianming Feng | Tianming Feng, Shuo Shi, Shushi Gu, Ning Zhang, Wei Xiang, and Xuemai
Gu | Performance Analysis for Cache-enabled Cellular Networks with
Cooperative Transmission | arXiv admin note: text overlap with arXiv:2101.08669 | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The large number of deployed smart devices puts tremendous traffic pressure on
networks. Caching at the edge has been widely studied as a promising technique
to solve this problem. To further improve the successful transmission
probability (STP) of cache-enabled cellular networks (CEN), we combine the
cooperative transmission technique with CEN and propose a novel transmission
scheme. Local channel state information (CSI) is introduced at each cooperative
base station (BS) to enhance the strength of the signal received by the user. A
tight approximation for the STP of this scheme is derived using tools from
stochastic geometry. The optimal content placement strategy of this scheme is
obtained using a numerical method to maximize the STP. Simulation results
demonstrate the optimal strategy achieves significant gains in STP over several
comparative baselines with the proposed scheme.
| [
{
"created": "Fri, 22 Jan 2021 02:00:06 GMT",
"version": "v1"
}
] | 2021-01-25 | [
[
"Feng",
"Tianming",
""
],
[
"Shi",
"Shuo",
""
],
[
"Gu",
"Shushi",
""
],
[
"Zhang",
"Ning",
""
],
[
"Xiang",
"Wei",
""
],
[
"Gu",
"Xuemai",
""
]
] | The large number of deployed smart devices puts tremendous traffic pressure on networks. Caching at the edge has been widely studied as a promising technique to solve this problem. To further improve the successful transmission probability (STP) of cache-enabled cellular networks (CEN), we combine the cooperative transmission technique with CEN and propose a novel transmission scheme. Local channel state information (CSI) is introduced at each cooperative base station (BS) to enhance the strength of the signal received by the user. A tight approximation for the STP of this scheme is derived using tools from stochastic geometry. The optimal content placement strategy of this scheme is obtained using a numerical method to maximize the STP. Simulation results demonstrate the optimal strategy achieves significant gains in STP over several comparative baselines with the proposed scheme. |
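The record above optimizes content placement numerically to maximize the STP. A toy stand-in for that optimization step: the concave success() proxy below replaces the paper's stochastic-geometry STP expression, which is not reproduced here:

```python
import numpy as np

# Popularity a_i follows a Zipf-like law; success(b) is a generic concave proxy
# for the transmission-success probability of a file cached with probability b.
# We optimize the placement b under a cache-size budget by greedy marginal-gain
# allocation in small increments (optimal here because success() is concave).
N, cache_size, step = 20, 4.0, 0.01
a = 1.0 / np.arange(1, N + 1) ** 0.8
a /= a.sum()
success = lambda b: 1.0 - np.exp(-3.0 * b)

b = np.zeros(N)
for _ in range(int(cache_size / step)):
    gain = a * (success(np.minimum(b + step, 1.0)) - success(b))
    i = int(np.argmax(gain))
    b[i] = min(b[i] + step, 1.0)

print(np.round(b[:8], 2), "total cached =", round(b.sum(), 2))  # popular files cached more
```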
2203.03704 | Jeff Delaune | Jeff Delaune, Jacob Izraelevitz, Samuel Sirlin, David Sternberg, Louis
Giersch, L. Phillipe Tosi, Evgeniy Skliyanskiy, Larry Young, Michael Mischna,
Shannah Withrow-Maser, Juergen Mueller, Joshua Bowman, Mark S Wallace, Havard
F. Grip, Larry Matthies, Wayne Johnson, Matthew Keennon, Benjamin Pipenberg,
Harsh Patel, Christopher Lim, Aaron Schutte, Marcel Veismann, Haley Cummings,
Sarah Conley, Jonathan Bapst, Theodore Tzanetos, Roland Brockers, Abhinandan
Jain, David Bayard, Art Chmielewski, Olivier Toupet, Joel Burdick, Morteza
Gharib and J. (Bob) Balaram | Mid-Air Helicopter Delivery at Mars Using a Jetpack | Accepted in 2022 IEEE Aerospace Conference | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mid-Air Helicopter Delivery (MAHD) is a new Entry, Descent and Landing (EDL)
architecture to enable in situ mobility for Mars science at lower cost than
previous missions. It uses a jetpack to slow down a Mars Science Helicopter
(MSH) after separation from the backshell, and reach aerodynamic conditions
suitable for helicopter take-off in mid air. For given aeroshell dimensions,
only MAHD's lander-free approach leaves enough room in the aeroshell to
accommodate the largest rotor option for MSH. This drastically improves flight
performance, notably allowing +150\% increased science payload mass. Compared
to heritage EDL approaches, the simpler MAHD architecture is also likely to
reduce cost, and enables access to more hazardous and higher-elevation terrains
on Mars. This paper introduces a design for the MAHD system architecture and
operations. We present a mechanical configuration that fits both MSH and the
jetpack within the 2.65-m Mars heritage aeroshell, and a jetpack control
architecture which fully leverages the available helicopter avionics. We
discuss preliminary numerical models of the flow dynamics resulting from the
interaction between the jets, the rotors and the side winds. We define a
force-torque sensing architecture capable of handling the wind and trimming the
rotors to prepare for safe take-off. Finally, we analyze the dynamic
environment and closed-loop control simulation results to demonstrate the
preliminary feasibility of MAHD.
| [
{
"created": "Mon, 7 Mar 2022 21:07:56 GMT",
"version": "v1"
}
] | 2022-03-09 | [
[
"Delaune",
"Jeff",
""
],
[
"Izraelevitz",
"Jacob",
""
],
[
"Sirlin",
"Samuel",
""
],
[
"Sternberg",
"David",
""
],
[
"Giersch",
"Louis",
""
],
[
"Tosi",
"L. Phillipe",
""
],
[
"Skliyanskiy",
"Evgeniy",
""
],
[
"Young",
"Larry",
""
],
[
"Mischna",
"Michael",
""
],
[
"Withrow-Maser",
"Shannah",
""
],
[
"Mueller",
"Juergen",
""
],
[
"Bowman",
"Joshua",
""
],
[
"Wallace",
"Mark S",
""
],
[
"Grip",
"Havard F.",
""
],
[
"Matthies",
"Larry",
""
],
[
"Johnson",
"Wayne",
""
],
[
"Keennon",
"Matthew",
""
],
[
"Pipenberg",
"Benjamin",
""
],
[
"Patel",
"Harsh",
""
],
[
"Lim",
"Christopher",
""
],
[
"Schutte",
"Aaron",
""
],
[
"Veismann",
"Marcel",
""
],
[
"Cummings",
"Haley",
""
],
[
"Conley",
"Sarah",
""
],
[
"Bapst",
"Jonathan",
""
],
[
"Tzanetos",
"Theodore",
""
],
[
"Brockers",
"Roland",
""
],
[
"Jain",
"Abhinandan",
""
],
[
"Bayard",
"David",
""
],
[
"Chmielewski",
"Art",
""
],
[
"Toupet",
"Olivier",
""
],
[
"Burdick",
"Joel",
""
],
[
"Gharib",
"Morteza",
""
],
[
"Balaram",
"J. (Bob)",
""
]
] | Mid-Air Helicopter Delivery (MAHD) is a new Entry, Descent and Landing (EDL) architecture to enable in situ mobility for Mars science at lower cost than previous missions. It uses a jetpack to slow down a Mars Science Helicopter (MSH) after separation from the backshell, and reach aerodynamic conditions suitable for helicopter take-off in mid air. For given aeroshell dimensions, only MAHD's lander-free approach leaves enough room in the aeroshell to accommodate the largest rotor option for MSH. This drastically improves flight performance, notably allowing +150\% increased science payload mass. Compared to heritage EDL approaches, the simpler MAHD architecture is also likely to reduce cost, and enables access to more hazardous and higher-elevation terrains on Mars. This paper introduces a design for the MAHD system architecture and operations. We present a mechanical configuration that fits both MSH and the jetpack within the 2.65-m Mars heritage aeroshell, and a jetpack control architecture which fully leverages the available helicopter avionics. We discuss preliminary numerical models of the flow dynamics resulting from the interaction between the jets, the rotors and the side winds. We define a force-torque sensing architecture capable of handling the wind and trimming the rotors to prepare for safe take-off. Finally, we analyze the dynamic environment and closed-loop control simulation results to demonstrate the preliminary feasibility of MAHD. |
2211.11344 | Jakub Tetek | Shyam Narayanan, Jakub T\v{e}tek | Estimating the Effective Support Size in Constant Query Complexity | null | null | null | null | cs.DS math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating the support size of a distribution is a well-studied problem in
statistics. Motivated by the fact that this problem is highly non-robust (as
small perturbations in the distributions can drastically affect the support
size) and thus hard to estimate, Goldreich [ECCC 2019] studied the query
complexity of estimating the $\epsilon$-\emph{effective support size}
$\text{Ess}_\epsilon$ of a distribution ${P}$, which is equal to the smallest
support size of a distribution that is $\epsilon$-far in total variation
distance from ${P}$.
In his paper, he shows an algorithm in the dual access setting (where we may
both receive random samples and query the sampling probability $p(x)$ for any
$x$) for a bicriteria approximation, giving an answer in
$[\text{Ess}_{(1+\beta)\epsilon},(1+\gamma) \text{Ess}_{\epsilon}]$ for some
values $\beta, \gamma > 0$. However, his algorithm has either super-constant
query complexity in the support size or super-constant approximation ratio
$1+\gamma = \omega(1)$. He then asked if this is necessary, or if it is
possible to get a constant-factor approximation in a number of queries
independent of the support size.
We answer his question by showing that not only is complexity independent of
$n$ possible for $\gamma>0$, but also for $\gamma=0$, that is, that the
bicriteria relaxation is not necessary. Specifically, we show an algorithm with
query complexity $O(\frac{1}{\beta^3 \epsilon^3})$. That is, for any $0 <
\epsilon, \beta < 1$, we output in this complexity a number $\tilde{n} \in
[\text{Ess}_{(1+\beta)\epsilon},\text{Ess}_\epsilon]$. We also show that it is
possible to solve the approximate version with approximation ratio $1+\gamma$
in complexity $O\left(\frac{1}{\beta^2 \epsilon} + \frac{1}{\beta \epsilon
\gamma^2}\right)$. Our algorithm is very simple, and has $4$ short lines of
pseudocode.
| [
{
"created": "Mon, 21 Nov 2022 10:49:32 GMT",
"version": "v1"
}
] | 2022-11-22 | [
[
"Narayanan",
"Shyam",
""
],
[
"Tětek",
"Jakub",
""
]
] | Estimating the support size of a distribution is a well-studied problem in statistics. Motivated by the fact that this problem is highly non-robust (as small perturbations in the distributions can drastically affect the support size) and thus hard to estimate, Goldreich [ECCC 2019] studied the query complexity of estimating the $\epsilon$-\emph{effective support size} $\text{Ess}_\epsilon$ of a distribution ${P}$, which is equal to the smallest support size of a distribution that is $\epsilon$-far in total variation distance from ${P}$. In his paper, he shows an algorithm in the dual access setting (where we may both receive random samples and query the sampling probability $p(x)$ for any $x$) for a bicriteria approximation, giving an answer in $[\text{Ess}_{(1+\beta)\epsilon},(1+\gamma) \text{Ess}_{\epsilon}]$ for some values $\beta, \gamma > 0$. However, his algorithm has either super-constant query complexity in the support size or super-constant approximation ratio $1+\gamma = \omega(1)$. He then asked if this is necessary, or if it is possible to get a constant-factor approximation in a number of queries independent of the support size. We answer his question by showing that not only is complexity independent of $n$ possible for $\gamma>0$, but also for $\gamma=0$, that is, that the bicriteria relaxation is not necessary. Specifically, we show an algorithm with query complexity $O(\frac{1}{\beta^3 \epsilon^3})$. That is, for any $0 < \epsilon, \beta < 1$, we output in this complexity a number $\tilde{n} \in [\text{Ess}_{(1+\beta)\epsilon},\text{Ess}_\epsilon]$. We also show that it is possible to solve the approximate version with approximation ratio $1+\gamma$ in complexity $O\left(\frac{1}{\beta^2 \epsilon} + \frac{1}{\beta \epsilon \gamma^2}\right)$. Our algorithm is very simple, and has $4$ short lines of pseudocode. |
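The dual access model above permits a simple style of estimator. A hedged numpy sketch in that spirit, not the paper's algorithm, built on the identity $E_{x\sim P}[\mathbf{1}\{p(x)\ge t\}/p(x)] = |\{x : p(x)\ge t\}|$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy distribution with dual access: sample() draws x ~ P, prob(x) returns p(x).
p = rng.dirichlet(np.ones(1000) * 0.2)
sample = lambda m: rng.choice(len(p), size=m, p=p)
prob = lambda xs: p[xs]

def effective_support(eps, m=100_000):
    """Hedged sketch, not the paper's algorithm: drop roughly eps probability
    mass carried by the lightest sampled elements, then count the remaining
    elements via E_{x~P}[ 1{p(x) >= t} / p(x) ] = |{x : p(x) >= t}|."""
    xs = sample(m)
    ps = np.sort(prob(xs))
    t = ps[int(eps * m)]              # empirical threshold cutting about eps mass
    keep = prob(xs) >= t
    return (keep / prob(xs)).mean()   # importance-weighted element count

print(round(float(effective_support(0.1))), "of", len(p), "elements")
```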
1901.10237 | Hai Duong Nguyen | Hai-Duong Nguyen, Soo-Hyung Kim | Automatic Whole-body Bone Age Assessment Using Deep Hierarchical
Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bone age assessment provides evidence for analyzing children's growth status
and rejuvenation in terms of chronological and biological age. All
previous works consider the left-hand X-ray image of a child. In
this paper, we carry out a study on estimating human age using whole-body bone
CT images and a novel convolutional neural network. Our model with additional
connections shows an effective way to generate a massive number of vital
features while reducing overfitting influence on small training data in the
medical image analysis research area. A dataset and a comparison with common
deep architectures will be provided for future research in this field.
| [
{
"created": "Tue, 29 Jan 2019 11:53:30 GMT",
"version": "v1"
}
] | 2019-01-30 | [
[
"Nguyen",
"Hai-Duong",
""
],
[
"Kim",
"Soo-Hyung",
""
]
] | Bone age assessment provides evidence for analyzing children's growth status and rejuvenation in terms of chronological and biological age. All previous works consider the left-hand X-ray image of a child. In this paper, we carry out a study on estimating human age using whole-body bone CT images and a novel convolutional neural network. Our model with additional connections shows an effective way to generate a massive number of vital features while reducing overfitting influence on small training data in the medical image analysis research area. A dataset and a comparison with common deep architectures will be provided for future research in this field. |
2405.10098 | Sen Huang | Sen Huang, Kaixiang Yang, Sheng Qi, Rui Wang | When Large Language Model Meets Optimization | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimization algorithms and large language models (LLMs) enhance
decision-making in dynamic environments by integrating artificial intelligence
with traditional techniques. LLMs, with extensive domain knowledge, facilitate
intelligent modeling and strategic decision-making in optimization, while
optimization algorithms refine LLM architectures and output quality. This
synergy offers novel approaches for advancing general AI, addressing both the
computational challenges of complex problems and the application of LLMs in
practical scenarios. This review outlines the progress and potential of
combining LLMs with optimization algorithms, providing insights for future
research directions.
| [
{
"created": "Thu, 16 May 2024 13:54:37 GMT",
"version": "v1"
}
] | 2024-05-17 | [
[
"Huang",
"Sen",
""
],
[
"Yang",
"Kaixiang",
""
],
[
"Qi",
"Sheng",
""
],
[
"Wang",
"Rui",
""
]
] | Optimization algorithms and large language models (LLMs) enhance decision-making in dynamic environments by integrating artificial intelligence with traditional techniques. LLMs, with extensive domain knowledge, facilitate intelligent modeling and strategic decision-making in optimization, while optimization algorithms refine LLM architectures and output quality. This synergy offers novel approaches for advancing general AI, addressing both the computational challenges of complex problems and the application of LLMs in practical scenarios. This review outlines the progress and potential of combining LLMs with optimization algorithms, providing insights for future research directions. |
1504.01802 | Morteza Hashemi | Morteza Hashemi, Yuval Cassuto, Ari Trachtenberg | Fountain Codes with Nonuniform Selection Distributions through Feedback | Submitted to the IEEE Transactions on Information Theory | null | 10.1109/TIT.2016.2570232 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One key requirement for fountain (rateless) coding schemes is to achieve a
high intermediate symbol recovery rate. Recent coding schemes have incorporated
the use of a feedback channel to improve intermediate performance of
traditional rateless codes; however, these codes with feedback are designed
based on uniform random selection of input symbols. In this paper, on the
other hand, we develop feedback-based fountain codes with dynamically-adjusted
nonuniform symbol selection distributions, and show that this characteristic
can enhance the intermediate decoding rate. We provide an analysis of our
codes, including bounds on computational complexity and failure probability for
a maximum likelihood decoder; the latter are tighter than bounds known for
classical rateless codes. Through numerical simulations, we also show that
feedback information paired with a nonuniform selection distribution can highly
improve the symbol recovery rate, and that the amount of feedback sent can be
tuned to the specific transmission properties of a given feedback channel.
| [
{
"created": "Wed, 8 Apr 2015 02:11:09 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Hashemi",
"Morteza",
""
],
[
"Cassuto",
"Yuval",
""
],
[
"Trachtenberg",
"Ari",
""
]
] | One key requirement for fountain (rateless) coding schemes is to achieve a high intermediate symbol recovery rate. Recent coding schemes have incorporated the use of a feedback channel to improve intermediate performance of traditional rateless codes; however, these codes with feedback are designed based on uniform random selection of input symbols. In this paper, on the other hand, we develop feedback-based fountain codes with dynamically-adjusted nonuniform symbol selection distributions, and show that this characteristic can enhance the intermediate decoding rate. We provide an analysis of our codes, including bounds on computational complexity and failure probability for a maximum likelihood decoder; the latter are tighter than bounds known for classical rateless codes. Through numerical simulations, we also show that feedback information paired with a nonuniform selection distribution can highly improve the symbol recovery rate, and that the amount of feedback sent can be tuned to the specific transmission properties of a given feedback channel. |
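A toy sketch of the feedback-shaped, nonuniform symbol selection described above (the degree choice and down-weighting factor are illustrative, not the paper's code):

```python
import random

random.seed(0)
K = 8                                   # number of input symbols
decoded = set()                         # receiver feedback: indices already decoded

def selection_weights(decoded, k=K):
    """Toy sketch of a dynamically adjusted, nonuniform selection distribution:
    down-weight symbols the receiver has acknowledged as decoded."""
    w = [0.1 if i in decoded else 1.0 for i in range(k)]
    s = sum(w)
    return [x / s for x in w]

def encode_symbol(degree=3):
    # LT-style coded symbol: XOR of `degree` inputs drawn from the current
    # feedback-shaped selection distribution, without replacement.
    idx = set()
    while len(idx) < degree:
        idx.add(random.choices(range(K), weights=selection_weights(decoded))[0])
    return sorted(idx)                  # indices whose payloads would be XORed

print(encode_symbol())                  # drawn from the initial (uniform) weights
decoded.update({0, 1, 2, 3})
print(encode_symbol())                  # now biased toward undecoded symbols 4..7
```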
1708.08844 | Jan Czarnowski | Jan Czarnowski, Stefan Leutenegger, Andrew Davison | Semantic Texture for Robust Dense Tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We argue that robust dense SLAM systems can make valuable use of the layers
of features coming from a standard CNN as a pyramid of `semantic texture' which
is suitable for dense alignment while being much more robust to nuisance
factors such as lighting than raw RGB values. We use a straightforward
Lucas-Kanade formulation of image alignment, with a schedule of iterations over
the coarse-to-fine levels of a pyramid, and simply replace the usual image
pyramid by the hierarchy of convolutional feature maps from a pre-trained CNN.
The resulting dense alignment performance is much more robust to lighting and
other variations, as we show by camera rotation tracking experiments on
time-lapse sequences captured over many hours. Looking towards the future of
scene representation for real-time visual SLAM, we further demonstrate that a
selection using simple criteria of a small number of the total set of features
output by a CNN gives just as accurate but much more efficient tracking
performance.
| [
{
"created": "Tue, 29 Aug 2017 15:58:18 GMT",
"version": "v1"
}
] | 2017-08-30 | [
[
"Czarnowski",
"Jan",
""
],
[
"Leutenegger",
"Stefan",
""
],
[
"Davison",
"Andrew",
""
]
] | We argue that robust dense SLAM systems can make valuable use of the layers of features coming from a standard CNN as a pyramid of `semantic texture' which is suitable for dense alignment while being much more robust to nuisance factors such as lighting than raw RGB values. We use a straightforward Lucas-Kanade formulation of image alignment, with a schedule of iterations over the coarse-to-fine levels of a pyramid, and simply replace the usual image pyramid by the hierarchy of convolutional feature maps from a pre-trained CNN. The resulting dense alignment performance is much more robust to lighting and other variations, as we show by camera rotation tracking experiments on time-lapse sequences captured over many hours. Looking towards the future of scene representation for real-time visual SLAM, we further demonstrate that a selection using simple criteria of a small number of the total set of features output by a CNN gives just as accurate but much more efficient tracking performance. |
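The record above swaps the usual image pyramid in Lucas-Kanade for CNN feature maps. A toy translation-only LK step on a multichannel feature map (numpy; a stand-in, not the authors' implementation):

```python
import numpy as np

def lk_translation_step(F_ref, F_cur):
    """One Lucas-Kanade (Gauss-Newton) update for a pure 2D translation between
    two multichannel feature maps of shape (C, H, W), a toy stand-in for
    aligning CNN 'semantic texture' instead of raw pixel values."""
    gy, gx = np.gradient(F_cur, axis=(1, 2))        # per-channel spatial gradients
    r = (F_cur - F_ref).ravel()                     # residuals
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # Jacobian w.r.t. (dx, dy)
    dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return dp                                       # estimated (dx, dy)

# Synthetic check: a smooth 4-channel map shifted by one pixel along x.
y, x = np.mgrid[0:32, 0:32]
F = np.stack([np.sin(0.3 * x + 0.5 * c) * np.cos(0.2 * y) for c in range(4)])
shifted = np.roll(F, 1, axis=2)
print(lk_translation_step(F[:, :, 2:], shifted[:, :, 2:]))  # roughly (1.0, 0.0)
```

In a fuller sketch this step would run coarse-to-fine over the hierarchy of convolutional feature maps, exactly as the abstract describes for the image pyramid it replaces.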
2312.10029 | Zachary Kenton | Sebastian Farquhar, Vikrant Varma, Zachary Kenton, Johannes Gasteiger,
Vladimir Mikulik, Rohin Shah | Challenges with unsupervised LLM knowledge discovery | 12 pages (38 including references and appendices). First three
authors equal contribution, randomised order | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that existing unsupervised methods on large language model (LLM)
activations do not discover knowledge -- instead they seem to discover whatever
feature of the activations is most prominent. The idea behind unsupervised
knowledge elicitation is that knowledge satisfies a consistency structure,
which can be used to discover knowledge. We first prove theoretically that
arbitrary features (not just knowledge) satisfy the consistency structure of a
particular leading unsupervised knowledge-elicitation method,
contrast-consistent search (Burns et al. - arXiv:2212.03827). We then present a
series of experiments showing settings in which unsupervised methods result in
classifiers that do not predict knowledge, but instead predict a different
prominent feature. We conclude that existing unsupervised methods for
discovering latent knowledge are insufficient, and we contribute sanity checks
to apply to evaluating future knowledge elicitation methods. Conceptually, we
hypothesise that the identification issues explored here, e.g. distinguishing a
model's knowledge from that of a simulated character's, will persist for future
unsupervised methods.
| [
{
"created": "Fri, 15 Dec 2023 18:49:43 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Dec 2023 16:43:35 GMT",
"version": "v2"
}
] | 2023-12-19 | [
[
"Farquhar",
"Sebastian",
""
],
[
"Varma",
"Vikrant",
""
],
[
"Kenton",
"Zachary",
""
],
[
"Gasteiger",
"Johannes",
""
],
[
"Mikulik",
"Vladimir",
""
],
[
"Shah",
"Rohin",
""
]
] | We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge -- instead they seem to discover whatever feature of the activations is most prominent. The idea behind unsupervised knowledge elicitation is that knowledge satisfies a consistency structure, which can be used to discover knowledge. We first prove theoretically that arbitrary features (not just knowledge) satisfy the consistency structure of a particular leading unsupervised knowledge-elicitation method, contrast-consistent search (Burns et al. - arXiv:2212.03827). We then present a series of experiments showing settings in which unsupervised methods result in classifiers that do not predict knowledge, but instead predict a different prominent feature. We conclude that existing unsupervised methods for discovering latent knowledge are insufficient, and we contribute sanity checks to apply to evaluating future knowledge elicitation methods. Conceptually, we hypothesise that the identification issues explored here, e.g. distinguishing a model's knowledge from that of a simulated character's, will persist for future unsupervised methods. |
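The contrast-consistent search (CCS) objective analyzed above combines a consistency term with a confidence term. A minimal sketch of that loss on random stand-in activations (following the Burns et al. formulation as summarized here, not this paper's code):

```python
import torch

# A linear probe maps activations of a statement and of its negation to
# probabilities that should be consistent (p_pos close to 1 - p_neg) and
# confident (not both near 0.5). Activations here are random stand-ins.
torch.manual_seed(0)
acts_pos, acts_neg = torch.randn(256, 64), torch.randn(256, 64)

probe = torch.nn.Linear(64, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    p_pos = torch.sigmoid(probe(acts_pos))
    p_neg = torch.sigmoid(probe(acts_neg))
    consistency = ((p_pos - (1 - p_neg)) ** 2).mean()
    confidence = (torch.minimum(p_pos, p_neg) ** 2).mean()
    loss = consistency + confidence
    opt.zero_grad(); loss.backward(); opt.step()

# The paper's point: any sufficiently prominent feature, not just knowledge,
# can make this objective small.
print(float(loss))
```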
1604.07153 | Kim-Manuel Klein | Klaus Jansen, Kim-Manuel Klein, Jos\'e Verschae | Closing the Gap for Makespan Scheduling via Sparsification Techniques | 20 pages, ICALP 2016 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Makespan scheduling on identical machines is one of the most basic and
fundamental packing problems studied in the discrete optimization literature.
It asks for an assignment of $n$ jobs to a set of $m$ identical machines that
minimizes the makespan. The problem is strongly NP-hard, and thus we do not
expect a $(1+\epsilon)$-approximation algorithm with a running time that
depends polynomially on $1/\epsilon$. Furthermore, Chen et al. [3] recently
showed that a running time of $2^{(1/\epsilon)^{1-\delta}}+\text{poly}(n)$ for
any $\delta>0$ would imply that the Exponential Time Hypothesis (ETH) fails. A
long sequence of algorithms has been developed trying to obtain low
dependencies on $1/\epsilon$, the best of which achieves a running time of
$2^{\tilde{O}(1/\epsilon^2)}+O(n\log n)$ [11]. In this paper we obtain an
algorithm with a running time of $2^{\tilde{O}(1/\epsilon)}+O(n\log n)$, which
is tight under ETH up to logarithmic factors on the exponent.
Our main technical contribution is a new structural result on the
configuration-IP. More precisely, we show the existence of a highly symmetric
and sparse optimal solution, in which all but a constant number of machines are
assigned a configuration with small support. This structure can then be
exploited by integer programming techniques and enumeration. We believe that
our structural result is of independent interest and should find applications
to other settings. In particular, we show how the structure can be applied to
the minimum makespan problem on related machines and to a larger class of
objective functions on parallel machines. For all these cases we obtain an
efficient PTAS with running time $2^{\tilde{O}(1/\epsilon)} + \text{poly}(n)$.
| [
{
"created": "Mon, 25 Apr 2016 07:47:34 GMT",
"version": "v1"
}
] | 2016-04-26 | [
[
"Jansen",
"Klaus",
""
],
[
"Klein",
"Kim-Manuel",
""
],
[
"Verschae",
"José",
""
]
] | Makespan scheduling on identical machines is one of the most basic and fundamental packing problems studied in the discrete optimization literature. It asks for an assignment of $n$ jobs to a set of $m$ identical machines that minimizes the makespan. The problem is strongly NP-hard, and thus we do not expect a $(1+\epsilon)$-approximation algorithm with a running time that depends polynomially on $1/\epsilon$. Furthermore, Chen et al. [3] recently showed that a running time of $2^{(1/\epsilon)^{1-\delta}}+\text{poly}(n)$ for any $\delta>0$ would imply that the Exponential Time Hypothesis (ETH) fails. A long sequence of algorithms has been developed trying to obtain low dependencies on $1/\epsilon$, the best of which achieves a running time of $2^{\tilde{O}(1/\epsilon^2)}+O(n\log n)$ [11]. In this paper we obtain an algorithm with a running time of $2^{\tilde{O}(1/\epsilon)}+O(n\log n)$, which is tight under ETH up to logarithmic factors on the exponent. Our main technical contribution is a new structural result on the configuration-IP. More precisely, we show the existence of a highly symmetric and sparse optimal solution, in which all but a constant number of machines are assigned a configuration with small support. This structure can then be exploited by integer programming techniques and enumeration. We believe that our structural result is of independent interest and should find applications to other settings. In particular, we show how the structure can be applied to the minimum makespan problem on related machines and to a larger class of objective functions on parallel machines. For all these cases we obtain an efficient PTAS with running time $2^{\tilde{O}(1/\epsilon)} + \text{poly}(n)$. |
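The configuration-IP that the structural result above concerns can be written, in one standard form (a sketch consistent with the literature, not quoted from the paper): for a makespan guess $T$, let $\mathcal{C}_T$ be the set of configurations (multisets of job sizes with total size at most $T$), $C_j$ the multiplicity of size $j$ in $C$, and $n_j$ the number of size-$j$ jobs:

```latex
% x_C = number of machines that receive configuration C.
\begin{align*}
\sum_{C \in \mathcal{C}_T} x_C &\le m,\\
\sum_{C \in \mathcal{C}_T} C_j\, x_C &= n_j \qquad \text{for every job size } j,\\
x_C &\in \mathbb{Z}_{\ge 0} \qquad \text{for every } C \in \mathcal{C}_T.
\end{align*}
```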
1812.03953 | Hadi Abdi Khojasteh | Hadi Abdi Khojasteh, Alireza Abbas Alipour, Ebrahim Ansari and Parvin
Razzaghi | An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles | 15 pages and 5 figures, Submitted to the international conference on
Contemporary issues in Data Science (CiDaS 2019), Learn more about this
project at https://iasbs.ac.ir/~ansari/faraz | Nature Switzerland AG - Springer LNDECT 45(2020) 322-336 | 10.1007/978-3-030-37309-2_26 | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, automobile manufacturers make efforts to develop ways to make cars
fully safe. Monitoring driver's actions by computer vision techniques to detect
driving mistakes in real-time and then planning for autonomous driving to avoid
vehicle collisions is one of the most important issues that has been
investigated in the machine vision and Intelligent Transportation Systems
(ITS). The main goal of this study is to prevent accidents caused by fatigue,
drowsiness, and driver distraction. To avoid these incidents, this paper
proposes an integrated safety system that continuously monitors the driver's
attention and vehicle surroundings, and finally decides whether the actual
steering control status is safe or not. For this purpose, we equipped an
ordinary car called FARAZ with a vision system consisting of four mounted
cameras along with a universal car tool for communicating with surrounding
factory-installed sensors and other car systems, and sending commands to
actuators. The proposed system leverages a scene understanding pipeline using
deep convolutional encoder-decoder networks and a driver state detection
pipeline. We have been identifying and assessing domestic capabilities for
developing these technologies for ordinary vehicles, in order to manufacture
smart cars and also to provide an intelligent system that increases safety and
assists the driver in various conditions/situations.
| [
{
"created": "Mon, 10 Dec 2018 18:08:18 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Feb 2019 22:57:02 GMT",
"version": "v2"
}
] | 2020-02-28 | [
[
"Khojasteh",
"Hadi Abdi",
""
],
[
"Alipour",
"Alireza Abbas",
""
],
[
"Ansari",
"Ebrahim",
""
],
[
"Razzaghi",
"Parvin",
""
]
] | Nowadays, automobile manufacturers make efforts to develop ways to make cars fully safe. Monitoring driver's actions by computer vision techniques to detect driving mistakes in real-time and then planning for autonomous driving to avoid vehicle collisions is one of the most important issues that has been investigated in the machine vision and Intelligent Transportation Systems (ITS). The main goal of this study is to prevent accidents caused by fatigue, drowsiness, and driver distraction. To avoid these incidents, this paper proposes an integrated safety system that continuously monitors the driver's attention and vehicle surroundings, and finally decides whether the actual steering control status is safe or not. For this purpose, we equipped an ordinary car called FARAZ with a vision system consisting of four mounted cameras along with a universal car tool for communicating with surrounding factory-installed sensors and other car systems, and sending commands to actuators. The proposed system leverages a scene understanding pipeline using deep convolutional encoder-decoder networks and a driver state detection pipeline. We have been identifying and assessing domestic capabilities for developing these technologies for ordinary vehicles, in order to manufacture smart cars and also to provide an intelligent system that increases safety and assists the driver in various conditions/situations. |
2011.10278 | Junho Koh | Junho Koh, Jaekyum Kim, Younji Shin, Byeongwon Lee, Seungji Yang and
Jun Won Choi | Joint Representation of Temporal Image Sequences and Object Motion for
Video Object Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a new video object detector (VoD) method referred
to as temporal feature aggregation and motion-aware VoD (TM-VoD), which
produces a joint representation of temporal image sequences and object motion.
The proposed TM-VoD aggregates visual feature maps extracted by convolutional
neural networks by applying temporal attention gating and spatial feature
alignment. This temporal feature aggregation is performed in two stages in a
hierarchical fashion. In the first stage, the visual feature maps are fused at
a pixel level via a gated attention model. In the second stage, the proposed
method aggregates the features after aligning the object features using
temporal box offset calibration and weights them according to the cosine
similarity measure. The proposed TM-VoD also finds the representation of the
motion of objects in two successive steps. The pixel-level motion features are
first computed based on the incremental changes between the adjacent visual
feature maps. Then, box-level motion features are obtained from both the region
of interest (RoI)-aligned pixel-level motion features and the sequential
changes of the box coordinates. Finally, all these features are concatenated to
produce a joint representation of the objects for VoD. The experiments
conducted on the ImageNet VID dataset demonstrate that the proposed method
outperforms existing VoD methods and achieves a performance comparable to that
of state-of-the-art VoDs.
| [
{
"created": "Fri, 20 Nov 2020 08:46:12 GMT",
"version": "v1"
}
] | 2020-11-23 | [
[
"Koh",
"Junho",
""
],
[
"Kim",
"Jaekyum",
""
],
[
"Shin",
"Younji",
""
],
[
"Lee",
"Byeongwon",
""
],
[
"Yang",
"Seungji",
""
],
[
"Choi",
"Jun Won",
""
]
] | In this paper, we propose a new video object detector (VoD) method referred to as temporal feature aggregation and motion-aware VoD (TM-VoD), which produces a joint representation of temporal image sequences and object motion. The proposed TM-VoD aggregates visual feature maps extracted by convolutional neural networks by applying temporal attention gating and spatial feature alignment. This temporal feature aggregation is performed in two stages in a hierarchical fashion. In the first stage, the visual feature maps are fused at a pixel level via a gated attention model. In the second stage, the proposed method aggregates the features after aligning the object features using temporal box offset calibration and weights them according to the cosine similarity measure. The proposed TM-VoD also finds the representation of the motion of objects in two successive steps. The pixel-level motion features are first computed based on the incremental changes between the adjacent visual feature maps. Then, box-level motion features are obtained from both the region of interest (RoI)-aligned pixel-level motion features and the sequential changes of the box coordinates. Finally, all these features are concatenated to produce a joint representation of the objects for VoD. The experiments conducted on the ImageNet VID dataset demonstrate that the proposed method outperforms existing VoD methods and achieves a performance comparable to that of state-of-the-art VoDs. |
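A toy sketch of the stage-one, pixel-level gated fusion described above (the shapes and the 1x1-convolution gate are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn

class GatedTemporalFusion(nn.Module):
    """Toy sketch of pixel-level temporal aggregation: per-pixel attention
    gates weight each frame's feature map before summing over time."""
    def __init__(self, channels=32):
        super().__init__()
        self.gate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):                 # feats: (B, T, C, H, W)
        B, T, C, H, W = feats.shape
        logits = self.gate(feats.flatten(0, 1)).view(B, T, 1, H, W)
        attn = torch.softmax(logits, dim=1)   # normalize gates across time
        return (attn * feats).sum(dim=1)      # fused (B, C, H, W) map

fused = GatedTemporalFusion()(torch.randn(2, 5, 32, 16, 16))
print(fused.shape)  # torch.Size([2, 32, 16, 16])
```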
2102.07344 | Ana Stanescu | Ana Stanescu and Gaurav Pandey | Developing parsimonious ensembles using predictor diversity within a
reinforcement learning framework | This work was intended as a replacement of arXiv:1805.02103 and any
subsequent updates will appear there | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Heterogeneous ensembles that can aggregate an unrestricted number and variety
of base predictors can effectively address challenging prediction problems. In
particular, accurate ensembles that are also parsimonious, i.e., consist of as
few base predictors as possible, can help reveal potentially useful knowledge
about the target problem domain. Although ensemble selection offers a potential
approach to achieving these goals, the currently available algorithms are
limited in their abilities. In this paper, we present several algorithms that
incorporate ensemble diversity into a reinforcement learning (RL)-based
ensemble selection framework to build accurate and parsimonious ensembles.
These algorithms, as well as several baselines, are rigorously evaluated on
datasets from diverse domains in terms of the predictive performance and
parsimony of their ensembles. This evaluation demonstrates that our
diversity-incorporated RL-based algorithms perform better than the others for
constructing simultaneously accurate and parsimonious ensembles. These
algorithms can eventually aid the interpretation or reverse engineering of
predictive models assimilated into effective ensembles. To enable such a
translation, an implementation of these algorithms, as well as the experimental
setup they are evaluated in, has been made available at
https://github.com/GauravPandeyLab/lens-learning-ensembles-using-reinforcement-learning.
| [
{
"created": "Mon, 15 Feb 2021 05:00:19 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Feb 2021 23:43:42 GMT",
"version": "v2"
}
] | 2021-03-01 | [
[
"Stanescu",
"Ana",
""
],
[
"Pandey",
"Gaurav",
""
]
] | Heterogeneous ensembles that can aggregate an unrestricted number and variety of base predictors can effectively address challenging prediction problems. In particular, accurate ensembles that are also parsimonious, i.e., consist of as few base predictors as possible, can help reveal potentially useful knowledge about the target problem domain. Although ensemble selection offers a potential approach to achieving these goals, the currently available algorithms are limited in their abilities. In this paper, we present several algorithms that incorporate ensemble diversity into a reinforcement learning (RL)-based ensemble selection framework to build accurate and parsimonious ensembles. These algorithms, as well as several baselines, are rigorously evaluated on datasets from diverse domains in terms of the predictive performance and parsimony of their ensembles. This evaluation demonstrates that our diversity-incorporated RL-based algorithms perform better than the others for constructing simultaneously accurate and parsimonious ensembles. These algorithms can eventually aid the interpretation or reverse engineering of predictive models assimilated into effective ensembles. To enable such a translation, an implementation of these algorithms, as well as the experimental setup they are evaluated in, has been made available at https://github.com/GauravPandeyLab/lens-learning-ensembles-using-reinforcement-learning. |
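As a rough illustration of diversity-aware ensemble selection: here a greedy improvement rule stands in for the paper's RL policy, and all accuracies and disagreement rates are made up:

```python
import itertools

acc = {"a": 0.72, "b": 0.70, "c": 0.69, "d": 0.71}
dis = {frozenset(p): d for p, d in
       [(("a", "b"), 0.05), (("a", "c"), 0.30), (("a", "d"), 0.10),
        (("b", "c"), 0.25), (("b", "d"), 0.06), (("c", "d"), 0.28)]}

def score(S):
    """Mean accuracy plus a small bonus for average pairwise diversity."""
    pairs = list(itertools.combinations(sorted(S), 2))
    diversity = sum(dis[frozenset(p)] for p in pairs) / len(pairs) if pairs else 0.0
    return sum(acc[m] for m in S) / len(S) + 0.2 * diversity

S, improved = {"a"}, True
while improved:                        # grow the ensemble only while it helps
    improved = False
    for m in sorted(set(acc) - S):
        if score(S | {m}) > score(S):
            S, improved = S | {m}, True
print(sorted(S), round(score(S), 3))   # a small, accurate, diverse ensemble
```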
1910.06511 | Ling-Xiao Zhang | Lin Gao, Ling-Xiao Zhang, Hsien-Yu Meng, Yi-Hui Ren, Yu-Kun Lai, Leif
Kobbelt | PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models | null | IEEE Transactions on Visualization and Computer Graphics, Volume:
27, Issue: 6, 2021 | 10.1109/TVCG.2020.3003823 | null | cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In geometry processing, symmetry is a universal type of high-level structural
information of 3D models and benefits many geometry processing tasks including
shape segmentation, alignment, matching, and completion. Thus it is an
important problem to analyze various symmetry forms of 3D shapes. Planar
reflective symmetry is the most fundamental one. Traditional methods based on
spatial sampling can be time-consuming and may not be able to identify all the
symmetry planes. In this paper, we present a novel learning framework to
automatically discover global planar reflective symmetry of a 3D shape. Our
framework trains an unsupervised 3D convolutional neural network to extract
global model features and then outputs possible global symmetry parameters,
where input shapes are represented using voxels. We introduce a dedicated
symmetry distance loss along with a regularization loss to avoid generating
duplicated symmetry planes. Our network can also identify generalized cylinders
by predicting their rotation axes. We further provide a method to remove
invalid and duplicated planes and axes. We demonstrate that our method is able
to produce reliable and accurate results. Our neural network based method is
hundreds of times faster than the state-of-the-art methods, which are based on
sampling. Our method is also robust even with noisy or incomplete input
surfaces.
| [
{
"created": "Tue, 15 Oct 2019 03:46:58 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Oct 2019 04:05:15 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Oct 2019 06:55:05 GMT",
"version": "v3"
},
{
"created": "Sat, 16 May 2020 02:21:17 GMT",
"version": "v4"
},
{
"created": "Mon, 1 Jun 2020 11:54:32 GMT",
"version": "v5"
},
{
"created": "Tue, 14 Sep 2021 11:49:12 GMT",
"version": "v6"
}
] | 2021-09-15 | [
[
"Gao",
"Lin",
""
],
[
"Zhang",
"Ling-Xiao",
""
],
[
"Meng",
"Hsien-Yu",
""
],
[
"Ren",
"Yi-Hui",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Kobbelt",
"Leif",
""
]
] | In geometry processing, symmetry is a universal type of high-level structural information of 3D models and benefits many geometry processing tasks including shape segmentation, alignment, matching, and completion. Thus it is an important problem to analyze various symmetry forms of 3D shapes. Planar reflective symmetry is the most fundamental one. Traditional methods based on spatial sampling can be time-consuming and may not be able to identify all the symmetry planes. In this paper, we present a novel learning framework to automatically discover global planar reflective symmetry of a 3D shape. Our framework trains an unsupervised 3D convolutional neural network to extract global model features and then outputs possible global symmetry parameters, where input shapes are represented using voxels. We introduce a dedicated symmetry distance loss along with a regularization loss to avoid generating duplicated symmetry planes. Our network can also identify generalized cylinders by predicting their rotation axes. We further provide a method to remove invalid and duplicated planes and axes. We demonstrate that our method is able to produce reliable and accurate results. Our neural network based method is hundreds of times faster than the state-of-the-art methods, which are based on sampling. Our method is also robust even with noisy or incomplete input surfaces. |
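The symmetry distance loss described above can be sketched as: reflect sampled points across the predicted plane and penalize their distance back to the point set. A toy torch version (not the paper's exact loss, which measures distance to the shape surface):

```python
import torch

def symmetry_distance_loss(points, plane):
    """Toy sketch: reflect points across the plane n.x + d = 0 (n normalized),
    then penalize each reflected point's distance to its nearest original
    point; a duplicate-plane regularizer would be added separately."""
    n, d = plane[:3], plane[3]
    n = n / n.norm()
    offsets = points @ n + d                          # signed distance to plane
    reflected = points - 2.0 * offsets[:, None] * n   # mirror across the plane
    return torch.cdist(reflected, points).min(dim=1).values.mean()

p = torch.randn(300, 3)
pts = torch.cat([p, p * torch.tensor([-1.0, 1.0, 1.0])])  # symmetric about x = 0
good = symmetry_distance_loss(pts, torch.tensor([1.0, 0.0, 0.0, 0.0]))
bad = symmetry_distance_loss(pts, torch.tensor([0.0, 1.0, 0.0, 0.0]))
print(float(good), float(bad))  # the true symmetry plane scores (near) zero
```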
2103.03666 | Benedikt Kleppmann | Benedikt T. Kleppmann | Tree of Knowledge: an Online Platform for Learning the Behaviour of
Complex Systems | 10 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many social sciences such as psychology and economics try to learn the
behaviour of complex agents such as humans, organisations and countries. The
current statistical methods used for learning this behaviour try to infer
generally valid behaviour, but can only learn from one type of study at a time.
Furthermore, only data from carefully designed studies can be used, as the
phenomenon of interest has to be isolated and confounding factors accounted
for. These restrictions limit the robustness and accuracy of insights that can
be gained from social/economic systems. Here we present the online platform
TreeOfKnowledge which implements a new methodology specifically designed for
learning complex behaviours from complex systems: agent-based behaviour
learning. With agent-based behaviour learning it is possible to gain more
accurate and robust insights as it does not have the restriction of
conventional statistics. It learns agent behaviour from many heterogenous
datasets and can learn from these datasets even if the phenomenon of interest
is not directly observed, but appears deep within complex systems. This new
methodology shows how the internet and advances in computational power allow
for more accurate and powerful mathematical models.
| [
{
"created": "Sat, 27 Feb 2021 19:39:14 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Kleppmann",
"Benedikt T.",
""
]
] | Many social sciences such as psychology and economics try to learn the behaviour of complex agents such as humans, organisations and countries. The current statistical methods used for learning this behaviour try to infer generally valid behaviour, but can only learn from one type of study at a time. Furthermore, only data from carefully designed studies can be used, as the phenomenon of interest has to be isolated and confounding factors accounted for. These restrictions limit the robustness and accuracy of insights that can be gained from social/economic systems. Here we present the online platform TreeOfKnowledge which implements a new methodology specifically designed for learning complex behaviours from complex systems: agent-based behaviour learning. With agent-based behaviour learning it is possible to gain more accurate and robust insights as it does not have the restriction of conventional statistics. It learns agent behaviour from many heterogenous datasets and can learn from these datasets even if the phenomenon of interest is not directly observed, but appears deep within complex systems. This new methodology shows how the internet and advances in computational power allow for more accurate and powerful mathematical models. |
1902.10439 | Su Yang | Su Yang, Yuqing Zhang, Chensi Wu | Attack-Defense Quantification Based On Game-Theory | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As attack and defense technologies develop, the cyber
environment has become increasingly sophisticated. We still cannot give an
accurate evaluation of the network security situation, as we lack an accurate
quantitative evaluation of attack-defense behaviors. In response to this
situation, we propose an attack-defense stochastic game model (ADSGM),
analyze the different security properties of distinct defense mechanisms, and
put forward a corresponding utility calculation for each defense mechanism.
Through a case study, we show the impact of active defense and the risk of
attack exposure, and demonstrate the effectiveness of our methods for
quantifying attack-defense behavior. This paper fills a gap in the
quantitative assessment of defensive measures, making the quantitative
evaluation of attack and defense more comprehensive and accurate.
| [
{
"created": "Wed, 27 Feb 2019 10:28:34 GMT",
"version": "v1"
}
] | 2019-02-28 | [
[
"Yang",
"Su",
""
],
[
"Zhang",
"Yuqing",
""
],
[
"Wu",
"Chensi",
""
]
] | As attack and defense technologies develop, the cyber environment has become increasingly sophisticated. We still cannot give an accurate evaluation of the network security situation, as we lack an accurate quantitative evaluation of attack-defense behaviors. In response to this situation, we propose an attack-defense stochastic game model (ADSGM), analyze the different security properties of distinct defense mechanisms, and put forward a corresponding utility calculation for each defense mechanism. Through a case study, we show the impact of active defense and the risk of attack exposure, and demonstrate the effectiveness of our methods for quantifying attack-defense behavior. This paper fills a gap in the quantitative assessment of defensive measures, making the quantitative evaluation of attack and defense more comprehensive and accurate. |
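As a hedged illustration of game-theoretic attack-defense quantification: a plain zero-sum matrix game solved by linear programming, a far simpler stand-in for the paper's stochastic game model, with a made-up utility matrix:

```python
import numpy as np
from scipy.optimize import linprog

# U[i, j] is the defender's utility when the defender plays mechanism i and the
# attacker plays action j; the game value quantifies the attack-defense outcome.
U = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])

# Maximize v subject to sum_i y_i U[i, j] >= v for all j, y a probability vector.
m, n = U.shape
c = np.zeros(m + 1); c[-1] = -1.0                        # minimize -v
A_ub = np.hstack([-U.T, np.ones((n, 1))])                # v - y^T U[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])    # probabilities sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
print("defense mix:", res.x[:m].round(3), "game value:", round(-res.fun, 3))
# Here: mix roughly [0.4, 0.6], value 0.2.
```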
2405.11320 | Emmanouil Maragkoudakis | Emmanouil Maragkoudakis, Symeon Papadopoulos, Iraklis Varlamis and
Christos Diou | Sampling Strategies for Mitigating Bias in Face Synthesis Methods | Accepted to the BIAS 2023 ECML-PKDD Workshop | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Synthetically generated images can be used to create media content or to
complement datasets for training image analysis models. Several methods have
recently been proposed for the synthesis of high-fidelity face images; however,
the potential biases introduced by such methods have not been sufficiently
addressed. This paper examines the bias introduced by the widely used StyleGAN2
generative model trained on the Flickr Faces HQ dataset and proposes
two sampling strategies to balance the representation of selected attributes in
the generated face images. We focus on two protected attributes, gender and
age, and reveal that biases arise in the distribution of randomly sampled
images against very young and very old age groups, as well as against female
faces. These biases are also assessed for different image quality levels based
on the GIQA score. To mitigate bias, we propose two alternative methods for
sampling on selected lines or spheres of the latent space to increase the
number of generated samples from the underrepresented classes. The
experimental results show a decrease in bias against underrepresented groups
and a more uniform distribution of the protected features at different levels
of image quality.
| [
{
"created": "Sat, 18 May 2024 15:30:14 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Maragkoudakis",
"Emmanouil",
""
],
[
"Papadopoulos",
"Symeon",
""
],
[
"Varlamis",
"Iraklis",
""
],
[
"Diou",
"Christos",
""
]
] | Synthetically generated images can be used to create media content or to complement datasets for training image analysis models. Several methods have recently been proposed for the synthesis of high-fidelity face images; however, the potential biases introduced by such methods have not been sufficiently addressed. This paper examines the bias introduced by the widely used StyleGAN2 generative model trained on the Flickr Faces HQ dataset and proposes two sampling strategies to balance the representation of selected attributes in the generated face images. We focus on two protected attributes, gender and age, and reveal that biases arise in the distribution of randomly sampled images against very young and very old age groups, as well as against female faces. These biases are also assessed for different image quality levels based on the GIQA score. To mitigate bias, we propose two alternative methods for sampling on selected lines or spheres of the latent space to increase the number of generated samples from the underrepresented classes. The experimental results show a decrease in bias against underrepresented groups and a more uniform distribution of the protected features at different levels of image quality. |
2205.09351 | Arnab Dey | Arnab Dey, Yassine Ahmine, Andrew I. Comport | Mip-NeRF RGB-D: Depth Assisted Fast Neural Radiance Fields | null | Journal of WSCG 2022 | 10.24132/JWSCG.2022.5 | Vol.30., No.1-2, ISSN 1213-6972 | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Neural scene representations, such as Neural Radiance Fields (NeRF), are
based on training a multilayer perceptron (MLP) using a set of color images
with known poses. An increasing number of devices now produce RGB-D (color +
depth) information, which has been shown to be very important for a wide range
of tasks. Therefore, the aim of this paper is to investigate what improvements
can be made to these promising implicit representations by incorporating depth
information with the color images. In particular, the recently proposed
Mip-NeRF approach, which uses conical frustums instead of rays for volume
rendering, allows one to account for the varying area of a pixel with distance
from the camera center. The proposed method additionally models depth
uncertainty. This makes it possible to address major limitations of NeRF-based
approaches, including improved geometric accuracy, reduced artifacts, faster
training, and shorter prediction time. Experiments are performed on well-known
benchmark scenes, and comparisons show improved accuracy in scene geometry and
photometric reconstruction, while reducing training time by a factor of 3 to 5.
| [
{
"created": "Thu, 19 May 2022 07:11:42 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jun 2022 11:35:53 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Nov 2022 13:57:58 GMT",
"version": "v3"
}
] | 2022-11-08 | [
[
"Dey",
"Arnab",
""
],
[
"Ahmine",
"Yassine",
""
],
[
"Comport",
"Andrew I.",
""
]
] | Neural scene representations, such as Neural Radiance Fields (NeRF), are based on training a multilayer perceptron (MLP) using a set of color images with known poses. An increasing number of devices now produce RGB-D (color + depth) information, which has been shown to be very important for a wide range of tasks. Therefore, the aim of this paper is to investigate what improvements can be made to these promising implicit representations by incorporating depth information with the color images. In particular, the recently proposed Mip-NeRF approach, which uses conical frustums instead of rays for volume rendering, allows one to account for the varying area of a pixel with distance from the camera center. The proposed method additionally models depth uncertainty. This makes it possible to address major limitations of NeRF-based approaches, including improved geometric accuracy, reduced artifacts, faster training, and shorter prediction time. Experiments are performed on well-known benchmark scenes, and comparisons show improved accuracy in scene geometry and photometric reconstruction, while reducing training time by a factor of 3 to 5. |