| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.01416
|
Absalom Ezugwu
|
Olaide N. Oyelade and Absalom E. Ezugwu
|
Ebola Optimization Search Algorithm (EOSA): A new metaheuristic
algorithm based on the propagation model of Ebola virus disease
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Ebola virus, and the disease it causes, tends to move individuals in the
population randomly among the susceptible, infected, quarantined,
hospitalized, recovered, and dead sub-populations. Motivated by the
effectiveness of this propagation mechanism, a new bio-inspired,
population-based optimization algorithm is proposed. This paper presents a
novel metaheuristic named the Ebola Optimization Search Algorithm (EOSA). To
achieve this, the study models the propagation mechanism of Ebola virus
disease (EVD), emphasising all consistent states of the propagation. The
model is further expressed mathematically as a system of first-order
differential equations. The combined propagation and mathematical models were
then adapted to develop the new metaheuristic. To evaluate the proposed
method's performance and capability against other optimization methods, the
underlying propagation and mathematical models were first investigated to
determine how well they simulate EVD. Furthermore, two sets of benchmark
functions, comprising forty-seven (47) classical functions and over thirty
(30) constrained IEEE CEC-2017 benchmark functions, were investigated
numerically. The results indicate that the proposed algorithm is competitive
with other state-of-the-art optimization methods in terms of scalability,
convergence, and sensitivity analyses. Extensive simulation results further
indicate that EOSA outperforms popular state-of-the-art metaheuristics such
as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the
Artificial Bee Colony (ABC) algorithm on some shifted, high-dimensional, and
large-search-range problems.
|
[
{
"created": "Wed, 2 Jun 2021 18:41:56 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Jun 2021 21:02:53 GMT",
"version": "v2"
}
] |
2021-06-22
|
[
[
"Oyelade",
"Olaide N.",
""
],
[
"Ezugwu",
"Absalom E.",
""
]
] |
|
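The abstract above describes a compartmental propagation model expressed as first-order differential equations. A minimal sketch of such a system, integrated with forward Euler, is shown below; the compartment structure follows the sub-populations named in the abstract, but all rate constants and transitions are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch: a simplified compartmental model in the spirit of the
# propagation model in the abstract (S, I, Q, H, R, D compartments).
# All rate constants below are illustrative assumptions.

def step(state, beta=0.3, q=0.1, h=0.1, r=0.05, d=0.02, dt=0.1):
    """One forward-Euler step of the first-order ODE system."""
    S, I, Q, H, R, D = state
    N = S + I + Q + H + R + D
    new_inf = beta * S * I / N          # S -> I
    dS = -new_inf
    dI = new_inf - (q + h + r + d) * I  # I -> Q / H / R / D
    dQ = q * I - r * Q                  # quarantined, then recover
    dH = h * I - r * H                  # hospitalized, then recover
    dR = r * (I + Q + H)
    dD = d * I
    return tuple(x + dt * dx
                 for x, dx in zip(state, (dS, dI, dQ, dH, dR, dD)))

state = (990.0, 10.0, 0.0, 0.0, 0.0, 0.0)  # total population 1000
for _ in range(100):
    state = step(state)
```

In the algorithm, a model like this would steer how candidate solutions move among the sub-populations; here it only demonstrates that the flows balance, so the total population is conserved.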
1904.02832
|
Dacheng Tao
|
Chen Gong, Tongliang Liu, Yuanyan Tang, Jian Yang, Jie Yang, Dacheng
Tao
|
A Regularization Approach for Instance-Based Superset Label Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike traditional supervised learning, in which each training example has a
single explicit label, superset label learning (SLL) refers to the problem in
which a training example can be associated with a set of candidate labels,
only one of which is correct. Existing SLL methods are either
regularization-based or instance-based, and the latter have achieved
state-of-the-art performance. This is because the latest instance-based
methods contain an explicit disambiguation operation that accurately picks
out the ground-truth label of each training example from its ambiguous
candidate labels. However, such a disambiguation operation does not fully
consider the mutually exclusive relationships among different candidate
labels, so the disambiguated labels are usually generated in a
nondiscriminative way, which prevents instance-based methods from obtaining
satisfactory performance. To address this defect, we develop a novel
regularization approach for instance-based superset label learning (RegISL),
so that our instance-based method also inherits the good discriminative
ability of the regularization scheme. Specifically, we employ a graph to
represent the training set and require examples that are adjacent on the
graph to obtain similar labels. More importantly, a discrimination term is
proposed to enlarge the gap in scores between possible and unlikely labels
for every training example. As a result, the intrinsic constraints among
different candidate labels are exploited, and the disambiguated labels
generated by RegISL are more discriminative and accurate than those output by
existing instance-based algorithms. Experimental results on various tasks
convincingly demonstrate the superiority of RegISL over other typical SLL
methods in terms of both training accuracy and test accuracy.
|
[
{
"created": "Fri, 5 Apr 2019 00:22:26 GMT",
"version": "v1"
}
] |
2019-04-08
|
[
[
"Gong",
"Chen",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Tang",
"Yuanyan",
""
],
[
"Yang",
"Jian",
""
],
[
"Yang",
"Jie",
""
],
[
"Tao",
"Dacheng",
""
]
] |
|
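The abstract above describes two ingredients: a graph that pulls adjacent examples toward similar labels, and a restriction of each example's label scores to its candidate set. A toy label-propagation sketch of that idea is below; the update rule, smoothing weight `alpha`, and example graph are assumptions for illustration, not the paper's actual objective (which also includes a discrimination term).

```python
# Hedged sketch of the graph-smoothing idea described in the abstract:
# neighbours on the graph get similar label scores, and each example's
# score mass stays inside its candidate set. Illustrative only.

def disambiguate(candidates, edges, n_labels, alpha=0.5, iters=50):
    n = len(candidates)
    # initialise scores uniformly over each example's candidate labels
    F = [[1.0 / len(c) if l in c else 0.0 for l in range(n_labels)]
         for c in candidates]
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(iters):
        G = []
        for i in range(n):
            if nbrs[i]:
                avg = [sum(F[j][l] for j in nbrs[i]) / len(nbrs[i])
                       for l in range(n_labels)]
            else:
                avg = F[i][:]
            # smooth toward neighbours, zero out non-candidate labels
            row = [(1 - alpha) * F[i][l] + alpha * avg[l]
                   if l in candidates[i] else 0.0
                   for l in range(n_labels)]
            s = sum(row) or 1.0
            G.append([v / s for v in row])
        F = G
    return [max(range(n_labels), key=lambda l: F[i][l]) for i in range(n)]

# three examples; example 1 is ambiguous between labels 0 and 2, but its
# two graph neighbours both carry label 0, so it is disambiguated to 0
labels = disambiguate([{0}, {0, 2}, {0}], [(0, 1), (1, 2)], n_labels=3)
```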
2402.16479
|
Jin Ding
|
Jin Ding, Jie-Chao Zhao, Yong-Zhi Sun, Ping Tan, Jia-Wei Wang, Ji-En
Ma, You-Tong Fang
|
Edge Detectors Can Make Deep Convolutional Neural Networks More Robust
|
26 pages, 18 figures, 7 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep convolutional neural networks (DCNNs) are vulnerable to adversarial
examples with small perturbations. Improving DCNN robustness is of great
significance for safety-critical applications such as autonomous driving and
industrial automation. Inspired by the principal way that human eyes
recognize objects, i.e., by relying largely on shape features, this paper
first employs edge detectors as layer kernels and designs a binary edge
feature branch (BEFB) to learn binary edge features; the branch can be easily
integrated into any popular backbone. The four edge detectors learn the
horizontal, vertical, positive-diagonal, and negative-diagonal edge features,
respectively, and the branch is a stack of multiple Sobel layers (using the
edge detectors as kernels) and one threshold layer. The binary edge features
learned by the branch, concatenated with the texture features learned by the
backbone, are fed into the fully connected layers for classification. We
integrate the proposed branch into VGG16 and ResNet34 and conduct experiments
on multiple datasets. Experimental results demonstrate that the BEFB is
lightweight and has no side effects on training, and that the accuracy of the
BEFB-integrated models is better than that of the original models on all
datasets under FGSM, PGD, and C\&W attacks. Moreover, BEFB-integrated models
equipped with robustness-enhancing techniques achieve better classification
accuracy than the original models. This work shows for the first time that it
is feasible to enhance the robustness of DCNNs by combining shape-like
features with texture features.
|
[
{
"created": "Mon, 26 Feb 2024 10:54:26 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jul 2024 02:53:00 GMT",
"version": "v2"
}
] |
2024-07-25
|
[
[
"Ding",
"Jin",
""
],
[
"Zhao",
"Jie-Chao",
""
],
[
"Sun",
"Yong-Zhi",
""
],
[
"Tan",
"Ping",
""
],
[
"Wang",
"Jia-Wei",
""
],
[
"Ma",
"Ji-En",
""
],
[
"Fang",
"You-Tong",
""
]
] |
|
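The abstract above describes Sobel layers for four edge orientations followed by a threshold layer that binarizes the responses. A minimal sketch of that pipeline is below; the exact kernel variants and the threshold value are assumptions, not values taken from the paper.

```python
# Hedged sketch of the BEFB idea from the abstract: fixed Sobel-style
# kernels for the four edge orientations, then a hard threshold that
# yields binary edge features. Kernels and threshold are assumptions.

KERNELS = {
    "vertical_edges":   [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],
    "horizontal_edges": [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],
    "pos_diagonal":     [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]],
    "neg_diagonal":     [[-2, -1, 0], [-1, 0, 1], [0, 1, 2]],
}

def conv2d_valid(img, k):
    # valid 2D correlation with a fixed 3x3 kernel, pure Python
    h, w = len(img), len(img[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(3) for b in range(3))
             for j in range(w - 2)] for i in range(h - 2)]

def binary_edges(img, kernel, thresh=1.0):
    # threshold layer: magnitude of the response -> binary feature map
    return [[1 if abs(v) >= thresh else 0 for v in row]
            for row in conv2d_valid(img, kernel)]

# a 5x5 image with a vertical step edge between columns 1 and 2
img = [[0, 0, 1, 1, 1] for _ in range(5)]
edges = binary_edges(img, KERNELS["vertical_edges"])
```

In the paper's setting these binary maps would be concatenated with the backbone's texture features before the fully connected layers; here the sketch only shows how the fixed kernels plus a threshold produce binary shape features.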
2204.08962
|
Joshua Mack
|
Joshua Mack, Sahil Hassan, Nirmal Kumbhare, Miguel Castro-Gonzalez,
Ali Akoglu
|
CEDR -- A Compiler-integrated, Extensible DSSoC Runtime
|
35 pages single column, 16 figures, 7 tables. Accepted for
publication in the ACM Transactions on Embedded and Computing Systems
| null |
10.1145/3529257
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present CEDR, a Compiler-integrated, Extensible
Domain-Specific System-on-Chip (DSSoC) Runtime ecosystem that facilitates
research on the challenges of architecture, system software, and application
development, with distinct plug-and-play integration points in a unified
compile-time and run-time workflow. We demonstrate the utility of CEDR on the
Xilinx Zynq ZCU102 MPSoC for evaluating the performance of pre-silicon
hardware in the trade space of SoC configuration, scheduling policy, and
workload complexity, based on dynamically arriving workload scenarios
composed of real-life signal processing applications and scaling to thousands
of application instances with FFT and matrix-multiply accelerators. We
provide insights into the trade-offs in this design space through a number of
distinct case studies. CEDR is portable and has been deployed and validated
on Odroid-XU3, x86, and Nvidia Jetson Xavier-based SoC platforms. Taken
together, CEDR is a capable environment for research that explores the
boundaries of productive application development, resource-management
heuristic development, and hardware configuration analysis for heterogeneous
architectures.
|
[
{
"created": "Fri, 15 Apr 2022 19:54:39 GMT",
"version": "v1"
}
] |
2022-04-20
|
[
[
"Mack",
"Joshua",
""
],
[
"Hassan",
"Sahil",
""
],
[
"Kumbhare",
"Nirmal",
""
],
[
"Castro-Gonzalez",
"Miguel",
""
],
[
"Akoglu",
"Ali",
""
]
] |
|
2309.08414
|
Bernhard Bermeitinger
|
Bernhard Bermeitinger, Tomas Hrycej, Siegfried Handschuh
|
Make Deep Networks Shallow Again
|
to be published at KDIR2023, Rome
|
Proceedings of the 15th International Joint Conference on
Knowledge Discovery, Knowledge Engineering and Knowledge Management -
KDIR2023
|
10.5220/0012203800003598
| null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep neural networks have a strong record of success and are thus viewed as
the architecture of choice for complex applications. For a long time, their
main shortcoming was the vanishing gradient, which prevented numerical
optimization algorithms from converging acceptably. A breakthrough came with
the concept of residual connections -- an identity mapping parallel to a
conventional layer. The concept applies to stacks of layers of the same
dimension and substantially alleviates the vanishing-gradient problem. A
stack of residual-connection layers can be expressed as an expansion of terms
similar to a Taylor expansion. This expansion suggests truncating the
higher-order terms, yielding an architecture consisting of a single broad
layer that applies all of the initially stacked layers in parallel. In other
words, a sequential deep architecture is replaced by a parallel shallow one.
Prompted by this theory, we investigated the performance of the parallel
architecture in comparison with the sequential one. The computer vision
datasets MNIST and CIFAR10 were used to train both architectures over a total
of 6912 combinations of varying numbers of convolutional layers, numbers of
filters, kernel sizes, and other meta-parameters. Our findings demonstrate a
surprising equivalence between the deep (sequential) and shallow (parallel)
architectures: both layouts produced similar training- and validation-set
losses. This discovery implies that a wide, shallow architecture can
potentially replace a deep network without sacrificing performance. Such a
substitution has the potential to simplify network architectures, improve
optimization efficiency, and accelerate the training process.
|
[
{
"created": "Fri, 15 Sep 2023 14:18:21 GMT",
"version": "v1"
}
] |
2024-05-02
|
[
[
"Bermeitinger",
"Bernhard",
""
],
[
"Hrycej",
"Tomas",
""
],
[
"Handschuh",
"Siegfried",
""
]
] |
|
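The expansion argument in the abstract above can be made concrete with linear maps: a stack of residual blocks (I + F2)(I + F1) applied to x equals x plus the first-order terms F1 x and F2 x plus the cross term F2 F1 x; dropping the cross term gives a single shallow layer applying F1 and F2 in parallel. The sketch below uses small 2x2 linear maps as an illustrative assumption (the paper's blocks are convolutional and nonlinear).

```python
# Hedged sketch of the sequential-vs-parallel expansion from the abstract,
# with toy linear residual blocks of small norm (an assumption).

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def vadd(*vs):
    return [sum(t) for t in zip(*vs)]

# two small residual "layers" F1, F2 (2x2 linear maps, small weights)
F1 = [[0.01, 0.02], [0.00, 0.01]]
F2 = [[0.02, 0.00], [0.01, 0.02]]
x = [1.0, 2.0]

# sequential deep network: (I + F2)(I + F1) x
h = vadd(x, matvec(F1, x))
deep = vadd(h, matvec(F2, h))

# parallel shallow network: x + F1 x + F2 x  (cross term F2 F1 x dropped)
shallow = vadd(x, matvec(F1, x), matvec(F2, x))

# the gap is exactly the second-order term |F2 F1 x|, tiny for small norms
gap = max(abs(a - b) for a, b in zip(deep, shallow))
```

The gap shrinks quadratically with the block norms, which is why the truncation can be benign; the paper's contribution is showing empirically that the equivalence also holds for trained nonlinear networks.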
2107.05050
|
Ben Hayes
|
Ben Hayes, Charalampos Saitis, Gy\"orgy Fazekas
|
Neural Waveshaping Synthesis
|
Accepted to ISMIR 2021; See online supplement at
https://benhayes.net/projects/nws/
| null | null | null |
cs.SD cs.LG eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Neural Waveshaping Unit (NEWT): a novel, lightweight, fully
causal approach to neural audio synthesis which operates directly in the
waveform domain, with an accompanying optimisation (FastNEWT) for efficient CPU
inference. The NEWT uses time-distributed multilayer perceptrons with periodic
activations to implicitly learn nonlinear transfer functions that encode the
characteristics of a target timbre. Once trained, a NEWT can produce complex
timbral evolutions by simple affine transformations of its input and output
signals. We paired the NEWT with a differentiable noise synthesiser and reverb
and found it capable of generating realistic musical instrument performances
with only 260k total model parameters, conditioned on F0 and loudness features.
We compared our method to state-of-the-art benchmarks with a multi-stimulus
listening test and the Fr\'echet Audio Distance and found it performed
competitively across the tested timbral domains. Our method significantly
outperformed the benchmarks in terms of generation speed, and achieved
real-time performance on a consumer CPU, both with and without FastNEWT,
suggesting it is a viable basis for future creative sound design tools.
|
[
{
"created": "Sun, 11 Jul 2021 13:50:59 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jul 2021 14:28:39 GMT",
"version": "v2"
}
] |
2021-07-28
|
[
[
"Hayes",
"Ben",
""
],
[
"Saitis",
"Charalampos",
""
],
[
"Fazekas",
"György",
""
]
] |
|
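The abstract above rests on classic waveshaping: a nonlinear transfer function applied to a signal, with affine transformations of the input and output controlling the timbre. The sketch below uses `tanh` as a stand-in for the learned NEWT transfer function, which is an assumption; in the paper the transfer functions are learned by small MLPs with periodic activations.

```python
import math

# Hedged sketch of waveshaping synthesis as described in the abstract:
# a fixed nonlinear transfer function applied to a sine, with affine
# input/output transformations shaping the timbre. tanh stands in for
# the learned NEWT transfer function (an assumption).

def waveshape(signal, transfer=math.tanh, in_gain=1.0, in_bias=0.0,
              out_gain=1.0, out_bias=0.0):
    return [out_gain * transfer(in_gain * s + in_bias) + out_bias
            for s in signal]

sr, f0 = 16000, 220.0
sine = [math.sin(2 * math.pi * f0 * n / sr) for n in range(sr // 100)]

clean = waveshape(sine, in_gain=0.5)   # nearly linear: close to the sine
driven = waveshape(sine, in_gain=5.0)  # strong drive: added harmonics
```

Changing only the affine input gain moves the signal along the transfer curve, which is the mechanism the abstract refers to when it says a trained NEWT produces timbral evolutions "by simple affine transformations of its input and output signals."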
1805.08180
|
Andrew Levy
|
Andrew Levy, Robert Platt, Kate Saenko
|
Hierarchical Reinforcement Learning with Hindsight
|
Duplicate. See arXiv:1712.00948 "Learning Multi-Level Hierarchies
with Hindsight" for latest version
| null | null | null |
cs.LG cs.AI cs.NE cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning (RL) algorithms can suffer from poor sample efficiency
when rewards are delayed and sparse. We introduce a solution that enables
agents to learn temporally extended actions at multiple levels of abstraction
in a sample efficient and automated fashion. Our approach combines universal
value functions and hindsight learning, allowing agents to learn policies
belonging to different time scales in parallel. We show that our method
significantly accelerates learning in a variety of discrete and continuous
tasks.
|
[
{
"created": "Mon, 21 May 2018 17:02:53 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2019 17:52:47 GMT",
"version": "v2"
}
] |
2019-03-11
|
[
[
"Levy",
"Andrew",
""
],
[
"Platt",
"Robert",
""
],
[
"Saenko",
"Kate",
""
]
] |
|
2001.00336
|
Thatchaphol Saranurak
|
Sayan Bhattacharya, Danupon Nanongkai, Thatchaphol Saranurak
|
Coarse-Grained Complexity for Dynamic Algorithms
|
Published at SODA 2020. The abstract is truncated
| null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To date, the only way to argue polynomial lower bounds for dynamic algorithms
is via fine-grained complexity arguments. These arguments rely on strong
assumptions about specific problems such as the Strong Exponential Time
Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv).
While they have led to many exciting discoveries, dynamic algorithms still
miss out on some of the benefits and lessons of the traditional
``coarse-grained'' approach, which relates whole classes of problems such as
P and NP. In this paper we initiate the study of coarse-grained complexity
theory for dynamic algorithms. Below are some of the questions that this
theory can answer.
What if dynamic Orthogonal Vector (OV) is easy in the cell-probe model? A
research program for proving polynomial unconditional lower bounds for dynamic
OV in the cell-probe model is motivated by the fact that many conditional lower
bounds can be shown via reductions from the dynamic OV problem. Since the
cell-probe model is more powerful than word RAM and has historically allowed
smaller upper bounds, it might turn out that dynamic OV is easy in the
cell-probe model, making this research direction infeasible. Our theory implies
that if this is the case, there will be very interesting algorithmic
consequences: if dynamic OV can be maintained with polylogarithmic worst-case
update time in the cell-probe model, then so can several important dynamic
problems such as $k$-edge connectivity, $(1+\epsilon)$-approximate mincut,
$(1+\epsilon)$-approximate matching, planar nearest neighbors, Chan's subset
union, and 3-vs-4 diameter. The same conclusion holds when we replace
dynamic OV by, e.g., subgraph connectivity, single source reachability, Chan's
subset union, and 3-vs-4 diameter.
Lower bounds for $k$-edge connectivity via dynamic OV? (see the full abstract
in the pdf file).
|
[
{
"created": "Thu, 2 Jan 2020 06:14:54 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2023 13:55:49 GMT",
"version": "v2"
}
] |
2023-07-27
|
[
[
"Bhattacharya",
"Sayan",
""
],
[
"Nanongkai",
"Danupon",
""
],
[
"Saranurak",
"Thatchaphol",
""
]
] |
|
2302.00378
|
Mohammad Akbar-Tajari
|
Mohammad Akbar-Tajari, Sara Rajaee, and Mohammad Taher Pilehvar
|
An Empirical Study on the Transferability of Transformer Modules in
Parameter-Efficient Fine-Tuning
|
Accepted at EMNLP 2022 (main conference),
https://aclanthology.org/2022.emnlp-main.726
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Parameter-efficient fine-tuning approaches have recently garnered a lot of
attention. With a considerably lower number of trainable weights, these
methods offer scalability and computational efficiency. In this paper, we
look for optimal sub-networks and investigate the capability of different
transformer modules to transfer knowledge from a pre-trained model to a
downstream task. Our empirical results suggest that every transformer module
in BERT can act as a winning ticket: fine-tuning each specific module while
keeping the rest of the network frozen can lead to performance comparable to
full fine-tuning. Among the different modules, LayerNorms exhibit the best
capacity for knowledge transfer with limited trainable weights, to the extent
that, with only 0.003% of all parameters in the layer-wise analysis, they
show acceptable performance on various target tasks. Regarding the reasons
behind their effectiveness, we argue that their notable performance could be
attributed to their high-magnitude weights compared to those of the other
modules in pre-trained BERT.
|
[
{
"created": "Wed, 1 Feb 2023 11:20:18 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Feb 2023 16:56:58 GMT",
"version": "v2"
}
] |
2023-02-23
|
[
[
"Akbar-Tajari",
"Mohammad",
""
],
[
"Rajaee",
"Sara",
""
],
[
"Pilehvar",
"Mohammad Taher",
""
]
] |
|
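The setup described in the abstract above amounts to selecting one parameter group (the LayerNorms) as trainable and freezing everything else. The sketch below illustrates that selection and the resulting trainable fraction over a toy inventory of one transformer block's parameter shapes; the inventory is an assumption, not BERT's actual parameter list (where, per the abstract, the layer-wise share is as small as 0.003%).

```python
# Hedged sketch of selective fine-tuning as described in the abstract:
# mark only LayerNorm parameters trainable and measure the fraction.
# The toy parameter inventory below is an illustrative assumption.

params = {  # name -> number of scalar parameters in one toy block
    "attention.query.weight": 768 * 768,
    "attention.key.weight": 768 * 768,
    "attention.value.weight": 768 * 768,
    "attention.output.dense.weight": 768 * 768,
    "attention.output.LayerNorm.weight": 768,
    "attention.output.LayerNorm.bias": 768,
    "ffn.intermediate.weight": 768 * 3072,
    "ffn.output.weight": 3072 * 768,
    "ffn.output.LayerNorm.weight": 768,
    "ffn.output.LayerNorm.bias": 768,
}

# freeze everything except the LayerNorms
trainable = {n: c for n, c in params.items() if "LayerNorm" in n}
fraction = sum(trainable.values()) / sum(params.values())
```

Even in this toy inventory the LayerNorm parameters are a small fraction of the block, which is what makes the abstract's finding (acceptable transfer with so few trainable weights) notable.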
2404.00576
|
Rizwan Muhammad
|
PoTsang B. Huang, Muhammad Rizwan, and Mehboob Ali
|
Automated Bi-Fold Weighted Ensemble Algorithms and its Application to
Brain Tumor Detection and Classification
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The uncontrolled and unstructured growth of brain cells is known as a brain
tumor, which has one of the highest mortality rates among all types of
cancer. Due to limited diagnostic and treatment capabilities, brain tumors
pose significant challenges, especially in third-world countries. Early
diagnosis plays a vital role in effectively managing brain tumors and reducing
mortality rates. However, the availability of diagnostic methods is hindered by
various limitations, including high costs and lengthy result acquisition times,
impeding early detection of the disease. In this study, we present two
cutting-edge bi-fold weighted voting ensemble models that aim to boost the
effectiveness of weighted ensemble methods. These two proposed methods combine
the classification outcomes from multiple classifiers and determine the optimal
result by selecting the one with the highest probability in the first approach,
and the highest weighted prediction in the second technique. These approaches
significantly improve the overall performance of weighted ensemble techniques.
In the first proposed method, we improve the soft voting technique (SVT) by
introducing a novel unsupervised weight calculating schema (UWCS) to enhance
its weight assigning capability, known as the extended soft voting technique
(ESVT). Secondly, we propose a novel weighted method (NWM) by using the
proposed UWCS. Both of our approaches incorporate three distinct models: a
custom-built CNN, VGG-16, and InceptionResNetV2, which have been trained on
publicly available datasets. The effectiveness of our proposed systems is
evaluated through blind testing, where exceptional results are achieved. We
then establish a comparative analysis of the performance of our proposed
methods with that of SVT to show their superiority and effectiveness.
|
[
{
"created": "Sun, 31 Mar 2024 06:38:08 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Huang",
"PoTsang B.",
""
],
[
"Rizwan",
"Muhammad",
""
],
[
"Ali",
"Mehboob",
""
]
] |
The uncontrolled and unstructured growth of brain cells is known as a brain tumor, which has one of the highest mortality rates among all types of cancer. Due to limited diagnostic and treatment capabilities, brain tumors pose significant challenges, especially in third-world countries. Early diagnosis plays a vital role in effectively managing brain tumors and reducing mortality rates. However, the availability of diagnostic methods is hindered by various limitations, including high costs and lengthy result acquisition times, impeding early detection of the disease. In this study, we present two cutting-edge bi-fold weighted voting ensemble models that aim to boost the effectiveness of weighted ensemble methods. These two proposed methods combine the classification outcomes from multiple classifiers and determine the optimal result by selecting the one with the highest probability in the first approach, and the highest weighted prediction in the second technique. These approaches significantly improve the overall performance of weighted ensemble techniques. In the first proposed method, we improve the soft voting technique (SVT) by introducing a novel unsupervised weight calculating schema (UWCS) to enhance its weight assigning capability, known as the extended soft voting technique (ESVT). Secondly, we propose a novel weighted method (NWM) by using the proposed UWCS. Both of our approaches incorporate three distinct models: a custom-built CNN, VGG-16, and InceptionResNetV2, which have been trained on publicly available datasets. The effectiveness of our proposed systems is evaluated through blind testing, where exceptional results are achieved. We then establish a comparative analysis of the performance of our proposed methods with that of SVT to show their superiority and effectiveness.
|
1903.01344
|
Zhou Fan
|
Zhou Fan, Rui Su, Weinan Zhang and Yong Yu
|
Hybrid Actor-Critic Reinforcement Learning in Parameterized Action Space
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose a hybrid architecture of actor-critic algorithms for
reinforcement learning in parameterized action space, which consists of
multiple parallel sub-actor networks to decompose the structured action space
into simpler action spaces along with a critic network to guide the training of
all sub-actor networks. While this paper is mainly focused on parameterized
action space, the proposed architecture, which we call hybrid actor-critic, can
be extended to more general action spaces that have a hierarchical structure.
We present an instance of the hybrid actor-critic architecture based on
proximal policy optimization (PPO), which we refer to as hybrid proximal policy
optimization (H-PPO). Our experiments test H-PPO on a collection of tasks with
parameterized action space, where H-PPO demonstrates superior performance over
previous methods of parameterized action reinforcement learning.
|
[
{
"created": "Mon, 4 Mar 2019 16:33:15 GMT",
"version": "v1"
},
{
"created": "Sat, 25 May 2019 08:32:06 GMT",
"version": "v2"
},
{
"created": "Thu, 30 May 2019 13:02:58 GMT",
"version": "v3"
}
] |
2019-05-31
|
[
[
"Fan",
"Zhou",
""
],
[
"Su",
"Rui",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Yu",
"Yong",
""
]
] |
In this paper we propose a hybrid architecture of actor-critic algorithms for reinforcement learning in parameterized action space, which consists of multiple parallel sub-actor networks to decompose the structured action space into simpler action spaces along with a critic network to guide the training of all sub-actor networks. While this paper is mainly focused on parameterized action space, the proposed architecture, which we call hybrid actor-critic, can be extended to more general action spaces that have a hierarchical structure. We present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO), which we refer to as hybrid proximal policy optimization (H-PPO). Our experiments test H-PPO on a collection of tasks with parameterized action space, where H-PPO demonstrates superior performance over previous methods of parameterized action reinforcement learning.
|
2302.08783
|
Amit Attia
|
Amit Attia and Tomer Koren
|
SGD with AdaGrad Stepsizes: Full Adaptivity with High Probability to
Unknown Parameters, Unbounded Gradients and Affine Variance
|
27 pages
| null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study Stochastic Gradient Descent with AdaGrad stepsizes: a popular
adaptive (self-tuning) method for first-order stochastic optimization. Despite
being well studied, existing analyses of this method suffer from various
shortcomings: they either assume some knowledge of the problem parameters,
impose strong global Lipschitz conditions, or fail to give bounds that hold
with high probability. We provide a comprehensive analysis of this basic method
without any of these limitations, in both the convex and non-convex (smooth)
cases, that additionally supports a general "affine variance" noise model and
provides sharp rates of convergence in both the low-noise and
high-noise regimes.
|
[
{
"created": "Fri, 17 Feb 2023 09:46:08 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Jun 2023 15:59:35 GMT",
"version": "v2"
}
] |
2023-06-13
|
[
[
"Attia",
"Amit",
""
],
[
"Koren",
"Tomer",
""
]
] |
We study Stochastic Gradient Descent with AdaGrad stepsizes: a popular adaptive (self-tuning) method for first-order stochastic optimization. Despite being well studied, existing analyses of this method suffer from various shortcomings: they either assume some knowledge of the problem parameters, impose strong global Lipschitz conditions, or fail to give bounds that hold with high probability. We provide a comprehensive analysis of this basic method without any of these limitations, in both the convex and non-convex (smooth) cases, that additionally supports a general "affine variance" noise model and provides sharp rates of convergence in both the low-noise and high-noise regimes.
|
2101.07172
|
Chien-Hsiang Huang
|
Chien-Hsiang Huang, Hung-Yu Wu, and Youn-Long Lin
|
HarDNet-MSEG: A Simple Encoder-Decoder Polyp Segmentation Neural Network
that Achieves over 0.9 Mean Dice and 86 FPS
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose a new convolutional neural network called HarDNet-MSEG for polyp
segmentation. It achieves SOTA in both accuracy and inference speed on five
popular datasets. For Kvasir-SEG, HarDNet-MSEG delivers 0.904 mean Dice running
at 86.7 FPS on a GeForce RTX 2080 Ti GPU. It consists of a backbone and a
decoder. The backbone is a low memory traffic CNN called HarDNet68, which has
been successfully applied to various CV tasks including image classification,
object detection, multi-object tracking and semantic segmentation, etc. The
decoder part is inspired by the Cascaded Partial Decoder, known for fast and
accurate salient object detection. We have evaluated HarDNet-MSEG using those
five popular datasets. The code and all experiment details are available at
Github. https://github.com/james128333/HarDNet-MSEG
|
[
{
"created": "Mon, 18 Jan 2021 17:20:11 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jan 2021 15:58:47 GMT",
"version": "v2"
}
] |
2021-01-21
|
[
[
"Huang",
"Chien-Hsiang",
""
],
[
"Wu",
"Hung-Yu",
""
],
[
"Lin",
"Youn-Long",
""
]
] |
We propose a new convolutional neural network called HarDNet-MSEG for polyp segmentation. It achieves SOTA in both accuracy and inference speed on five popular datasets. For Kvasir-SEG, HarDNet-MSEG delivers 0.904 mean Dice running at 86.7 FPS on a GeForce RTX 2080 Ti GPU. It consists of a backbone and a decoder. The backbone is a low memory traffic CNN called HarDNet68, which has been successfully applied to various CV tasks including image classification, object detection, multi-object tracking and semantic segmentation, etc. The decoder part is inspired by the Cascaded Partial Decoder, known for fast and accurate salient object detection. We have evaluated HarDNet-MSEG using those five popular datasets. The code and all experiment details are available at Github. https://github.com/james128333/HarDNet-MSEG
|
2303.10888
|
Chanjun Park
|
Chanjun Park, Hyeonseok Moon, Seolhwa Lee, Jaehyung Seo, Sugyeong Eo
and Heuiseok Lim
|
Self-Improving-Leaderboard(SIL): A Call for Real-World Centric Natural
Language Processing Leaderboards
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Leaderboard systems allow researchers to objectively evaluate Natural
Language Processing (NLP) models and are typically used to identify models that
exhibit superior performance on a given task in a predetermined setting.
However, we argue that evaluation on a given test dataset is just one of many
performance indications of the model. In this paper, we claim leaderboard
competitions should also aim to identify models that exhibit the best
performance in a real-world setting. We highlight three issues with current
leaderboard systems: (1) the use of a single, static test set, (2) the
discrepancy between testing and real-world application, and (3) the tendency
for leaderboard-centric competition to be biased towards the test set. As a
solution, we propose a new paradigm of leaderboard systems that addresses
these issues. Through this study, we hope to induce a paradigm shift towards
more real-world-centric leaderboard competitions.
|
[
{
"created": "Mon, 20 Mar 2023 06:13:03 GMT",
"version": "v1"
}
] |
2023-03-21
|
[
[
"Park",
"Chanjun",
""
],
[
"Moon",
"Hyeonseok",
""
],
[
"Lee",
"Seolhwa",
""
],
[
"Seo",
"Jaehyung",
""
],
[
"Eo",
"Sugyeong",
""
],
[
"Lim",
"Heuiseok",
""
]
] |
Leaderboard systems allow researchers to objectively evaluate Natural Language Processing (NLP) models and are typically used to identify models that exhibit superior performance on a given task in a predetermined setting. However, we argue that evaluation on a given test dataset is just one of many performance indications of the model. In this paper, we claim leaderboard competitions should also aim to identify models that exhibit the best performance in a real-world setting. We highlight three issues with current leaderboard systems: (1) the use of a single, static test set, (2) the discrepancy between testing and real-world application, and (3) the tendency for leaderboard-centric competition to be biased towards the test set. As a solution, we propose a new paradigm of leaderboard systems that addresses these issues. Through this study, we hope to induce a paradigm shift towards more real-world-centric leaderboard competitions.
|
1706.06246
|
Shuling Wang
|
Dimitar Guelev, Shuling Wang, Naijun Zhan
|
Compositional Hoare-style Reasoning about Hybrid CSP in the Duration
Calculus
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deductive methods for the verification of hybrid systems vary in the format
of statements in correctness proofs. Building on the example of Hoare
triple-based reasoning, we have investigated several such methods for systems
described in Hybrid CSP, each based on a different assertion language, notation
for time, and notation for proofs, and each having its pros and cons with
respect to expressive power, compositionality and practical convenience. In
this paper we propose a new approach based on weakly monotonic time as the
semantics for interleaving, the Duration Calculus (DC) with infinite intervals
and general fixpoints as the logic language, and a new meaning for Hoare-like
triples which unifies assertions and temporal conditions. We include a proof
system for reasoning about the properties of systems written in the new form of
triples that is complete relative to validity in DC.
|
[
{
"created": "Tue, 20 Jun 2017 02:44:22 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jun 2017 02:13:04 GMT",
"version": "v2"
}
] |
2017-06-29
|
[
[
"Guelev",
"Dimitar",
""
],
[
"Wang",
"Shuling",
""
],
[
"Zhan",
"Naijun",
""
]
] |
Deductive methods for the verification of hybrid systems vary in the format of statements in correctness proofs. Building on the example of Hoare triple-based reasoning, we have investigated several such methods for systems described in Hybrid CSP, each based on a different assertion language, notation for time, and notation for proofs, and each having its pros and cons with respect to expressive power, compositionality and practical convenience. In this paper we propose a new approach based on weakly monotonic time as the semantics for interleaving, the Duration Calculus (DC) with infinite intervals and general fixpoints as the logic language, and a new meaning for Hoare-like triples which unifies assertions and temporal conditions. We include a proof system for reasoning about the properties of systems written in the new form of triples that is complete relative to validity in DC.
|
1803.07180
|
Abraham P. Vinod
|
Abraham P. Vinod and Meeko M. K. Oishi
|
Probabilistic Occupancy Function and Sets Using Forward Stochastic
Reachability for Rigid-Body Dynamic Obstacles
|
Updated text for submission to IEEE Transactions on Automatic Control
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present theory and algorithms for the computation of probability-weighted
"keep-out" sets to assure probabilistically safe navigation in the presence of
multiple rigid body obstacles with stochastic dynamics. Our forward stochastic
reachability-based approach characterizes the stochasticity of the future
obstacle states in a grid-free and recursion-free manner, using Fourier
transforms and computational geometry. We consider discrete-time Markovian
switched systems with affine parameter-varying stochastic subsystems (DMSP) as
the obstacle dynamics, which includes Markov jump affine systems and
discrete-time affine parameter-varying stochastic systems (DPV). We define a
probabilistic occupancy function, to describe the probability that a given
state is occupied by a rigid body obstacle with stochastic dynamics at a given
time; keep-out sets are the super-level sets of this occupancy function. We
provide sufficient conditions that ensure convexity and compactness of these
keep-out sets for DPV obstacle dynamics. We also propose two computationally
efficient algorithms to overapproximate the keep-out sets: a tight polytopic
approximation using projections, and an overapproximation using the Minkowski
sum.
For DMSP obstacle dynamics, we compute a union of convex and compact sets that
covers the potentially non-convex keep-out set. Numerical simulations show the
efficacy of the proposed algorithms for a modified version of the classical
unicycle dynamics, modeled as a DMSP.
|
[
{
"created": "Mon, 19 Mar 2018 22:15:28 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Sep 2018 04:27:27 GMT",
"version": "v2"
}
] |
2018-09-20
|
[
[
"Vinod",
"Abraham P.",
""
],
[
"Oishi",
"Meeko M. K.",
""
]
] |
We present theory and algorithms for the computation of probability-weighted "keep-out" sets to assure probabilistically safe navigation in the presence of multiple rigid body obstacles with stochastic dynamics. Our forward stochastic reachability-based approach characterizes the stochasticity of the future obstacle states in a grid-free and recursion-free manner, using Fourier transforms and computational geometry. We consider discrete-time Markovian switched systems with affine parameter-varying stochastic subsystems (DMSP) as the obstacle dynamics, which includes Markov jump affine systems and discrete-time affine parameter-varying stochastic systems (DPV). We define a probabilistic occupancy function, to describe the probability that a given state is occupied by a rigid body obstacle with stochastic dynamics at a given time; keep-out sets are the super-level sets of this occupancy function. We provide sufficient conditions that ensure convexity and compactness of these keep-out sets for DPV obstacle dynamics. We also propose two computationally efficient algorithms to overapproximate the keep-out sets: a tight polytopic approximation using projections, and an overapproximation using the Minkowski sum. For DMSP obstacle dynamics, we compute a union of convex and compact sets that covers the potentially non-convex keep-out set. Numerical simulations show the efficacy of the proposed algorithms for a modified version of the classical unicycle dynamics, modeled as a DMSP.
|
2101.00433
|
Michael Saxon
|
Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang
Wang
|
Modeling Disclosive Transparency in NLP Application Descriptions
|
To appear at EMNLP 2021. 15 pages, 10 figures, 7 tables
|
Proceedings of the 2021 Conference on Empirical Methods in Natural
Language Processing, pp 2023-2037
|
10.18653/v1/2021.emnlp-main.153
| null |
cs.CL cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Broader disclosive transparency (truth and clarity in communication
regarding the function of AI systems) is widely considered desirable.
Unfortunately, it is a nebulous concept, difficult to both define and quantify.
This is problematic, as previous work has demonstrated possible trade-offs and
negative consequences to disclosive transparency, such as a confusion effect,
where "too much information" clouds a reader's understanding of what a system
description means. Disclosive transparency's subjective nature has rendered
deep study into these problems and their remedies difficult. To improve this
state of affairs, we introduce neural language model-based probabilistic
metrics to directly model disclosive transparency, and demonstrate that they
correlate with user and expert opinions of system transparency, making them a
valid objective proxy. Finally, we demonstrate the use of these metrics in a
pilot study quantifying the relationships between transparency, confusion, and
user perceptions in a corpus of real NLP system descriptions.
|
[
{
"created": "Sat, 2 Jan 2021 11:46:17 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Apr 2021 03:42:18 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Aug 2021 03:30:20 GMT",
"version": "v3"
},
{
"created": "Fri, 10 Sep 2021 17:54:54 GMT",
"version": "v4"
}
] |
2022-05-26
|
[
[
"Saxon",
"Michael",
""
],
[
"Levy",
"Sharon",
""
],
[
"Wang",
"Xinyi",
""
],
[
"Albalak",
"Alon",
""
],
[
"Wang",
"William Yang",
""
]
] |
Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable. Unfortunately, it is a nebulous concept, difficult to both define and quantify. This is problematic, as previous work has demonstrated possible trade-offs and negative consequences to disclosive transparency, such as a confusion effect, where "too much information" clouds a reader's understanding of what a system description means. Disclosive transparency's subjective nature has rendered deep study into these problems and their remedies difficult. To improve this state of affairs, we introduce neural language model-based probabilistic metrics to directly model disclosive transparency, and demonstrate that they correlate with user and expert opinions of system transparency, making them a valid objective proxy. Finally, we demonstrate the use of these metrics in a pilot study quantifying the relationships between transparency, confusion, and user perceptions in a corpus of real NLP system descriptions.
|
2012.01411
|
Fangjinhua Wang
|
Fangjinhua Wang, Silvano Galliani, Christoph Vogel, Pablo Speciale,
Marc Pollefeys
|
PatchmatchNet: Learned Multi-View Patchmatch Stereo
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present PatchmatchNet, a novel and learnable cascade formulation of
Patchmatch for high-resolution multi-view stereo. With high computation speed
and low memory requirement, PatchmatchNet can process higher resolution imagery
and is more suited to run on resource-limited devices than competitors that
employ 3D cost volume regularization. For the first time we introduce an
iterative multi-scale Patchmatch in an end-to-end trainable architecture and
improve the Patchmatch core algorithm with a novel and learned adaptive
propagation and evaluation scheme for each iteration. Extensive experiments
show a very competitive performance and generalization for our method on DTU,
Tanks & Temples and ETH3D, but at a significantly higher efficiency than all
existing top-performing models: at least two and a half times faster than
state-of-the-art methods, with half the memory usage.
|
[
{
"created": "Wed, 2 Dec 2020 18:59:02 GMT",
"version": "v1"
}
] |
2020-12-03
|
[
[
"Wang",
"Fangjinhua",
""
],
[
"Galliani",
"Silvano",
""
],
[
"Vogel",
"Christoph",
""
],
[
"Speciale",
"Pablo",
""
],
[
"Pollefeys",
"Marc",
""
]
] |
We present PatchmatchNet, a novel and learnable cascade formulation of Patchmatch for high-resolution multi-view stereo. With high computation speed and low memory requirement, PatchmatchNet can process higher resolution imagery and is more suited to run on resource-limited devices than competitors that employ 3D cost volume regularization. For the first time we introduce an iterative multi-scale Patchmatch in an end-to-end trainable architecture and improve the Patchmatch core algorithm with a novel and learned adaptive propagation and evaluation scheme for each iteration. Extensive experiments show a very competitive performance and generalization for our method on DTU, Tanks & Temples and ETH3D, but at a significantly higher efficiency than all existing top-performing models: at least two and a half times faster than state-of-the-art methods, with half the memory usage.
|
2010.02977
|
Hirokazu Kameoka
|
Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo, Shogo
Seki
|
VoiceGrad: Non-Parallel Any-to-Many Voice Conversion with Annealed
Langevin Dynamics
|
For more details on the baseline method used for comparison, please
refer to our article in arXiv:2008.12604
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a non-parallel any-to-many voice conversion (VC)
method termed VoiceGrad. Inspired by WaveGrad, a recently introduced novel
waveform generation method, VoiceGrad is based upon the concepts of score
matching and Langevin dynamics. It uses weighted denoising score matching to
train a score approximator, a fully convolutional network with a U-Net
structure designed to predict the gradient of the log density of the speech
feature sequences of multiple speakers, and performs VC by using annealed
Langevin dynamics to iteratively update an input feature sequence towards the
nearest stationary point of the target distribution based on the trained score
approximator network. Thanks to the nature of this concept, VoiceGrad enables
any-to-many VC, a VC scenario in which the speaker of input speech can be
arbitrary, and allows for non-parallel training, which requires no parallel
utterances or transcriptions.
|
[
{
"created": "Tue, 6 Oct 2020 19:09:37 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Oct 2020 09:59:40 GMT",
"version": "v2"
},
{
"created": "Sat, 9 Mar 2024 16:30:50 GMT",
"version": "v3"
}
] |
2024-03-12
|
[
[
"Kameoka",
"Hirokazu",
""
],
[
"Kaneko",
"Takuhiro",
""
],
[
"Tanaka",
"Kou",
""
],
[
"Hojo",
"Nobukatsu",
""
],
[
"Seki",
"Shogo",
""
]
] |
In this paper, we propose a non-parallel any-to-many voice conversion (VC) method termed VoiceGrad. Inspired by WaveGrad, a recently introduced novel waveform generation method, VoiceGrad is based upon the concepts of score matching and Langevin dynamics. It uses weighted denoising score matching to train a score approximator, a fully convolutional network with a U-Net structure designed to predict the gradient of the log density of the speech feature sequences of multiple speakers, and performs VC by using annealed Langevin dynamics to iteratively update an input feature sequence towards the nearest stationary point of the target distribution based on the trained score approximator network. Thanks to the nature of this concept, VoiceGrad enables any-to-many VC, a VC scenario in which the speaker of input speech can be arbitrary, and allows for non-parallel training, which requires no parallel utterances or transcriptions.
|
1803.08394
|
Andrey Kuehlkamp
|
Andrey Kuehlkamp and Kevin Bowyer
|
Found a good match: should I keep searching? - Accuracy and Performance
in Iris Matching Using 1-to-First Search
| null |
Image and Vision Computing vol 73, May 2018, pp. 17-27
|
10.1016/j.imavis.2018.03.003
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Iris recognition is used in many applications around the world, with
enrollment sizes as large as over one billion persons in India's Aadhaar
program. Large enrollment sizes can require special optimizations in order to
achieve fast database searches. One such optimization that has been used in
some operational scenarios is 1:First search. In this approach, instead of
scanning the entire database, the search is terminated when the first
sufficiently good match is found. This saves time, but ignores potentially
better matches that may exist in the unexamined portion of the enrollments. At
least one prominent and successful border-crossing program used this approach
for nearly a decade, in order to allow users a fast "token-free" search. Our
work investigates the search accuracy of 1:First and compares it to the
traditional 1:N search. Several different scenarios are considered, trying to
emulate real environments as closely as possible: a range of enrollment sizes,
closed- and open-set configurations, two iris matchers, and different
permutations of the galleries. Results confirm the expected accuracy
degradation using 1:First search, and also allow us to identify acceptable
working parameters where significant search time reduction is achieved, while
maintaining accuracy similar to 1:N search.
|
[
{
"created": "Thu, 22 Mar 2018 15:07:53 GMT",
"version": "v1"
}
] |
2018-04-20
|
[
[
"Kuehlkamp",
"Andrey",
""
],
[
"Bowyer",
"Kevin",
""
]
] |
Iris recognition is used in many applications around the world, with enrollment sizes as large as over one billion persons in India's Aadhaar program. Large enrollment sizes can require special optimizations in order to achieve fast database searches. One such optimization that has been used in some operational scenarios is 1:First search. In this approach, instead of scanning the entire database, the search is terminated when the first sufficiently good match is found. This saves time, but ignores potentially better matches that may exist in the unexamined portion of the enrollments. At least one prominent and successful border-crossing program used this approach for nearly a decade, in order to allow users a fast "token-free" search. Our work investigates the search accuracy of 1:First and compares it to the traditional 1:N search. Several different scenarios are considered, trying to emulate real environments as closely as possible: a range of enrollment sizes, closed- and open-set configurations, two iris matchers, and different permutations of the galleries. Results confirm the expected accuracy degradation using 1:First search, and also allow us to identify acceptable working parameters where significant search time reduction is achieved, while maintaining accuracy similar to 1:N search.
|
1708.06850
|
Enoch Yeung Ph.D.
|
Enoch Yeung, Soumya Kundu, Nathan Hodas
|
Learning Deep Neural Network Representations for Koopman Operators of
Nonlinear Dynamical Systems
|
16 pages, 5 figures
| null | null | null |
cs.LG cs.AI math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Koopman operator has recently garnered much attention for its value in
dynamical systems analysis and data-driven model discovery. However, its
application has been hindered by the computational complexity of extended
dynamic mode decomposition; this requires a combinatorially large basis set to
adequately describe many nonlinear systems of interest, e.g. cyber-physical
infrastructure systems, biological networks, social systems, and fluid
dynamics. Often the dictionaries generated for these problems are manually
curated, requiring domain-specific knowledge and painstaking tuning. In this
paper we introduce a deep learning framework for learning Koopman operators of
nonlinear dynamical systems. We show that this novel method automatically
selects efficient deep dictionaries, outperforming state-of-the-art methods. We
benchmark this method on partially observed nonlinear systems, including the
glycolytic oscillator and show it is able to predict quantitatively 100 steps
into the future, using only a single timepoint, and qualitative oscillatory
behavior 400 steps into the future.
|
[
{
"created": "Tue, 22 Aug 2017 23:32:19 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Nov 2017 19:36:19 GMT",
"version": "v2"
}
] |
2017-12-11
|
[
[
"Yeung",
"Enoch",
""
],
[
"Kundu",
"Soumya",
""
],
[
"Hodas",
"Nathan",
""
]
] |
The Koopman operator has recently garnered much attention for its value in dynamical systems analysis and data-driven model discovery. However, its application has been hindered by the computational complexity of extended dynamic mode decomposition; this requires a combinatorially large basis set to adequately describe many nonlinear systems of interest, e.g. cyber-physical infrastructure systems, biological networks, social systems, and fluid dynamics. Often the dictionaries generated for these problems are manually curated, requiring domain-specific knowledge and painstaking tuning. In this paper we introduce a deep learning framework for learning Koopman operators of nonlinear dynamical systems. We show that this novel method automatically selects efficient deep dictionaries, outperforming state-of-the-art methods. We benchmark this method on partially observed nonlinear systems, including the glycolytic oscillator and show it is able to predict quantitatively 100 steps into the future, using only a single timepoint, and qualitative oscillatory behavior 400 steps into the future.
|
1811.01768
|
Isabella Pozzi
|
Isabella Pozzi, Sander Boht\'e and Pieter Roelfsema
|
A Biologically Plausible Learning Rule for Deep Learning in the Brain
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Researchers have proposed that deep learning, which is providing important
progress in a wide range of high complexity tasks, might inspire new insights
into learning in the brain. However, the methods used for deep learning by
artificial neural networks are biologically unrealistic and would need to be
replaced by biologically realistic counterparts. Previous biologically
plausible reinforcement learning rules, like AGREL and AuGMEnT, showed
promising results but focused on shallow networks with three layers. Will these
learning rules also generalize to networks with more layers and can they handle
tasks of higher complexity? We demonstrate the learning scheme on classical and
hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast
as direct reward tasks, for fully connected, convolutional, and locally
connected architectures. We show that our learning rule - Q-AGREL - performs
comparably to supervised learning via error-backpropagation, with this type of
trial-and-error reinforcement learning requiring only 1.5-2.5 times more
epochs, even when classifying 100 different classes as in CIFAR100. Our results
provide new insights into how deep learning may be implemented in the brain.
|
[
{
"created": "Mon, 5 Nov 2018 15:01:59 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jun 2019 07:37:39 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Jul 2019 09:45:26 GMT",
"version": "v3"
}
] |
2019-07-03
|
[
[
"Pozzi",
"Isabella",
""
],
[
"Bohté",
"Sander",
""
],
[
"Roelfsema",
"Pieter",
""
]
] |
Researchers have proposed that deep learning, which is providing important progress in a wide range of high complexity tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning by artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, like AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers and can they handle tasks of higher complexity? We demonstrate the learning scheme on classical and hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast as direct reward tasks, for fully connected, convolutional, and locally connected architectures. We show that our learning rule - Q-AGREL - performs comparably to supervised learning via error-backpropagation, with this type of trial-and-error reinforcement learning requiring only 1.5-2.5 times more epochs, even when classifying 100 different classes as in CIFAR100. Our results provide new insights into how deep learning may be implemented in the brain.
|
2202.05917
|
Delaram Kahrobaei
|
Delaram Kahrobaei, Ram\'on Flores, Marialaura Noce
|
Group-based Cryptography in the Quantum Era
|
To appear in the Notices of the American Mathematical Society
| null | null | null |
cs.CR math.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this expository article we present an overview of the current
state-of-the-art in post-quantum group-based cryptography. We describe several
families of groups that have been proposed as platforms, with special emphasis
on polycyclic groups and graph groups, dealing in particular with their
algorithmic properties and cryptographic applications. We then describe some
applications of combinatorial algebra in fully homomorphic encryption. In the
end, we discuss several open problems in this direction.
|
[
{
"created": "Fri, 11 Feb 2022 22:01:45 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Feb 2022 17:22:40 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Feb 2022 15:01:28 GMT",
"version": "v3"
},
{
"created": "Tue, 17 Jan 2023 11:52:12 GMT",
"version": "v4"
}
] |
2023-01-18
|
[
[
"Kahrobaei",
"Delaram",
""
],
[
"Flores",
"Ramón",
""
],
[
"Noce",
"Marialaura",
""
]
] |
In this expository article we present an overview of the current state-of-the-art in post-quantum group-based cryptography. We describe several families of groups that have been proposed as platforms, with special emphasis on polycyclic groups and graph groups, dealing in particular with their algorithmic properties and cryptographic applications. We then describe some applications of combinatorial algebra in fully homomorphic encryption. In the end, we discuss several open problems in this direction.
|
1811.11992
|
Hui Liu Mr
|
Ruijian He, Bo Yang, Hui Liu, Zhangxin Chen
|
A New In-Situ Combustion Simulator for Parallel Computers
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a competitive recovery method for heavy oil, In-Situ Combustion (ISC)
shows its great potential accompanied by technological advances in recent
years. Reservoir simulation will play an indispensable role in the prediction
of the implementation of ISC projects. Given the computational complexity, it
is imperative to develop an effective and robust parallel in-situ combustion
simulator. In this paper, a mathematical model for In-Situ Combustion is
proposed, which takes full consideration of the related physical phenomena,
including multi-dimensional multi-component three-phase flow, heat convection
and conduction, chemical reactions, and mass transfer between phases. In the
mathematical model, different governing equations and constraints are involved,
forming a complicated PDE (partial differential equation) system. For physical
and chemical behaviors, some special treatments for the ISC simulator are
discussed and applied. Also, a modified PER (Pseudo-Equilibrium Ratio) method
is proposed in this work. A fully implicit scheme is applied, and
discretization is implemented with the FDM (Finite Difference Method). In
solving nonlinear systems, the Newton Method is introduced, and both numerical
and analytical Jacobian matrices are applied. Due to the complexity of an ISC
problem, an appropriate decoupling method must be considered. Thus, the
Gauss-Jordan transformation is introduced. Then, with certain preconditioners
and iterative solvers, a numerical solution can be obtained. The results of
different models are given, which are validated against the results from CMG
STARS. Also, the scalability of parallelization is demonstrated, indicating the
excellent performance of parallel computing. This accurate, efficient, parallel
ISC simulator applies to complex reservoir models.
|
[
{
"created": "Thu, 29 Nov 2018 07:25:14 GMT",
"version": "v1"
}
] |
2018-11-30
|
[
[
"He",
"Ruijian",
""
],
[
"Yang",
"Bo",
""
],
[
"Liu",
"Hui",
""
],
[
"Chen",
"Zhangxin",
""
]
] |
As a competitive recovery method for heavy oil, In-Situ Combustion (ISC) shows its great potential accompanied by technological advances in recent years. Reservoir simulation will play an indispensable role in the prediction of the implementation of ISC projects. Given the computational complexity, it is imperative to develop an effective and robust parallel in-situ combustion simulator. In this paper, a mathematical model for In-Situ Combustion is proposed, which takes full consideration of the related physical phenomena, including multi-dimensional multi-component three-phase flow, heat convection and conduction, chemical reactions, and mass transfer between phases. In the mathematical model, different governing equations and constraints are involved, forming a complicated PDE (partial differential equation) system. For physical and chemical behaviors, some special treatments for the ISC simulator are discussed and applied. Also, a modified PER (Pseudo-Equilibrium Ratio) method is proposed in this work. A fully implicit scheme is applied, and discretization is implemented with the FDM (Finite Difference Method). In solving nonlinear systems, the Newton Method is introduced, and both numerical and analytical Jacobian matrices are applied. Due to the complexity of an ISC problem, an appropriate decoupling method must be considered. Thus, the Gauss-Jordan transformation is introduced. Then, with certain preconditioners and iterative solvers, a numerical solution can be obtained. The results of different models are given, which are validated against the results from CMG STARS. Also, the scalability of parallelization is demonstrated, indicating the excellent performance of parallel computing. This accurate, efficient, parallel ISC simulator applies to complex reservoir models.
|
1902.04255
|
Mumin Cebe
|
Mumin Cebe, Kemal Akkaya
|
Communication-efficient Certificate Revocation Management for Advanced
Metering Infrastructure and IoT
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advanced Metering Infrastructure forms a communication network for the
collection of power data from smart meters in the Smart Grid. While the
communication between smart meters can be secured using public-key
cryptography, public-key cryptography still faces challenges in terms of
certificate revocation and management, particularly the distribution and
storage overhead of revoked certificates. To address this challenge, in this
paper, we propose a novel revocation management approach utilizing
cryptographic accumulators, which reduces the space requirements for revocation
information significantly and thus enables efficient distribution of such
information to all smart meters. We implemented the proposed approach on both
the ns-3 network simulator and a testbed. We demonstrated its superior
performance with respect to traditional methods for revocation management.
|
[
{
"created": "Tue, 12 Feb 2019 06:30:17 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Aug 2019 15:49:00 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Aug 2020 22:36:17 GMT",
"version": "v3"
}
] |
2020-08-07
|
[
[
"Cebe",
"Mumin",
""
],
[
"Akkaya",
"Kemal",
""
]
] |
Advanced Metering Infrastructure forms a communication network for the collection of power data from smart meters in the Smart Grid. While the communication between smart meters can be secured using public-key cryptography, public-key cryptography still faces challenges in terms of certificate revocation and management, particularly the distribution and storage overhead of revoked certificates. To address this challenge, in this paper, we propose a novel revocation management approach utilizing cryptographic accumulators, which reduces the space requirements for revocation information significantly and thus enables efficient distribution of such information to all smart meters. We implemented the proposed approach on both the ns-3 network simulator and a testbed. We demonstrated its superior performance with respect to traditional methods for revocation management.
|
2401.14526
|
Anna Feldman
|
Patrick Lee, Alain Chirino Trujillo, Diana Cuevas Plancarte, Olumide
Ebenezer Ojo, Xinyi Liu, Iyanuoluwa Shode, Yuan Zhao, Jing Peng, Anna Feldman
|
MEDs for PETs: Multilingual Euphemism Disambiguation for Potentially
Euphemistic Terms
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This study investigates the computational processing of euphemisms, a
universal linguistic phenomenon, across multiple languages. We train a
multilingual transformer model (XLM-RoBERTa) to disambiguate potentially
euphemistic terms (PETs) in multilingual and cross-lingual settings. In line
with current trends, we demonstrate that zero-shot learning across languages
takes place. We also show cases where multilingual models perform better on the
task compared to monolingual models by a statistically significant margin,
indicating that multilingual data presents additional opportunities for models
to learn about cross-lingual, computational properties of euphemisms. In a
follow-up analysis, we focus on universal euphemistic "categories" such as
death and bodily functions among others. We test to see whether cross-lingual
data of the same domain is more important than within-language data of other
domains to further understand the nature of the cross-lingual transfer.
|
[
{
"created": "Thu, 25 Jan 2024 21:38:30 GMT",
"version": "v1"
}
] |
2024-01-29
|
[
[
"Lee",
"Patrick",
""
],
[
"Trujillo",
"Alain Chirino",
""
],
[
"Plancarte",
"Diana Cuevas",
""
],
[
"Ojo",
"Olumide Ebenezer",
""
],
[
"Liu",
"Xinyi",
""
],
[
"Shode",
"Iyanuoluwa",
""
],
[
"Zhao",
"Yuan",
""
],
[
"Peng",
"Jing",
""
],
[
"Feldman",
"Anna",
""
]
] |
This study investigates the computational processing of euphemisms, a universal linguistic phenomenon, across multiple languages. We train a multilingual transformer model (XLM-RoBERTa) to disambiguate potentially euphemistic terms (PETs) in multilingual and cross-lingual settings. In line with current trends, we demonstrate that zero-shot learning across languages takes place. We also show cases where multilingual models perform better on the task compared to monolingual models by a statistically significant margin, indicating that multilingual data presents additional opportunities for models to learn about cross-lingual, computational properties of euphemisms. In a follow-up analysis, we focus on universal euphemistic "categories" such as death and bodily functions among others. We test to see whether cross-lingual data of the same domain is more important than within-language data of other domains to further understand the nature of the cross-lingual transfer.
|
1211.3169
|
Pierre-Olivier Amblard
|
Pierre-Olivier Amblard and Olivier J. J. Michel
|
The relation between Granger causality and directed information theory:
a review
| null | null |
10.3390/e15010113
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report reviews the conceptual and theoretical links between Granger
causality and directed information theory. We begin with a short historical
tour of Granger causality, concentrating on its closeness to information
theory. The definitions of Granger causality based on prediction are recalled,
and the importance of the observation set is discussed. We present the
definitions based on conditional independence. The notion of instantaneous
coupling is included in the definitions. The concept of Granger causality
graphs is discussed. We present directed information theory from the
perspective of studies of causal influences between stochastic processes.
Causal conditioning appears to be the cornerstone for the relation between
information theory and Granger causality. In the bivariate case, the
fundamental measure is the directed information, which decomposes as the sum of
the transfer entropies and a term quantifying instantaneous coupling. We show
the decomposition of the mutual information into the sums of the transfer
entropies and the instantaneous coupling measure, a relation known for the
linear Gaussian case. We study the multivariate case, showing that the useful
decomposition is blurred by instantaneous coupling. The links are further
developed by studying how measures based on directed information theory
naturally emerge from Granger causality inference frameworks as hypothesis
testing.
|
[
{
"created": "Wed, 14 Nov 2012 00:13:27 GMT",
"version": "v1"
}
] |
2015-06-12
|
[
[
"Amblard",
"Pierre-Olivier",
""
],
[
"Michel",
"Olivier J. J.",
""
]
] |
This report reviews the conceptual and theoretical links between Granger causality and directed information theory. We begin with a short historical tour of Granger causality, concentrating on its closeness to information theory. The definitions of Granger causality based on prediction are recalled, and the importance of the observation set is discussed. We present the definitions based on conditional independence. The notion of instantaneous coupling is included in the definitions. The concept of Granger causality graphs is discussed. We present directed information theory from the perspective of studies of causal influences between stochastic processes. Causal conditioning appears to be the cornerstone for the relation between information theory and Granger causality. In the bivariate case, the fundamental measure is the directed information, which decomposes as the sum of the transfer entropies and a term quantifying instantaneous coupling. We show the decomposition of the mutual information into the sums of the transfer entropies and the instantaneous coupling measure, a relation known for the linear Gaussian case. We study the multivariate case, showing that the useful decomposition is blurred by instantaneous coupling. The links are further developed by studying how measures based on directed information theory naturally emerge from Granger causality inference frameworks as hypothesis testing.
|
1911.07755
|
Alberto Marchesi
|
Alberto Marchesi, Francesco Trov\`o, Nicola Gatti
|
Learning Probably Approximately Correct Maximin Strategies in
Simulation-Based Games with Infinite Strategy Spaces
| null | null | null | null |
cs.GT cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We tackle the problem of learning equilibria in simulation-based games. In
such games, the players' utility functions cannot be described analytically, as
they are given through a black-box simulator that can be queried to obtain
noisy estimates of the utilities. This is the case in many real-world games in
which a complete description of the elements involved is not available upfront,
such as complex military settings and online auctions. In these situations, one
usually needs to run costly simulation processes to get an accurate estimate of
the game outcome. As a result, solving these games begets the challenge of
designing learning algorithms that can find (approximate) equilibria with high
confidence, using as few simulator queries as possible. Moreover, since running
the simulator during the game is unfeasible, the algorithms must first perform
a pure exploration learning phase and, then, use the (approximate) equilibrium
learned this way to play the game. In this work, we focus on two-player
zero-sum games with infinite strategy spaces. Drawing from the best arm
identification literature, we design two algorithms with theoretical guarantees
to learn maximin strategies in these games. The first one works in the
fixed-confidence setting, guaranteeing the desired confidence level while
minimizing the number of queries. Instead, the second algorithm fits the
fixed-budget setting, maximizing the confidence without exceeding the given
maximum number of queries. First, we formally prove {\delta}-PAC theoretical
guarantees for our algorithms under some regularity assumptions, which are
encoded by letting the utility functions be drawn from a Gaussian process.
Then, we experimentally evaluate our techniques on a testbed made of randomly
generated games and instances representing simple real-world security settings.
|
[
{
"created": "Mon, 18 Nov 2019 16:37:08 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Feb 2020 15:59:44 GMT",
"version": "v2"
}
] |
2020-02-26
|
[
[
"Marchesi",
"Alberto",
""
],
[
"Trovò",
"Francesco",
""
],
[
"Gatti",
"Nicola",
""
]
] |
We tackle the problem of learning equilibria in simulation-based games. In such games, the players' utility functions cannot be described analytically, as they are given through a black-box simulator that can be queried to obtain noisy estimates of the utilities. This is the case in many real-world games in which a complete description of the elements involved is not available upfront, such as complex military settings and online auctions. In these situations, one usually needs to run costly simulation processes to get an accurate estimate of the game outcome. As a result, solving these games begets the challenge of designing learning algorithms that can find (approximate) equilibria with high confidence, using as few simulator queries as possible. Moreover, since running the simulator during the game is unfeasible, the algorithms must first perform a pure exploration learning phase and, then, use the (approximate) equilibrium learned this way to play the game. In this work, we focus on two-player zero-sum games with infinite strategy spaces. Drawing from the best arm identification literature, we design two algorithms with theoretical guarantees to learn maximin strategies in these games. The first one works in the fixed-confidence setting, guaranteeing the desired confidence level while minimizing the number of queries. Instead, the second algorithm fits the fixed-budget setting, maximizing the confidence without exceeding the given maximum number of queries. First, we formally prove {\delta}-PAC theoretical guarantees for our algorithms under some regularity assumptions, which are encoded by letting the utility functions be drawn from a Gaussian process. Then, we experimentally evaluate our techniques on a testbed made of randomly generated games and instances representing simple real-world security settings.
|
2104.07324
|
Shayan Hashemi
|
Shayan Hashemi, Mika M\"antyl\"a
|
OneLog: Towards End-to-End Training in Software Log Anomaly Detection
| null | null |
10.1007/s10515-024-00428-x
| null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the growth of online services, IoT devices, and DevOps-oriented software
development, software log anomaly detection is becoming increasingly important.
Prior works mainly follow a traditional four-staged architecture (Preprocessor,
Parser, Vectorizer, and Classifier). This paper proposes OneLog, which utilizes
a single Deep Neural Network (DNN) instead of multiple separate components.
OneLog harnesses Convolutional Neural Networks (CNN) at the character level to
take digits, numbers, and punctuation, which were removed in prior works, into
account alongside the main natural language text. We evaluate our approach on
six message- and sequence-based data sets: HDFS, Hadoop, BGL, Thunderbird,
Spirit, and Liberty. We experiment with OneLog in single-, multi-, and
cross-project setups. OneLog offers state-of-the-art performance on our
datasets. OneLog can utilize multi-project datasets simultaneously during
training, which suggests our model can generalize between datasets.
Multi-project training also improves OneLog performance, making it ideal when
limited training data is available for an individual project. We also found
that cross-project anomaly detection is possible with a single project pair
(Liberty and Spirit). Analysis of model internals shows that OneLog has
multiple modes of detecting anomalies and that the model learns manually
validated parsing rules for the log messages. We conclude that character-based
CNNs are a promising approach toward end-to-end learning in log anomaly
detection. They offer good performance and generalization over multiple
datasets. We will make our scripts publicly available upon the acceptance of
this paper.
|
[
{
"created": "Thu, 15 Apr 2021 09:23:32 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2024 17:07:34 GMT",
"version": "v2"
}
] |
2024-08-06
|
[
[
"Hashemi",
"Shayan",
""
],
[
"Mäntylä",
"Mika",
""
]
] |
With the growth of online services, IoT devices, and DevOps-oriented software development, software log anomaly detection is becoming increasingly important. Prior works mainly follow a traditional four-staged architecture (Preprocessor, Parser, Vectorizer, and Classifier). This paper proposes OneLog, which utilizes a single Deep Neural Network (DNN) instead of multiple separate components. OneLog harnesses Convolutional Neural Networks (CNN) at the character level to take digits, numbers, and punctuation, which were removed in prior works, into account alongside the main natural language text. We evaluate our approach on six message- and sequence-based data sets: HDFS, Hadoop, BGL, Thunderbird, Spirit, and Liberty. We experiment with OneLog in single-, multi-, and cross-project setups. OneLog offers state-of-the-art performance on our datasets. OneLog can utilize multi-project datasets simultaneously during training, which suggests our model can generalize between datasets. Multi-project training also improves OneLog performance, making it ideal when limited training data is available for an individual project. We also found that cross-project anomaly detection is possible with a single project pair (Liberty and Spirit). Analysis of model internals shows that OneLog has multiple modes of detecting anomalies and that the model learns manually validated parsing rules for the log messages. We conclude that character-based CNNs are a promising approach toward end-to-end learning in log anomaly detection. They offer good performance and generalization over multiple datasets. We will make our scripts publicly available upon the acceptance of this paper.
|
2103.06766
|
Jan T\"onshoff
|
Jan Toenshoff, Neta Friedman, Martin Grohe, Benny Kimelfeld
|
Stable Tuple Embeddings for Dynamic Databases
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of computing an embedding of the tuples of a relational
database in a manner that is extensible to dynamic changes of the database. In
this problem, the embedding should be stable in the sense that it should not
change on the existing tuples due to the embedding of newly inserted tuples (as
database applications might already rely on existing embeddings); at the same
time, the embedding of all tuples, old and new, should retain high quality.
This task is challenging since inter-dependencies among the embeddings of
different entities are inherent in state-of-the-art embedding techniques for
structured data. We study two approaches to solving the problem. The first is
an adaptation of Node2Vec to dynamic databases. The second is the FoRWaRD
algorithm (Foreign Key Random Walk Embeddings for Relational Databases) that
draws from embedding techniques for general graphs and knowledge graphs, and is
inherently utilizing the schema and its key and foreign-key constraints. We
evaluate the embedding algorithms using a collection of downstream tasks of
column prediction over geographical and biological domains. We find that in the
traditional static setting, our two embedding methods achieve comparable
results that are compatible with the state-of-the-art for the specific
applications. In the dynamic setting, we find that the FoRWaRD algorithm
generally outperforms and runs faster than the alternatives, and moreover, it
features only a mild reduction of quality even when the database consists of
more than half newly inserted tuples after the initial training of the
embedding.
|
[
{
"created": "Thu, 11 Mar 2021 16:23:03 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Sep 2022 16:48:58 GMT",
"version": "v2"
}
] |
2022-09-28
|
[
[
"Toenshoff",
"Jan",
""
],
[
"Friedman",
"Neta",
""
],
[
"Grohe",
"Martin",
""
],
[
"Kimelfeld",
"Benny",
""
]
] |
We study the problem of computing an embedding of the tuples of a relational database in a manner that is extensible to dynamic changes of the database. In this problem, the embedding should be stable in the sense that it should not change on the existing tuples due to the embedding of newly inserted tuples (as database applications might already rely on existing embeddings); at the same time, the embedding of all tuples, old and new, should retain high quality. This task is challenging since inter-dependencies among the embeddings of different entities are inherent in state-of-the-art embedding techniques for structured data. We study two approaches to solving the problem. The first is an adaptation of Node2Vec to dynamic databases. The second is the FoRWaRD algorithm (Foreign Key Random Walk Embeddings for Relational Databases) that draws from embedding techniques for general graphs and knowledge graphs, and is inherently utilizing the schema and its key and foreign-key constraints. We evaluate the embedding algorithms using a collection of downstream tasks of column prediction over geographical and biological domains. We find that in the traditional static setting, our two embedding methods achieve comparable results that are compatible with the state-of-the-art for the specific applications. In the dynamic setting, we find that the FoRWaRD algorithm generally outperforms and runs faster than the alternatives, and moreover, it features only a mild reduction of quality even when the database consists of more than half newly inserted tuples after the initial training of the embedding.
|
2203.09848
|
Marcos Faundez-Zanuy
|
Enric Sesa-Nogueras, Marcos Faundez-Zanuy, Josep Roure-Alcob\'e
|
Gender classification by means of online uppercase handwriting: A
text-dependent allographic approach
|
25 pages, published in Cogn Comput 8, pages 15 to 29, year 2016
|
Cognitive computation vol. 8 pages 15-19, 2016
|
10.1007/s12559-015-9332-1
| null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a gender classification scheme based on online
handwriting. Using samples acquired with a digital tablet that captures the
dynamics of the writing, it classifies the writer as male or female. The
method proposed is allographic, regarding strokes as the structural units of
handwriting. Strokes performed while the writing device is not exerting any
pressure on the writing surface, pen-up (in-air) strokes, are also taken into
account. The method is also text-dependent, meaning that training and testing
are done with exactly the same text. Text-dependency allows classification to
be performed with very small amounts of text. Experimentation, performed with
samples from the BiosecurID database, yields results that fall in the range of
the classification averages expected from human judges. With only four
repetitions of a single uppercase word, the average rate of well-classified
writers is 68%; with sixteen words, the rate rises to an average of 72.6%.
Statistical analysis reveals that the aforementioned rates are highly
significant. In order to explore the classification potential of the pen-up
strokes, these are also considered. Although in this case the results are not
conclusive, an outstanding average of 74% of well-classified writers is
obtained when information from pen-up strokes is combined with information from
pen-down ones.
|
[
{
"created": "Fri, 18 Mar 2022 10:37:19 GMT",
"version": "v1"
}
] |
2022-03-21
|
[
[
"Sesa-Nogueras",
"Enric",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Roure-Alcobé",
"Josep",
""
]
] |
This paper presents a gender classification scheme based on online handwriting. Using samples acquired with a digital tablet that captures the dynamics of the writing, it classifies the writer as male or female. The method proposed is allographic, regarding strokes as the structural units of handwriting. Strokes performed while the writing device is not exerting any pressure on the writing surface, pen-up (in-air) strokes, are also taken into account. The method is also text-dependent, meaning that training and testing are done with exactly the same text. Text-dependency allows classification to be performed with very small amounts of text. Experimentation, performed with samples from the BiosecurID database, yields results that fall in the range of the classification averages expected from human judges. With only four repetitions of a single uppercase word, the average rate of well-classified writers is 68%; with sixteen words, the rate rises to an average of 72.6%. Statistical analysis reveals that the aforementioned rates are highly significant. In order to explore the classification potential of the pen-up strokes, these are also considered. Although in this case the results are not conclusive, an outstanding average of 74% of well-classified writers is obtained when information from pen-up strokes is combined with information from pen-down ones.
|
2208.06092
|
Adeilson Silva
|
Adeilson Antonio da Silva and Mauricio Pamplona Segundo
|
On deceiving malware classification with section injection
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We investigate how to modify executable files to deceive malware
classification systems. This work's main contribution is a methodology to
inject bytes across a malware file randomly and use it both as an attack to
decrease classification accuracy but also as a defensive method, augmenting the
data available for training. It respects the operating system file format to
make sure the malware will still execute after our injection and will not
change its behavior. We reproduced five state-of-the-art malware classification
approaches to evaluate our injection scheme: one based on GIST+KNN, three CNN
variations and one Gated CNN. We performed our experiments on a public dataset
with 9,339 malware samples from 25 different families. Our results show that a
mere increase of 7% in the malware size causes an accuracy drop between 25% and
40% for malware family classification. They show that an automatic malware
classification system may not be as trustworthy as initially reported in the
literature. We also evaluate using modified malware samples alongside the
original ones to increase network robustness against such attacks. Results show
that a combination of reordering malware sections and injecting random data can
improve overall performance of the classification. Code available at
https://github.com/adeilsonsilva/malware-injection.
|
[
{
"created": "Fri, 12 Aug 2022 02:43:17 GMT",
"version": "v1"
}
] |
2022-08-15
|
[
[
"da Silva",
"Adeilson Antonio",
""
],
[
"Segundo",
"Mauricio Pamplona",
""
]
] |
We investigate how to modify executable files to deceive malware classification systems. This work's main contribution is a methodology to inject bytes across a malware file randomly and use it both as an attack to decrease classification accuracy but also as a defensive method, augmenting the data available for training. It respects the operating system file format to make sure the malware will still execute after our injection and will not change its behavior. We reproduced five state-of-the-art malware classification approaches to evaluate our injection scheme: one based on GIST+KNN, three CNN variations and one Gated CNN. We performed our experiments on a public dataset with 9,339 malware samples from 25 different families. Our results show that a mere increase of 7% in the malware size causes an accuracy drop between 25% and 40% for malware family classification. They show that an automatic malware classification system may not be as trustworthy as initially reported in the literature. We also evaluate using modified malware samples alongside the original ones to increase network robustness against such attacks. Results show that a combination of reordering malware sections and injecting random data can improve overall performance of the classification. Code available at https://github.com/adeilsonsilva/malware-injection.
|
2107.06219
|
Xingxuan Zhang
|
Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin
Liu
|
Towards Unsupervised Domain Generalization
|
Accepted by CVPR2022
| null | null | null |
cs.CV cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain generalization (DG) aims to help models trained on a set of source
domains generalize better on unseen target domains. However, the performance of
current DG methods largely relies on sufficient labeled data, which is usually
costly or unavailable. Since unlabeled data are far more accessible, we seek to
explore how unsupervised learning can help deep models generalize across
domains. Specifically, we study a novel generalization problem called
unsupervised domain generalization (UDG), which aims to learn generalizable
models with unlabeled data and analyze the effects of pre-training on DG. In
UDG, models are pretrained with unlabeled data from various source domains
before being trained on labeled source data and eventually tested on unseen
target domains. Then we propose a method named Domain-Aware Representation
LearnING (DARLING) to cope with the significant and misleading heterogeneity
within unlabeled pretraining data and severe distribution shifts between source
and target data. Surprisingly we observe that DARLING can not only
counterbalance the scarcity of labeled data but also further strengthen the
generalization ability of models when the labeled data are insufficient. As a
pretraining approach, DARLING shows superior or comparable performance compared
with ImageNet pretraining protocol even when the available data are unlabeled
and of a vastly smaller amount compared to ImageNet, which may shed light on
improving generalization with large-scale unlabeled data.
|
[
{
"created": "Tue, 13 Jul 2021 16:20:50 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Apr 2022 03:36:35 GMT",
"version": "v2"
}
] |
2022-04-13
|
[
[
"Zhang",
"Xingxuan",
""
],
[
"Zhou",
"Linjun",
""
],
[
"Xu",
"Renzhe",
""
],
[
"Cui",
"Peng",
""
],
[
"Shen",
"Zheyan",
""
],
[
"Liu",
"Haoxin",
""
]
] |
Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains. However, the performance of current DG methods largely relies on sufficient labeled data, which is usually costly or unavailable. Since unlabeled data are far more accessible, we seek to explore how unsupervised learning can help deep models generalize across domains. Specifically, we study a novel generalization problem called unsupervised domain generalization (UDG), which aims to learn generalizable models with unlabeled data and analyze the effects of pre-training on DG. In UDG, models are pretrained with unlabeled data from various source domains before being trained on labeled source data and eventually tested on unseen target domains. Then we propose a method named Domain-Aware Representation LearnING (DARLING) to cope with the significant and misleading heterogeneity within unlabeled pretraining data and severe distribution shifts between source and target data. Surprisingly we observe that DARLING can not only counterbalance the scarcity of labeled data but also further strengthen the generalization ability of models when the labeled data are insufficient. As a pretraining approach, DARLING shows superior or comparable performance compared with ImageNet pretraining protocol even when the available data are unlabeled and of a vastly smaller amount compared to ImageNet, which may shed light on improving generalization with large-scale unlabeled data.
|
2012.14283
|
Sarah Schwettmann
|
Sarah Schwettmann, Hendrik Strobelt, Mauro Martino
|
Latent Compass: Creation by Navigation
|
3 pages, 2 figures, accepted at the 4th Workshop on Machine Learning
for Creativity and Design at NeurIPS 2020
| null | null | null |
cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In Marius von Senden's Space and Sight, a newly sighted blind patient
describes the experience of a corner as lemon-like, because corners "prick"
sight like lemons prick the tongue. Prickliness, here, is a dimension in the
feature space of sensory experience, an effect of the perceived on the
perceiver that arises where the two interact. In the account of the newly
sighted, an effect familiar from one interaction translates to a novel context.
Perception serves as the vehicle for generalization, in that an effect shared
across different experiences produces a concrete abstraction grounded in those
experiences. Cezanne and the post-impressionists, fluent in the language of
experience translation, realized that the way to paint a concrete form that
best reflected reality was to paint not what they saw, but what it was like to
see. We envision a future of creation using AI where what it is like to see is
replicable, transferrable, manipulable - part of the artist's palette that is
both grounded in a particular context, and generalizable beyond it.
An active line of research maps human-interpretable features onto directions
in GAN latent space. Supervised and self-supervised approaches that search for
anticipated directions or use off-the-shelf classifiers to drive image
manipulation in embedding space are limited in the variety of features they can
uncover. Unsupervised approaches that discover useful new directions show that
the space of perceptually meaningful directions is nowhere close to being fully
mapped. As this space is broad and full of creative potential, we want tools
for direction discovery that capture the richness and generalizability of human
perception. Our approach puts creators in the discovery loop during real-time
tool use, in order to identify directions that are perceptually meaningful to
them, and generate interpretable image translations along those directions.
|
[
{
"created": "Sun, 20 Dec 2020 04:18:23 GMT",
"version": "v1"
}
] |
2020-12-29
|
[
[
"Schwettmann",
"Sarah",
""
],
[
"Strobelt",
"Hendrik",
""
],
[
"Martino",
"Mauro",
""
]
] |
In Marius von Senden's Space and Sight, a newly sighted blind patient describes the experience of a corner as lemon-like, because corners "prick" sight like lemons prick the tongue. Prickliness, here, is a dimension in the feature space of sensory experience, an effect of the perceived on the perceiver that arises where the two interact. In the account of the newly sighted, an effect familiar from one interaction translates to a novel context. Perception serves as the vehicle for generalization, in that an effect shared across different experiences produces a concrete abstraction grounded in those experiences. Cezanne and the post-impressionists, fluent in the language of experience translation, realized that the way to paint a concrete form that best reflected reality was to paint not what they saw, but what it was like to see. We envision a future of creation using AI where what it is like to see is replicable, transferrable, manipulable - part of the artist's palette that is both grounded in a particular context, and generalizable beyond it. An active line of research maps human-interpretable features onto directions in GAN latent space. Supervised and self-supervised approaches that search for anticipated directions or use off-the-shelf classifiers to drive image manipulation in embedding space are limited in the variety of features they can uncover. Unsupervised approaches that discover useful new directions show that the space of perceptually meaningful directions is nowhere close to being fully mapped. As this space is broad and full of creative potential, we want tools for direction discovery that capture the richness and generalizability of human perception. Our approach puts creators in the discovery loop during real-time tool use, in order to identify directions that are perceptually meaningful to them, and generate interpretable image translations along those directions.
|
2106.10468
|
Hou Pong Chan
|
Hou Pong Chan and Irwin King
|
A Condense-then-Select Strategy for Text Summarization
|
Accepted by Knowledge-Based Systems (KBS) journal
| null |
10.1016/j.knosys.2021.107235
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Select-then-compress is a popular hybrid framework for text summarization
due to its high efficiency. This framework first selects salient sentences and
then independently condenses each of the selected sentences into a concise
version. However, compressing sentences separately ignores the context
information of the document, and is therefore prone to delete salient
information. To address this limitation, we propose a novel
condense-then-select framework for text summarization. Our framework first
concurrently condenses each document sentence. Original document sentences and
their compressed versions then become the candidates for extraction. Finally,
an extractor utilizes the context information of the document to select
candidates and assembles them into a summary. If salient information is deleted
during condensing, the extractor can select an original sentence to retain the
information. Thus, our framework helps to avoid the loss of salient
information, while preserving the high efficiency of sentence-level
compression. Experimental results on the CNN/DailyMail, DUC-2002, and PubMed
datasets demonstrate that our framework outperforms the select-then-compress
framework and other strong baselines.
|
[
{
"created": "Sat, 19 Jun 2021 10:33:10 GMT",
"version": "v1"
}
] |
2021-06-22
|
[
[
"Chan",
"Hou Pong",
""
],
[
"King",
"Irwin",
""
]
] |
Select-then-compress is a popular hybrid framework for text summarization due to its high efficiency. This framework first selects salient sentences and then independently condenses each of the selected sentences into a concise version. However, compressing sentences separately ignores the context information of the document, and is therefore prone to delete salient information. To address this limitation, we propose a novel condense-then-select framework for text summarization. Our framework first concurrently condenses each document sentence. Original document sentences and their compressed versions then become the candidates for extraction. Finally, an extractor utilizes the context information of the document to select candidates and assembles them into a summary. If salient information is deleted during condensing, the extractor can select an original sentence to retain the information. Thus, our framework helps to avoid the loss of salient information, while preserving the high efficiency of sentence-level compression. Experimental results on the CNN/DailyMail, DUC-2002, and PubMed datasets demonstrate that our framework outperforms the select-then-compress framework and other strong baselines.
|
1902.04045
|
Panos Giannopoulos
|
Mikkel Abrahamsen and Panos Giannopoulos and Maarten L\"offler and
G\"unter Rote
|
Geometric Multicut
|
24 pages, 15 figures
|
Discrete & Computational Geometry 64 (2020), 575-607
|
10.1007/s00454-020-00232-w
| null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the following separation problem: Given a collection of colored
objects in the plane, compute a shortest "fence" $F$, i.e., a union of curves
of minimum total length, that separates every two objects of different colors.
Two objects are separated if $F$ contains a simple closed curve that has one
object in the interior and the other in the exterior. We refer to the problem
as GEOMETRIC $k$-CUT, where $k$ is the number of different colors, as it can be
seen as a geometric analogue to the well-studied multicut problem on graphs. We
first give an $O(n^4\log^3 n)$-time algorithm that computes an optimal fence
for the case where the input consists of polygons of two colors and $n$ corners
in total. We then show that the problem is NP-hard for the case of three
colors. Finally, we give a $(2-4/3k)$-approximation algorithm.
|
[
{
"created": "Mon, 11 Feb 2019 18:44:40 GMT",
"version": "v1"
}
] |
2021-05-11
|
[
[
"Abrahamsen",
"Mikkel",
""
],
[
"Giannopoulos",
"Panos",
""
],
[
"Löffler",
"Maarten",
""
],
[
"Rote",
"Günter",
""
]
] |
We study the following separation problem: Given a collection of colored objects in the plane, compute a shortest "fence" $F$, i.e., a union of curves of minimum total length, that separates every two objects of different colors. Two objects are separated if $F$ contains a simple closed curve that has one object in the interior and the other in the exterior. We refer to the problem as GEOMETRIC $k$-CUT, where $k$ is the number of different colors, as it can be seen as a geometric analogue to the well-studied multicut problem on graphs. We first give an $O(n^4\log^3 n)$-time algorithm that computes an optimal fence for the case where the input consists of polygons of two colors and $n$ corners in total. We then show that the problem is NP-hard for the case of three colors. Finally, we give a $(2-4/3k)$-approximation algorithm.
|
2401.09180
|
Antonio Almud\'evar
|
Antonio Almud\'evar and Th\'eo Mariotte and Alfonso Ortega and Marie
Tahon
|
Unsupervised Multiple Domain Translation through Controlled
Disentanglement in Variational Autoencoder
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Unsupervised Multiple Domain Translation is the task of transforming data
from one domain to other domains without having paired data to train the
systems. Typically, methods based on Generative Adversarial Networks (GANs) are
used to address this task. However, our proposal exclusively relies on a
modified version of a Variational Autoencoder. This modification consists of
the use of two latent variables disentangled in a controlled way by design. One
of these latent variables is imposed to depend exclusively on the domain, while
the other one must depend on the rest of the variability factors of the data.
Additionally, the conditions imposed over the domain latent variable allow for
better control and understanding of the latent space. We empirically
demonstrate that our approach works on different vision datasets improving the
performance of other well known methods. Finally, we prove that, indeed, one of
the latent variables stores all the information related to the domain and the
other one hardly contains any domain information.
|
[
{
"created": "Wed, 17 Jan 2024 12:43:28 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jan 2024 09:51:46 GMT",
"version": "v2"
}
] |
2024-01-19
|
[
[
"Almudévar",
"Antonio",
""
],
[
"Mariotte",
"Théo",
""
],
[
"Ortega",
"Alfonso",
""
],
[
"Tahon",
"Marie",
""
]
] |
Unsupervised Multiple Domain Translation is the task of transforming data from one domain to other domains without having paired data to train the systems. Typically, methods based on Generative Adversarial Networks (GANs) are used to address this task. However, our proposal exclusively relies on a modified version of a Variational Autoencoder. This modification consists of the use of two latent variables disentangled in a controlled way by design. One of these latent variables is imposed to depend exclusively on the domain, while the other one must depend on the rest of the variability factors of the data. Additionally, the conditions imposed over the domain latent variable allow for better control and understanding of the latent space. We empirically demonstrate that our approach works on different vision datasets improving the performance of other well known methods. Finally, we prove that, indeed, one of the latent variables stores all the information related to the domain and the other one hardly contains any domain information.
|
2405.13928
|
Ivan Damnjanovi\'c
|
Ivan Sto\v{s}i\'c, Ivan Damnjanovi\'c, \v{Z}arko Ran{\dj}elovi\'c
|
Counting the number of inequivalent arithmetic expressions on $n$
variables
| null | null | null | null |
cs.DM math.CO math.NT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An expression is any mathematical formula that contains certain formal
variables and operations to be executed in a specified order. In computer
science, it is usually convenient to represent each expression in the form of
an expression tree. Here, we consider only arithmetic expressions, i.e., those
that contain only the four standard arithmetic operations: addition,
subtraction, multiplication and division, alongside additive inversion. We
first provide certain theoretical results concerning the equivalence of such
expressions and then disclose a $\Theta(n^2)$ algorithm that computes the
number of inequivalent arithmetic expressions on $n$ distinct variables.
|
[
{
"created": "Wed, 22 May 2024 18:58:23 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Stošić",
"Ivan",
""
],
[
"Damnjanović",
"Ivan",
""
],
[
"Ranđelović",
"Žarko",
""
]
] |
An expression is any mathematical formula that contains certain formal variables and operations to be executed in a specified order. In computer science, it is usually convenient to represent each expression in the form of an expression tree. Here, we consider only arithmetic expressions, i.e., those that contain only the four standard arithmetic operations: addition, subtraction, multiplication and division, alongside additive inversion. We first provide certain theoretical results concerning the equivalence of such expressions and then disclose a $\Theta(n^2)$ algorithm that computes the number of inequivalent arithmetic expressions on $n$ distinct variables.
|
2107.06056
|
Prathamesh Kalamkar
|
Prathamesh Kalamkar, Janani Venugopalan Ph.D., Vivek Raghavan Ph.D
|
Indian Legal NLP Benchmarks : A Survey
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Availability of challenging benchmarks is the key to advancement of AI in a
specific field. Since Legal Text is significantly different from normal English
text, there is a need to create separate Natural Language Processing benchmarks
for Indian Legal Text which are challenging and focus on tasks specific to
Legal Systems. This will spur innovation in applications of Natural language
Processing for Indian Legal Text and will benefit the AI community and the Legal
fraternity. We review the existing work in this area and propose ideas to
create new benchmarks for Indian Legal Natural Language Processing.
|
[
{
"created": "Tue, 13 Jul 2021 13:10:10 GMT",
"version": "v1"
}
] |
2022-10-11
|
[
[
"Kalamkar",
"Prathamesh",
""
],
[
"D.",
"Janani Venugopalan Ph.",
""
],
[
"D",
"Vivek Raghavan Ph.",
""
]
] |
Availability of challenging benchmarks is the key to advancement of AI in a specific field. Since Legal Text is significantly different from normal English text, there is a need to create separate Natural Language Processing benchmarks for Indian Legal Text which are challenging and focus on tasks specific to Legal Systems. This will spur innovation in applications of Natural language Processing for Indian Legal Text and will benefit the AI community and the Legal fraternity. We review the existing work in this area and propose ideas to create new benchmarks for Indian Legal Natural Language Processing.
|
1801.04735
|
Alejandro Cohen
|
Alejandro Cohen, Asaf Cohen and Omer Gurewitz
|
Secure Adaptive Group Testing
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
\emph{Group Testing} (GT) addresses the problem of identifying a small subset
of defective items from a large population, by grouping items into as few test
pools as possible. In \emph{Adaptive GT} (AGT), outcomes of previous tests can
influence the makeup of future tests. Using an information theoretic point of
view, Aldridge $2012$ showed that in the regime of a few defectives, adaptivity
does not help much, as the number of tests required is essentially the same as
for non-adaptive GT.
\emph{Secure GT} considers a scenario where there is an eavesdropper who may
observe a fraction $\delta$ of the test results, yet should not be able to
infer the status of the items. In the non-adaptive scenario, the number of
tests required is $1/(1-\delta)$ times the number of tests without the secrecy
constraint.
In this paper, we consider \emph{Secure Adaptive GT}. Specifically, during the
makeup of the pools one has access to a private feedback link of rate $R_f$
from the lab. We prove that the number of tests required for both
correct reconstruction at the legitimate lab, with high probability, and
negligible mutual information at the eavesdropper is $1/\min\{1,1-\delta+R_f\}$
times the number of tests required with no secrecy constraint. Thus, unlike
non-secure GT, where an adaptive algorithm has only a mild impact, under a
security constraint it can significantly boost performance. A key insight is
that not only the adaptive link should disregard the actual test results and
simply send keys, these keys should be enhanced through a "secret sharing"
scheme before usage. We derive sufficiency and necessity bounds that completely
characterize the Secure Adaptive GT capacity.
|
[
{
"created": "Mon, 15 Jan 2018 11:07:10 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Aug 2020 17:50:25 GMT",
"version": "v2"
}
] |
2020-08-17
|
[
[
"Cohen",
"Alejandro",
""
],
[
"Cohen",
"Asaf",
""
],
[
"Gurewitz",
"Omer",
""
]
] |
\emph{Group Testing} (GT) addresses the problem of identifying a small subset of defective items from a large population, by grouping items into as few test pools as possible. In \emph{Adaptive GT} (AGT), outcomes of previous tests can influence the makeup of future tests. Using an information theoretic point of view, Aldridge $2012$ showed that in the regime of a few defectives, adaptivity does not help much, as the number of tests required is essentially the same as for non-adaptive GT. \emph{Secure GT} considers a scenario where there is an eavesdropper who may observe a fraction $\delta$ of the test results, yet should not be able to infer the status of the items. In the non-adaptive scenario, the number of tests required is $1/(1-\delta)$ times the number of tests without the secrecy constraint. In this paper, we consider \emph{Secure Adaptive GT}. Specifically, during the makeup of the pools one has access to a private feedback link of rate $R_f$ from the lab. We prove that the number of tests required for both correct reconstruction at the legitimate lab, with high probability, and negligible mutual information at the eavesdropper is $1/\min\{1,1-\delta+R_f\}$ times the number of tests required with no secrecy constraint. Thus, unlike non-secure GT, where an adaptive algorithm has only a mild impact, under a security constraint it can significantly boost performance. A key insight is that not only the adaptive link should disregard the actual test results and simply send keys, these keys should be enhanced through a "secret sharing" scheme before usage. We derive sufficiency and necessity bounds that completely characterize the Secure Adaptive GT capacity.
|
2106.05237
|
Xiaohua Zhai
|
Lucas Beyer, Xiaohua Zhai, Am\'elie Royer, Larisa Markeeva, Rohan
Anil, Alexander Kolesnikov
|
Knowledge distillation: A good teacher is patient and consistent
|
Lucas, Xiaohua, Am\'elie, Larisa, and Alex contributed equally; CVPR
2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a growing discrepancy in computer vision between large-scale models
that achieve state-of-the-art performance and models that are affordable in
practical applications. In this paper we address this issue and significantly
bridge the gap between these two types of models. Throughout our empirical
investigation we do not aim to necessarily propose a new method, but strive to
identify a robust and effective recipe for making state-of-the-art large scale
models affordable in practice. We demonstrate that, when performed correctly,
knowledge distillation can be a powerful tool for reducing the size of large
models without compromising their performance. In particular, we uncover that
there are certain implicit design choices, which may drastically affect the
effectiveness of distillation. Our key contribution is the explicit
identification of these design choices, which were not previously articulated
in the literature. We back up our findings by a comprehensive empirical study,
demonstrate compelling results on a wide range of vision datasets and, in
particular, obtain a state-of-the-art ResNet-50 model for ImageNet, which
achieves 82.8% top-1 accuracy.
|
[
{
"created": "Wed, 9 Jun 2021 17:20:40 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jun 2022 09:46:14 GMT",
"version": "v2"
}
] |
2022-06-22
|
[
[
"Beyer",
"Lucas",
""
],
[
"Zhai",
"Xiaohua",
""
],
[
"Royer",
"Amélie",
""
],
[
"Markeeva",
"Larisa",
""
],
[
"Anil",
"Rohan",
""
],
[
"Kolesnikov",
"Alexander",
""
]
] |
There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper we address this issue and significantly bridge the gap between these two types of models. Throughout our empirical investigation we do not aim to necessarily propose a new method, but strive to identify a robust and effective recipe for making state-of-the-art large scale models affordable in practice. We demonstrate that, when performed correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance. In particular, we uncover that there are certain implicit design choices, which may drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which were not previously articulated in the literature. We back up our findings by a comprehensive empirical study, demonstrate compelling results on a wide range of vision datasets and, in particular, obtain a state-of-the-art ResNet-50 model for ImageNet, which achieves 82.8% top-1 accuracy.
|
1911.05894
|
Aren Jansen
|
Aren Jansen, Daniel P. W. Ellis, Shawn Hershey, R. Channing Moore,
Manoj Plakal, Ashok C. Popat, Rif A. Saurous
|
Coincidence, Categorization, and Consolidation: Learning to Recognize
Sounds with Minimal Supervision
|
This extended version of a ICASSP 2020 submission under same title
has an added figure and additional discussion for easier consumption
| null | null | null |
cs.SD eess.AS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans do not acquire perceptual abilities in the way we train machines.
While machine learning algorithms typically operate on large collections of
randomly-chosen, explicitly-labeled examples, human acquisition relies more
heavily on multimodal unsupervised learning (as infants) and active learning
(as children). With this motivation, we present a learning framework for sound
representation and recognition that combines (i) a self-supervised objective
based on a general notion of unimodal and cross-modal coincidence, (ii) a
clustering objective that reflects our need to impose categorical structure on
our experiences, and (iii) a cluster-based active learning procedure that
solicits targeted weak supervision to consolidate categories into relevant
semantic classes. By training a combined sound
embedding/clustering/classification network according to these criteria, we
achieve a new state-of-the-art unsupervised audio representation and
demonstrate up to a 20-fold reduction in the number of labels required to reach
a desired classification performance.
|
[
{
"created": "Thu, 14 Nov 2019 02:07:47 GMT",
"version": "v1"
}
] |
2019-11-15
|
[
[
"Jansen",
"Aren",
""
],
[
"Ellis",
"Daniel P. W.",
""
],
[
"Hershey",
"Shawn",
""
],
[
"Moore",
"R. Channing",
""
],
[
"Plakal",
"Manoj",
""
],
[
"Popat",
"Ashok C.",
""
],
[
"Saurous",
"Rif A.",
""
]
] |
Humans do not acquire perceptual abilities in the way we train machines. While machine learning algorithms typically operate on large collections of randomly-chosen, explicitly-labeled examples, human acquisition relies more heavily on multimodal unsupervised learning (as infants) and active learning (as children). With this motivation, we present a learning framework for sound representation and recognition that combines (i) a self-supervised objective based on a general notion of unimodal and cross-modal coincidence, (ii) a clustering objective that reflects our need to impose categorical structure on our experiences, and (iii) a cluster-based active learning procedure that solicits targeted weak supervision to consolidate categories into relevant semantic classes. By training a combined sound embedding/clustering/classification network according to these criteria, we achieve a new state-of-the-art unsupervised audio representation and demonstrate up to a 20-fold reduction in the number of labels required to reach a desired classification performance.
|
2101.07241
|
Haoyu Xiong
|
Haoyu Xiong, Quanzhou Li, Yun-Chun Chen, Homanga Bharadhwaj, Samarth
Sinha, Animesh Garg
|
Learning by Watching: Physical Imitation of Manipulation Skills from
Human Videos
|
Project Website: https://www.pair.toronto.edu/lbw-kp/
|
IROS 2021
| null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Learning from visual data opens the potential to accrue a large range of
manipulation behaviors by leveraging human demonstrations without specifying
each of them mathematically, but rather through natural task specification. In
this paper, we present Learning by Watching (LbW), an algorithmic framework for
policy learning through imitation from a single video specifying the task. The
key insights of our method are two-fold. First, since the human arms may not
have the same morphology as robot arms, our framework learns unsupervised human
to robot translation to overcome the morphology mismatch issue. Second, to
capture the details in salient regions that are crucial for learning state
representations, our model performs unsupervised keypoint detection on the
translated robot videos. The detected keypoints form a structured
representation that contains semantically meaningful information and can be
used directly for computing reward and policy learning. We evaluate the
effectiveness of our LbW framework on five robot manipulation tasks, including
reaching, pushing, sliding, coffee making, and drawer closing. Extensive
experimental evaluations demonstrate that our method performs favorably against
the state-of-the-art approaches.
|
[
{
"created": "Mon, 18 Jan 2021 18:50:32 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Nov 2021 15:05:21 GMT",
"version": "v2"
}
] |
2021-11-16
|
[
[
"Xiong",
"Haoyu",
""
],
[
"Li",
"Quanzhou",
""
],
[
"Chen",
"Yun-Chun",
""
],
[
"Bharadhwaj",
"Homanga",
""
],
[
"Sinha",
"Samarth",
""
],
[
"Garg",
"Animesh",
""
]
] |
Learning from visual data opens the potential to accrue a large range of manipulation behaviors by leveraging human demonstrations without specifying each of them mathematically, but rather through natural task specification. In this paper, we present Learning by Watching (LbW), an algorithmic framework for policy learning through imitation from a single video specifying the task. The key insights of our method are two-fold. First, since the human arms may not have the same morphology as robot arms, our framework learns unsupervised human to robot translation to overcome the morphology mismatch issue. Second, to capture the details in salient regions that are crucial for learning state representations, our model performs unsupervised keypoint detection on the translated robot videos. The detected keypoints form a structured representation that contains semantically meaningful information and can be used directly for computing reward and policy learning. We evaluate the effectiveness of our LbW framework on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing. Extensive experimental evaluations demonstrate that our method performs favorably against the state-of-the-art approaches.
|
1906.07011
|
Nees Jan van Eck
|
Nees Jan van Eck, Ludo Waltman
|
Accuracy of citation data in Web of Science and Scopus
|
Paper published in the Proceedings of the 16th International
Conference of the International Society for Scientometrics and Informetrics
(pp. 1087-1092)
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a large-scale analysis of the accuracy of citation data in the Web
of Science and Scopus databases. The analysis is based on citations given in
publications in Elsevier journals. We reveal significant data quality problems
for both databases. Missing and incorrect references are important problems in
Web of Science. Duplicate publications are a serious problem in Scopus.
|
[
{
"created": "Mon, 17 Jun 2019 13:03:45 GMT",
"version": "v1"
}
] |
2019-06-18
|
[
[
"van Eck",
"Nees Jan",
""
],
[
"Waltman",
"Ludo",
""
]
] |
We present a large-scale analysis of the accuracy of citation data in the Web of Science and Scopus databases. The analysis is based on citations given in publications in Elsevier journals. We reveal significant data quality problems for both databases. Missing and incorrect references are important problems in Web of Science. Duplicate publications are a serious problem in Scopus.
|
2310.10669
|
Zhengmian Hu
|
Zhengmian Hu, Lichang Chen, Xidong Wu, Yihan Wu, Hongyang Zhang, Heng
Huang
|
Unbiased Watermark for Large Language Models
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The recent advancements in large language models (LLMs) have sparked a
growing apprehension regarding their potential misuse. One approach to mitigating
this risk is to incorporate watermarking techniques into LLMs, allowing for the
tracking and attribution of model outputs. This study examines a crucial aspect
of watermarking: how significantly watermarks impact the quality of
model-generated outputs. Previous studies have suggested a trade-off between
watermark strength and output quality. However, our research demonstrates that
it is possible, with an appropriate implementation, to integrate watermarks
without affecting the output probability distribution. We refer to this type of
watermark as an unbiased watermark. This has significant implications for the
use of LLMs, as it becomes impossible for users to discern whether a service
provider has incorporated watermarks or not. Furthermore, the presence of
watermarks does not compromise the performance of the model in downstream
tasks, ensuring that the overall utility of the language model is preserved.
Our findings contribute to the ongoing discussion around responsible AI
development, suggesting that unbiased watermarks can serve as an effective
means of tracking and attributing model outputs without sacrificing output
quality.
|
[
{
"created": "Fri, 22 Sep 2023 12:46:38 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Oct 2023 02:02:08 GMT",
"version": "v2"
}
] |
2023-10-19
|
[
[
"Hu",
"Zhengmian",
""
],
[
"Chen",
"Lichang",
""
],
[
"Wu",
"Xidong",
""
],
[
"Wu",
"Yihan",
""
],
[
"Zhang",
"Hongyang",
""
],
[
"Huang",
"Heng",
""
]
] |
The recent advancements in large language models (LLMs) have sparked a growing apprehension regarding their potential misuse. One approach to mitigating this risk is to incorporate watermarking techniques into LLMs, allowing for the tracking and attribution of model outputs. This study examines a crucial aspect of watermarking: how significantly watermarks impact the quality of model-generated outputs. Previous studies have suggested a trade-off between watermark strength and output quality. However, our research demonstrates that it is possible, with an appropriate implementation, to integrate watermarks without affecting the output probability distribution. We refer to this type of watermark as an unbiased watermark. This has significant implications for the use of LLMs, as it becomes impossible for users to discern whether a service provider has incorporated watermarks or not. Furthermore, the presence of watermarks does not compromise the performance of the model in downstream tasks, ensuring that the overall utility of the language model is preserved. Our findings contribute to the ongoing discussion around responsible AI development, suggesting that unbiased watermarks can serve as an effective means of tracking and attributing model outputs without sacrificing output quality.
|
2108.07047
|
Robert Gilles
|
Subhadip Chakrabarti, Loyimee Gogoi, Robert P Gilles, Surajit
Borkotokey, Rajnish Kumar
|
Expected Values for Variable Network Games
| null | null | null | null |
cs.GT econ.TH physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A network game assigns a level of collectively generated wealth to every
network that can form on a given set of players. A variable network game
combines a network game with a network formation probability distribution,
describing certain restrictions on network formation. Expected levels of
collectively generated wealth and expected individual payoffs can be formulated
in this setting.
We investigate properties of the resulting expected wealth levels as well as
the expected variants of well-established network game values as allocation
rules that assign a payoff to the players in every variable network game. We
establish two axiomatizations of the Expected Myerson
Value, originally formulated and proven on the class of communication
situations, based on the well-established component balance, equal bargaining
power and balanced contributions properties. Furthermore, we extend an
established axiomatization of the Position Value based on the balanced link
contribution property to the Expected Position Value.
|
[
{
"created": "Mon, 16 Aug 2021 12:35:40 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Oct 2022 15:48:17 GMT",
"version": "v2"
}
] |
2022-10-31
|
[
[
"Chakrabarti",
"Subhadip",
""
],
[
"Gogoi",
"Loyimee",
""
],
[
"Gilles",
"Robert P",
""
],
[
"Borkotokey",
"Surajit",
""
],
[
"Kumar",
"Rajnish",
""
]
] |
A network game assigns a level of collectively generated wealth to every network that can form on a given set of players. A variable network game combines a network game with a network formation probability distribution, describing certain restrictions on network formation. Expected levels of collectively generated wealth and expected individual payoffs can be formulated in this setting. We investigate properties of the resulting expected wealth levels as well as the expected variants of well-established network game values as allocation rules that assign a payoff to the players in every variable network game. We establish two axiomatizations of the Expected Myerson Value, originally formulated and proven on the class of communication situations, based on the well-established component balance, equal bargaining power and balanced contributions properties. Furthermore, we extend an established axiomatization of the Position Value based on the balanced link contribution property to the Expected Position Value.
|
1506.05217
|
Mohsin Junaid
|
Mohsin Junaid, Donggang Liu and David Kung
|
Dexteroid: Detecting Malicious Behaviors in Android Apps Using
Reverse-Engineered Life Cycle Models
| null |
Computers & Security, Volume 59,Pages 92-117, ISSN 0167-4048, June
2016
|
10.1016/j.cose.2016.01.008
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The amount of Android malware has increased greatly during the last few
years. Static analysis is widely used in detecting such malware by analyzing
the code without execution. The effectiveness of current tools relies on the
app model as well as the malware detection algorithm which analyzes the app
model. If the model and/or the algorithm is inadequate, then sophisticated
attacks that are triggered by specific sequences of events will not be
detected.
This paper presents a static analysis framework called Dexteroid, which uses
reverse-engineered life cycle models to accurately capture the behaviors of
Android components. Dexteroid systematically derives event sequences from the
models, and uses them to detect attacks launched by specific ordering of
events. A prototype implementation of Dexteroid detects two types of attacks:
(1) leakage of private information, and (2) sending SMS to premium-rate
numbers. A series of experiments are conducted on 1526 Google Play apps, 1259
Genome Malware apps, and a suite of benchmark apps called DroidBench and the
results are compared with a state-of-the-art static analysis tool called
FlowDroid. The evaluation results show that the proposed framework is effective
and efficient in terms of precision, recall, and execution time.
|
[
{
"created": "Wed, 17 Jun 2015 06:38:37 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Apr 2016 19:38:43 GMT",
"version": "v2"
}
] |
2016-04-11
|
[
[
"Junaid",
"Mohsin",
""
],
[
"Liu",
"Donggang",
""
],
[
"Kung",
"David",
""
]
] |
The amount of Android malware has increased greatly during the last few years. Static analysis is widely used in detecting such malware by analyzing the code without execution. The effectiveness of current tools relies on the app model as well as the malware detection algorithm which analyzes the app model. If the model and/or the algorithm is inadequate, then sophisticated attacks that are triggered by specific sequences of events will not be detected. This paper presents a static analysis framework called Dexteroid, which uses reverse-engineered life cycle models to accurately capture the behaviors of Android components. Dexteroid systematically derives event sequences from the models, and uses them to detect attacks launched by specific ordering of events. A prototype implementation of Dexteroid detects two types of attacks: (1) leakage of private information, and (2) sending SMS to premium-rate numbers. A series of experiments are conducted on 1526 Google Play apps, 1259 Genome Malware apps, and a suite of benchmark apps called DroidBench and the results are compared with a state-of-the-art static analysis tool called FlowDroid. The evaluation results show that the proposed framework is effective and efficient in terms of precision, recall, and execution time.
|
2203.00762
|
Biyi Fang
|
Biyi Fang, Kripa Rajshekhar, Diego Klabjan
|
Topic Analysis for Text with Side Data
| null | null | null | null |
cs.LG cs.CL cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Although latent factor models (e.g., matrix factorization) obtain good
performance in predictions, they suffer from several problems including
cold-start, non-transparency, and suboptimal recommendations. In this paper, we
employ text with side data to tackle these limitations. We introduce a hybrid
generative probabilistic model that combines a neural network with a latent
topic model, which is a four-level hierarchical Bayesian model. In the model,
each document is modeled as a finite mixture over an underlying set of topics
and each topic is modeled as an infinite mixture over an underlying set of
topic probabilities. Furthermore, each topic probability is modeled as a finite
mixture over side data. In the context of text, the neural network provides an
overview distribution about side data for the corresponding text, which is the
prior distribution in LDA to help perform topic grouping. The approach is
evaluated on several different datasets, where the model is shown to outperform
standard LDA and Dirichlet-multinomial regression (DMR) in terms of topic
grouping, model perplexity, classification and comment generation.
|
[
{
"created": "Tue, 1 Mar 2022 22:06:30 GMT",
"version": "v1"
}
] |
2022-03-03
|
[
[
"Fang",
"Biyi",
""
],
[
"Rajshekhar",
"Kripa",
""
],
[
"Klabjan",
"Diego",
""
]
] |
Although latent factor models (e.g., matrix factorization) obtain good performance in predictions, they suffer from several problems including cold-start, non-transparency, and suboptimal recommendations. In this paper, we employ text with side data to tackle these limitations. We introduce a hybrid generative probabilistic model that combines a neural network with a latent topic model, which is a four-level hierarchical Bayesian model. In the model, each document is modeled as a finite mixture over an underlying set of topics and each topic is modeled as an infinite mixture over an underlying set of topic probabilities. Furthermore, each topic probability is modeled as a finite mixture over side data. In the context of text, the neural network provides an overview distribution about side data for the corresponding text, which is the prior distribution in LDA to help perform topic grouping. The approach is evaluated on several different datasets, where the model is shown to outperform standard LDA and Dirichlet-multinomial regression (DMR) in terms of topic grouping, model perplexity, classification and comment generation.
|
2406.00031
|
Achuth Chandrasekhar
|
Achuth Chandrasekhar, Jonathan Chan, Francis Ogoke, Olabode
Ajenifujah, Amir Barati Farimani
|
AMGPT: a Large Language Model for Contextual Querying in Additive
Manufacturing
|
54 pages, 4 figures
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Generalized large language models (LLMs) such as GPT-4 may not provide
specific answers to queries formulated by materials science researchers. These
models may produce a high-level outline but lack the capacity to return
detailed instructions on manufacturing and material properties of novel alloys.
Enhancing a smaller model with specialized domain knowledge may provide an
advantage over large language models which cannot be retrained quickly enough
to keep up with the rapid pace of research in metal additive manufacturing
(AM). We introduce "AMGPT," a specialized LLM text generator designed for metal
AM queries. The goal of AMGPT is to assist researchers and users in navigating
the extensive corpus of literature in AM. Instead of training from scratch, we
employ a pre-trained Llama2-7B model from Hugging Face in a Retrieval-Augmented
Generation (RAG) setup, utilizing it to dynamically incorporate information
from $\sim$50 AM papers and textbooks in PDF format. Mathpix is used to convert
these PDF documents into TeX format, facilitating their integration into the
RAG pipeline managed by LlamaIndex. Expert evaluations of this project
highlight that specific embeddings from the RAG setup accelerate response times
and maintain coherence in the generated text.
|
[
{
"created": "Fri, 24 May 2024 20:03:32 GMT",
"version": "v1"
}
] |
2024-06-04
|
[
[
"Chandrasekhar",
"Achuth",
""
],
[
"Chan",
"Jonathan",
""
],
[
"Ogoke",
"Francis",
""
],
[
"Ajenifujah",
"Olabode",
""
],
[
"Farimani",
"Amir Barati",
""
]
] |
Generalized large language models (LLMs) such as GPT-4 may not provide specific answers to queries formulated by materials science researchers. These models may produce a high-level outline but lack the capacity to return detailed instructions on manufacturing and material properties of novel alloys. Enhancing a smaller model with specialized domain knowledge may provide an advantage over large language models which cannot be retrained quickly enough to keep up with the rapid pace of research in metal additive manufacturing (AM). We introduce "AMGPT," a specialized LLM text generator designed for metal AM queries. The goal of AMGPT is to assist researchers and users in navigating the extensive corpus of literature in AM. Instead of training from scratch, we employ a pre-trained Llama2-7B model from Hugging Face in a Retrieval-Augmented Generation (RAG) setup, utilizing it to dynamically incorporate information from $\sim$50 AM papers and textbooks in PDF format. Mathpix is used to convert these PDF documents into TeX format, facilitating their integration into the RAG pipeline managed by LlamaIndex. Expert evaluations of this project highlight that specific embeddings from the RAG setup accelerate response times and maintain coherence in the generated text.
|
2003.05026
|
Max Van Kleek
|
Max Van Kleek
|
Super-reflective Data: Speculative Imaginings of a World Where Data
Works for People
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It's the year 2020, and every space and place on- and off-line has been
augmented with digital things that observe, record, transmit, and compute, for
the purposes of recording endless data traces of what is happening in the
world. Individually, these things (and the invisible services that power them)
have reached considerable sophistication in their ability to analyse and
dissect such observations, turning streams of audio and video into informative
data fragments. Yet somehow, individuals as end-users of platforms and services
have not seen the full potential of such data. In this speculative paper, we
propose two hypothetical mini scenarios different from our current digital
world. In the former, instead of hoarding it, data controllers turn captured
data over to those who need it as quickly as possible, working together to
combine, validate, and refine it for maximum usefulness. This simultaneously
addresses the data fragmentation and privacy problems by handing over long-term
data governance to those who value it the most. In the latter, we discuss
ethical dilemmas arising from the long-term use of such rich data and its
tendency to cause people to relentlessly optimise.
|
[
{
"created": "Tue, 10 Mar 2020 22:54:10 GMT",
"version": "v1"
}
] |
2020-03-12
|
[
[
"Van Kleek",
"Max",
""
]
] |
It's the year 2020, and every space and place on- and off-line has been augmented with digital things that observe, record, transmit, and compute, for the purposes of recording endless data traces of what is happening in the world. Individually, these things (and the invisible services that power them) have reached considerable sophistication in their ability to analyse and dissect such observations, turning streams of audio and video into informative data fragments. Yet somehow, individuals as end-users of platforms and services have not seen the full potential of such data. In this speculative paper, we propose two hypothetical mini scenarios different from our current digital world. In the former, instead of hoarding it, data controllers turn captured data over to those who need it as quickly as possible, working together to combine, validate, and refine it for maximum usefulness. This simultaneously addresses the data fragmentation and privacy problems by handing over long-term data governance to those who value it the most. In the latter, we discuss ethical dilemmas arising from the long-term use of such rich data and its tendency to cause people to relentlessly optimise.
|
2112.13585
|
Lanning Wei
|
Lanning Wei, Huan Zhao, Zhiqiang He
|
Learn Layer-wise Connections in Graph Neural Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, Graph Neural Networks (GNNs) have shown superior performance
on diverse applications on real-world datasets. To improve the model capacity
and alleviate the over-smoothing problem, several methods have been proposed to
incorporate intermediate layers via layer-wise connections. However, due to the
highly diverse graph types, the performance of existing methods varies across
diverse graphs, leading to a need for data-specific layer-wise connection
methods. To address this problem, we propose a novel framework LLC (Learn
Layer-wise Connections) based on neural architecture search (NAS) to learn
adaptive connections among intermediate layers in GNNs. LLC contains one novel
search space which consists of 3 types of blocks and learnable connections, and
one differentiable search algorithm to enable the efficient search process.
Extensive experiments on five real-world datasets are conducted, and the
results show that the searched layer-wise connections can not only improve the
performance but also alleviate the over-smoothing problem.
|
[
{
"created": "Mon, 27 Dec 2021 09:33:22 GMT",
"version": "v1"
}
] |
2021-12-28
|
[
[
"Wei",
"Lanning",
""
],
[
"Zhao",
"Huan",
""
],
[
"He",
"Zhiqiang",
""
]
] |
In recent years, Graph Neural Networks (GNNs) have shown superior performance on diverse applications on real-world datasets. To improve the model capacity and alleviate the over-smoothing problem, several methods have been proposed to incorporate intermediate layers via layer-wise connections. However, due to the highly diverse graph types, the performance of existing methods varies across diverse graphs, leading to a need for data-specific layer-wise connection methods. To address this problem, we propose a novel framework LLC (Learn Layer-wise Connections) based on neural architecture search (NAS) to learn adaptive connections among intermediate layers in GNNs. LLC contains one novel search space which consists of 3 types of blocks and learnable connections, and one differentiable search algorithm to enable the efficient search process. Extensive experiments on five real-world datasets are conducted, and the results show that the searched layer-wise connections can not only improve the performance but also alleviate the over-smoothing problem.
|
2102.10290
|
Luca Lugini
|
Luca Lugini, Diane Litman
|
Contextual Argument Component Classification for Class Discussions
| null |
In Proceedings of the 28th International Conference on
Computational Linguistics, pp. 1475-1480. 2020
|
10.18653/v1/2020.coling-main.128
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Argument mining systems often consider contextual information, i.e.
information outside of an argumentative discourse unit, when trained to
accomplish tasks such as argument component identification, classification, and
relation extraction. However, prior work has not carefully analyzed the utility
of different contextual properties in context-aware models. In this work, we
show how two different types of contextual information, local discourse context
and speaker context, can be incorporated into a computational model for
classifying argument components in multi-party classroom discussions. We find
that both context types can improve performance, although the improvements are
dependent on context size and position.
|
[
{
"created": "Sat, 20 Feb 2021 08:48:07 GMT",
"version": "v1"
}
] |
2021-02-23
|
[
[
"Lugini",
"Luca",
""
],
[
"Litman",
"Diane",
""
]
] |
Argument mining systems often consider contextual information, i.e. information outside of an argumentative discourse unit, when trained to accomplish tasks such as argument component identification, classification, and relation extraction. However, prior work has not carefully analyzed the utility of different contextual properties in context-aware models. In this work, we show how two different types of contextual information, local discourse context and speaker context, can be incorporated into a computational model for classifying argument components in multi-party classroom discussions. We find that both context types can improve performance, although the improvements are dependent on context size and position.
|
2108.07854
|
Chethan Kumar Anjinappa
|
Chethan K. Anjinappa and Ismail Guvenc
|
Coverage Hole Detection for mmWave Networks: An Unsupervised Learning
Approach
|
This paper appears in: IEEE Communications Letters
| null |
10.1109/LCOMM.2021.3106251
| null |
cs.LG cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The utilization of millimeter-wave (mmWave) bands in 5G networks poses new
challenges to network planning. Vulnerability to blockages at mmWave bands can
cause coverage holes (CHs) in the radio environment, leading to radio link
failure when a user enters these CHs. Detection of the CHs carries critical
importance so that necessary remedies can be introduced to improve coverage. In
this letter, we propose a novel approach to identify the CHs in an unsupervised
fashion using a state-of-the-art manifold learning technique: uniform manifold
approximation and projection. The key idea is to preserve the
local-connectedness structure inherent in the collected unlabelled channel
samples, such that the CHs from the service area are detectable. Our results on
the DeepMIMO dataset scenario demonstrate that the proposed method can learn
the structure within the data samples and provide visual holes in the
low-dimensional embedding while preserving the CH boundaries. Once the CH
boundary is determined in the low-dimensional embedding, channel-based
localization techniques can be applied to these samples to obtain the
geographical boundaries of the CHs.
|
[
{
"created": "Tue, 17 Aug 2021 19:55:36 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Aug 2021 18:58:54 GMT",
"version": "v2"
}
] |
2021-08-24
|
[
[
"Anjinappa",
"Chethan K.",
""
],
[
"Guvenc",
"Ismail",
""
]
] |
The utilization of millimeter-wave (mmWave) bands in 5G networks poses new challenges to network planning. Vulnerability to blockages at mmWave bands can cause coverage holes (CHs) in the radio environment, leading to radio link failure when a user enters these CHs. Detection of the CHs carries critical importance so that necessary remedies can be introduced to improve coverage. In this letter, we propose a novel approach to identify the CHs in an unsupervised fashion using a state-of-the-art manifold learning technique: uniform manifold approximation and projection. The key idea is to preserve the local-connectedness structure inherent in the collected unlabelled channel samples, such that the CHs from the service area are detectable. Our results on the DeepMIMO dataset scenario demonstrate that the proposed method can learn the structure within the data samples and provide visual holes in the low-dimensional embedding while preserving the CH boundaries. Once the CH boundary is determined in the low-dimensional embedding, channel-based localization techniques can be applied to these samples to obtain the geographical boundaries of the CHs.
|
2112.14842
|
Irfan Khan
|
Syed Wali and Irfan Khan
|
Explainable Signature-based Machine Learning Approach for Identification
of Faults in Grid-Connected Photovoltaic Systems
|
6 pages, 9 figures
| null | null | null |
cs.LG cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The transformation of conventional power networks into smart grids with the
heavy penetration level of renewable energy resources, particularly
grid-connected Photovoltaic (PV) systems, has increased the need for efficient
fault identification systems. A malfunction of any single component in
grid-connected PV systems may lead to grid instability and other serious
consequences, showing that a reliable fault identification system is the utmost
requirement for ensuring operational integrity. Therefore, this paper presents
a novel fault identification approach based on statistical signatures of PV
operational states. These signatures are unique because each fault has a
different nature and distinctive impact on the electrical system. Thus, the
Random Forest Classifier trained on these extracted signatures showed 100%
accuracy in identifying all types of faults. Furthermore, the performance
comparison of the proposed framework with other Machine Learning classifiers
demonstrates its credibility. Moreover, to elevate user trust in the predicted
outcomes, SHAP (Shapley Additive Explanation) was utilized during the training
phase to extract a complete model response (global explanation). This extracted
global explanation can help in assessing the credibility of predicted outcomes
by decoding each prediction in terms of feature contributions. Hence, the
proposed explainable signature-based fault identification technique is highly
credible and fulfills all the requirements of smart grids.
|
[
{
"created": "Sat, 25 Dec 2021 15:11:18 GMT",
"version": "v1"
}
] |
2022-01-03
|
[
[
"Wali",
"Syed",
""
],
[
"Khan",
"Irfan",
""
]
] |
The transformation of conventional power networks into smart grids with the heavy penetration level of renewable energy resources, particularly grid-connected Photovoltaic (PV) systems, has increased the need for efficient fault identification systems. A malfunction of any single component in grid-connected PV systems may lead to grid instability and other serious consequences, showing that a reliable fault identification system is the utmost requirement for ensuring operational integrity. Therefore, this paper presents a novel fault identification approach based on statistical signatures of PV operational states. These signatures are unique because each fault has a different nature and distinctive impact on the electrical system. Thus, the Random Forest Classifier trained on these extracted signatures showed 100% accuracy in identifying all types of faults. Furthermore, the performance comparison of the proposed framework with other Machine Learning classifiers demonstrates its credibility. Moreover, to elevate user trust in the predicted outcomes, SHAP (Shapley Additive Explanation) was utilized during the training phase to extract a complete model response (global explanation). This extracted global explanation can help in assessing the credibility of predicted outcomes by decoding each prediction in terms of feature contributions. Hence, the proposed explainable signature-based fault identification technique is highly credible and fulfills all the requirements of smart grids.
|
1806.02953
|
Elahe Sadeghabadi
|
Elahe Sadeghabadi, Seyed Mohammad Azimi-Abarghouyi, Behrooz Makki,
Masoumeh Nasiri-Kenari
|
Asynchronous Downlink Massive MIMO Networks: A Stochastic Geometry
Approach
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive multiple-input multiple-output (MIMO) is recognized as a promising
technology for the next generation of wireless networks because of its
potential to increase the spectral efficiency. In initial studies of massive
MIMO, the system has been considered to be perfectly synchronized across all
cells. However, perfect synchronization may be hard to attain in
practice. Therefore, we study a massive MIMO system whose cells are not
synchronous to each other, while transmissions in a cell are still synchronous.
We analyze an asynchronous downlink massive MIMO system in terms of the
coverage probability and the ergodic rate by means of the stochastic geometry
tool. For comparison, we also obtain the results for the synchronous systems.
In addition, we investigate the effect of the uplink power control and the
number of pilot symbols on the downlink ergodic rate, and we observe that there
is an optimal value for the number of pilot symbols maximizing the downlink
ergodic rate of a cell. Our results also indicate that, compared to the cases
with synchronous transmission, the downlink ergodic rate is more sensitive to
the uplink power control in the asynchronous mode.
|
[
{
"created": "Fri, 8 Jun 2018 02:51:51 GMT",
"version": "v1"
}
] |
2018-06-11
|
[
[
"Sadeghabadi",
"Elahe",
""
],
[
"Azimi-Abarghouyi",
"Seyed Mohammad",
""
],
[
"Makki",
"Behrooz",
""
],
[
"Nasiri-Kenari",
"Masoumeh",
""
]
] |
Massive multiple-input multiple-output (MIMO) is recognized as a promising technology for the next generation of wireless networks because of its potential to increase the spectral efficiency. In initial studies of massive MIMO, the system has been considered to be perfectly synchronized across all cells. However, perfect synchronization may be hard to attain in practice. Therefore, we study a massive MIMO system whose cells are not synchronous to each other, while transmissions in a cell are still synchronous. We analyze an asynchronous downlink massive MIMO system in terms of the coverage probability and the ergodic rate by means of the stochastic geometry tool. For comparison, we also obtain the results for the synchronous systems. In addition, we investigate the effect of the uplink power control and the number of pilot symbols on the downlink ergodic rate, and we observe that there is an optimal value for the number of pilot symbols maximizing the downlink ergodic rate of a cell. Our results also indicate that, compared to the cases with synchronous transmission, the downlink ergodic rate is more sensitive to the uplink power control in the asynchronous mode.
|
2312.03293
|
Mandar Khoje
|
Mandar Khoje
|
Securing Data Platforms: Strategic Masking Techniques for Privacy and
Security for B2B Enterprise Data
| null | null |
10.14445/22312803/IJCTT-V71I11P107
| null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In today's digital age, the imperative to protect data privacy and security
is a paramount concern, especially for business-to-business (B2B) enterprises
that handle sensitive information. These enterprises are increasingly
constructing data platforms, which are integrated suites of technology
solutions architected for the efficient management, processing, storage, and
analysis of data. It has become critical to design these data platforms with
mechanisms that inherently support data privacy and security, particularly as
they encounter the added complexity of safeguarding unstructured data types
such as log files and text documents. Within this context, data masking stands
out as a vital feature of data platform architecture. It proactively conceals
sensitive elements, ensuring data privacy while preserving the information's
value for business operations and analytics. This protective measure entails a
strategic two-fold process: firstly, accurately pinpointing the sensitive data
that necessitates concealment, and secondly, applying sophisticated methods to
disguise that data effectively within the data platform infrastructure. This
research delves into the nuances of embedding advanced data masking techniques
within the very fabric of data platforms and offers an in-depth exploration of
how
enterprises can adopt a comprehensive approach toward effective data masking
implementation by exploring different identification and anonymization
techniques.
|
[
{
"created": "Wed, 6 Dec 2023 05:04:37 GMT",
"version": "v1"
}
] |
2023-12-07
|
[
[
"Khoje",
"Mandar",
""
]
] |
In today's digital age, the imperative to protect data privacy and security is a paramount concern, especially for business-to-business (B2B) enterprises that handle sensitive information. These enterprises are increasingly constructing data platforms, which are integrated suites of technology solutions architected for the efficient management, processing, storage, and analysis of data. It has become critical to design these data platforms with mechanisms that inherently support data privacy and security, particularly as they encounter the added complexity of safeguarding unstructured data types such as log files and text documents. Within this context, data masking stands out as a vital feature of data platform architecture. It proactively conceals sensitive elements, ensuring data privacy while preserving the information's value for business operations and analytics. This protective measure entails a strategic two-fold process: firstly, accurately pinpointing the sensitive data that necessitates concealment, and secondly, applying sophisticated methods to disguise that data effectively within the data platform infrastructure. This research delves into the nuances of embedding advanced data masking techniques within the very fabric of data platforms and offers an in-depth exploration of how enterprises can adopt a comprehensive approach toward effective data masking implementation by exploring different identification and anonymization techniques.
|
cs/9809037
|
David Eppstein
|
Nina Amenta, Marshall Bern, David Eppstein, Shang-Hua Teng
|
Regression Depth and Center Points
|
14 pages, 3 figures
|
Discrete Comput. Geom. 23(3):305-323, 2000
|
10.1007/PL00009502
| null |
cs.CG math.CO
| null |
We show that, for any set of n points in d dimensions, there exists a
hyperplane with regression depth at least ceiling(n/(d+1)), as had been
conjectured by Rousseeuw and Hubert. Dually, for any arrangement of n
hyperplanes in d dimensions there exists a point that cannot escape to infinity
without crossing at least ceiling(n/(d+1)) hyperplanes. We also apply our
approach to related questions on the existence of partitions of the data into
subsets such that a common plane has nonzero regression depth in each subset,
and to the computational complexity of regression depth problems.
|
[
{
"created": "Mon, 21 Sep 1998 21:55:49 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jul 1999 22:00:37 GMT",
"version": "v2"
}
] |
2010-01-21
|
[
[
"Amenta",
"Nina",
""
],
[
"Bern",
"Marshall",
""
],
[
"Eppstein",
"David",
""
],
[
"Teng",
"Shang-Hua",
""
]
] |
We show that, for any set of n points in d dimensions, there exists a hyperplane with regression depth at least ceiling(n/(d+1)), as had been conjectured by Rousseeuw and Hubert. Dually, for any arrangement of n hyperplanes in d dimensions there exists a point that cannot escape to infinity without crossing at least ceiling(n/(d+1)) hyperplanes. We also apply our approach to related questions on the existence of partitions of the data into subsets such that a common plane has nonzero regression depth in each subset, and to the computational complexity of regression depth problems.
|
2203.07511
|
Robert Wolfe
|
Robert Wolfe, Aylin Caliskan
|
Contrastive Visual Semantic Pretraining Magnifies the Semantics of
Natural Language Representations
|
To be published in ACL 2022
| null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We examine the effects of contrastive visual semantic pretraining by
comparing the geometry and semantic properties of contextualized English
language representations formed by GPT-2 and CLIP, a zero-shot multimodal image
classifier which adapts the GPT-2 architecture to encode image captions. We
find that contrastive visual semantic pretraining significantly mitigates the
anisotropy found in contextualized word embeddings from GPT-2, such that the
intra-layer self-similarity (mean pairwise cosine similarity) of CLIP word
embeddings is under .25 in all layers, compared to greater than .95 in the top
layer of GPT-2. CLIP word embeddings outperform GPT-2 on word-level semantic
intrinsic evaluation tasks, and achieve a new corpus-based state of the art for
the RG65 evaluation, at .88. CLIP also forms fine-grained semantic
representations of sentences, and obtains Spearman's rho = .73 on the
SemEval-2017 Semantic Textual Similarity Benchmark with no fine-tuning,
compared to no greater than rho = .45 in any layer of GPT-2. Finally,
intra-layer self-similarity of CLIP sentence embeddings decreases as the layer
index increases, finishing at .25 in the top layer, while the self-similarity
of GPT-2 sentence embeddings formed using the EOS token increases
layer-over-layer and never falls below .97. Our results indicate that high
anisotropy is not an inevitable consequence of contextualization, and that
visual semantic pretraining is beneficial not only for ordering visual
representations, but also for encoding useful semantic representations of
language, both on the word level and the sentence level.
|
[
{
"created": "Mon, 14 Mar 2022 21:42:13 GMT",
"version": "v1"
}
] |
2022-03-16
|
[
[
"Wolfe",
"Robert",
""
],
[
"Caliskan",
"Aylin",
""
]
] |
We examine the effects of contrastive visual semantic pretraining by comparing the geometry and semantic properties of contextualized English language representations formed by GPT-2 and CLIP, a zero-shot multimodal image classifier which adapts the GPT-2 architecture to encode image captions. We find that contrastive visual semantic pretraining significantly mitigates the anisotropy found in contextualized word embeddings from GPT-2, such that the intra-layer self-similarity (mean pairwise cosine similarity) of CLIP word embeddings is under .25 in all layers, compared to greater than .95 in the top layer of GPT-2. CLIP word embeddings outperform GPT-2 on word-level semantic intrinsic evaluation tasks, and achieve a new corpus-based state of the art for the RG65 evaluation, at .88. CLIP also forms fine-grained semantic representations of sentences, and obtains Spearman's rho = .73 on the SemEval-2017 Semantic Textual Similarity Benchmark with no fine-tuning, compared to no greater than rho = .45 in any layer of GPT-2. Finally, intra-layer self-similarity of CLIP sentence embeddings decreases as the layer index increases, finishing at .25 in the top layer, while the self-similarity of GPT-2 sentence embeddings formed using the EOS token increases layer-over-layer and never falls below .97. Our results indicate that high anisotropy is not an inevitable consequence of contextualization, and that visual semantic pretraining is beneficial not only for ordering visual representations, but also for encoding useful semantic representations of language, both on the word level and the sentence level.
|
2305.13991
|
Alessandro De Palma
|
Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan
Kumar, Robert Stanforth, Alessio Lomuscio
|
Expressive Losses for Verified Robustness via Convex Combinations
|
ICLR 2024
| null | null | null |
cs.LG cs.CR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to train networks for verified adversarial robustness, it is common
to over-approximate the worst-case loss over perturbation regions, resulting in
networks that attain verifiability at the expense of standard performance. As
shown in recent work, better trade-offs between accuracy and robustness can be
obtained by carefully coupling adversarial training with over-approximations.
We hypothesize that the expressivity of a loss function, which we formalize as
the ability to span a range of trade-offs between lower and upper bounds to the
worst-case loss through a single parameter (the over-approximation
coefficient), is key to attaining state-of-the-art performance. To support our
hypothesis, we show that trivial expressive losses, obtained via convex
combinations between adversarial attacks and IBP bounds, yield state-of-the-art
results across a variety of settings in spite of their conceptual simplicity.
We provide a detailed analysis of the relationship between the
over-approximation coefficient and performance profiles across different
expressive losses, showing that, while expressivity is essential, better
approximations of the worst-case loss are not necessarily linked to superior
robustness-accuracy trade-offs.
|
[
{
"created": "Tue, 23 May 2023 12:20:29 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Mar 2024 16:20:50 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Mar 2024 14:35:21 GMT",
"version": "v3"
}
] |
2024-03-19
|
[
[
"De Palma",
"Alessandro",
""
],
[
"Bunel",
"Rudy",
""
],
[
"Dvijotham",
"Krishnamurthy",
""
],
[
"Kumar",
"M. Pawan",
""
],
[
"Stanforth",
"Robert",
""
],
[
"Lomuscio",
"Alessio",
""
]
] |
In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance. As shown in recent work, better trade-offs between accuracy and robustness can be obtained by carefully coupling adversarial training with over-approximations. We hypothesize that the expressivity of a loss function, which we formalize as the ability to span a range of trade-offs between lower and upper bounds to the worst-case loss through a single parameter (the over-approximation coefficient), is key to attaining state-of-the-art performance. To support our hypothesis, we show that trivial expressive losses, obtained via convex combinations between adversarial attacks and IBP bounds, yield state-of-the-art results across a variety of settings in spite of their conceptual simplicity. We provide a detailed analysis of the relationship between the over-approximation coefficient and performance profiles across different expressive losses, showing that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
|
2210.02235
|
Jialing Liao
|
Jialing Liao, Zheng Chen, and Erik G. Larsson
|
Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations
|
8 pages, 4 figures, Allerton 2022
| null | null | null |
cs.LG cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider privacy aspects of wireless federated learning
(FL) with Over-the-Air (OtA) transmission of gradient updates from multiple
users/agents to an edge server. By exploiting the waveform superposition
property of multiple access channels, OtA FL enables the users to transmit
their updates simultaneously with linear processing techniques, which improves
resource efficiency. However, this setting is vulnerable to privacy leakage
since an adversary node can directly overhear the uncoded messages. Traditional
perturbation-based methods provide privacy protection while sacrificing the
training accuracy due to the reduced signal-to-noise ratio. In this work, we
aim at minimizing privacy leakage to the adversary and the degradation of model
accuracy at the edge server at the same time. More explicitly, spatially
correlated perturbations are added to the gradient vectors at the users before
transmission. Using the zero-sum property of the correlated perturbations, the
side effect of the added perturbation on the aggregated gradients at the edge
server can be minimized. Meanwhile, the added perturbation will not be
canceled out at the adversary, which prevents privacy leakage. Theoretical
analysis of the perturbation covariance matrix, differential privacy, and model
convergence is provided, based on which an optimization problem is formulated
to jointly design the covariance matrix and the power scaling factor to balance
between privacy protection and convergence performance. Simulation results
validate that the correlated perturbation approach provides a strong defense
while guaranteeing high learning accuracy.
|
[
{
"created": "Wed, 5 Oct 2022 13:13:35 GMT",
"version": "v1"
}
] |
2022-10-12
|
[
[
"Liao",
"Jialing",
""
],
[
"Chen",
"Zheng",
""
],
[
"Larsson",
"Erik G.",
""
]
] |
In this paper, we consider privacy aspects of wireless federated learning (FL) with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server. By exploiting the waveform superposition property of multiple access channels, OtA FL enables the users to transmit their updates simultaneously with linear processing techniques, which improves resource efficiency. However, this setting is vulnerable to privacy leakage since an adversary node can directly overhear the uncoded messages. Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy due to the reduced signal-to-noise ratio. In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server at the same time. More explicitly, spatially correlated perturbations are added to the gradient vectors at the users before transmission. Using the zero-sum property of the correlated perturbations, the side effect of the added perturbation on the aggregated gradients at the edge server can be minimized. Meanwhile, the added perturbation will not be canceled out at the adversary, which prevents privacy leakage. Theoretical analysis of the perturbation covariance matrix, differential privacy, and model convergence is provided, based on which an optimization problem is formulated to jointly design the covariance matrix and the power scaling factor to balance between privacy protection and convergence performance. Simulation results validate that the correlated perturbation approach provides a strong defense while guaranteeing high learning accuracy.
|
2404.13340
|
Kefan Li
|
Kefan Li, Yuan Yuan
|
Large Language Models as Test Case Generators: Performance Evaluation
and Enhancement
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code generation with Large Language Models (LLMs) has been extensively
studied and achieved remarkable progress. As a complementary aspect to code
generation, test case generation is of crucial importance in ensuring the
quality and reliability of code. However, using LLMs as test case generators
has been much less explored. Current research along this line primarily focuses
on enhancing code generation with assistance from test cases generated by LLMs,
while the performance of LLMs in test case generation alone has not been
comprehensively examined. To bridge this gap, we conduct extensive experiments
to study how well LLMs can generate high-quality test cases. We find that as
the problem difficulty increases, state-of-the-art LLMs struggle to generate
correct test cases, largely due to their inherent limitations in computation
and reasoning. To mitigate this issue, we further propose a multi-agent
framework called \emph{TestChain} that decouples the generation of test inputs
and test outputs. Notably, TestChain uses a ReAct format conversation chain for
LLMs to interact with a Python interpreter in order to provide more accurate
test outputs. Our results indicate that TestChain outperforms the baseline by a
large margin. Particularly, in terms of the accuracy of test cases, TestChain
using GPT-4 as the backbone achieves a 13.84\% improvement over the baseline on
the LeetCode-hard dataset.
|
[
{
"created": "Sat, 20 Apr 2024 10:27:01 GMT",
"version": "v1"
}
] |
2024-04-23
|
[
[
"Li",
"Kefan",
""
],
[
"Yuan",
"Yuan",
""
]
] |
Code generation with Large Language Models (LLMs) has been extensively studied and achieved remarkable progress. As a complementary aspect to code generation, test case generation is of crucial importance in ensuring the quality and reliability of code. However, using LLMs as test case generators has been much less explored. Current research along this line primarily focuses on enhancing code generation with assistance from test cases generated by LLMs, while the performance of LLMs in test case generation alone has not been comprehensively examined. To bridge this gap, we conduct extensive experiments to study how well LLMs can generate high-quality test cases. We find that as the problem difficulty increases, state-of-the-art LLMs struggle to generate correct test cases, largely due to their inherent limitations in computation and reasoning. To mitigate this issue, we further propose a multi-agent framework called \emph{TestChain} that decouples the generation of test inputs and test outputs. Notably, TestChain uses a ReAct format conversation chain for LLMs to interact with a Python interpreter in order to provide more accurate test outputs. Our results indicate that TestChain outperforms the baseline by a large margin. Particularly, in terms of the accuracy of test cases, TestChain using GPT-4 as the backbone achieves a 13.84\% improvement over the baseline on the LeetCode-hard dataset.
|
1811.01544
|
Myoungsoo Jung
|
Donghyun Gouk, Miryeong Kwon, Jie Zhang, Sungjoon Koh, Wonil Choi, Nam
Sung Kim, Mahmut Kandemir and Myoungsoo Jung
|
Amber: Enabling Precise Full-System Simulation with Detailed Modeling of
All SSD Resources
|
This paper has been accepted at the 51st Annual IEEE/ACM
International Symposium on Microarchitecture (MICRO '51), 2018. This material
is presented to ensure timely dissemination of scholarly and technical work
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
SSDs have become a major storage component in modern memory hierarchies, and
SSD research demands future simulation-based studies that integrate SSD
subsystems into a full-system environment. However, several challenges exist in
modeling SSDs under full-system simulation; SSDs are built upon their own
complete system and architecture, which employ all necessary hardware, such as
CPUs, DRAM and an interconnect network. In addition to these hardware
components, SSDs also require multiple device controllers, internal caches and
software modules that respect a wide spectrum of storage interfaces and
protocols. All of this SSD hardware and software is necessary to realize
storage subsystems within a full-system environment that can operate in
parallel with the host system. In this work, we introduce a new SSD simulation
framework, SimpleSSD 2.0, namely Amber, that models embedded CPU cores, DRAMs,
and various flash technologies (within an SSD), and operates under a
full-system simulation environment by enabling data transfer emulation. Amber
also includes a full firmware stack, including DRAM cache logic and flash
firmware such as the FTL and HIL, and obeys diverse standard protocols by
revising the host DMA engines and system buses of all functional and timing CPU
models of a popular full-system simulator (gem5). The proposed simulator can
capture the details of the dynamic performance and power of embedded cores,
DRAMs, firmware and flash under the execution of various OSes and hardware
platforms. Using Amber, we characterize several system-level challenges by
simulating different types of full systems, such as mobile devices and
general-purpose computers, and offer comprehensive analyses by comparing
passive storage and active storage architectures.
|
[
{
"created": "Mon, 5 Nov 2018 07:59:34 GMT",
"version": "v1"
}
] |
2018-11-06
|
[
[
"Gouk",
"Donghyun",
""
],
[
"Kwon",
"Miryeong",
""
],
[
"Zhang",
"Jie",
""
],
[
"Koh",
"Sungjoon",
""
],
[
"Choi",
"Wonil",
""
],
[
"Kim",
"Nam Sung",
""
],
[
"Kandemir",
"Mahmut",
""
],
[
"Jung",
"Myoungsoo",
""
]
] |
SSDs have become a major storage component in modern memory hierarchies, and SSD research demands future simulation-based studies that integrate SSD subsystems into a full-system environment. However, several challenges exist in modeling SSDs under full-system simulation; SSDs are built upon their own complete system and architecture, which employ all necessary hardware, such as CPUs, DRAM and an interconnect network. In addition to these hardware components, SSDs also require multiple device controllers, internal caches and software modules that respect a wide spectrum of storage interfaces and protocols. All of this SSD hardware and software is necessary to realize storage subsystems within a full-system environment that can operate in parallel with the host system. In this work, we introduce a new SSD simulation framework, SimpleSSD 2.0, namely Amber, that models embedded CPU cores, DRAMs, and various flash technologies (within an SSD), and operates under a full-system simulation environment by enabling data transfer emulation. Amber also includes a full firmware stack, including DRAM cache logic and flash firmware such as the FTL and HIL, and obeys diverse standard protocols by revising the host DMA engines and system buses of all functional and timing CPU models of a popular full-system simulator (gem5). The proposed simulator can capture the details of the dynamic performance and power of embedded cores, DRAMs, firmware and flash under the execution of various OSes and hardware platforms. Using Amber, we characterize several system-level challenges by simulating different types of full systems, such as mobile devices and general-purpose computers, and offer comprehensive analyses by comparing passive storage and active storage architectures.
|
1910.03892
|
Daan de Geus
|
Daan de Geus, Panagiotis Meletis, Gijs Dubbelman
|
Fast Panoptic Segmentation Network
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an end-to-end network for fast panoptic
segmentation. This network, called Fast Panoptic Segmentation Network (FPSNet),
does not require computationally costly instance mask predictions or merging
heuristics. This is achieved by casting the panoptic task into a custom dense
pixel-wise classification task, which assigns a class label or an instance id
to each pixel. We evaluate FPSNet on the Cityscapes and Pascal VOC datasets,
and find that FPSNet is faster than existing panoptic segmentation methods,
while achieving better or similar panoptic segmentation performance. On the
Cityscapes validation set, we achieve a Panoptic Quality score of 55.1%, at
prediction times of 114 milliseconds for images with a resolution of 1024x2048
pixels. For lower resolutions of the Cityscapes dataset and for the Pascal VOC
dataset, FPSNet runs at 22 and 35 frames per second, respectively.
|
[
{
"created": "Wed, 9 Oct 2019 10:41:28 GMT",
"version": "v1"
}
] |
2019-10-10
|
[
[
"de Geus",
"Daan",
""
],
[
"Meletis",
"Panagiotis",
""
],
[
"Dubbelman",
"Gijs",
""
]
] |
In this work, we present an end-to-end network for fast panoptic segmentation. This network, called Fast Panoptic Segmentation Network (FPSNet), does not require computationally costly instance mask predictions or merging heuristics. This is achieved by casting the panoptic task into a custom dense pixel-wise classification task, which assigns a class label or an instance id to each pixel. We evaluate FPSNet on the Cityscapes and Pascal VOC datasets, and find that FPSNet is faster than existing panoptic segmentation methods, while achieving better or similar panoptic segmentation performance. On the Cityscapes validation set, we achieve a Panoptic Quality score of 55.1%, at prediction times of 114 milliseconds for images with a resolution of 1024x2048 pixels. For lower resolutions of the Cityscapes dataset and for the Pascal VOC dataset, FPSNet runs at 22 and 35 frames per second, respectively.
|
1508.04278
|
Fabian Fuchs
|
Fabian Fuchs, Matthias Wolf
|
On the Distributed Computation of Fractional Connected Dominating Set
Packings
| null | null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most fundamental problems in wireless networks is to achieve high
throughput. Fractional Connected Dominating Set (FCDS) Packings can achieve a
throughput of ${\Theta}(k/\log n)$ messages for networks with node connectivity
$k$, which is optimal regarding routing-based message transmission. FCDS were
proposed by Censor-Hillel \emph{et al.} [SODA'14,PODC'14] and are a natural
generalization to Connected Dominating Sets (CDS), allowing each node to
participate with a fraction of its weight in multiple FCDS. Thus, $\Omega(k)$
co-existing transmission backbones are established, taking full advantage of
the network's connectivity. We propose a modified distributed algorithm that
improves upon previous algorithms for $k\Delta \in o(\min\{\frac{n \log n}{k}
,D,\sqrt{n \log n} \log^* n\}\log n)$, where $\Delta$ is the maximum node
degree, $D$ the diameter and $n$ the number of nodes in the network. We achieve
this by explicitly computing connections between tentative dominating sets.
|
[
{
"created": "Tue, 18 Aug 2015 11:28:39 GMT",
"version": "v1"
}
] |
2015-08-19
|
[
[
"Fuchs",
"Fabian",
""
],
[
"Wolf",
"Matthias",
""
]
] |
One of the most fundamental problems in wireless networks is to achieve high throughput. Fractional Connected Dominating Set (FCDS) Packings can achieve a throughput of ${\Theta}(k/\log n)$ messages for networks with node connectivity $k$, which is optimal regarding routing-based message transmission. FCDS were proposed by Censor-Hillel \emph{et al.} [SODA'14,PODC'14] and are a natural generalization to Connected Dominating Sets (CDS), allowing each node to participate with a fraction of its weight in multiple FCDS. Thus, $\Omega(k)$ co-existing transmission backbones are established, taking full advantage of the network's connectivity. We propose a modified distributed algorithm that improves upon previous algorithms for $k\Delta \in o(\min\{\frac{n \log n}{k} ,D,\sqrt{n \log n} \log^* n\}\log n)$, where $\Delta$ is the maximum node degree, $D$ the diameter and $n$ the number of nodes in the network. We achieve this by explicitly computing connections between tentative dominating sets.
|
1907.08865
|
Marc Hellmuth
|
Marc Hellmuth, Manuela Gei{\ss} and Peter F. Stadler
|
Complexity of Modification Problems for Reciprocal Best Match Graphs
| null | null | null | null |
cs.CC cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reciprocal best match graphs (RBMGs) are vertex colored graphs whose vertices
represent genes and the colors the species where the genes reside. Edges
identify pairs of genes that are most closely related with respect to an
underlying evolutionary tree. In practical applications this tree is unknown
and the edges of the RBMGs are inferred by quantifying sequence similarity. Due
to noise in the data, these empirically determined graphs in general violate
the condition of being a ``biologically feasible'' RBMG. Therefore, it is of
practical interest in computational biology to correct the initial estimate.
Here we consider deletion (remove at most $k$ edges) and editing (add or delete
at most $k$ edges) problems. We show that the decision version of the deletion
and editing problem to obtain RBMGs from vertex colored graphs is NP-hard.
Using known results for the so-called bicluster editing, we show that the RBMG
editing problem for $2$-colored graphs is fixed-parameter tractable.
A restricted class of RBMGs appears in the context of orthology detection.
These are cographs with a specific type of vertex coloring known as
hierarchical coloring. We show that the decision problem of modifying a
vertex-colored graph (either by edge-deletion or editing) into an RBMG with
cograph structure or, equivalently, to a hierarchically colored cograph is
NP-complete.
|
[
{
"created": "Sat, 20 Jul 2019 20:22:08 GMT",
"version": "v1"
}
] |
2019-07-23
|
[
[
"Hellmuth",
"Marc",
""
],
[
"Geiß",
"Manuela",
""
],
[
"Stadler",
"Peter F.",
""
]
] |
Reciprocal best match graphs (RBMGs) are vertex colored graphs whose vertices represent genes and the colors the species where the genes reside. Edges identify pairs of genes that are most closely related with respect to an underlying evolutionary tree. In practical applications this tree is unknown and the edges of the RBMGs are inferred by quantifying sequence similarity. Due to noise in the data, these empirically determined graphs in general violate the condition of being a ``biologically feasible'' RBMG. Therefore, it is of practical interest in computational biology to correct the initial estimate. Here we consider deletion (remove at most $k$ edges) and editing (add or delete at most $k$ edges) problems. We show that the decision version of the deletion and editing problem to obtain RBMGs from vertex colored graphs is NP-hard. Using known results for the so-called bicluster editing, we show that the RBMG editing problem for $2$-colored graphs is fixed-parameter tractable. A restricted class of RBMGs appears in the context of orthology detection. These are cographs with a specific type of vertex coloring known as hierarchical coloring. We show that the decision problem of modifying a vertex-colored graph (either by edge-deletion or editing) into an RBMG with cograph structure or, equivalently, to a hierarchically colored cograph is NP-complete.
|
1807.04723
|
Iulian Vlad Serban
|
Iulian Vlad Serban, Chinnadhurai Sankar, Michael Pieper, Joelle
Pineau, Yoshua Bengio
|
The Bottleneck Simulator: A Model-based Deep Reinforcement Learning
Approach
|
26 pages, 2 figures, 4 tables
| null | null | null |
cs.LG cs.AI cs.CL cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep reinforcement learning has recently shown many impressive successes.
However, one major obstacle towards applying such methods to real-world
problems is their lack of data-efficiency. To this end, we propose the
Bottleneck Simulator: a model-based reinforcement learning method which
combines a learned, factorized transition model of the environment with rollout
simulations to learn an effective policy from few examples. The learned
transition model employs an abstract, discrete (bottleneck) state, which
increases sample efficiency by reducing the number of model parameters and by
exploiting structural properties of the environment. We provide a mathematical
analysis of the Bottleneck Simulator in terms of fixed points of the learned
policy, which reveals how performance is affected by four distinct sources of
error: an error related to the abstract space structure, an error related to
the transition model estimation variance, an error related to the transition
model estimation bias, and an error related to the transition model class bias.
Finally, we evaluate the Bottleneck Simulator on two natural language
processing tasks: a text adventure game and a real-world, complex dialogue
response selection task. On both tasks, the Bottleneck Simulator yields
excellent performance beating competing approaches.
|
[
{
"created": "Thu, 12 Jul 2018 16:59:28 GMT",
"version": "v1"
}
] |
2018-07-13
|
[
[
"Serban",
"Iulian Vlad",
""
],
[
"Sankar",
"Chinnadhurai",
""
],
[
"Pieper",
"Michael",
""
],
[
"Pineau",
"Joelle",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
Deep reinforcement learning has recently shown many impressive successes. However, one major obstacle towards applying such methods to real-world problems is their lack of data-efficiency. To this end, we propose the Bottleneck Simulator: a model-based reinforcement learning method which combines a learned, factorized transition model of the environment with rollout simulations to learn an effective policy from few examples. The learned transition model employs an abstract, discrete (bottleneck) state, which increases sample efficiency by reducing the number of model parameters and by exploiting structural properties of the environment. We provide a mathematical analysis of the Bottleneck Simulator in terms of fixed points of the learned policy, which reveals how performance is affected by four distinct sources of error: an error related to the abstract space structure, an error related to the transition model estimation variance, an error related to the transition model estimation bias, and an error related to the transition model class bias. Finally, we evaluate the Bottleneck Simulator on two natural language processing tasks: a text adventure game and a real-world, complex dialogue response selection task. On both tasks, the Bottleneck Simulator yields excellent performance beating competing approaches.
|
1509.06361
|
Supartha Podder
|
Raghav Kulkarni, Supartha Podder
|
Quantum Query Complexity of Subgraph Isomorphism and Homomorphism
|
16 pages, 2 figures
| null | null | null |
cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $H$ be a fixed graph on $n$ vertices. Let $f_H(G) = 1$ iff the input
graph $G$ on $n$ vertices contains $H$ as a (not necessarily induced) subgraph.
Let $\alpha_H$ denote the cardinality of a maximum independent set of $H$. In
this paper we show:
\[Q(f_H) = \Omega\left(\sqrt{\alpha_H \cdot n}\right),\] where $Q(f_H)$
denotes the quantum query complexity of $f_H$.
As a consequence we obtain lower bounds for $Q(f_H)$ in terms of several
other parameters of $H$ such as the average degree, minimum vertex cover,
chromatic number, and the critical probability.
We also use the above bound to show that $Q(f_H) = \Omega(n^{3/4})$ for any
$H$, improving on the previously best known bound of $\Omega(n^{2/3})$. Until
very recently, it was believed that the quantum query complexity is at least
square root of the randomized one. Our $\Omega(n^{3/4})$ bound for $Q(f_H)$
matches the square root of the current best known bound for the randomized
query complexity of $f_H$, which is $\Omega(n^{3/2})$ due to Gr\"oger.
Interestingly, the randomized bound of $\Omega(\alpha_H \cdot n)$ for $f_H$
still remains open.
We also study the Subgraph Homomorphism Problem, denoted by $f_{[H]}$, and
show that $Q(f_{[H]}) = \Omega(n)$.
Finally, we extend our results to $3$-uniform hypergraphs. In particular,
we show an $\Omega(n^{4/5})$ bound for quantum query complexity of the Subgraph
Isomorphism, improving on the previously known $\Omega(n^{3/4})$ bound. For the
Subgraph Homomorphism, we obtain an $\Omega(n^{3/2})$ bound for the same.
|
[
{
"created": "Mon, 21 Sep 2015 19:54:51 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Sep 2015 02:53:42 GMT",
"version": "v2"
}
] |
2015-09-23
|
[
[
"Kulkarni",
"Raghav",
""
],
[
"Podder",
"Supartha",
""
]
] |
Let $H$ be a fixed graph on $n$ vertices. Let $f_H(G) = 1$ iff the input graph $G$ on $n$ vertices contains $H$ as a (not necessarily induced) subgraph. Let $\alpha_H$ denote the cardinality of a maximum independent set of $H$. In this paper we show: \[Q(f_H) = \Omega\left(\sqrt{\alpha_H \cdot n}\right),\] where $Q(f_H)$ denotes the quantum query complexity of $f_H$. As a consequence we obtain lower bounds for $Q(f_H)$ in terms of several other parameters of $H$ such as the average degree, minimum vertex cover, chromatic number, and the critical probability. We also use the above bound to show that $Q(f_H) = \Omega(n^{3/4})$ for any $H$, improving on the previously best known bound of $\Omega(n^{2/3})$. Until very recently, it was believed that the quantum query complexity is at least square root of the randomized one. Our $\Omega(n^{3/4})$ bound for $Q(f_H)$ matches the square root of the current best known bound for the randomized query complexity of $f_H$, which is $\Omega(n^{3/2})$ due to Gr\"oger. Interestingly, the randomized bound of $\Omega(\alpha_H \cdot n)$ for $f_H$ still remains open. We also study the Subgraph Homomorphism Problem, denoted by $f_{[H]}$, and show that $Q(f_{[H]}) = \Omega(n)$. Finally, we extend our results to $3$-uniform hypergraphs. In particular, we show an $\Omega(n^{4/5})$ bound for quantum query complexity of the Subgraph Isomorphism, improving on the previously known $\Omega(n^{3/4})$ bound. For the Subgraph Homomorphism, we obtain an $\Omega(n^{3/2})$ bound for the same.
|
2311.11164
|
Eleftherios Tsonis
|
Eleftherios Tsonis, Paraskevi Tzouveli, Athanasios Voulodimos
|
Mitigating Exposure Bias in Discriminator Guided Diffusion Models
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion Models have demonstrated remarkable performance in image
generation. However, their demanding computational requirements for training
have prompted ongoing efforts to enhance the quality of generated images
through modifications in the sampling process. A recent approach, known as
Discriminator Guidance, seeks to bridge the gap between the model score and the
data score by incorporating an auxiliary term, derived from a discriminator
network. We show that despite significantly improving sample quality, this
technique has not resolved the persistent issue of Exposure Bias and we propose
SEDM-G++, which incorporates a modified sampling approach, combining
Discriminator Guidance and Epsilon Scaling. Our proposed approach outperforms
the current state-of-the-art, by achieving an FID score of 1.73 on the
unconditional CIFAR-10 dataset.
|
[
{
"created": "Sat, 18 Nov 2023 20:49:50 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Tsonis",
"Eleftherios",
""
],
[
"Tzouveli",
"Paraskevi",
""
],
[
"Voulodimos",
"Athanasios",
""
]
] |
Diffusion Models have demonstrated remarkable performance in image generation. However, their demanding computational requirements for training have prompted ongoing efforts to enhance the quality of generated images through modifications in the sampling process. A recent approach, known as Discriminator Guidance, seeks to bridge the gap between the model score and the data score by incorporating an auxiliary term, derived from a discriminator network. We show that despite significantly improving sample quality, this technique has not resolved the persistent issue of Exposure Bias and we propose SEDM-G++, which incorporates a modified sampling approach, combining Discriminator Guidance and Epsilon Scaling. Our proposed approach outperforms the current state-of-the-art, by achieving an FID score of 1.73 on the unconditional CIFAR-10 dataset.
|
1406.0062
|
Fahem Kebair fk
|
Fahem Kebair and Fr\'ed\'eric Serin
|
Towards a Multiagent Decision Support System for crisis Management
|
14 pages. arXiv admin note: text overlap with arXiv:0907.0499
|
J. Intelligent Systems 20(1): 47-60 (2011)
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crisis management is a complex problem currently raised by the scientific
community. Decision support systems are a suitable solution for such issues:
they can help emergency managers prevent and manage crises in emergency
situations. However, they must be flexible and adaptive enough to reliably
solve complex problems embedded in dynamic and unpredictable environments. The
approach we propose in this paper addresses this challenge. We present a
modelling of information for an emergency environment and an architecture of a
multiagent decision support system that processes this information in order to
prevent and manage the occurrence of a crisis in emergency situations. We
focus on the first level of the system mechanism, which is intended to
perceive and reflect the evolution of the current situation. The general
approach and experiments are presented here.
|
[
{
"created": "Sat, 31 May 2014 09:57:02 GMT",
"version": "v1"
}
] |
2014-06-03
|
[
[
"Kebair",
"Fahem",
""
],
[
"Serin",
"Frédéric",
""
]
] |
Crisis management is a complex problem currently raised by the scientific community. Decision support systems are a suitable solution for such issues: they can help emergency managers prevent and manage crises in emergency situations. However, they must be flexible and adaptive enough to reliably solve complex problems embedded in dynamic and unpredictable environments. The approach we propose in this paper addresses this challenge. We present a modelling of information for an emergency environment and an architecture of a multiagent decision support system that processes this information in order to prevent and manage the occurrence of a crisis in emergency situations. We focus on the first level of the system mechanism, which is intended to perceive and reflect the evolution of the current situation. The general approach and experiments are presented here.
|
2103.06641
|
Aaron Roth
|
Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi,
Luca Melis, Aaron Roth, Ankit Siva
|
Differentially Private Query Release Through Adaptive Projection
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose, implement, and evaluate a new algorithm for releasing answers to
very large numbers of statistical queries like $k$-way marginals, subject to
differential privacy. Our algorithm makes adaptive use of a continuous
relaxation of the Projection Mechanism, which answers queries on the private
dataset using simple perturbation, and then attempts to find the synthetic
dataset that most closely matches the noisy answers. We use a continuous
relaxation of the synthetic dataset domain which makes the projection loss
differentiable, and allows us to use efficient ML optimization techniques and
tooling. Rather than answering all queries up front, we make judicious use of
our privacy budget by iteratively and adaptively finding queries for which our
(relaxed) synthetic data has high error, and then repeating the projection. We
perform extensive experimental evaluations across a range of parameters and
datasets, and find that our method outperforms existing algorithms in many
cases, especially when the privacy budget is small or the query class is large.
|
[
{
"created": "Thu, 11 Mar 2021 12:43:18 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jun 2021 15:44:57 GMT",
"version": "v2"
}
] |
2021-06-24
|
[
[
"Aydore",
"Sergul",
""
],
[
"Brown",
"William",
""
],
[
"Kearns",
"Michael",
""
],
[
"Kenthapadi",
"Krishnaram",
""
],
[
"Melis",
"Luca",
""
],
[
"Roth",
"Aaron",
""
],
[
"Siva",
"Ankit",
""
]
] |
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy. Our algorithm makes adaptive use of a continuous relaxation of the Projection Mechanism, which answers queries on the private dataset using simple perturbation, and then attempts to find the synthetic dataset that most closely matches the noisy answers. We use a continuous relaxation of the synthetic dataset domain which makes the projection loss differentiable, and allows us to use efficient ML optimization techniques and tooling. Rather than answering all queries up front, we make judicious use of our privacy budget by iteratively and adaptively finding queries for which our (relaxed) synthetic data has high error, and then repeating the projection. We perform extensive experimental evaluations across a range of parameters and datasets, and find that our method outperforms existing algorithms in many cases, especially when the privacy budget is small or the query class is large.
|
1908.08908
|
Huynh Manh
|
Manh Huynh and Gita Alaghband
|
Trajectory Prediction by Coupling Scene-LSTM with Human Movement LSTM
|
To appear in ISVC 2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a novel human trajectory prediction system that incorporates the
scene information (Scene-LSTM) as well as individual pedestrian movement
(Pedestrian-LSTM) trained simultaneously within static crowded scenes. We
superimpose a two-level grid structure (grid cells and subgrids) on the scene
to encode spatial granularity plus common human movements. The Scene-LSTM
captures the commonly traveled paths that can be used to significantly
influence the accuracy of human trajectory prediction in local areas (i.e. grid
cells). We further design scene data filters, consisting of a hard filter and a
soft filter, to select the relevant scene information in a local region when
necessary and combine it with Pedestrian-LSTM for forecasting a pedestrian's
future locations. The experimental results on several publicly available
datasets demonstrate that our method outperforms related works and can produce
more accurate predicted trajectories in different scene contexts.
|
[
{
"created": "Fri, 23 Aug 2019 17:31:59 GMT",
"version": "v1"
}
] |
2019-08-26
|
[
[
"Huynh",
"Manh",
""
],
[
"Alaghband",
"Gita",
""
]
] |
We develop a novel human trajectory prediction system that incorporates the scene information (Scene-LSTM) as well as individual pedestrian movement (Pedestrian-LSTM) trained simultaneously within static crowded scenes. We superimpose a two-level grid structure (grid cells and subgrids) on the scene to encode spatial granularity plus common human movements. The Scene-LSTM captures the commonly traveled paths that can be used to significantly influence the accuracy of human trajectory prediction in local areas (i.e. grid cells). We further design scene data filters, consisting of a hard filter and a soft filter, to select the relevant scene information in a local region when necessary and combine it with Pedestrian-LSTM for forecasting a pedestrian's future locations. The experimental results on several publicly available datasets demonstrate that our method outperforms related works and can produce more accurate predicted trajectories in different scene contexts.
|
1711.07211
|
Ilias Diakonikolas
|
Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart
|
List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical
Gaussians
| null | null | null | null |
cs.DS cs.CC cs.IT cs.LG math.IT math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of list-decodable Gaussian mean estimation and the
related problem of learning mixtures of separated spherical Gaussians. We
develop a set of techniques that yield new efficient algorithms with
significantly improved guarantees for these problems.
{\bf List-Decodable Mean Estimation.} Fix any $d \in \mathbb{Z}_+$ and $0<
\alpha <1/2$. We design an algorithm with runtime $O
(\mathrm{poly}(n/\alpha)^{d})$ that outputs a list of $O(1/\alpha)$ many
candidate vectors such that with high probability one of the candidates is
within $\ell_2$-distance $O(\alpha^{-1/(2d)})$ from the true mean. The only
previous algorithm for this problem achieved error $\tilde O(\alpha^{-1/2})$
under second moment conditions. For $d = O(1/\epsilon)$, our algorithm runs in
polynomial time and achieves error $O(\alpha^{\epsilon})$. We also give a
Statistical Query lower bound suggesting that the complexity of our algorithm
is qualitatively close to best possible.
{\bf Learning Mixtures of Spherical Gaussians.} We give a learning algorithm
for mixtures of spherical Gaussians that succeeds under significantly weaker
separation assumptions compared to prior work. For the prototypical case of a
uniform mixture of $k$ identity covariance Gaussians we obtain: For any
$\epsilon>0$, if the pairwise separation between the means is at least
$\Omega(k^{\epsilon}+\sqrt{\log(1/\delta)})$, our algorithm learns the unknown
parameters within accuracy $\delta$ with sample complexity and running time
$\mathrm{poly} (n, 1/\delta, (k/\epsilon)^{1/\epsilon})$. The previously best
known polynomial time algorithm required separation at least $k^{1/4}
\mathrm{polylog}(k/\delta)$.
Our main technical contribution is a new technique, using degree-$d$
multivariate polynomials, to remove outliers from high-dimensional datasets
where the majority of the points are corrupted.
|
[
{
"created": "Mon, 20 Nov 2017 09:07:08 GMT",
"version": "v1"
}
] |
2017-11-21
|
[
[
"Diakonikolas",
"Ilias",
""
],
[
"Kane",
"Daniel M.",
""
],
[
"Stewart",
"Alistair",
""
]
] |
We study the problem of list-decodable Gaussian mean estimation and the related problem of learning mixtures of separated spherical Gaussians. We develop a set of techniques that yield new efficient algorithms with significantly improved guarantees for these problems. {\bf List-Decodable Mean Estimation.} Fix any $d \in \mathbb{Z}_+$ and $0< \alpha <1/2$. We design an algorithm with runtime $O (\mathrm{poly}(n/\alpha)^{d})$ that outputs a list of $O(1/\alpha)$ many candidate vectors such that with high probability one of the candidates is within $\ell_2$-distance $O(\alpha^{-1/(2d)})$ from the true mean. The only previous algorithm for this problem achieved error $\tilde O(\alpha^{-1/2})$ under second moment conditions. For $d = O(1/\epsilon)$, our algorithm runs in polynomial time and achieves error $O(\alpha^{\epsilon})$. We also give a Statistical Query lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible. {\bf Learning Mixtures of Spherical Gaussians.} We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform mixture of $k$ identity covariance Gaussians we obtain: For any $\epsilon>0$, if the pairwise separation between the means is at least $\Omega(k^{\epsilon}+\sqrt{\log(1/\delta)})$, our algorithm learns the unknown parameters within accuracy $\delta$ with sample complexity and running time $\mathrm{poly} (n, 1/\delta, (k/\epsilon)^{1/\epsilon})$. The previously best known polynomial time algorithm required separation at least $k^{1/4} \mathrm{polylog}(k/\delta)$. Our main technical contribution is a new technique, using degree-$d$ multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted.
|
2107.11536
|
Bingbing Rao
|
Bingbing Rao, Zixia Liu, Hong Zhang, Siyang Lu, Liqiang Wang
|
SODA: A Semantics-Aware Optimization Framework for Data-Intensive
Applications Using Hybrid Program Analysis
|
2021 IEEE International Conference on Cloud Computing
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In the era of data explosion, a growing number of data-intensive computing
frameworks, such as Apache Hadoop and Spark, have been proposed to handle the
massive volume of unstructured data in parallel. Since programming models
provided by these frameworks allow users to specify complex and diversified
user-defined functions (UDFs) with predefined operations, the grand challenge
of tuning up entire system performance arises if programmers do not fully
understand the semantics of code, data, and runtime systems. In this paper, we
design a holistic semantics-aware optimization for data-intensive applications
using hybrid program analysis (SODA) to assist programmers in tuning
performance issues. SODA is a two-phase framework: the offline phase is a static analysis
that analyzes code and performance profiling data from the online phase of
prior executions to generate a parameterized and instrumented application; the
online phase is a dynamic analysis that keeps track of the application's
execution and collects runtime information of data and system. Extensive
experimental results on four real-world Spark applications show that SODA can
run up to 60%, 10%, and 8% faster than the original implementation with the
three proposed optimization strategies, i.e., cache management, operation
reordering, and element pruning, respectively.
|
[
{
"created": "Sat, 24 Jul 2021 05:33:05 GMT",
"version": "v1"
}
] |
2021-07-27
|
[
[
"Rao",
"Bingbing",
""
],
[
"Liu",
"Zixia",
""
],
[
"Zhang",
"Hong",
""
],
[
"Lu",
"Siyang",
""
],
[
"Wang",
"Liqiang",
""
]
] |
In the era of data explosion, a growing number of data-intensive computing frameworks, such as Apache Hadoop and Spark, have been proposed to handle the massive volume of unstructured data in parallel. Since programming models provided by these frameworks allow users to specify complex and diversified user-defined functions (UDFs) with predefined operations, the grand challenge of tuning up entire system performance arises if programmers do not fully understand the semantics of code, data, and runtime systems. In this paper, we design a holistic semantics-aware optimization for data-intensive applications using hybrid program analysis (SODA) to assist programmers in tuning performance issues. SODA is a two-phase framework: the offline phase is a static analysis that analyzes code and performance profiling data from the online phase of prior executions to generate a parameterized and instrumented application; the online phase is a dynamic analysis that keeps track of the application's execution and collects runtime information of data and system. Extensive experimental results on four real-world Spark applications show that SODA can run up to 60%, 10%, and 8% faster than the original implementation with the three proposed optimization strategies, i.e., cache management, operation reordering, and element pruning, respectively.
|
2004.14983
|
Nora Hollenstein
|
Giuseppe Russo, Nora Hollenstein, Claudiu Musat, Ce Zhang
|
Control, Generate, Augment: A Scalable Framework for Multi-Attribute
Text Generation
|
Accepted at Findings of EMNLP 2020
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CGA, a conditional VAE architecture, to control, generate, and
augment text. CGA is able to generate natural English sentences controlling
multiple semantic and syntactic attributes by combining adversarial learning
with a context-aware loss and a cyclical word dropout routine. We demonstrate
the value of the individual model components in an ablation study. The
scalability of our approach is ensured through a single discriminator,
independently of the number of attributes. We show high quality, diversity and
attribute control in the generated sentences through a series of automatic and
human assessments. As the main application of our work, we test the potential
of this new NLG model in a data augmentation scenario. In a downstream NLP
task, the sentences generated by our CGA model show significant improvements
over a strong baseline, and a classification performance often comparable to
adding the same amount of additional real data.
|
[
{
"created": "Thu, 30 Apr 2020 17:31:16 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Oct 2020 12:23:16 GMT",
"version": "v2"
}
] |
2020-10-05
|
[
[
"Russo",
"Giuseppe",
""
],
[
"Hollenstein",
"Nora",
""
],
[
"Musat",
"Claudiu",
""
],
[
"Zhang",
"Ce",
""
]
] |
We introduce CGA, a conditional VAE architecture, to control, generate, and augment text. CGA is able to generate natural English sentences controlling multiple semantic and syntactic attributes by combining adversarial learning with a context-aware loss and a cyclical word dropout routine. We demonstrate the value of the individual model components in an ablation study. The scalability of our approach is ensured through a single discriminator, independently of the number of attributes. We show high quality, diversity and attribute control in the generated sentences through a series of automatic and human assessments. As the main application of our work, we test the potential of this new NLG model in a data augmentation scenario. In a downstream NLP task, the sentences generated by our CGA model show significant improvements over a strong baseline, and a classification performance often comparable to adding the same amount of additional real data.
|
2305.03378
|
Zenghao Chai
|
Zhengzhuo Xu and Zenghao Chai and Chengyin Xu and Chun Yuan and Haiqin
Yang
|
Towards Effective Collaborative Learning in Long-Tailed Recognition
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Real-world data usually suffers from severe class imbalance and long-tailed
distributions, where minority classes are significantly underrepresented
compared to the majority ones. Recent research prefers to utilize multi-expert
architectures to mitigate the model uncertainty on the minority, where
collaborative learning is employed to aggregate the knowledge of experts, i.e.,
online distillation. In this paper, we observe that the knowledge transfer
between experts is imbalanced in terms of class distribution, which results in
limited performance improvement of the minority classes. To address it, we
propose a re-weighted distillation loss by comparing two classifiers'
predictions, which are supervised by online distillation and label annotations,
respectively. We also emphasize that feature-level distillation will
significantly improve model performance and increase feature robustness.
Finally, we propose an Effective Collaborative Learning (ECL) framework that
integrates a contrastive proxy task branch to further improve feature quality.
Quantitative and qualitative experiments on four standard datasets demonstrate
that ECL achieves state-of-the-art performance and the detailed ablation
studies manifest the effectiveness of each component in ECL.
|
[
{
"created": "Fri, 5 May 2023 09:16:06 GMT",
"version": "v1"
}
] |
2023-05-08
|
[
[
"Xu",
"Zhengzhuo",
""
],
[
"Chai",
"Zenghao",
""
],
[
"Xu",
"Chengyin",
""
],
[
"Yuan",
"Chun",
""
],
[
"Yang",
"Haiqin",
""
]
] |
Real-world data usually suffers from severe class imbalance and long-tailed distributions, where minority classes are significantly underrepresented compared to the majority ones. Recent research prefers to utilize multi-expert architectures to mitigate the model uncertainty on the minority, where collaborative learning is employed to aggregate the knowledge of experts, i.e., online distillation. In this paper, we observe that the knowledge transfer between experts is imbalanced in terms of class distribution, which results in limited performance improvement of the minority classes. To address it, we propose a re-weighted distillation loss by comparing two classifiers' predictions, which are supervised by online distillation and label annotations, respectively. We also emphasize that feature-level distillation will significantly improve model performance and increase feature robustness. Finally, we propose an Effective Collaborative Learning (ECL) framework that integrates a contrastive proxy task branch to further improve feature quality. Quantitative and qualitative experiments on four standard datasets demonstrate that ECL achieves state-of-the-art performance and the detailed ablation studies manifest the effectiveness of each component in ECL.
|
2003.11184
|
Haiyang Xu
|
Haiyang Xu, Junwen Chen, Kun Han, Xiangang Li
|
Adversarial Multi-Binary Neural Network for Multi-class Classification
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-class text classification is one of the key problems in machine
learning and natural language processing. Emerging neural networks deal with
the problem using a multi-output softmax layer and achieve substantial
progress, but they do not explicitly learn the correlation among classes. In
this paper, we use a multi-task framework to address multi-class
classification, where a multi-class classifier and multiple binary classifiers
are trained together. Moreover, we employ adversarial training to distinguish
the class-specific features and the class-agnostic features. The model benefits
from better feature representation. We conduct experiments on two large-scale
multi-class text classification tasks and demonstrate that the proposed
architecture outperforms baseline approaches.
|
[
{
"created": "Wed, 25 Mar 2020 02:19:17 GMT",
"version": "v1"
}
] |
2020-03-26
|
[
[
"Xu",
"Haiyang",
""
],
[
"Chen",
"Junwen",
""
],
[
"Han",
"Kun",
""
],
[
"Li",
"Xiangang",
""
]
] |
Multi-class text classification is one of the key problems in machine learning and natural language processing. Emerging neural networks deal with the problem using a multi-output softmax layer and achieve substantial progress, but they do not explicitly learn the correlation among classes. In this paper, we use a multi-task framework to address multi-class classification, where a multi-class classifier and multiple binary classifiers are trained together. Moreover, we employ adversarial training to distinguish the class-specific features and the class-agnostic features. The model benefits from better feature representation. We conduct experiments on two large-scale multi-class text classification tasks and demonstrate that the proposed architecture outperforms baseline approaches.
|
2307.10558
|
Hai Wang
|
Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay
Srinivasan, Hongxia Jin
|
Instruction-following Evaluation through Verbalizer Manipulation
|
NAACL 2024 findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While instruction-tuned models have shown remarkable success in various
natural language processing tasks, accurately evaluating their ability to
follow instructions remains challenging. Existing benchmarks primarily focus on
common instructions that align well with what the model learned during
training. However, proficiency in responding to these instructions does not
necessarily imply strong ability in instruction following. In this paper, we
propose a novel instruction-following evaluation protocol called verbalizer
manipulation. It instructs the model to verbalize the task label with words
aligning with model priors to different extents, adopting verbalizers from
highly aligned (e.g., outputting ``positive'' for positive sentiment), to
minimally aligned (e.g., outputting ``negative'' for positive sentiment).
Verbalizer manipulation can be seamlessly integrated with any classification
benchmark to examine the model's reliance on priors and its ability to override
them to accurately follow the instructions. We conduct a comprehensive
evaluation of four major model families across nine datasets, employing twelve
sets of verbalizers for each of them. We observe that the instruction-following
abilities of models, across different families and scales, are significantly
distinguished by their performance on less natural verbalizers. Even the
strongest GPT-4 model struggles to perform better than random guessing on the
most challenging verbalizer, emphasizing the need for continued advancements to
improve their instruction-following abilities.
|
[
{
"created": "Thu, 20 Jul 2023 03:54:24 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 04:38:21 GMT",
"version": "v2"
}
] |
2024-04-03
|
[
[
"Li",
"Shiyang",
""
],
[
"Yan",
"Jun",
""
],
[
"Wang",
"Hai",
""
],
[
"Tang",
"Zheng",
""
],
[
"Ren",
"Xiang",
""
],
[
"Srinivasan",
"Vijay",
""
],
[
"Jin",
"Hongxia",
""
]
] |
While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that align well with what the model learned during training. However, proficiency in responding to these instructions does not necessarily imply strong ability in instruction following. In this paper, we propose a novel instruction-following evaluation protocol called verbalizer manipulation. It instructs the model to verbalize the task label with words aligning with model priors to different extents, adopting verbalizers from highly aligned (e.g., outputting ``positive'' for positive sentiment), to minimally aligned (e.g., outputting ``negative'' for positive sentiment). Verbalizer manipulation can be seamlessly integrated with any classification benchmark to examine the model's reliance on priors and its ability to override them to accurately follow the instructions. We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them. We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers. Even the strongest GPT-4 model struggles to perform better than random guessing on the most challenging verbalizer, emphasizing the need for continued advancements to improve their instruction-following abilities.
|
2211.11386
|
Satoshi Ikehata Mr.
|
Satoshi Ikehata
|
PS-Transformer: Learning Sparse Photometric Stereo Network using
Self-Attention Mechanism
|
BMVC2021. Code and Supplementary are available at
https://github.com/satoshi-ikehata/PS-Transformer-BMVC2021
|
BMVC. Vol. 2. No. 4. 2021
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing deep calibrated photometric stereo networks basically aggregate
observations under different lights based on the pre-defined operations such as
linear projection and max pooling. While they are effective with dense
capture, simple first-order operations often fail to capture the high-order
interactions among observations under a small number of different lights. To
tackle this issue, this paper presents a deep sparse calibrated photometric
stereo network named {\it PS-Transformer} which leverages the learnable
self-attention mechanism to properly capture the complex inter-image
interactions. PS-Transformer builds upon the dual-branch design to explore both
pixel-wise and image-wise features and individual feature is trained with the
intermediate surface normal supervision to maximize geometric feasibility. A
new synthetic dataset named CyclesPS+ is also presented with the comprehensive
analysis to successfully train the photometric stereo networks. Extensive
results on the publicly available benchmark datasets demonstrate that the
surface normal prediction accuracy of the proposed method significantly
outperforms other state-of-the-art algorithms with the same number of input
images and is even comparable to that of dense algorithms which input
10$\times$ larger number of images.
|
[
{
"created": "Mon, 21 Nov 2022 11:58:25 GMT",
"version": "v1"
}
] |
2022-11-22
|
[
[
"Ikehata",
"Satoshi",
""
]
] |
Existing deep calibrated photometric stereo networks basically aggregate observations under different lights based on the pre-defined operations such as linear projection and max pooling. While they are effective with dense capture, simple first-order operations often fail to capture the high-order interactions among observations under a small number of different lights. To tackle this issue, this paper presents a deep sparse calibrated photometric stereo network named {\it PS-Transformer} which leverages the learnable self-attention mechanism to properly capture the complex inter-image interactions. PS-Transformer builds upon the dual-branch design to explore both pixel-wise and image-wise features and individual feature is trained with the intermediate surface normal supervision to maximize geometric feasibility. A new synthetic dataset named CyclesPS+ is also presented with the comprehensive analysis to successfully train the photometric stereo networks. Extensive results on the publicly available benchmark datasets demonstrate that the surface normal prediction accuracy of the proposed method significantly outperforms other state-of-the-art algorithms with the same number of input images and is even comparable to that of dense algorithms which input 10$\times$ larger number of images.
|
1207.3027
|
Reza Khosravi-Farsani
|
Reza K. Farsani
|
Fundamental Limits of Communications in Interference Networks-Part II:
Information Flow in Degraded Networks
|
A table of contents is given
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this second part of our multi-part papers, the information flow in
degraded interference networks is studied. A full characterization of the
sum-rate capacity for the degraded networks with any possible configuration is
established. It is shown that a successive decoding scheme is sum-rate optimal
for these networks. Also, it is proved that the transmission of only a certain
subset of messages is sufficient to achieve the sum-rate capacity in such
networks. Algorithms are presented to determine this subset of messages
explicitly. According to these algorithms, the optimal strategy to achieve the
sum-rate capacity in degraded networks is that the transmitters try to send
information for the stronger receivers and, if possible, avoid sending the
messages with respect to the weaker receivers. The algorithms are easily
understood using our graphical illustrations for the achievability schemes
based on directed graphs. The sum-rate expression for the degraded networks is
then used to derive a unified outer bound on the sum-rate capacity of arbitrary
non-degraded networks. Several variations of the degraded networks are
identified for which the derived outer bound is sum-rate optimal. Specifically,
noisy interference regimes are derived for certain classes of
multi-user/multi-message interference networks. Also, for the first time,
network scenarios are identified where the incorporation of both successive
decoding and treating interference as noise achieves their sum-rate capacity.
Finally, by taking insight from our results for degraded networks, we establish
a unified outer bound on the entire capacity region of the general interference
networks. These outer bounds for a broad range of network scenarios are tighter
than the existing cut-set bound.
|
[
{
"created": "Thu, 12 Jul 2012 17:24:38 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Feb 2013 06:34:25 GMT",
"version": "v2"
}
] |
2013-02-18
|
[
[
"Farsani",
"Reza K.",
""
]
] |
In this second part of our multi-part papers, the information flow in degraded interference networks is studied. A full characterization of the sum-rate capacity for the degraded networks with any possible configuration is established. It is shown that a successive decoding scheme is sum-rate optimal for these networks. Also, it is proved that the transmission of only a certain subset of messages is sufficient to achieve the sum-rate capacity in such networks. Algorithms are presented to determine this subset of messages explicitly. According to these algorithms, the optimal strategy to achieve the sum-rate capacity in degraded networks is that the transmitters try to send information for the stronger receivers and, if possible, avoid sending the messages with respect to the weaker receivers. The algorithms are easily understood using our graphical illustrations for the achievability schemes based on directed graphs. The sum-rate expression for the degraded networks is then used to derive a unified outer bound on the sum-rate capacity of arbitrary non-degraded networks. Several variations of the degraded networks are identified for which the derived outer bound is sum-rate optimal. Specifically, noisy interference regimes are derived for certain classes of multi-user/multi-message interference networks. Also, for the first time, network scenarios are identified where the incorporation of both successive decoding and treating interference as noise achieves their sum-rate capacity. Finally, by taking insight from our results for degraded networks, we establish a unified outer bound on the entire capacity region of the general interference networks. These outer bounds for a broad range of network scenarios are tighter than the existing cut-set bound.
|
2202.10101
|
Lisa K\"uhnel
|
Lisa K\"uhnel, Alexander Schulz, Barbara Hammer and Juliane Fluck
|
BERT WEAVER: Using WEight AVERaging to enable lifelong learning for
transformer-based models in biomedical semantic search engines
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent developments in transfer learning have boosted the advancements in
natural language processing tasks. The performance is, however, dependent on
high-quality, manually annotated training data. Especially in the biomedical
domain, it has been shown that one training corpus is not enough to learn
generic models that are able to efficiently predict on new data. Therefore, in
order to be used in real world applications state-of-the-art models need the
ability of lifelong learning to improve performance as soon as new data are
available - without the need of re-training the whole model from scratch. We
present WEAVER, a simple, yet efficient post-processing method that infuses old
knowledge into the new model, thereby reducing catastrophic forgetting. We show
that applying WEAVER in a sequential manner results in similar word embedding
distributions as doing a combined training on all data at once, while being
computationally more efficient. Because there is no need of data sharing, the
presented method is also easily applicable to federated learning settings and
can for example be beneficial for the mining of electronic health records from
different clinics.
|
[
{
"created": "Mon, 21 Feb 2022 10:34:41 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2023 12:32:36 GMT",
"version": "v2"
},
{
"created": "Tue, 31 Oct 2023 15:36:12 GMT",
"version": "v3"
}
] |
2023-11-01
|
[
[
"Kühnel",
"Lisa",
""
],
[
"Schulz",
"Alexander",
""
],
[
"Hammer",
"Barbara",
""
],
[
"Fluck",
"Juliane",
""
]
] |
Recent developments in transfer learning have boosted the advancements in natural language processing tasks. The performance is, however, dependent on high-quality, manually annotated training data. Especially in the biomedical domain, it has been shown that one training corpus is not enough to learn generic models that are able to efficiently predict on new data. Therefore, in order to be used in real world applications state-of-the-art models need the ability of lifelong learning to improve performance as soon as new data are available - without the need of re-training the whole model from scratch. We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model, thereby reducing catastrophic forgetting. We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once, while being computationally more efficient. Because there is no need of data sharing, the presented method is also easily applicable to federated learning settings and can for example be beneficial for the mining of electronic health records from different clinics.
|
2407.10906
|
Peng Liang
|
Zengyang Li, Jiabao Ji, Peng Liang, Ran Mo, Hui Liu
|
An Exploratory Study on Just-in-Time Multi-Programming-Language Bug
Prediction
|
Preprint accepted for publication in Information and Software
Technology, 2024
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Context: An increasing number of software systems are written in multiple
programming languages (PLs), which are called multi-programming-language (MPL)
systems. MPL bugs (MPLBs) refer to bugs whose resolution involves multiple
PLs. Despite the high complexity of MPLB resolution, MPLB prediction methods
are lacking. Objective: This work aims to construct just-in-time (JIT) MPLB
prediction models with selected prediction metrics, analyze the significance of
the metrics, and then evaluate the performance of cross-project JIT MPLB
prediction. Method: We develop JIT MPLB prediction models with the selected
metrics using machine learning algorithms and evaluate the models in
within-project and cross-project contexts with our constructed dataset based on
18 Apache MPL projects. Results: Random Forest is appropriate for JIT MPLB
prediction. Changed LOC of all files, added LOC of all files, and the total
number of lines of all files of the project currently are the most crucial
metrics in JIT MPLB prediction. The prediction models can be simplified using a
few top-ranked metrics. Training on the dataset from multiple projects can
yield significantly higher AUC than training on the dataset from a single
project for cross-project JIT MPLB prediction. Conclusions: JIT MPLB prediction
models can be constructed with the selected set of metrics, which can be
reduced to build simplified JIT MPLB prediction models, and cross-project JIT
MPLB prediction is feasible.
|
[
{
"created": "Mon, 15 Jul 2024 17:06:18 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Li",
"Zengyang",
""
],
[
"Ji",
"Jiabao",
""
],
[
"Liang",
"Peng",
""
],
[
"Mo",
"Ran",
""
],
[
"Liu",
"Hui",
""
]
] |
Context: An increasing number of software systems are written in multiple programming languages (PLs), which are called multi-programming-language (MPL) systems. MPL bugs (MPLBs) refer to bugs whose resolution involves multiple PLs. Despite the high complexity of MPLB resolution, MPLB prediction methods are lacking. Objective: This work aims to construct just-in-time (JIT) MPLB prediction models with selected prediction metrics, analyze the significance of the metrics, and then evaluate the performance of cross-project JIT MPLB prediction. Method: We develop JIT MPLB prediction models with the selected metrics using machine learning algorithms and evaluate the models in within-project and cross-project contexts with our constructed dataset based on 18 Apache MPL projects. Results: Random Forest is appropriate for JIT MPLB prediction. Changed LOC of all files, added LOC of all files, and the total number of lines of all files of the project currently are the most crucial metrics in JIT MPLB prediction. The prediction models can be simplified using a few top-ranked metrics. Training on the dataset from multiple projects can yield significantly higher AUC than training on the dataset from a single project for cross-project JIT MPLB prediction. Conclusions: JIT MPLB prediction models can be constructed with the selected set of metrics, which can be reduced to build simplified JIT MPLB prediction models, and cross-project JIT MPLB prediction is feasible.
|
1309.5124
|
Brandon Oselio
|
Brandon Oselio and Alex Kulesza and Alfred O. Hero III
|
Multi-layer graph analysis for dynamic social networks
|
10 pages, 9 figures
| null |
10.1109/JSTSP.2014.2328312
| null |
cs.SI physics.soc-ph stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern social networks frequently encompass multiple distinct types of
connectivity information; for instance, explicitly acknowledged friend
relationships might complement behavioral measures that link users according to
their actions or interests. One way to represent these networks is as
multi-layer graphs, where each layer contains a unique set of edges over the
same underlying vertices (users). Edges in different layers typically have
related but distinct semantics; depending on the application multiple layers
might be used to reduce noise through averaging, to perform multifaceted
analyses, or a combination of the two. However, it is not obvious how to extend
standard graph analysis techniques to the multi-layer setting in a flexible
way. In this paper we develop latent variable models and methods for mining
multi-layer networks for connectivity patterns based on noisy data.
|
[
{
"created": "Fri, 20 Sep 2013 01:06:43 GMT",
"version": "v1"
},
{
"created": "Mon, 12 May 2014 02:53:41 GMT",
"version": "v2"
}
] |
2015-06-17
|
[
[
"Oselio",
"Brandon",
""
],
[
"Kulesza",
"Alex",
""
],
[
"Hero",
"Alfred O.",
"III"
]
] |
Modern social networks frequently encompass multiple distinct types of connectivity information; for instance, explicitly acknowledged friend relationships might complement behavioral measures that link users according to their actions or interests. One way to represent these networks is as multi-layer graphs, where each layer contains a unique set of edges over the same underlying vertices (users). Edges in different layers typically have related but distinct semantics; depending on the application multiple layers might be used to reduce noise through averaging, to perform multifaceted analyses, or a combination of the two. However, it is not obvious how to extend standard graph analysis techniques to the multi-layer setting in a flexible way. In this paper we develop latent variable models and methods for mining multi-layer networks for connectivity patterns based on noisy data.
|
1910.08102
|
Jiacheng Zhu
|
Jiacheng Zhu, Shenghao Qin, Wenshuo Wang, and Ding Zhao
|
Probabilistic Trajectory Prediction for Autonomous Vehicles with
Attentive Recurrent Neural Process
|
7 pages, 5 figures, submitted to ICRA 2020
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting surrounding vehicle behaviors is critical to autonomous vehicles
when negotiating in multi-vehicle interaction scenarios. Most existing
approaches require a tedious training process with large amounts of data and may
fail to capture the propagating uncertainty in interaction behaviors. The
multi-vehicle behaviors are assumed to be generated from a stochastic process.
This paper proposes an attentive recurrent neural process (ARNP) approach to
overcome the above limitations, which uses a neural process (NP) to learn a
distribution of multi-vehicle interaction behavior. Our proposed model inherits
the flexibility of neural networks while maintaining Bayesian probabilistic
characteristics. Constructed by incorporating NPs with recurrent neural
networks (RNNs), the ARNP model predicts the distribution of a target vehicle
trajectory conditioned on the observed long-term sequential data of all
surrounding vehicles. This approach is verified by learning and predicting
lane-changing trajectories in complex traffic scenarios. Experimental results
demonstrate that our proposed method outperforms previous counterparts in terms
of accuracy and uncertainty expressiveness. Moreover, the meta-learning
instinct of NPs enables our proposed ARNP model to capture global information
of all observations, thereby being able to adapt to new targets efficiently.
|
[
{
"created": "Thu, 17 Oct 2019 18:26:31 GMT",
"version": "v1"
}
] |
2019-10-21
|
[
[
"Zhu",
"Jiacheng",
""
],
[
"Qin",
"Shenghao",
""
],
[
"Wang",
"Wenshuo",
""
],
[
"Zhao",
"Ding",
""
]
] |
Predicting surrounding vehicle behaviors is critical to autonomous vehicles when negotiating in multi-vehicle interaction scenarios. Most existing approaches require a tedious training process with large amounts of data and may fail to capture the propagating uncertainty in interaction behaviors. The multi-vehicle behaviors are assumed to be generated from a stochastic process. This paper proposes an attentive recurrent neural process (ARNP) approach to overcome the above limitations, which uses a neural process (NP) to learn a distribution of multi-vehicle interaction behavior. Our proposed model inherits the flexibility of neural networks while maintaining Bayesian probabilistic characteristics. Constructed by incorporating NPs with recurrent neural networks (RNNs), the ARNP model predicts the distribution of a target vehicle trajectory conditioned on the observed long-term sequential data of all surrounding vehicles. This approach is verified by learning and predicting lane-changing trajectories in complex traffic scenarios. Experimental results demonstrate that our proposed method outperforms previous counterparts in terms of accuracy and uncertainty expressiveness. Moreover, the meta-learning instinct of NPs enables our proposed ARNP model to capture global information of all observations, thereby being able to adapt to new targets efficiently.
|
2212.08834
|
Ajoy Mondal Dr.
|
Ajoy Mondal, Rohit Saluja, and C. V. Jawahar
|
Towards Robust Handwritten Text Recognition with On-the-fly User
Participation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Long-term OCR services aim to provide high-quality output to their users at
competitive costs. It is essential to upgrade the models because of the complex
data loaded by the users. The service providers encourage the users who provide
data where the OCR model fails by rewarding them based on data complexity,
readability, and available budget. Hitherto, the OCR works include preparing
the models on standard datasets without considering the end-users. We propose a
strategy of consistently upgrading an existing Handwritten Hindi OCR model
three times on the dataset of 15 users. We fix the budget of 4 users for each
iteration. For the first iteration, the model directly trains on the dataset
from the first four users. For the remaining iterations, all remaining users
write a page each, which service providers later analyze to select the 4 (new) best
users based on the quality of predictions on the human-readable words. Selected
users write 23 more pages for upgrading the model. We upgrade the model with
Curriculum Learning (CL) on the data available in the current iteration and
compare the subset from previous iterations. The upgraded model is tested on a
held-out set of one page each from all 23 users. We provide insights into our
investigations on the effect of CL, user selection, and especially the data
from unseen writing styles. Our work can be used for long-term OCR services in
crowd-sourcing scenarios for the service providers and end users.
|
[
{
"created": "Sat, 17 Dec 2022 10:20:39 GMT",
"version": "v1"
}
] |
2022-12-20
|
[
[
"Mondal",
"Ajoy",
""
],
[
"Saluja",
"Rohit",
""
],
[
"Jawahar",
"C. V.",
""
]
] |
Long-term OCR services aim to provide high-quality output to their users at competitive costs. It is essential to upgrade the models because of the complex data loaded by the users. The service providers encourage the users who provide data where the OCR model fails by rewarding them based on data complexity, readability, and available budget. Hitherto, the OCR works include preparing the models on standard datasets without considering the end-users. We propose a strategy of consistently upgrading an existing Handwritten Hindi OCR model three times on the dataset of 15 users. We fix the budget of 4 users for each iteration. For the first iteration, the model directly trains on the dataset from the first four users. For the remaining iterations, all remaining users write a page each, which service providers later analyze to select the 4 (new) best users based on the quality of predictions on the human-readable words. Selected users write 23 more pages for upgrading the model. We upgrade the model with Curriculum Learning (CL) on the data available in the current iteration and compare the subset from previous iterations. The upgraded model is tested on a held-out set of one page each from all 23 users. We provide insights into our investigations on the effect of CL, user selection, and especially the data from unseen writing styles. Our work can be used for long-term OCR services in crowd-sourcing scenarios for the service providers and end users.
|
2008.06048
|
Jeff Ens Mr
|
Jeff Ens, Philippe Pasquier
|
MMM : Exploring Conditional Multi-Track Music Generation with the
Transformer
| null | null | null | null |
cs.SD cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
We propose the Multi-Track Music Machine (MMM), a generative system based on
the Transformer architecture that is capable of generating multi-track music.
In contrast to previous work, which represents musical material as a single
time-ordered sequence, where the musical events corresponding to different
tracks are interleaved, we create a time-ordered sequence of musical events for
each track and concatenate several tracks into a single sequence. This takes
advantage of the Transformer's attention-mechanism, which can adeptly handle
long-term dependencies. We explore how various representations can offer the
user a high degree of control at generation time, providing an interactive demo
that accommodates track-level and bar-level inpainting, and offers control over
track instrumentation and note density.
|
[
{
"created": "Thu, 13 Aug 2020 02:36:34 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Aug 2020 19:13:39 GMT",
"version": "v2"
}
] |
2020-08-24
|
[
[
"Ens",
"Jeff",
""
],
[
"Pasquier",
"Philippe",
""
]
] |
We propose the Multi-Track Music Machine (MMM), a generative system based on the Transformer architecture that is capable of generating multi-track music. In contrast to previous work, which represents musical material as a single time-ordered sequence, where the musical events corresponding to different tracks are interleaved, we create a time-ordered sequence of musical events for each track and concatenate several tracks into a single sequence. This takes advantage of the Transformer's attention-mechanism, which can adeptly handle long-term dependencies. We explore how various representations can offer the user a high degree of control at generation time, providing an interactive demo that accommodates track-level and bar-level inpainting, and offers control over track instrumentation and note density.
|
1704.07647
|
Ahmet Cetinkaya
|
Ahmet Cetinkaya, Hideaki Ishii, Tomohisa Hayakawa
|
Analysis of Stochastic Switched Systems with Application to Networked
Control Under Jamming Attacks
|
Change title of Section 3; Resize figures
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the stability problem for discrete-time stochastic switched
linear systems under the specific scenarios where information about the
switching patterns and the probability of switches are not available. Our
analysis focuses on the average number of times each mode becomes active in the
long run and, in particular, utilizes their lower- and upper-bounds. This setup
is motivated by cyber security issues for networked control systems in the
presence of packet losses due to malicious jamming attacks where the attacker's
strategy is not known a priori. We derive a sufficient condition for almost
sure asymptotic stability of the switched systems which can be examined by
solving a linear programming problem. Our approach exploits the dynamics of an
equivalent system that describes the evolution of the switched system's state
at every few steps; the stability analysis may become less conservative by
increasing the step size. The computational efficiency is further enhanced by
exploiting the structure in the stability analysis problem, and we introduce an
alternative linear programming problem that has fewer variables. We demonstrate
the efficacy of our results by analyzing networked control problems where
communication channels face random packet losses as well as jamming attacks.
|
[
{
"created": "Tue, 25 Apr 2017 11:54:32 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2017 07:03:14 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Feb 2018 03:31:15 GMT",
"version": "v3"
},
{
"created": "Fri, 20 Apr 2018 12:45:17 GMT",
"version": "v4"
}
] |
2018-04-23
|
[
[
"Cetinkaya",
"Ahmet",
""
],
[
"Ishii",
"Hideaki",
""
],
[
"Hayakawa",
"Tomohisa",
""
]
] |
We investigate the stability problem for discrete-time stochastic switched linear systems under the specific scenarios where information about the switching patterns and the probability of switches are not available. Our analysis focuses on the average number of times each mode becomes active in the long run and, in particular, utilizes their lower- and upper-bounds. This setup is motivated by cyber security issues for networked control systems in the presence of packet losses due to malicious jamming attacks where the attacker's strategy is not known a priori. We derive a sufficient condition for almost sure asymptotic stability of the switched systems which can be examined by solving a linear programming problem. Our approach exploits the dynamics of an equivalent system that describes the evolution of the switched system's state at every few steps; the stability analysis may become less conservative by increasing the step size. The computational efficiency is further enhanced by exploiting the structure in the stability analysis problem, and we introduce an alternative linear programming problem that has fewer variables. We demonstrate the efficacy of our results by analyzing networked control problems where communication channels face random packet losses as well as jamming attacks.
|
1210.1709
|
Ravi Murugesan
|
Ravi Murugesan
|
Promising outcomes of an online course in research writing at a Rwandan
university
| null |
European Science Editing, August 2012, 38(3), 60-64
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Researchers in developing countries often do not have access to
training on research writing. The purpose of this study was to test whether
researchers in Rwanda might complete and benefit from a pilot online course in
research writing. Methods: The pilot course was set up on Moodle, an
open-source online learning environment, and facilitated by the author. The
lessons and assignment were spread over six weeks, followed by a two-week
extension period. Twenty-eight faculty members of the National University of
Rwanda enrolled themselves in the course. Results: Twenty-five of the 28
learners completed the course. After the course, these learners expressed high
satisfaction, e.g., 24 of them felt that they were ready to write a research
paper for publication. Conclusion: The high completion rate (89%) is noteworthy
for two reasons: e-learning courses tend to have lower completion rates than
classroom courses, and 76% of the learners in the pilot course had not taken an
e-learning course before. This result and the positive feedback indicate that
online courses can benefit researchers in developing countries who may not have
access to classroom courses on research writing.
|
[
{
"created": "Fri, 5 Oct 2012 11:14:20 GMT",
"version": "v1"
}
] |
2012-10-08
|
[
[
"Murugesan",
"Ravi",
""
]
] |
Background: Researchers in developing countries often do not have access to training on research writing. The purpose of this study was to test whether researchers in Rwanda might complete and benefit from a pilot online course in research writing. Methods: The pilot course was set up on Moodle, an open-source online learning environment, and facilitated by the author. The lessons and assignment were spread over six weeks, followed by a two-week extension period. Twenty-eight faculty members of the National University of Rwanda enrolled themselves in the course. Results: Twenty-five of the 28 learners completed the course. After the course, these learners expressed high satisfaction, e.g., 24 of them felt that they were ready to write a research paper for publication. Conclusion: The high completion rate (89%) is noteworthy for two reasons: e-learning courses tend to have lower completion rates than classroom courses, and 76% of the learners in the pilot course had not taken an e-learning course before. This result and the positive feedback indicate that online courses can benefit researchers in developing countries who may not have access to classroom courses on research writing.
|
2008.00811
|
Leah Epstein
|
Janos Balogh and Leah Epstein and Asaf Levin
|
Truly asymptotic lower bounds for online vector bin packing
|
Submitted to SODA 2021
| null | null | null |
cs.DS cs.DM math.CO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we consider online vector bin packing. It is known that no
algorithm can have a competitive ratio of $o(d/\log^2 d)$ in the absolute
sense, though upper bounds for this problem were always shown in the asymptotic
sense. Since variants of bin packing are traditionally studied with respect to
the asymptotic measure and since the two measures are different, we focus on
the asymptotic measure and prove new lower bounds on the asymptotic competitive
ratio. The existing lower bounds prior to this work were much smaller than $3$
even for very large dimensions.
We significantly improve the best known lower bounds on the asymptotic
competitive ratio (and as a byproduct, on the absolute competitive ratio) for
online vector packing of vectors with $d \geq 3$ dimensions, for every such
dimension $d$. To obtain these results, we use several different constructions,
one of which is an adaptive construction showing a lower bound of
$\Omega(\sqrt{d})$. Our main result is that the lower bound of $\Omega(d/\log^2
d)$ on the competitive ratio holds also in the asymptotic sense. The last
result requires a careful adaptation of constructions for online coloring
rather than simple black-box reductions.
|
[
{
"created": "Mon, 3 Aug 2020 12:08:43 GMT",
"version": "v1"
}
] |
2020-08-04
|
[
[
"Balogh",
"Janos",
""
],
[
"Epstein",
"Leah",
""
],
[
"Levin",
"Asaf",
""
]
] |
In this work, we consider online vector bin packing. It is known that no algorithm can have a competitive ratio of $o(d/\log^2 d)$ in the absolute sense, though upper bounds for this problem were always shown in the asymptotic sense. Since variants of bin packing are traditionally studied with respect to the asymptotic measure and since the two measures are different, we focus on the asymptotic measure and prove new lower bounds on the asymptotic competitive ratio. The existing lower bounds prior to this work were much smaller than $3$ even for very large dimensions. We significantly improve the best known lower bounds on the asymptotic competitive ratio (and as a byproduct, on the absolute competitive ratio) for online vector packing of vectors with $d \geq 3$ dimensions, for every such dimension $d$. To obtain these results, we use several different constructions, one of which is an adaptive construction showing a lower bound of $\Omega(\sqrt{d})$. Our main result is that the lower bound of $\Omega(d/\log^2 d)$ on the competitive ratio holds also in the asymptotic sense. The last result requires a careful adaptation of constructions for online coloring rather than simple black-box reductions.
|
1205.5979
|
Elham Bahmani
|
Elham Bahmani and Ghosheh Abed Hodtani
|
Achievable Rate Regions for the Dirty Multiple Access Channel with
Partial Side Information at the Transmitters
|
5 pages, 3 figures, This paper was accepted at IEEE-ISIT2012
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we establish achievable rate regions for the multiple access
channel (MAC) with side information partially known (estimated or sensed
version) at the transmitters. Specifically, we extend the lattice strategies
used by Philosof-Zamir for the MAC with full side information at the
transmitters to the partially known case. We show that the sensed or estimated
side information reduces the rate regions, just as occurs for the Costa
Gaussian channel.
|
[
{
"created": "Sun, 27 May 2012 15:54:23 GMT",
"version": "v1"
}
] |
2012-05-29
|
[
[
"Bahmani",
"Elham",
""
],
[
"Hodtani",
"Ghosheh Abed",
""
]
] |
In this paper, we establish achievable rate regions for the multiple access channel (MAC) with side information partially known (estimated or sensed version) at the transmitters. Specifically, we extend the lattice strategies used by Philosof-Zamir for the MAC with full side information at the transmitters to the partially known case. We show that the sensed or estimated side information reduces the rate regions, just as occurs for the Costa Gaussian channel.
|
2205.04567
|
Michael Dikshtein
|
Michael Dikshtein, Nir Weinberger, and Shlomo Shamai (Shitz)
|
The Compound Information Bottleneck Outlook
|
This work has been submitted to the IEEE for possible publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We formulate and analyze the compound information bottleneck programming. In
this problem, a Markov chain $ \mathsf{X} \rightarrow \mathsf{Y} \rightarrow
\mathsf{Z} $ is assumed with fixed marginal distributions
$\mathsf{P}_{\mathsf{X}}$ and $\mathsf{P}_{\mathsf{Y}}$, and the mutual
information between $ \mathsf{X} $ and $ \mathsf{Z} $ is sought to be maximized
over the choice of conditional probability of $\mathsf{Z}$ given $\mathsf{Y}$
from a given class, under the \textit{worst choice} of the joint probability of
the pair $(\mathsf{X},\mathsf{Y})$ from a different class. We consider several
classes based on extremes of: mutual information; minimal correlation; total
variation; and the relative entropy class. We provide values, bounds, and
various characterizations for specific instances of this problem: the binary
symmetric case, the scalar Gaussian case, the vector Gaussian case and the
symmetric modulo-additive case. Finally, for the general case, we propose a
Blahut-Arimoto type of alternating iterations algorithm to find a consistent
solution to this problem.
|
[
{
"created": "Mon, 9 May 2022 21:27:45 GMT",
"version": "v1"
}
] |
2022-05-11
|
[
[
"Dikshtein",
"Michael",
"",
"Shitz"
],
[
"Weinberger",
"Nir",
"",
"Shitz"
],
[
"Shamai",
"Shlomo",
"",
"Shitz"
]
] |
We formulate and analyze the compound information bottleneck programming. In this problem, a Markov chain $ \mathsf{X} \rightarrow \mathsf{Y} \rightarrow \mathsf{Z} $ is assumed with fixed marginal distributions $\mathsf{P}_{\mathsf{X}}$ and $\mathsf{P}_{\mathsf{Y}}$, and the mutual information between $ \mathsf{X} $ and $ \mathsf{Z} $ is sought to be maximized over the choice of conditional probability of $\mathsf{Z}$ given $\mathsf{Y}$ from a given class, under the \textit{worst choice} of the joint probability of the pair $(\mathsf{X},\mathsf{Y})$ from a different class. We consider several classes based on extremes of: mutual information; minimal correlation; total variation; and the relative entropy class. We provide values, bounds, and various characterizations for specific instances of this problem: the binary symmetric case, the scalar Gaussian case, the vector Gaussian case and the symmetric modulo-additive case. Finally, for the general case, we propose a Blahut-Arimoto type of alternating iterations algorithm to find a consistent solution to this problem.
|
1302.5997
|
EPTCS
|
Rachid Echahed (CNRS, University of Grenoble, France), Detlef Plump
(University of York, UK)
|
Proceedings 7th International Workshop on Computing with Terms and
Graphs
| null |
EPTCS 110, 2013
|
10.4204/EPTCS.110
| null |
cs.SC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains the proceedings of the Seventh International Workshop on
Computing with Terms and Graphs (TERMGRAPH 2013). The workshop took place in
Rome, Italy, on March 23rd, 2013, as part of the sixteenth edition of the
European Joint Conferences on Theory and Practice of Software (ETAPS 2013).
Research in term and graph rewriting ranges from theoretical questions to
practical issues. Computing with graphs handles the sharing of common
subexpressions in a natural and seamless way, and improves the efficiency of
computations in space and time. Sharing is ubiquitous in several research
areas, as witnessed by the modelling of first- and higher-order term rewriting
by (acyclic or cyclic) graph rewriting, the modelling of biological or chemical
abstract machines, and the implementation techniques of programming languages:
many implementations of functional, logic, object-oriented, concurrent and
mobile calculi are based on term graphs. Term graphs are also used in automated
theorem proving and symbolic computation systems working on shared structures.
The aim of this workshop is to bring together researchers working in different
domains on term and graph transformation and to foster their interaction, to
provide a forum for presenting new ideas and work in progress, and to enable
newcomers to learn about current activities in term graph rewriting.
These proceedings contain six accepted papers and the abstracts of two
invited talks. All submissions were subject to careful refereeing. The topics
of accepted papers range over a wide spectrum, including theoretical aspects of
term graph rewriting, concurrency, semantics as well as application issues of
term graph transformation.
|
[
{
"created": "Mon, 25 Feb 2013 05:49:14 GMT",
"version": "v1"
}
] |
2013-02-26
|
[
[
"Echahed",
"Rachid",
"",
"CNRS, University of Grenoble, France"
],
[
"Plump",
"Detlef",
"",
"University of York, UK"
]
] |
This volume contains the proceedings of the Seventh International Workshop on Computing with Terms and Graphs (TERMGRAPH 2013). The workshop took place in Rome, Italy, on March 23rd, 2013, as part of the sixteenth edition of the European Joint Conferences on Theory and Practice of Software (ETAPS 2013). Research in term and graph rewriting ranges from theoretical questions to practical issues. Computing with graphs handles the sharing of common subexpressions in a natural and seamless way, and improves the efficiency of computations in space and time. Sharing is ubiquitous in several research areas, as witnessed by the modelling of first- and higher-order term rewriting by (acyclic or cyclic) graph rewriting, the modelling of biological or chemical abstract machines, and the implementation techniques of programming languages: many implementations of functional, logic, object-oriented, concurrent and mobile calculi are based on term graphs. Term graphs are also used in automated theorem proving and symbolic computation systems working on shared structures. The aim of this workshop is to bring together researchers working in different domains on term and graph transformation and to foster their interaction, to provide a forum for presenting new ideas and work in progress, and to enable newcomers to learn about current activities in term graph rewriting. These proceedings contain six accepted papers and the abstracts of two invited talks. All submissions were subject to careful refereeing. The topics of accepted papers range over a wide spectrum, including theoretical aspects of term graph rewriting, concurrency, semantics as well as application issues of term graph transformation.
|
2006.01038
|
Lei Cui
|
Minghao Li, Yiheng Xu, Lei Cui, Shaohan Huang, Furu Wei, Zhoujun Li,
Ming Zhou
|
DocBank: A Benchmark Dataset for Document Layout Analysis
|
COLING 2020
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Document layout analysis usually relies on computer vision models to
understand documents while ignoring textual information that is vital to
capture. Meanwhile, high quality labeled datasets with both visual and textual
information are still insufficient. In this paper, we present \textbf{DocBank},
a benchmark dataset that contains 500K document pages with fine-grained
token-level annotations for document layout analysis. DocBank is constructed
using a simple yet effective way with weak supervision from \LaTeX{}
documents available on arXiv.com. With DocBank, models from different
modalities can be compared fairly and multi-modal approaches will be further
investigated and boost the performance of document layout analysis. We build
several strong baselines and manually split train/dev/test sets for evaluation.
Experiment results show that models trained on DocBank accurately recognize the
layout information for a variety of documents. The DocBank dataset is publicly
available at \url{https://github.com/doc-analysis/DocBank}.
|
[
{
"created": "Mon, 1 Jun 2020 16:04:30 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 08:05:48 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Nov 2020 05:08:05 GMT",
"version": "v3"
}
] |
2020-11-12
|
[
[
"Li",
"Minghao",
""
],
[
"Xu",
"Yiheng",
""
],
[
"Cui",
"Lei",
""
],
[
"Huang",
"Shaohan",
""
],
[
"Wei",
"Furu",
""
],
[
"Li",
"Zhoujun",
""
],
[
"Zhou",
"Ming",
""
]
] |
Document layout analysis usually relies on computer vision models to understand documents while ignoring textual information that is vital to capture. Meanwhile, high quality labeled datasets with both visual and textual information are still insufficient. In this paper, we present \textbf{DocBank}, a benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective way with weak supervision from \LaTeX{} documents available on arXiv.com. With DocBank, models from different modalities can be compared fairly and multi-modal approaches will be further investigated and boost the performance of document layout analysis. We build several strong baselines and manually split train/dev/test sets for evaluation. Experiment results show that models trained on DocBank accurately recognize the layout information for a variety of documents. The DocBank dataset is publicly available at \url{https://github.com/doc-analysis/DocBank}.
|
2106.07115
|
Qi Lyu
|
Qi Lyu, Xiao Fu, Weiran Wang and Songtao Lu
|
Understanding Latent Correlation-Based Multiview Learning and
Self-Supervision: An Identifiability Perspective
|
Accepted to ICLR 2022 Spotlight, 37 pages, 11 figures
| null | null | null |
cs.LG cs.AI cs.CV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Multiple views of data, both naturally acquired (e.g., image and audio) and
artificially produced (e.g., via adding different noise to data samples), have
proven useful in enhancing representation learning. Natural views are often
handled by multiview analysis tools, e.g., (deep) canonical correlation
analysis [(D)CCA], while the artificial ones are frequently used in
self-supervised learning (SSL) paradigms, e.g., BYOL and Barlow Twins. Both
types of approaches often involve learning neural feature extractors such that
the embeddings of data exhibit high cross-view correlations. Although
intuitive, the effectiveness of correlation-based neural embedding is mostly
empirically validated.
This work aims to understand latent correlation maximization-based deep
multiview learning from a latent component identification viewpoint. An
intuitive generative model of multiview data is adopted, where the views are
different nonlinear mixtures of shared and private components. Since the shared
components are view/distortion-invariant, representing the data using such
components is believed to reveal the identity of the samples effectively and
robustly. Under this model, latent correlation maximization is shown to
guarantee the extraction of the shared components across views (up to certain
ambiguities). In addition, it is further shown that the private information in
each view can be provably disentangled from the shared using proper
regularization design. A finite sample analysis, which has been rare in
nonlinear mixture identifiability studies, is also presented. The theoretical
results and newly designed regularization are tested on a series of tasks.
|
[
{
"created": "Mon, 14 Jun 2021 00:12:36 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Jun 2021 16:51:29 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Apr 2022 19:37:10 GMT",
"version": "v3"
}
] |
2022-04-12
|
[
[
"Lyu",
"Qi",
""
],
[
"Fu",
"Xiao",
""
],
[
"Wang",
"Weiran",
""
],
[
"Lu",
"Songtao",
""
]
] |
Multiple views of data, both naturally acquired (e.g., image and audio) and artificially produced (e.g., via adding different noise to data samples), have proven useful in enhancing representation learning. Natural views are often handled by multiview analysis tools, e.g., (deep) canonical correlation analysis [(D)CCA], while the artificial ones are frequently used in self-supervised learning (SSL) paradigms, e.g., BYOL and Barlow Twins. Both types of approaches often involve learning neural feature extractors such that the embeddings of data exhibit high cross-view correlations. Although intuitive, the effectiveness of correlation-based neural embedding is mostly empirically validated. This work aims to understand latent correlation maximization-based deep multiview learning from a latent component identification viewpoint. An intuitive generative model of multiview data is adopted, where the views are different nonlinear mixtures of shared and private components. Since the shared components are view/distortion-invariant, representing the data using such components is believed to reveal the identity of the samples effectively and robustly. Under this model, latent correlation maximization is shown to guarantee the extraction of the shared components across views (up to certain ambiguities). In addition, it is further shown that the private information in each view can be provably disentangled from the shared using proper regularization design. A finite sample analysis, which has been rare in nonlinear mixture identifiability studies, is also presented. The theoretical results and newly designed regularization are tested on a series of tasks.
|
2401.08095
|
Hyung-Seok Oh
|
Hyung-Seok Oh, Sang-Hoon Lee, Deok-Hyeon Cho, Seong-Whan Lee
|
DurFlex-EVC: Duration-Flexible Emotional Voice Conversion with Parallel
Generation
|
14 pages, 11 figures, 12 tables
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emotional voice conversion (EVC) involves modifying the pitch, spectral envelope,
and other acoustic characteristics of speech to match a desired emotional state
while maintaining the speaker's identity. Recent advances in EVC involve
simultaneously modeling pitch and duration by exploiting the potential of
sequence-to-sequence models. In this study, we focus on parallel speech
generation to increase the reliability and efficiency of conversion. We
introduce a duration-flexible EVC (DurFlex-EVC) that integrates a style
autoencoder and a unit aligner. The previous variable-duration parallel
generation model required text-to-speech alignment. We consider self-supervised
model representation and discrete speech units to be the core of our parallel
generation. The style autoencoder promotes content style disentanglement by
separating the source style of the input features and applying them with the
target style. The unit aligner encodes unit-level features by modeling
emotional context. Furthermore, we enhance the style of the features with a
hierarchical stylize encoder and generate high-quality Mel-spectrograms with a
diffusion-based generator. The effectiveness of the approach has been validated
through subjective and objective evaluations and has been demonstrated to be
superior to baseline models.
|
[
{
"created": "Tue, 16 Jan 2024 03:39:35 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 08:40:01 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Aug 2024 23:54:14 GMT",
"version": "v3"
}
] |
2024-08-12
|
[
[
"Oh",
"Hyung-Seok",
""
],
[
"Lee",
"Sang-Hoon",
""
],
[
"Cho",
"Deok-Hyeon",
""
],
[
"Lee",
"Seong-Whan",
""
]
] |
Emotional voice conversion (EVC) involves modifying the pitch, spectral envelope, and other acoustic characteristics of speech to match a desired emotional state while maintaining the speaker's identity. Recent advances in EVC involve simultaneously modeling pitch and duration by exploiting the potential of sequence-to-sequence models. In this study, we focus on parallel speech generation to increase the reliability and efficiency of conversion. We introduce a duration-flexible EVC (DurFlex-EVC) that integrates a style autoencoder and a unit aligner. The previous variable-duration parallel generation model required text-to-speech alignment. We consider self-supervised model representation and discrete speech units to be the core of our parallel generation. The style autoencoder promotes content style disentanglement by separating the source style of the input features and applying them with the target style. The unit aligner encodes unit-level features by modeling emotional context. Furthermore, we enhance the style of the features with a hierarchical stylize encoder and generate high-quality Mel-spectrograms with a diffusion-based generator. The effectiveness of the approach has been validated through subjective and objective evaluations and has been demonstrated to be superior to baseline models.
|
1905.04403
|
Maximilian Weininger
|
Pranav Ashok, Jan K\v{r}et\'insk\'y and Maximilian Weininger
|
PAC Statistical Model Checking for Markov Decision Processes and
Stochastic Games
| null | null |
10.1007/978-3-030-25540-4_29
| null |
cs.SY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Statistical model checking (SMC) is a technique for analysis of probabilistic
systems that may be (partially) unknown. We present an SMC algorithm for
(unbounded) reachability yielding probably approximately correct (PAC)
guarantees on the results. We consider both the setting (i) with no knowledge
of the transition function (with the only quantity required a bound on the
minimum transition probability) and (ii) with knowledge of the topology of the
underlying graph. On the one hand, it is the first algorithm for stochastic
games. On the other hand, it is the first practical algorithm even for Markov
decision processes. Compared to previous approaches where PAC guarantees
require running times longer than the age of the universe even for systems with
handful of states, our algorithm often yields reasonably precise results within
minutes, not requiring the knowledge of mixing time or the topology of the
whole model.
|
[
{
"created": "Fri, 10 May 2019 23:36:05 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2019 11:15:44 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Feb 2021 15:00:17 GMT",
"version": "v3"
}
] |
2021-02-02
|
[
[
"Ashok",
"Pranav",
""
],
[
"Křetínský",
"Jan",
""
],
[
"Weininger",
"Maximilian",
""
]
] |
Statistical model checking (SMC) is a technique for analysis of probabilistic systems that may be (partially) unknown. We present an SMC algorithm for (unbounded) reachability yielding probably approximately correct (PAC) guarantees on the results. We consider both the setting (i) with no knowledge of the transition function (with the only quantity required a bound on the minimum transition probability) and (ii) with knowledge of the topology of the underlying graph. On the one hand, it is the first algorithm for stochastic games. On the other hand, it is the first practical algorithm even for Markov decision processes. Compared to previous approaches where PAC guarantees require running times longer than the age of the universe even for systems with a handful of states, our algorithm often yields reasonably precise results within minutes, not requiring the knowledge of mixing time or the topology of the whole model.
|
1702.02901
|
Dongrui Wu
|
Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance,
Chin-Teng Lin
|
Driver Drowsiness Estimation from EEG Signals Using Online Weighted
Adaptation Regularization for Regression (OwARR)
|
in press
|
IEEE Trans.on Fuzzy Systems, 25(6), pp. 1522-1535, 2017
|
10.1109/TFUZZ.2016.2633379
| null |
cs.LG cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One big challenge that hinders the transition of brain-computer interfaces
(BCIs) from laboratory settings to real-life applications is the availability
of high-performance and robust learning algorithms that can effectively handle
individual differences, i.e., algorithms that can be applied to a new subject
with zero or very little subject-specific calibration data. Transfer learning
and domain adaptation have been extensively used for this purpose. However,
most previous works focused on classification problems. This paper considers an
important regression problem in BCI, namely, online driver drowsiness
estimation from EEG signals. By integrating fuzzy sets with domain adaptation,
we propose a novel online weighted adaptation regularization for regression
(OwARR) algorithm to reduce the amount of subject-specific calibration data,
and also a source domain selection (SDS) approach to save about half of the
computational cost of OwARR. Using a simulated driving dataset with 15
subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller
estimation errors than several other approaches. We also provide comprehensive
analyses on the robustness of OwARR and OwARR-SDS.
|
[
{
"created": "Thu, 9 Feb 2017 17:14:15 GMT",
"version": "v1"
}
] |
2020-02-13
|
[
[
"Wu",
"Dongrui",
""
],
[
"Lawhern",
"Vernon J.",
""
],
[
"Gordon",
"Stephen",
""
],
[
"Lance",
"Brent J.",
""
],
[
"Lin",
"Chin-Teng",
""
]
] |
One big challenge that hinders the transition of brain-computer interfaces (BCIs) from laboratory settings to real-life applications is the availability of high-performance and robust learning algorithms that can effectively handle individual differences, i.e., algorithms that can be applied to a new subject with zero or very little subject-specific calibration data. Transfer learning and domain adaptation have been extensively used for this purpose. However, most previous works focused on classification problems. This paper considers an important regression problem in BCI, namely, online driver drowsiness estimation from EEG signals. By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR. Using a simulated driving dataset with 15 subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller estimation errors than several other approaches. We also provide comprehensive analyses on the robustness of OwARR and OwARR-SDS.
|
2005.05165
|
Kwabena Doku-Amponsah
|
Enoch Sakyi-Yeboah, Charles Kwofie and Kwabena Doku-Amponsah
|
Large Deviation Principle for Empirical SINR Measure of Critical
Telecommunication Networks
|
12 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a \emph{powered Poisson process}, we define the
\emph{Signal-to-Interference-plus-Noise Ratio} (SINR) and the SINR network as a
Telecommunication Network. We define the Empirical Measures (\emph{empirical
powered measure}, \emph{empirical link measure} and \emph{empirical sinr
measure}) of a class of Telecommunication Networks. For this class of
Telecommunication Networks we prove a joint large deviation principle for the
empirical measures of the Telecommunication Networks. All our rate functions
are expressed in terms of relative entropies.
|
[
{
"created": "Mon, 11 May 2020 14:58:33 GMT",
"version": "v1"
},
{
"created": "Sat, 16 May 2020 16:48:59 GMT",
"version": "v2"
}
] |
2020-05-19
|
[
[
"Sakyi-Yeboah",
"Enoch",
""
],
[
"Kwofie",
"Charles",
""
],
[
"Doku-Amponsah",
"Kwabena",
""
]
] |
For a \emph{powered Poisson process}, we define the \emph{Signal-to-Interference-plus-Noise Ratio} (SINR) and the SINR network as a Telecommunication Network. We define the Empirical Measures (\emph{empirical powered measure}, \emph{empirical link measure} and \emph{empirical sinr measure}) of a class of Telecommunication Networks. For this class of Telecommunication Networks we prove a joint large deviation principle for the empirical measures of the Telecommunication Networks. All our rate functions are expressed in terms of relative entropies.
|
2106.14439
|
Yuhao Liu
|
Yuhao Liu, Jiake Xie, Yu Qiao, Yong Tang and Xin Yang
|
Prior-Induced Information Alignment for Image Matting
|
IEEE TMM
| null |
10.1109/TMM.2021.3087007
| null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Image matting is an ill-posed problem that aims to estimate the opacity of
foreground pixels in an image. However, most existing deep learning-based
methods still suffer from coarse-grained details. In general, these
algorithms are incapable of felicitously distinguishing the degree of
exploration between deterministic domains (certain FG and BG pixels) and
undetermined domains (uncertain in-between pixels), or inevitably lose
information in the continuous sampling process, leading to a sub-optimal
result. In this paper, we propose a novel network named Prior-Induced
Information Alignment Matting Network (PIIAMatting), which can efficiently
model the distinction of pixel-wise response maps and the correlation of
layer-wise feature maps. It mainly consists of a Dynamic Gaussian Modulation
mechanism (DGM) and an Information Alignment strategy (IA). Specifically, the
DGM can dynamically acquire a pixel-wise domain response map learned from the
prior distribution. The response map can present the relationship between the
opacity variation and the convergence process during training. On the other
hand, the IA comprises an Information Match Module (IMM) and an Information
Aggregation Module (IAM), jointly scheduled to match and aggregate the adjacent
layer-wise features adaptively. Besides, we also develop a Multi-Scale
Refinement (MSR) module to integrate multi-scale receptive field information at
the refinement stage to recover the fluctuating appearance details. Extensive
quantitative and qualitative evaluations demonstrate that the proposed
PIIAMatting performs favourably against state-of-the-art image matting methods
on the Alphamatting.com, Composition-1K and Distinctions-646 dataset.
|
[
{
"created": "Mon, 28 Jun 2021 07:46:59 GMT",
"version": "v1"
}
] |
2021-06-29
|
[
[
"Liu",
"Yuhao",
""
],
[
"Xie",
"Jiake",
""
],
[
"Qiao",
"Yu",
""
],
    [
      "Tang",
      "Yong",
      ""
    ],
[
"Yang",
"Xin",
""
]
] |
Image matting is an ill-posed problem that aims to estimate the opacity of foreground pixels in an image. However, most existing deep learning-based methods still suffer from the coarse-grained details. In general, these algorithms are incapable of felicitously distinguishing the degree of exploration between deterministic domains (certain FG and BG pixels) and undetermined domains (uncertain in-between pixels), or inevitably lose information in the continuous sampling process, leading to a sub-optimal result. In this paper, we propose a novel network named Prior-Induced Information Alignment Matting Network (PIIAMatting), which can efficiently model the distinction of pixel-wise response maps and the correlation of layer-wise feature maps. It mainly consists of a Dynamic Gaussian Modulation mechanism (DGM) and an Information Alignment strategy (IA). Specifically, the DGM can dynamically acquire a pixel-wise domain response map learned from the prior distribution. The response map can present the relationship between the opacity variation and the convergence process during training. On the other hand, the IA comprises an Information Match Module (IMM) and an Information Aggregation Module (IAM), jointly scheduled to match and aggregate the adjacent layer-wise features adaptively. Besides, we also develop a Multi-Scale Refinement (MSR) module to integrate multi-scale receptive field information at the refinement stage to recover the fluctuating appearance details. Extensive quantitative and qualitative evaluations demonstrate that the proposed PIIAMatting performs favourably against state-of-the-art image matting methods on the Alphamatting.com, Composition-1K and Distinctions-646 dataset.
|
2306.06476
|
Abdelhamid Haouhat
|
Abdelhamid Haouhat, Slimane Bellaouar, Attia Nehar, Hadda Cherroun
|
Modality Influence in Multimodal Machine Learning
|
10 pages
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal Machine Learning has emerged as a prominent research direction
across various applications such as Sentiment Analysis, Emotion Recognition,
Machine Translation, Hate Speech Recognition, and Movie Genre Classification.
This approach has shown promising results by utilizing modern deep learning
architectures. Despite the achievements made, challenges remain in data
representation, alignment techniques, reasoning, generation, and quantification
within multimodal learning. Additionally, assumptions about the dominant role
of textual modality in decision-making have been made. However, limited
investigations have been conducted on the influence of different modalities in
Multimodal Machine Learning systems. This paper aims to address this gap by
studying the impact of each modality on multimodal learning tasks. The research
focuses on verifying presumptions and gaining insights into the usage of
different modalities. The main contribution of this work is the proposal of a
methodology to determine the effect of each modality on several Multimodal
Machine Learning models and datasets from various tasks. Specifically, the
study examines Multimodal Sentiment Analysis, Multimodal Emotion Recognition,
Multimodal Hate Speech Recognition, and Multimodal Disease Detection. The study
objectives include training SOTA MultiModal Machine Learning models with masked
modalities to evaluate their impact on performance. Furthermore, the research
aims to identify the most influential modality or set of modalities for each
task and draw conclusions for diverse multimodal classification tasks. By
undertaking these investigations, this research contributes to a better
understanding of the role of individual modalities in multi-modal learning and
provides valuable insights for future advancements in this field.
|
[
{
"created": "Sat, 10 Jun 2023 16:28:52 GMT",
"version": "v1"
}
] |
2023-06-13
|
[
[
"Haouhat",
"Abdelhamid",
""
],
[
"Bellaouar",
"Slimane",
""
],
[
"Nehar",
"Attia",
""
],
[
"Cherroun",
"Hadda",
""
]
] |
Multimodal Machine Learning has emerged as a prominent research direction across various applications such as Sentiment Analysis, Emotion Recognition, Machine Translation, Hate Speech Recognition, and Movie Genre Classification. This approach has shown promising results by utilizing modern deep learning architectures. Despite the achievements made, challenges remain in data representation, alignment techniques, reasoning, generation, and quantification within multimodal learning. Additionally, assumptions about the dominant role of textual modality in decision-making have been made. However, limited investigations have been conducted on the influence of different modalities in Multimodal Machine Learning systems. This paper aims to address this gap by studying the impact of each modality on multimodal learning tasks. The research focuses on verifying presumptions and gaining insights into the usage of different modalities. The main contribution of this work is the proposal of a methodology to determine the effect of each modality on several Multimodal Machine Learning models and datasets from various tasks. Specifically, the study examines Multimodal Sentiment Analysis, Multimodal Emotion Recognition, Multimodal Hate Speech Recognition, and Multimodal Disease Detection. The study objectives include training SOTA MultiModal Machine Learning models with masked modalities to evaluate their impact on performance. Furthermore, the research aims to identify the most influential modality or set of modalities for each task and draw conclusions for diverse multimodal classification tasks. By undertaking these investigations, this research contributes to a better understanding of the role of individual modalities in multi-modal learning and provides valuable insights for future advancements in this field.
|
2210.08128
|
Sergio Ram\'irez
|
Santiago Quintero, Carlos Pinz\'on, Sergio Ram\'irez, Frank Valencia
|
On the Computation of Distributed Knowledge as the Greatest Lower Bound
of Knowledge
| null | null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Let $L$ be a finite lattice and $\mathcal{E}(L)$ be the set of join
endomorphisms of $L$. We consider the problem of given $L$ and $f,g \in
\mathcal{E}(L)$, finding the greatest lower bound $f \sqcap_{{\scriptsize
\mathcal{E}(L)}} g$ in the lattice $\mathcal{E}(L)$. (1) We show that if $L$ is
distributive, the problem can be solved in time $O(n)$ where $n=| L |$. The
previous upper bound was $O(n^2)$. (2) We provide new algorithms for arbitrary
lattices and give experimental evidence that they are significantly faster than
the existing algorithm. (3) We characterize the standard notion of distributed
knowledge of a group as the greatest lower bound of the join-endomorphisms
representing the knowledge of each member of the group. (4) We show that
deciding whether an agent has the distributed knowledge of two other agents can
be computed in time $O(n^2)$ where $n$ is the size of the underlying set of
states. (5) For the special case of $S5$ knowledge, we show that it can be
decided in time $O(n\alpha_{n})$ where $\alpha_{n}$ is the inverse of the
Ackermann function.
|
[
{
"created": "Fri, 14 Oct 2022 21:54:15 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Oct 2022 00:12:24 GMT",
"version": "v2"
}
] |
2022-10-26
|
[
[
"Quintero",
"Santiago",
""
],
[
"Pinzón",
"Carlos",
""
],
[
"Ramírez",
"Sergio",
""
],
[
"Valencia",
"Frank",
""
]
] |
Let $L$ be a finite lattice and $\mathcal{E}(L)$ be the set of join endomorphisms of $L$. We consider the problem of given $L$ and $f,g \in \mathcal{E}(L)$, finding the greatest lower bound $f \sqcap_{{\scriptsize \mathcal{E}(L)}} g$ in the lattice $\mathcal{E}(L)$. (1) We show that if $L$ is distributive, the problem can be solved in time $O(n)$ where $n=| L |$. The previous upper bound was $O(n^2)$. (2) We provide new algorithms for arbitrary lattices and give experimental evidence that they are significantly faster than the existing algorithm. (3) We characterize the standard notion of distributed knowledge of a group as the greatest lower bound of the join-endomorphisms representing the knowledge of each member of the group. (4) We show that deciding whether an agent has the distributed knowledge of two other agents can be computed in time $O(n^2)$ where $n$ is the size of the underlying set of states. (5) For the special case of $S5$ knowledge, we show that it can be decided in time $O(n\alpha_{n})$ where $\alpha_{n}$ is the inverse of the Ackermann function.
|