id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2208.12488 | Vaclav Skala | Vaclav Skala | Polar, Spherical and Orthogonal Space Subdivisions for an Algorithm
Acceleration: O(1) Point-in-Polygon/Polyhedron Test | 4 pages, 8 figures | null | null | null | cs.CG | http://creativecommons.org/licenses/by/4.0/ | Acceleration of algorithms is becoming a crucial problem when larger data sets
are to be processed. Algorithms are mostly evaluated using a computational
geometry approach, i.e., by analysing computational complexity. However, in
today's engineering problems this approach does not reflect the fact that the
number of processed items is always limited and that the speed of read/write
operations also plays a significant role. One general way to speed up an
algorithm is to apply a space subdivision technique, and usually orthogonal
space subdivision is used. In this paper, non-orthogonal subdivisions are
described. The proposed approach can significantly improve memory consumption
and run-time complexity. The proposed modified space subdivision techniques are
demonstrated on two simple problems: the Point-in-Convex Polygon and
Point-in-Convex Polyhedron tests.
| [
{
"created": "Fri, 26 Aug 2022 07:59:13 GMT",
"version": "v1"
}
] | 2022-08-29 | [
[
"Skala",
"Vaclav",
""
]
] | Acceleration of algorithms is becoming a crucial problem when larger data sets are to be processed. Algorithms are mostly evaluated using a computational geometry approach, i.e., by analysing computational complexity. However, in today's engineering problems this approach does not reflect the fact that the number of processed items is always limited and that the speed of read/write operations also plays a significant role. One general way to speed up an algorithm is to apply a space subdivision technique, and usually orthogonal space subdivision is used. In this paper, non-orthogonal subdivisions are described. The proposed approach can significantly improve memory consumption and run-time complexity. The proposed modified space subdivision techniques are demonstrated on two simple problems: the Point-in-Convex Polygon and Point-in-Convex Polyhedron tests. |
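A minimal sketch of the idea behind an O(1) point-in-convex-polygon test via angular (polar) space subdivision: precompute, for `k` uniform angular sectors around an interior point, the few polygon edges overlapping each sector; a query then needs one sector lookup plus an expected-constant number of half-plane tests. The sector count and all names are illustrative assumptions, not the paper's exact construction.

```python
import math

def _ang(c, p):
    """Angle of p around center c, in [0, 2*pi)."""
    return math.atan2(p[1] - c[1], p[0] - c[0]) % (2 * math.pi)

def build_sector_table(poly, k=64):
    """poly: CCW vertices of a convex polygon. For each of k uniform angular
    sectors around the centroid, store the (few) edges overlapping it."""
    n = len(poly)
    c = (sum(p[0] for p in poly) / n, sum(p[1] for p in poly) / n)
    table = [[] for _ in range(k)]
    for a, b in zip(poly, poly[1:] + poly[:1]):
        a1, a2 = _ang(c, a), _ang(c, b)
        if a2 < a1:                      # edge spans the 0 / 2*pi seam
            a2 += 2 * math.pi
        lo, hi = int(a1 / (2 * math.pi) * k), int(a2 / (2 * math.pi) * k)
        for s in range(lo, hi + 1):
            table[s % k].append((a, b))
    return c, table

def point_in_convex_polygon(q, c, table):
    """O(1) expected: one sector lookup plus a constant number of edge tests."""
    k = len(table)
    sector = int(_ang(c, q) / (2 * math.pi) * k)
    for a, b in table[sector]:
        # q must lie left of every CCW edge; one failing edge proves 'outside'
        if (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0]) < 0:
            return False
    return True

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
c, table = build_sector_table(square)
print(point_in_convex_polygon((1, 1), c, table),   # True
      point_in_convex_polygon((5, 1), c, table))   # False
```

The same fan-of-sectors idea extends to a convex polyhedron using spherical subdivision around an interior point, with faces in place of edges.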
1407.1103 | Hanbaek Lyu | Hanbaek Lyu | Synchronization of finite-state pulse-coupled oscillators | 23 pages, 17 figures, To appear in Physica D: Nonlinear Phenomena | null | 10.1016/j.physd.2015.03.007 | null | cs.SY math.CO math.DS math.OC nlin.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel generalized cellular automaton (GCA) model for
discrete-time pulse-coupled oscillators and study the emergence of synchrony.
Given a finite simple graph and an integer $n\ge 3$, each vertex is an
identical oscillator of period $n$ with the following weak coupling along the
edges: each oscillator inhibits its phase update if it has at least one
neighboring oscillator at a particular "blinking" state and if its state is
ahead of this blinking state. We obtain conditions on initial configurations
and on network topologies for which states of all vertices eventually
synchronize. We show that our GCA model synchronizes arbitrary initial
configurations on paths, trees, and, with random perturbation, any connected
graph. In particular, our main result is the following local-global principle
for tree networks: for $n\in \{3,4,5,6\}$, any $n$-periodic network on a tree
synchronizes arbitrary initial configurations if and only if the maximum degree
of the tree is less than the period $n$.
| [
{
"created": "Fri, 4 Jul 2014 01:02:45 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Jul 2014 18:16:35 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Mar 2015 02:00:17 GMT",
"version": "v3"
}
] | 2018-01-25 | [
[
"Lyu",
"Hanbaek",
""
]
] | We propose a novel generalized cellular automaton (GCA) model for discrete-time pulse-coupled oscillators and study the emergence of synchrony. Given a finite simple graph and an integer $n\ge 3$, each vertex is an identical oscillator of period $n$ with the following weak coupling along the edges: each oscillator inhibits its phase update if it has at least one neighboring oscillator at a particular "blinking" state and if its state is ahead of this blinking state. We obtain conditions on initial configurations and on network topologies for which states of all vertices eventually synchronize. We show that our GCA model synchronizes arbitrary initial configurations on paths, trees, and, with random perturbation, any connected graph. In particular, our main result is the following local-global principle for tree networks: for $n\in \{3,4,5,6\}$, any $n$-periodic network on a tree synchronizes arbitrary initial configurations if and only if the maximum degree of the tree is less than the period $n$. |
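A minimal simulation sketch of one reading of this update rule. The choices of blinking state $b=0$ and of "ahead" meaning a cyclic lead of less than $n/2$ are assumptions for illustration, not the paper's exact definitions.

```python
import random

def gca_step(states, adj, n, blink=0):
    """One synchronous update. Assumed rule: a vertex skips its phase update
    if some neighbor sits at the blinking state and the vertex's own phase is
    (cyclically) ahead of it by less than n/2; otherwise it advances 1 mod n."""
    new = {}
    for v, s in states.items():
        inhibited = any(states[u] == blink and 0 < (s - blink) % n < n / 2
                        for u in adj[v])
        new[v] = s if inhibited else (s + 1) % n
    return new

# demo: a path of 6 oscillators with period n = 5 (max degree 2 < n, the
# regime where the paper's local-global principle guarantees synchronization)
n, m = 5, 6
adj = {v: [u for u in (v - 1, v + 1) if 0 <= u < m] for v in range(m)}
states = {v: random.randrange(n) for v in range(m)}
for _ in range(100):
    states = gca_step(states, adj, n)
print(states)
```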
2008.05900 | Ninghan Chen | Ninghan Chen, Zhiqiang Zhong, Jun Pang | An Exploratory Study of COVID-19 Information on Twitter in the Greater
Region | null | null | null | null | cs.SI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The outbreak of COVID-19 has led to a burst of information in major online
social networks (OSNs). Facing this constantly changing situation, OSNs have
become an essential platform for people to express opinions and seek
up-to-the-minute information. Thus, discussions on OSNs may become a reflection
of reality. This paper aims to identify the distinctive characteristics of
the Greater Region (GR) by conducting a data-driven exploratory study of
Twitter COVID-19 information in the GR and related countries using machine
learning and representation learning methods. We find that tweet volume and
COVID-19 cases in the GR and related countries are correlated, but this
correlation only exists in a particular period of the pandemic. Moreover, we
plot the evolution of topics in each country and region from 2020-01-22 to
2020-06-05, identifying the main differences between the GR and related
countries.
| [
{
"created": "Wed, 12 Aug 2020 16:37:58 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Dec 2020 12:49:02 GMT",
"version": "v2"
}
] | 2020-12-03 | [
[
"Chen",
"Ninghan",
""
],
[
"Zhong",
"Zhiqiang",
""
],
[
"Pang",
"Jun",
""
]
] | The outbreak of COVID-19 has led to a burst of information in major online social networks (OSNs). Facing this constantly changing situation, OSNs have become an essential platform for people to express opinions and seek up-to-the-minute information. Thus, discussions on OSNs may become a reflection of reality. This paper aims to identify the distinctive characteristics of the Greater Region (GR) by conducting a data-driven exploratory study of Twitter COVID-19 information in the GR and related countries using machine learning and representation learning methods. We find that tweet volume and COVID-19 cases in the GR and related countries are correlated, but this correlation only exists in a particular period of the pandemic. Moreover, we plot the evolution of topics in each country and region from 2020-01-22 to 2020-06-05, identifying the main differences between the GR and related countries. |
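A sketch of the kind of period-dependent correlation the study reports: Pearson correlation between daily tweet volume and daily case counts over a sliding window. The data below is synthetic and only illustrates the mechanics, not the study's findings.

```python
import numpy as np

def windowed_correlation(tweets, cases, window=14):
    """Sliding-window Pearson correlation between two daily time series."""
    tweets, cases = np.asarray(tweets, float), np.asarray(cases, float)
    return np.array([np.corrcoef(tweets[i:i + window], cases[i:i + window])[0, 1]
                     for i in range(len(tweets) - window + 1)])

rng = np.random.default_rng(0)
days = np.arange(120)
cases = 1000 * np.exp(-(days - 30) ** 2 / 200)         # synthetic case wave
tweets = 1.5 * cases + rng.normal(200, 50, days.size)   # volume tracks the wave
print(windowed_correlation(tweets, cases).round(2)[:5])
```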
1809.03149 | Weixun Wang | Weixun Wang, Junqi Jin, Jianye Hao, Chunjie Chen, Chuan Yu, Weinan
Zhang, Jun Wang, Xiaotian Hao, Yixi Wang, Han Li, Jian Xu, Kun Gai | Learning Adaptive Display Exposure for Real-Time Advertising | accepted by CIKM2019 | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In E-commerce advertising, where product recommendations and product ads are
presented to users simultaneously, the traditional setting is to display ads at
fixed positions. However, under such a setting, the advertising system loses
the flexibility to control the number and positions of ads, resulting in
sub-optimal platform revenue and user experience. Consequently, major
e-commerce platforms (e.g., Taobao.com) have begun to consider more flexible
ways to display ads. In this paper, we investigate the problem of advertising
with adaptive exposure: can we dynamically determine the number and positions
of ads for each user visit under certain business constraints so that the
platform revenue can be increased? More specifically, we consider two types of
constraints: a request-level constraint ensures user experience for each user
visit, and a platform-level constraint controls the overall platform monetization
rate. We model this problem as a Constrained Markov Decision Process with
per-state constraint (psCMDP) and propose a constrained two-level reinforcement
learning approach to decompose the original problem into two relatively
independent sub-problems. To accelerate policy learning, we also devise a
constrained hindsight experience replay mechanism. Experimental evaluations on
industry-scale real-world datasets demonstrate both that our approach obtains
higher revenue under the constraints and that the constrained hindsight
experience replay mechanism is effective.
| [
{
"created": "Mon, 10 Sep 2018 06:15:42 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Sep 2019 01:55:56 GMT",
"version": "v2"
}
] | 2019-09-04 | [
[
"Wang",
"Weixun",
""
],
[
"Jin",
"Junqi",
""
],
[
"Hao",
"Jianye",
""
],
[
"Chen",
"Chunjie",
""
],
[
"Yu",
"Chuan",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Wang",
"Jun",
""
],
[
"Hao",
"Xiaotian",
""
],
[
"Wang",
"Yixi",
""
],
[
"Li",
"Han",
""
],
[
"Xu",
"Jian",
""
],
[
"Gai",
"Kun",
""
]
] | In E-commerce advertising, where product recommendations and product ads are presented to users simultaneously, the traditional setting is to display ads at fixed positions. However, under such a setting, the advertising system loses the flexibility to control the number and positions of ads, resulting in sub-optimal platform revenue and user experience. Consequently, major e-commerce platforms (e.g., Taobao.com) have begun to consider more flexible ways to display ads. In this paper, we investigate the problem of advertising with adaptive exposure: can we dynamically determine the number and positions of ads for each user visit under certain business constraints so that the platform revenue can be increased? More specifically, we consider two types of constraints: a request-level constraint ensures user experience for each user visit, and a platform-level constraint controls the overall platform monetization rate. We model this problem as a Constrained Markov Decision Process with per-state constraint (psCMDP) and propose a constrained two-level reinforcement learning approach to decompose the original problem into two relatively independent sub-problems. To accelerate policy learning, we also devise a constrained hindsight experience replay mechanism. Experimental evaluations on industry-scale real-world datasets demonstrate both that our approach obtains higher revenue under the constraints and that the constrained hindsight experience replay mechanism is effective. |
1106.1998 | Yi Sun | Yi Sun and Faustino Gomez and Tom Schaul and Juergen Schmidhuber | A Linear Time Natural Evolution Strategy for Non-Separable Functions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel Natural Evolution Strategy (NES) variant, the Rank-One NES
(R1-NES), which uses a low rank approximation of the search distribution
covariance matrix. The algorithm allows computation of the natural gradient
with cost linear in the dimensionality of the parameter space, and excels in
solving high-dimensional non-separable problems, including the best result to
date on the Rosenbrock function (512 dimensions).
| [
{
"created": "Fri, 10 Jun 2011 09:56:00 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jun 2011 09:57:57 GMT",
"version": "v2"
}
] | 2011-06-14 | [
[
"Sun",
"Yi",
""
],
[
"Gomez",
"Faustino",
""
],
[
"Schaul",
"Tom",
""
],
[
"Schmidhuber",
"Juergen",
""
]
] | We present a novel Natural Evolution Strategy (NES) variant, the Rank-One NES (R1-NES), which uses a low rank approximation of the search distribution covariance matrix. The algorithm allows computation of the natural gradient with cost linear in the dimensionality of the parameter space, and excels in solving high-dimensional non-separable problems, including the best result to date on the Rosenbrock function (512 dimensions). |
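The key trick behind R1-NES is that sampling from $N(\mu, \sigma^2(I + uu^\top))$ costs only $O(d)$: the rank-one covariance term needs just one extra scalar Gaussian. The sketch below shows that sampler; the update loop replaces the paper's natural-gradient updates of $\sigma$ and $u$ with crude stand-ins (elite recombination and annealing), so it illustrates the sampling cost, not the algorithm's performance.

```python
import numpy as np

def r1_sample(mu, sigma, u, rng):
    """Draw x ~ N(mu, sigma^2 (I + u u^T)) in O(d) time and memory."""
    z = rng.standard_normal(mu.size)
    r = rng.standard_normal()
    return mu + sigma * (z + r * u)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

rng = np.random.default_rng(0)
d, pop = 16, 32
mu, sigma, u = np.zeros(d), 0.5, np.zeros(d)
for _ in range(300):
    xs = sorted((r1_sample(mu, sigma, u, rng) for _ in range(pop)), key=rosenbrock)
    mu = np.mean(xs[:pop // 4], axis=0)   # elite recombination stand-in
    sigma *= 0.99                         # crude annealing stand-in
print(rosenbrock(mu))
```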
2305.13700 | Chenglong Wang | Chenglong Wang, Jiangyan Yi, Jianhua Tao, Chuyuan Zhang, Shuai Zhang
and Xun Chen | Detection of Cross-Dataset Fake Audio Based on Prosodic and
Pronunciation Features | Interspeech2023 | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing fake audio detection systems perform well in in-domain testing, but
still face many challenges in out-of-domain testing. This is due to the
mismatch between the training and test data, as well as the poor
generalizability of features extracted from limited views. To address this, we
propose multi-view features for fake audio detection, which aim to capture more
generalized features from prosodic, pronunciation, and wav2vec dimensions.
Specifically, the phoneme duration features are extracted from a pre-trained
model based on a large amount of speech data. For the pronunciation features, a
Conformer-based phoneme recognition model is first trained, keeping the
acoustic encoder part as a deeply embedded feature extractor. Furthermore, the
prosodic and pronunciation features are fused with wav2vec features based on an
attention mechanism to improve the generalization of fake audio detection
models. Results show that the proposed approach achieves significant
performance gains in several cross-dataset experiments.
| [
{
"created": "Tue, 23 May 2023 05:27:39 GMT",
"version": "v1"
}
] | 2023-05-24 | [
[
"Wang",
"Chenglong",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Zhang",
"Chuyuan",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Chen",
"Xun",
""
]
] | Existing fake audio detection systems perform well in in-domain testing, but still face many challenges in out-of-domain testing. This is due to the mismatch between the training and test data, as well as the poor generalizability of features extracted from limited views. To address this, we propose multi-view features for fake audio detection, which aim to capture more generalized features from prosodic, pronunciation, and wav2vec dimensions. Specifically, the phoneme duration features are extracted from a pre-trained model based on a large amount of speech data. For the pronunciation features, a Conformer-based phoneme recognition model is first trained, keeping the acoustic encoder part as a deeply embedded feature extractor. Furthermore, the prosodic and pronunciation features are fused with wav2vec features based on an attention mechanism to improve the generalization of fake audio detection models. Results show that the proposed approach achieves significant performance gains in several cross-dataset experiments. |
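A generic sketch of attention-based fusion of three feature views (prosodic, pronunciation, wav2vec), assuming one embedding vector per utterance and per view; the dimensions, layer layout, and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse prosodic, pronunciation, and wav2vec utterance embeddings with a
    learned per-view attention weight (illustrative dimensions)."""
    def __init__(self, dims=(32, 256, 768), d_model=128):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in dims)
        self.score = nn.Linear(d_model, 1)
        self.head = nn.Linear(d_model, 2)          # bona fide vs fake

    def forward(self, views):                      # list of (batch, dim_i)
        h = torch.stack([p(v) for p, v in zip(self.proj, views)], dim=1)
        w = torch.softmax(self.score(torch.tanh(h)), dim=1)   # (batch, 3, 1)
        return self.head((w * h).sum(dim=1))       # weighted view fusion

model = AttentionFusion()
logits = model([torch.randn(4, 32), torch.randn(4, 256), torch.randn(4, 768)])
print(logits.shape)   # torch.Size([4, 2])
```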
1403.5071 | Stefano Gurciullo | Stefano Gurciullo | Organised crime infiltration in the legitimate private economy - An
empirical network analysis approach | 27 pages, 6 figures, International Crime and Intelligence Analysis
Conference - 2012, Manchester | null | null | null | cs.CY cs.SI physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | It is estimated that Italian Mafias registered 135 billion euros in profits
in 2010 alone. Part of this huge amount of money, coming mostly from the
illicit drugs, prostitution and arms markets, is often invested in legitimate
private economies. As a consequence, the affected economies destabilise, become
entrenched with violent forms of competition and are bound to stagnation.
Nonetheless, there have been few attempts to uncover the patterns followed by
criminal organisations in their business ventures. The reason lies mostly in
the poor availability of data on criminal activity, and in the high risk
involved in gathering it.
  This paper partially fills this gap thanks to access to information about the
Sicilian Mafia in a city. More specifically, it analyses the nature and
extent of criminal infiltration into the legitimate private economy of the
case study using network techniques. The research demonstrates that sectors
with a high degree of centrality and comprising fewer firms are the most
vulnerable to this kind of security threat. It also shows that centrality is
the key criterion that makes a firm prone to infiltration, provided it
belongs to a susceptible economic sector.
| [
{
"created": "Thu, 20 Mar 2014 09:05:49 GMT",
"version": "v1"
}
] | 2014-03-21 | [
[
"Gurciullo",
"Stefano",
""
]
] | It is estimated that Italian Mafias registered 135 billion euros in profits in 2010 alone. Part of this huge amount of money, coming mostly from the illicit drugs, prostitution and arms markets, is often invested in legitimate private economies. As a consequence, the affected economies destabilise, become entrenched with violent forms of competition and are bound to stagnation. Nonetheless, there have been few attempts to uncover the patterns followed by criminal organisations in their business ventures. The reason lies mostly in the poor availability of data on criminal activity, and in the high risk involved in gathering it. This paper partially fills this gap thanks to access to information about the Sicilian Mafia in a city. More specifically, it analyses the nature and extent of criminal infiltration into the legitimate private economy of the case study using network techniques. The research demonstrates that sectors with a high degree of centrality and comprising fewer firms are the most vulnerable to this kind of security threat. It also shows that centrality is the key criterion that makes a firm prone to infiltration, provided it belongs to a susceptible economic sector. |
1711.05345 | Yu-An Chung | Yu-An Chung and Hung-Yi Lee and James Glass | Supervised and Unsupervised Transfer Learning for Question Answering | To appear in NAACL HLT 2018 (long paper) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although transfer learning has been shown to be successful for tasks like
object and speech recognition, its applicability to question answering (QA) has
yet to be well-studied. In this paper, we conduct extensive experiments to
investigate the transferability of knowledge learned from a source QA dataset
to a target dataset using two QA models. The performance of both models on a
TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson
et al., 2013) is significantly improved via a simple transfer learning
technique from MovieQA (Tapaswi et al., 2016). In particular, one of the models
achieves the state-of-the-art on all target datasets; for the TOEFL listening
comprehension test, it outperforms the previous best model by 7%. Finally, we
show that transfer learning is helpful even in unsupervised scenarios when
correct answers for target QA dataset examples are not available.
| [
{
"created": "Tue, 14 Nov 2017 22:57:24 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Feb 2018 19:58:45 GMT",
"version": "v2"
},
{
"created": "Sat, 21 Apr 2018 19:20:20 GMT",
"version": "v3"
}
] | 2018-04-24 | [
[
"Chung",
"Yu-An",
""
],
[
"Lee",
"Hung-Yi",
""
],
[
"Glass",
"James",
""
]
] | Although transfer learning has been shown to be successful for tasks like object and speech recognition, its applicability to question answering (QA) has yet to be well-studied. In this paper, we conduct extensive experiments to investigate the transferability of knowledge learned from a source QA dataset to a target dataset using two QA models. The performance of both models on a TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson et al., 2013) is significantly improved via a simple transfer learning technique from MovieQA (Tapaswi et al., 2016). In particular, one of the models achieves the state-of-the-art on all target datasets; for the TOEFL listening comprehension test, it outperforms the previous best model by 7%. Finally, we show that transfer learning is helpful even in unsupervised scenarios when correct answers for target QA dataset examples are not available. |
2010.10858 | Befekadu Gebraselase | Befekadu G. Gebraselase, Bjarne E. Helvik, Yuming Jiang | Transaction Characteristics of Bitcoin | null | null | null | null | cs.CR cs.DC | http://creativecommons.org/licenses/by/4.0/ | Blockchain has been considered an important technique to enable secure
management of virtual network functions and network slices. Understanding such
capabilities of a blockchain, e.g., transaction confirmation time, demands a
thorough study of the transaction characteristics of the blockchain. This paper
presents a comprehensive study of the transaction characteristics of Bitcoin --
the first blockchain application -- focusing on the underlying fundamental
processes. A set of results and findings is obtained, which provides new
insight into the transaction and traffic characteristics of
Bitcoin. As a highlight, the validity of several hypotheses/assumptions used in
the literature is examined with measurements for the first time.
| [
{
"created": "Wed, 21 Oct 2020 09:35:36 GMT",
"version": "v1"
}
] | 2020-10-22 | [
[
"Gebraselase",
"Befekadu G.",
""
],
[
"Helvik",
"Bjarne E.",
""
],
[
"Jiang",
"Yuming",
""
]
] | Blockchain has been considered an important technique to enable secure management of virtual network functions and network slices. Understanding such capabilities of a blockchain, e.g., transaction confirmation time, demands a thorough study of the transaction characteristics of the blockchain. This paper presents a comprehensive study of the transaction characteristics of Bitcoin -- the first blockchain application -- focusing on the underlying fundamental processes. A set of results and findings is obtained, which provides new insight into the transaction and traffic characteristics of Bitcoin. As a highlight, the validity of several hypotheses/assumptions used in the literature is examined with measurements for the first time. |
2111.05410 | Fatemeh Vahedian | Fatemeh Vahedian, Ruiyu Li, Puja Trivedi, Di Jin, Danai Koutra | Leveraging the Graph Structure of Neural Network Training Dynamics | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding the training dynamics of deep neural networks (DNNs) is
important as it can lead to improved training efficiency and task performance.
Recent works have demonstrated that representing a DNN's wiring as a static
graph cannot capture how it changes over the course of training. Thus, in this work,
we propose a compact, expressive temporal graph framework that effectively
captures the dynamics of many workhorse architectures in computer vision.
Specifically, it extracts an informative summary of graph properties (e.g.,
eigenvector centrality) over a sequence of DNN graphs obtained during training.
We demonstrate that our framework captures useful dynamics by accurately
predicting trained task performance when using a summary over early training
epochs (<5) across four different architectures and two image datasets.
Moreover, by using a novel, highly-scalable DNN graph representation, we also
show that the proposed framework captures generalizable dynamics as summaries
extracted from smaller-width networks are effective when evaluated on larger
widths.
| [
{
"created": "Tue, 9 Nov 2021 20:38:48 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Feb 2023 21:31:26 GMT",
"version": "v2"
}
] | 2023-02-22 | [
[
"Vahedian",
"Fatemeh",
""
],
[
"Li",
"Ruiyu",
""
],
[
"Trivedi",
"Puja",
""
],
[
"Jin",
"Di",
""
],
[
"Koutra",
"Danai",
""
]
] | Understanding the training dynamics of deep neural networks (DNNs) is important as it can lead to improved training efficiency and task performance. Recent works have demonstrated that representing a DNN's wiring as a static graph cannot capture how it changes over the course of training. Thus, in this work, we propose a compact, expressive temporal graph framework that effectively captures the dynamics of many workhorse architectures in computer vision. Specifically, it extracts an informative summary of graph properties (e.g., eigenvector centrality) over a sequence of DNN graphs obtained during training. We demonstrate that our framework captures useful dynamics by accurately predicting trained task performance when using a summary over early training epochs (<5) across four different architectures and two image datasets. Moreover, by using a novel, highly-scalable DNN graph representation, we also show that the proposed framework captures generalizable dynamics as summaries extracted from smaller-width networks are effective when evaluated on larger widths. |
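A sketch of the summary-extraction step: compute one graph property (here, mean eigenvector centrality) over a sequence of per-epoch graphs. The random graphs below stand in for graphs built from a DNN's weight matrices; how those graphs are constructed is the paper's contribution and is not reproduced here.

```python
import networkx as nx
import numpy as np

def dynamics_summary(graphs):
    """Mean eigenvector centrality per epoch graph: one illustrative entry of
    a 'summary of graph properties over a sequence of DNN graphs'."""
    return np.array([np.mean(list(
        nx.eigenvector_centrality_numpy(g, weight="weight").values()))
        for g in graphs])

rng = np.random.default_rng(0)
graphs = []
for _ in range(5):                       # stand-ins for per-epoch DNN graphs
    g = nx.gnp_random_graph(30, 0.2, seed=int(rng.integers(10 ** 6)))
    nx.set_edge_attributes(g, {e: float(rng.random()) for e in g.edges}, "weight")
    graphs.append(g)
print(dynamics_summary(graphs))          # feature vector for a performance predictor
```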
1909.04436 | Martin Shepperd | Martin Shepperd, Yuchen Guo, Ning Li, Mahir Arzoky, Andrea Capiluppi,
Steve Counsell, Giuseppe Destefanis, Stephen Swift, Allan Tucker, and Leila
Yousefi | The Prevalence of Errors in Machine Learning Experiments | 20th International Conference on Intelligent Data Engineering and
Automated Learning (IDEAL), 14--16 November 2019 | null | null | null | cs.LG cs.AI stat.AP stat.ML | http://creativecommons.org/licenses/by/4.0/ | Context: Conducting experiments is central to machine learning
research as a means to benchmark, evaluate and compare learning algorithms.
Consequently, it is important that we conduct reliable, trustworthy
experiments. Objective: We investigate the incidence of errors in a sample of
machine learning experiments in the domain of software defect prediction. Our
focus is simple arithmetical and statistical errors. Method: We analyse 49
papers describing 2456 individual experimental results from a previously
undertaken systematic review comparing supervised and unsupervised defect
prediction classifiers. We extract the confusion matrices and test for relevant
constraints, e.g., the marginal probabilities must sum to one. We also check
for multiple statistical significance testing errors. Results: We find that a
total of 22 out of 49 papers contain demonstrable errors. Of these, 7 were
statistical and 16 related to confusion matrix inconsistency (one paper
contained both classes of error). Conclusions: Whilst some errors may be of a
relatively trivial nature, e.g., transcription errors, their presence does not
engender confidence. We strongly urge researchers to follow open science
principles so errors can be more easily detected and corrected, and thus, as a
community, reduce this worryingly high error rate in our computational
experiments.
| [
{
"created": "Tue, 10 Sep 2019 12:32:00 GMT",
"version": "v1"
}
] | 2019-09-11 | [
[
"Shepperd",
"Martin",
""
],
[
"Guo",
"Yuchen",
""
],
[
"Li",
"Ning",
""
],
[
"Arzoky",
"Mahir",
""
],
[
"Capiluppi",
"Andrea",
""
],
[
"Counsell",
"Steve",
""
],
[
"Destefanis",
"Giuseppe",
""
],
[
"Swift",
"Stephen",
""
],
[
"Tucker",
"Allan",
""
],
[
"Yousefi",
"Leila",
""
]
] | Context: Conducting experiments is central to machine learning research as a means to benchmark, evaluate and compare learning algorithms. Consequently, it is important that we conduct reliable, trustworthy experiments. Objective: We investigate the incidence of errors in a sample of machine learning experiments in the domain of software defect prediction. Our focus is simple arithmetical and statistical errors. Method: We analyse 49 papers describing 2456 individual experimental results from a previously undertaken systematic review comparing supervised and unsupervised defect prediction classifiers. We extract the confusion matrices and test for relevant constraints, e.g., the marginal probabilities must sum to one. We also check for multiple statistical significance testing errors. Results: We find that a total of 22 out of 49 papers contain demonstrable errors. Of these, 7 were statistical and 16 related to confusion matrix inconsistency (one paper contained both classes of error). Conclusions: Whilst some errors may be of a relatively trivial nature, e.g., transcription errors, their presence does not engender confidence. We strongly urge researchers to follow open science principles so errors can be more easily detected and corrected, and thus, as a community, reduce this worryingly high error rate in our computational experiments. |
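The kind of arithmetic consistency check the abstract describes can be automated: re-derive reported metrics from the extracted confusion matrix and flag mismatches. The specific tolerance and the metrics checked below are assumptions for illustration.

```python
def check_confusion_matrix(tp, fp, fn, tn, precision, recall, tol=0.005):
    """Re-derive a paper's reported precision/recall from its confusion
    matrix and flag arithmetic inconsistencies (tolerance is an assumption)."""
    issues = []
    if min(tp, fp, fn, tn) < 0:
        issues.append("negative cell count")
    derived_p = tp / (tp + fp) if tp + fp else float("nan")
    derived_r = tp / (tp + fn) if tp + fn else float("nan")
    if abs(derived_p - precision) > tol:
        issues.append(f"precision: derived {derived_p:.3f}, reported {precision}")
    if abs(derived_r - recall) > tol:
        issues.append(f"recall: derived {derived_r:.3f}, reported {recall}")
    return issues

print(check_confusion_matrix(30, 10, 20, 40, precision=0.75, recall=0.60))  # []
print(check_confusion_matrix(30, 10, 20, 40, precision=0.85, recall=0.60))  # flagged
```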
2402.09769 | Ayon Borthakur | Aditya Somasundaram, Pushkal Mishra, Ayon Borthakur | Representation Learning Using a Single Forward Pass | Under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a neuroscience-inspired Solo Pass Embedded Learning Algorithm
(SPELA). SPELA is a prime candidate for training and inference applications in
Edge AI devices. At the same time, SPELA can optimally cater to the need for a
framework to study perceptual representation learning and formation. SPELA has
distinctive features such as neural priors (in the form of embedded vectors),
no weight transport, no update locking of weights, complete local Hebbian
learning, single forward pass with no storage of activations, and single weight
update per sample. Juxtaposed with traditional approaches, SPELA operates
without the need for backpropagation. We show that our algorithm can perform
nonlinear classification on a noisy boolean operation dataset. Additionally, we
exhibit high performance using SPELA across MNIST, KMNIST, and Fashion MNIST.
Lastly, we show the few-shot and 1-epoch learning capabilities of SPELA on
MNIST, KMNIST, and Fashion MNIST, where it consistently outperforms
backpropagation.
| [
{
"created": "Thu, 15 Feb 2024 07:47:10 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Somasundaram",
"Aditya",
""
],
[
"Mishra",
"Pushkal",
""
],
[
"Borthakur",
"Ayon",
""
]
] | We propose a neuroscience-inspired Solo Pass Embedded Learning Algorithm (SPELA). SPELA is a prime candidate for training and inference applications in Edge AI devices. At the same time, SPELA can optimally cater to the need for a framework to study perceptual representation learning and formation. SPELA has distinctive features such as neural priors (in the form of embedded vectors), no weight transport, no update locking of weights, complete local Hebbian learning, single forward pass with no storage of activations, and single weight update per sample. Juxtaposed with traditional approaches, SPELA operates without the need for backpropagation. We show that our algorithm can perform nonlinear classification on a noisy boolean operation dataset. Additionally, we exhibit high performance using SPELA across MNIST, KMNIST, and Fashion MNIST. Lastly, we show the few-shot and 1-epoch learning capabilities of SPELA on MNIST, KMNIST, and Fashion MNIST, where it consistently outperforms backpropagation. |
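A loose sketch of the ingredients the abstract lists (fixed random class embeddings as neural priors, a single forward pass with no stored activations, one local Hebbian-style weight update per sample); this is an interpretation for illustration, not the authors' exact algorithm.

```python
import numpy as np

def spela_like_update(W, x, target_embedding, lr=0.01):
    """One local, Hebbian-style update from a single forward pass: nudge the
    layer output toward a fixed random class embedding (the 'neural prior').
    No backprop, no stored activations, one weight update per sample."""
    y = np.tanh(W @ x)                             # single forward pass
    W += lr * np.outer(target_embedding - y, x)    # local outer-product update
    return W

rng = np.random.default_rng(0)
d_in, d_out, n_classes = 64, 16, 10
W = rng.normal(0.0, 0.1, (d_out, d_in))
priors = rng.normal(size=(n_classes, d_out))   # fixed embedded class vectors
x, label = rng.normal(size=d_in), 3
W = spela_like_update(W, x, priors[label])     # one update for this sample
```

At inference, a sample would be assigned to the class whose prior embedding is closest to the layer's output.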
2003.12725 | Chence Shi | Chence Shi, Minkai Xu, Hongyu Guo, Ming Zhang, Jian Tang | A Graph to Graphs Framework for Retrosynthesis Prediction | ICML 2020 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental problem in computational chemistry is to find a set of
reactants to synthesize a target molecule, a.k.a. retrosynthesis prediction.
Existing state-of-the-art methods rely on matching the target molecule with a
large set of reaction templates, which are very computationally expensive and
also suffer from the problem of coverage. In this paper, we propose a novel
template-free approach called G2Gs by transforming a target molecular graph
into a set of reactant molecular graphs. G2Gs first splits the target molecular
graph into a set of synthons by identifying the reaction centers, and then
translates the synthons to the final reactant graphs via a variational graph
translation framework. Experimental results show that G2Gs significantly
outperforms existing template-free approaches by up to 63% in terms of the
top-1 accuracy and achieves a performance close to that of state-of-the-art
template based approaches, but does not require domain knowledge and is much
more scalable.
| [
{
"created": "Sat, 28 Mar 2020 06:16:56 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jun 2021 09:43:56 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Aug 2021 03:14:26 GMT",
"version": "v3"
}
] | 2021-08-23 | [
[
"Shi",
"Chence",
""
],
[
"Xu",
"Minkai",
""
],
[
"Guo",
"Hongyu",
""
],
[
"Zhang",
"Ming",
""
],
[
"Tang",
"Jian",
""
]
] | A fundamental problem in computational chemistry is to find a set of reactants to synthesize a target molecule, a.k.a. retrosynthesis prediction. Existing state-of-the-art methods rely on matching the target molecule with a large set of reaction templates, which are very computationally expensive and also suffer from the problem of coverage. In this paper, we propose a novel template-free approach called G2Gs by transforming a target molecular graph into a set of reactant molecular graphs. G2Gs first splits the target molecular graph into a set of synthons by identifying the reaction centers, and then translates the synthons to the final reactant graphs via a variational graph translation framework. Experimental results show that G2Gs significantly outperforms existing template-free approaches by up to 63% in terms of the top-1 accuracy and achieves a performance close to that of state-of-the-art template based approaches, but does not require domain knowledge and is much more scalable. |
1306.3297 | Ognjen Arandjelovi\'c PhD | Ognjen Arandjelovic | Matching objects across the textured-smooth continuum | Australasian Conference on Robotics and Automation, 2012 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of 3D object recognition is of immense practical importance, with
the last decade witnessing a number of breakthroughs in the state of the art.
Most of the previous work has focused on the matching of textured objects using
local appearance descriptors extracted around salient image points. The
recently proposed bag of boundaries method was the first to address directly
the problem of matching smooth objects using boundary features. However, no
previous work has attempted to achieve a holistic treatment of the problem by
jointly using textural and shape features which is what we describe herein. Due
to the complementarity of the two modalities, we fuse the corresponding
matching scores and learn their relative weighting in a data specific manner by
optimizing discriminative performance on synthetically distorted data. For the
textural description of an object we adopt a representation in the form of a
histogram of SIFT based visual words. Similarly the apparent shape of an object
is represented by a histogram of discretized features capturing local shape. On
a large public database of a diverse set of objects, the proposed method is
shown to outperform significantly both purely textural and purely shape based
approaches for matching across viewpoint variation.
| [
{
"created": "Fri, 14 Jun 2013 05:52:58 GMT",
"version": "v1"
}
] | 2013-06-17 | [
[
"Arandjelovic",
"Ognjen",
""
]
] | The problem of 3D object recognition is of immense practical importance, with the last decade witnessing a number of breakthroughs in the state of the art. Most of the previous work has focused on the matching of textured objects using local appearance descriptors extracted around salient image points. The recently proposed bag of boundaries method was the first to address directly the problem of matching smooth objects using boundary features. However, no previous work has attempted to achieve a holistic treatment of the problem by jointly using textural and shape features which is what we describe herein. Due to the complementarity of the two modalities, we fuse the corresponding matching scores and learn their relative weighting in a data specific manner by optimizing discriminative performance on synthetically distorted data. For the textural description of an object we adopt a representation in the form of a histogram of SIFT based visual words. Similarly the apparent shape of an object is represented by a histogram of discretized features capturing local shape. On a large public database of a diverse set of objects, the proposed method is shown to outperform significantly both purely textural and purely shape based approaches for matching across viewpoint variation. |
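A sketch of the score-fusion step: combine textural and shape matching scores with a learned weight. A plain grid search on synthetic toy data stands in for the paper's discriminative optimization on synthetically distorted data; all names and numbers are illustrative.

```python
import numpy as np

def learn_fusion_weight(scores, labels, grid=np.linspace(0.0, 1.0, 101)):
    """Pick w for s = w*texture + (1-w)*shape that best separates matching
    from non-matching pairs (columns of `scores` are the two modality scores)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    best_w, best_gap = 0.5, -np.inf
    for w in grid:
        s = w * scores[:, 0] + (1.0 - w) * scores[:, 1]
        gap = s[labels].mean() - s[~labels].mean()   # match/non-match margin
        if gap > best_gap:
            best_w, best_gap = w, gap
    return best_w

rng = np.random.default_rng(0)
match = np.c_[rng.normal(0.8, 0.1, 50), rng.normal(0.6, 0.2, 50)]
non_match = np.c_[rng.normal(0.4, 0.1, 50), rng.normal(0.4, 0.2, 50)]
scores = np.vstack([match, non_match])
labels = np.r_[np.ones(50), np.zeros(50)]
print(learn_fusion_weight(scores, labels))
```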
2103.11982 | Ahmed Elzanaty Dr. | Amanat Kafizov, Ahmed Elzanaty, Lav R. Varshney, Mohamed-Slim Alouini | Wireless Network Coding with Intelligent Reflecting Surfaces | null | null | null | null | cs.NI cs.ET eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Conventional wireless techniques are becoming inadequate for beyond
fifth-generation (5G) networks due to latency and bandwidth considerations. To
improve the error performance and throughput of wireless communication systems,
we propose physical layer network coding (PNC) in an intelligent reflecting
surface (IRS)-assisted environment. We consider an IRS-aided butterfly network,
where we propose an algorithm for obtaining the optimal IRS phases. Also,
analytic expressions for the bit error rate (BER) are derived. The numerical
results demonstrate that the proposed scheme significantly improves the BER
performance. For instance, the BER at the relay in the presence of a 32-element
IRS is three orders of magnitude lower than that without an IRS.
| [
{
"created": "Mon, 22 Mar 2021 16:29:51 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Kafizov",
"Amanat",
""
],
[
"Elzanaty",
"Ahmed",
""
],
[
"Varshney",
"Lav R.",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] | Conventional wireless techniques are becoming inadequate for beyond fifth-generation (5G) networks due to latency and bandwidth considerations. To improve the error performance and throughput of wireless communication systems, we propose physical layer network coding (PNC) in an intelligent reflecting surface (IRS)-assisted environment. We consider an IRS-aided butterfly network, where we propose an algorithm for obtaining the optimal IRS phases. Also, analytic expressions for the bit error rate (BER) are derived. The numerical results demonstrate that the proposed scheme significantly improves the BER performance. For instance, the BER at the relay in the presence of a 32-element IRS is three orders of magnitude lower than that without an IRS. |
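To convey what "optimal IRS phases" means in the simplest setting: for a single link, each element's phase is set so that its reflected path adds coherently with the direct path. This is the textbook single-user alignment; the paper's butterfly-network optimization is more involved, and the channel model below is an illustrative Rayleigh assumption.

```python
import numpy as np

def optimal_irs_phases(h_direct, h_tx_irs, h_irs_rx):
    """Co-phase every reflected path with the direct path so all terms add
    coherently at the receiver (classical single-link alignment)."""
    return np.angle(h_direct) - np.angle(h_tx_irs * h_irs_rx)

rng = np.random.default_rng(0)
N = 32                                   # IRS elements, matching the example
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
h_ti = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_ir = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
theta = optimal_irs_phases(h_d, h_ti, h_ir)
g = h_d + np.sum(h_ti * h_ir * np.exp(1j * theta))
print(abs(g) ** 2 / abs(h_d) ** 2)       # received-power gain over no-IRS
```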
1912.12999 | Laura Kinkead | Laura Kinkead, Ahmed Allam, Michael Krauthammer | AutoDiscern: Rating the Quality of Online Health Information with
Hierarchical Encoder Attention-based Neural Networks | null | null | null | null | cs.LG cs.CL cs.CY stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patients increasingly turn to search engines and online content before, or in
place of, talking with a health professional. Low quality health information,
which is common on the internet, presents risks to the patient in the form of
misinformation and a possibly poorer relationship with their physician. To
address this, the DISCERN criteria (developed at University of Oxford) are used
to evaluate the quality of online health information. However, patients are
unlikely to take the time to apply these criteria to the health websites they
visit. We built an automated implementation of the DISCERN instrument (Brief
version) using machine learning models. We compared the performance of a
traditional model (Random Forest) with that of a hierarchical encoder
attention-based neural network (HEA) model using two language embeddings, BERT
and BioBERT. The HEA BERT and BioBERT models achieved average F1-macro scores
across all criteria of 0.75 and 0.74, respectively, outperforming the Random
Forest model (average F1-macro = 0.69). Overall, the neural network based
models achieved 81% and 86% average accuracy at 100% and 80% coverage,
respectively, compared to 94% manual rating accuracy. The attention mechanism
implemented in the HEA architectures not only provided 'model explainability'
by identifying reasonable supporting sentences for the documents fulfilling the
Brief DISCERN criteria, but also boosted F1 performance by 0.05 compared to the
same architecture without an attention mechanism. Our research suggests that it
is feasible to automate online health information quality assessment, which is
an important step towards empowering patients to become informed partners in
the healthcare process.
| [
{
"created": "Mon, 30 Dec 2019 16:44:41 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jan 2020 13:52:19 GMT",
"version": "v2"
},
{
"created": "Tue, 26 May 2020 16:01:39 GMT",
"version": "v3"
}
] | 2020-05-27 | [
[
"Kinkead",
"Laura",
""
],
[
"Allam",
"Ahmed",
""
],
[
"Krauthammer",
"Michael",
""
]
] | Patients increasingly turn to search engines and online content before, or in place of, talking with a health professional. Low quality health information, which is common on the internet, presents risks to the patient in the form of misinformation and a possibly poorer relationship with their physician. To address this, the DISCERN criteria (developed at University of Oxford) are used to evaluate the quality of online health information. However, patients are unlikely to take the time to apply these criteria to the health websites they visit. We built an automated implementation of the DISCERN instrument (Brief version) using machine learning models. We compared the performance of a traditional model (Random Forest) with that of a hierarchical encoder attention-based neural network (HEA) model using two language embeddings, BERT and BioBERT. The HEA BERT and BioBERT models achieved average F1-macro scores across all criteria of 0.75 and 0.74, respectively, outperforming the Random Forest model (average F1-macro = 0.69). Overall, the neural network based models achieved 81% and 86% average accuracy at 100% and 80% coverage, respectively, compared to 94% manual rating accuracy. The attention mechanism implemented in the HEA architectures not only provided 'model explainability' by identifying reasonable supporting sentences for the documents fulfilling the Brief DISCERN criteria, but also boosted F1 performance by 0.05 compared to the same architecture without an attention mechanism. Our research suggests that it is feasible to automate online health information quality assessment, which is an important step towards empowering patients to become informed partners in the healthcare process. |
1802.06940 | Alexander Semenov | Irina Gribanova and Alexander Semenov | Using Automatic Generation of Relaxation Constraints to Improve the
Preimage Attack on 39-step MD4 | This paper was submitted to MIPRO 2018 as a conference paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we construct a preimage attack on a truncated variant of the
MD4 hash function. Specifically, we study the MD4-39 function defined by the
first 39 steps of the MD4 algorithm. We suggest a new attack on MD4-39, which
develops the ideas proposed by H. Dobbertin in 1998. Namely, special
relaxation constraints are introduced in order to simplify the equations
corresponding to the problem of finding a preimage for an arbitrary MD4-39 hash
value. The equations supplemented with the relaxation constraints are then
reduced to the Boolean Satisfiability Problem (SAT) and solved using
state-of-the-art SAT solvers. We show that the effectiveness of a set of
relaxation constraints can be evaluated using a black-box function of a
special kind. Thus, we suggest an automatic method of relaxation constraint
generation that applies black-box optimization to this function. The
proposed method made it possible to find new relaxation constraints that
contribute to a SAT-based preimage attack on MD4-39 which significantly
outperforms the competition.
| [
{
"created": "Tue, 20 Feb 2018 02:47:41 GMT",
"version": "v1"
}
] | 2019-10-07 | [
[
"Gribanova",
"Irina",
""
],
[
"Semenov",
"Alexander",
""
]
] | In this paper we construct a preimage attack on a truncated variant of the MD4 hash function. Specifically, we study the MD4-39 function defined by the first 39 steps of the MD4 algorithm. We suggest a new attack on MD4-39, which develops the ideas proposed by H. Dobbertin in 1998. Namely, special relaxation constraints are introduced in order to simplify the equations corresponding to the problem of finding a preimage for an arbitrary MD4-39 hash value. The equations supplemented with the relaxation constraints are then reduced to the Boolean Satisfiability Problem (SAT) and solved using state-of-the-art SAT solvers. We show that the effectiveness of a set of relaxation constraints can be evaluated using a black-box function of a special kind. Thus, we suggest an automatic method of relaxation constraint generation that applies black-box optimization to this function. The proposed method made it possible to find new relaxation constraints that contribute to a SAT-based preimage attack on MD4-39 which significantly outperforms the competition. |
2305.19131 | Emil Bj\"ornson | Amna Irshad, Emil Bj\"ornson | Optimal Geometries of Dual-Polarized Arrays for Large Point-to-Point
MIMO Channels | 5 pages, 4 figures, to appear at the European Signal Processing
Conference, EUSIPCO 2023 | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional point-to-point line-of-sight channels have rank 1, irrespective
of the number of antennas and array geometries, due to far-field propagation
conditions. By contrast, recent papers in the holographic multiple-input
multiple-output (MIMO) literature characterize the maximum channel rank that
can be achieved between two continuous array apertures, which is much larger
than 1 under near-field propagation conditions. In this paper, we maximize the
channel capacity between two dual-polarized uniform rectangular arrays (URAs)
with discrete antenna elements for a given propagation distance. In particular,
we derive the antenna spacings that lead to an ideal MIMO channel where all
singular values are as similar as possible. We utilize this analytic result to
find the two array geometries that respectively minimize the aperture area and
the aperture length.
| [
{
"created": "Tue, 30 May 2023 15:42:07 GMT",
"version": "v1"
}
] | 2023-05-31 | [
[
"Irshad",
"Amna",
""
],
[
"Björnson",
"Emil",
""
]
] | Traditional point-to-point line-of-sight channels have rank 1, irrespective of the number of antennas and array geometries, due to far-field propagation conditions. By contrast, recent papers in the holographic multiple-input multiple-output (MIMO) literature characterize the maximum channel rank that can be achieved between two continuous array apertures, which is much larger than 1 under near-field propagation conditions. In this paper, we maximize the channel capacity between two dual-polarized uniform rectangular arrays (URAs) with discrete antenna elements for a given propagation distance. In particular, we derive the antenna spacings that lead to an ideal MIMO channel where all singular values are as similar as possible. We utilize this analytic result to find the two array geometries that respectively minimize the aperture area and the aperture length. |
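A sketch of the underlying effect: under a spherical-wave (near-field) model, antenna spacing controls the singular value profile of the line-of-sight channel. The spacing rule used below, $d=\sqrt{\lambda D/N}$ per dimension, is the classical orthogonality condition for single-polarized uniform arrays; the paper refines this analysis for dual-polarized URAs, and polarization is omitted here.

```python
import numpy as np

def los_channel(tx, rx, wavelength):
    """Spherical-wave line-of-sight channel between element coordinate lists
    (unit amplitudes; polarization omitted)."""
    H = np.zeros((len(rx), len(tx)), complex)
    for m, r in enumerate(rx):
        for k, t in enumerate(tx):
            d = np.linalg.norm(np.asarray(r) - np.asarray(t))
            H[m, k] = np.exp(-2j * np.pi * d / wavelength)
    return H

def ura(n, spacing, z):
    return [(i * spacing, j * spacing, z) for i in range(n) for j in range(n)]

lam, n, dist = 0.01, 4, 5.0                    # 30 GHz, 4x4 arrays, 5 m apart
spacing = np.sqrt(lam * dist / n)              # classical per-dimension rule
H = los_channel(ura(n, spacing, 0.0), ura(n, spacing, dist), lam)
s = np.linalg.svd(H, compute_uv=False)
print(np.round(s / s[0], 3))                   # near-flat => high-rank channel
```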
0910.0227 | Rdv Ijcsis | Renuka A., K. C. Shet | Hierarchical Approach for Key Management in Mobile Ad hoc Networks | 9 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS 2009, ISSN 1947 5500, Impact Factor 0.423,
http://sites.google.com/site/ijcsis/ | International Journal of Computer Science and Information
Security, IJCSIS, Vol. 5, No. 1, pp. 87-95, September 2009, USA | null | ISSN 1947 5500 | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Mobile Ad-hoc Network (MANET) is a collection of autonomous nodes or
terminals which communicate with each other by forming a multi-hop radio
network and maintaining connectivity in a decentralized manner. The
conventional security solutions that provide key management through trusted
authorities or centralized servers are infeasible for this new environment,
since mobile ad hoc networks are characterized by the absence of any
infrastructure, frequent mobility, and wireless links. We propose a group key
management scheme that is hierarchical and fully distributed, with no central
authority, and that uses a simple rekeying procedure suitable for large,
high-mobility mobile ad hoc networks. The rekeying procedure requires only one
round in our scheme; in Chinese Remainder Theorem Diffie-Hellman, Group
Diffie-Hellman, and Burmester-Desmedt schemes it is a constant 3, whereas in
other schemes such as Distributed Logical Key Hierarchy and Distributed One-Way
Function Trees it depends on the number of members. We reduce the energy
consumed in communicating the keying materials by reducing the number of bits
in the rekeying message. We show through analysis and simulations that our
scheme has lower computation, communication and energy consumption compared to
the existing schemes.
| [
{
"created": "Thu, 1 Oct 2009 18:56:57 GMT",
"version": "v1"
}
] | 2009-10-02 | [
[
"A.",
"Renuka",
""
],
[
"Shet",
"K. C.",
""
]
] | A Mobile Ad-hoc Network (MANET) is a collection of autonomous nodes or terminals which communicate with each other by forming a multi-hop radio network and maintaining connectivity in a decentralized manner. The conventional security solutions that provide key management through trusted authorities or centralized servers are infeasible for this new environment, since mobile ad hoc networks are characterized by the absence of any infrastructure, frequent mobility, and wireless links. We propose a group key management scheme that is hierarchical and fully distributed, with no central authority, and that uses a simple rekeying procedure suitable for large, high-mobility mobile ad hoc networks. The rekeying procedure requires only one round in our scheme; in Chinese Remainder Theorem Diffie-Hellman, Group Diffie-Hellman, and Burmester-Desmedt schemes it is a constant 3, whereas in other schemes such as Distributed Logical Key Hierarchy and Distributed One-Way Function Trees it depends on the number of members. We reduce the energy consumed in communicating the keying materials by reducing the number of bits in the rekeying message. We show through analysis and simulations that our scheme has lower computation, communication and energy consumption compared to the existing schemes. |
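To illustrate why Chinese Remainder Theorem-based rekeying can take a single round: one broadcast value can encode a different residue for each member, so every member recovers the new group key locally from its private material. This is a generic CRT rekeying sketch, not the paper's exact protocol.

```python
from functools import reduce
import secrets

def crt(residues, moduli):
    """Chinese Remainder Theorem: the unique X (mod prod m_i) with
    X = r_i (mod m_i), for pairwise-coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

# Each member i privately holds a coprime modulus m_i and a secret key k_i.
# One broadcast value X lets every member recover the group key locally.
moduli = [10007, 10009, 10037, 10039]            # distinct primes
member_keys = [secrets.randbelow(m) for m in moduli]
group_key = secrets.randbelow(min(moduli))       # key smaller than every m_i
X = crt([(group_key + k) % m for k, m in zip(member_keys, moduli)], moduli)
recovered = (X % moduli[2] - member_keys[2]) % moduli[2]   # member 2's view
assert recovered == group_key
```

Excluding a member from the next rekeying is as simple as dropping its residue from the broadcast, which is what makes one-round rekeying attractive for high-mobility groups.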
2309.15245 | Swetava Ganguli | Daria Reshetova and Swetava Ganguli and C. V. Krishnakumar Iyer and
Vipul Pandey | SeMAnD: Self-Supervised Anomaly Detection in Multimodal Geospatial
Datasets | Extended version of the accepted research track paper at the 31st ACM
SIGSPATIAL International Conference on Advances in Geographic Information
Systems (ACM SIGSPATIAL 2023), Hamburg, Germany. 11 pages, 8 figures, 6
tables | null | 10.1145/3589132.3625604 | null | cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a Self-supervised Anomaly Detection technique, called SeMAnD, to
detect geometric anomalies in Multimodal geospatial datasets. Geospatial data
comprises acquired and derived heterogeneous data modalities that we
transform to semantically meaningful, image-like tensors to address the
challenges of representation, alignment, and fusion of multimodal data. SeMAnD
consists of (i) a simple data augmentation strategy, called
RandPolyAugment, capable of generating diverse augmentations of vector
geometries, and (ii) a self-supervised training objective with three components
that incentivize learning representations of multimodal data that are
discriminative to local changes in one modality which are not corroborated by
the other modalities. Detecting local defects is crucial for geospatial anomaly
detection where even small anomalies (e.g., shifted, incorrectly connected,
malformed, or missing polygonal vector geometries like roads, buildings,
landcover, etc.) are detrimental to the experience and safety of users of
geospatial applications like mapping, routing, search, and recommendation
systems. Our empirical study on test sets of different types of real-world
geometric geospatial anomalies across 3 diverse geographical regions
demonstrates that SeMAnD is able to detect real-world defects and outperforms
domain-agnostic anomaly detection strategies by 4.8-19.7% as measured using
anomaly classification AUC. We also show that model performance increases (i)
up to 20.4% as the number of input modalities increases and (ii) up to 22.9% as
the diversity and strength of training data augmentations increase.
| [
{
"created": "Tue, 26 Sep 2023 20:18:31 GMT",
"version": "v1"
}
] | 2023-09-28 | [
[
"Reshetova",
"Daria",
""
],
[
"Ganguli",
"Swetava",
""
],
[
"Iyer",
"C. V. Krishnakumar",
""
],
[
"Pandey",
"Vipul",
""
]
] | We propose a Self-supervised Anomaly Detection technique, called SeMAnD, to detect geometric anomalies in Multimodal geospatial datasets. Geospatial data comprises acquired and derived heterogeneous data modalities that we transform to semantically meaningful, image-like tensors to address the challenges of representation, alignment, and fusion of multimodal data. SeMAnD consists of (i) a simple data augmentation strategy, called RandPolyAugment, capable of generating diverse augmentations of vector geometries, and (ii) a self-supervised training objective with three components that incentivize learning representations of multimodal data that are discriminative to local changes in one modality which are not corroborated by the other modalities. Detecting local defects is crucial for geospatial anomaly detection where even small anomalies (e.g., shifted, incorrectly connected, malformed, or missing polygonal vector geometries like roads, buildings, landcover, etc.) are detrimental to the experience and safety of users of geospatial applications like mapping, routing, search, and recommendation systems. Our empirical study on test sets of different types of real-world geometric geospatial anomalies across 3 diverse geographical regions demonstrates that SeMAnD is able to detect real-world defects and outperforms domain-agnostic anomaly detection strategies by 4.8-19.7% as measured using anomaly classification AUC. We also show that model performance increases (i) up to 20.4% as the number of input modalities increases and (ii) up to 22.9% as the diversity and strength of training data augmentations increase. |
2107.07153 | Oriol Corcoll | Oriol Corcoll | Semantic Image Cropping | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automatic image cropping techniques are commonly used to enhance the
aesthetic quality of an image; they do it by detecting the most beautiful or
the most salient parts of the image and removing the unwanted content to have a
smaller image that is more visually pleasing. In this thesis, I introduce an
additional dimension to the problem of cropping, semantics. I argue that image
cropping can also enhance the image's relevancy for a given entity by using the
semantic information contained in the image. I call this problem, Semantic
Image Cropping. To support my argument, I provide a new dataset containing 100
images with at least two different entities per image and four ground truth
croppings collected using Amazon Mechanical Turk. I use this dataset to show
that state-of-the-art cropping algorithms that only take into account
aesthetics do not perform well in the problem of semantic image cropping.
Additionally, I provide a new deep learning system that takes not just
aesthetics but also semantics into account to generate image croppings, and I
evaluate its performance using my new semantic cropping dataset, showing that
using the semantic information of an image can help to produce better
croppings.
| [
{
"created": "Thu, 15 Jul 2021 06:54:42 GMT",
"version": "v1"
}
] | 2021-07-16 | [
[
"Corcoll",
"Oriol",
""
]
] | Automatic image cropping techniques are commonly used to enhance the aesthetic quality of an image; they do it by detecting the most beautiful or the most salient parts of the image and removing the unwanted content to have a smaller image that is more visually pleasing. In this thesis, I introduce an additional dimension to the problem of cropping, semantics. I argue that image cropping can also enhance the image's relevancy for a given entity by using the semantic information contained in the image. I call this problem, Semantic Image Cropping. To support my argument, I provide a new dataset containing 100 images with at least two different entities per image and four ground truth croppings collected using Amazon Mechanical Turk. I use this dataset to show that state-of-the-art cropping algorithms that only take into account aesthetics do not perform well in the problem of semantic image cropping. Additionally, I provide a new deep learning system that takes not just aesthetics but also semantics into account to generate image croppings, and I evaluate its performance using my new semantic cropping dataset, showing that using the semantic information of an image can help to produce better croppings. |
1701.06109 | Hunter Elliott | David Richmond, Anna Payne-Tobin Jost, Talley Lambert, Jennifer
Waters, Hunter Elliott | DeadNet: Identifying Phototoxicity from Label-free Microscopy Images of
Cells using Deep ConvNets | null | null | null | null | cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exposure to intense illumination light is an unavoidable consequence of
fluorescence microscopy, and poses a risk to the health of the sample in every
live-cell fluorescence microscopy experiment. Furthermore, the possible
side-effects of phototoxicity on the scientific conclusions that are drawn from
an imaging experiment are often unaccounted for. Previously, controlling for
phototoxicity in imaging experiments required additional labels and
experiments, limiting its widespread application. Here we provide a
proof-of-principle demonstration that the phototoxic effects of an imaging
experiment can be identified directly from a single phase-contrast image using
deep convolutional neural networks (ConvNets). This lays the groundwork for an
automated tool for assessing cell health in a wide range of imaging
experiments. Interpretability of such a method is crucial for its adoption. We
take steps towards interpreting the classification mechanism of the trained
ConvNet by visualizing salient features of images that contribute to accurate
classification.
| [
{
"created": "Sun, 22 Jan 2017 01:43:05 GMT",
"version": "v1"
}
] | 2017-01-24 | [
[
"Richmond",
"David",
""
],
[
"Jost",
"Anna Payne-Tobin",
""
],
[
"Lambert",
"Talley",
""
],
[
"Waters",
"Jennifer",
""
],
[
"Elliott",
"Hunter",
""
]
] | Exposure to intense illumination light is an unavoidable consequence of fluorescence microscopy, and poses a risk to the health of the sample in every live-cell fluorescence microscopy experiment. Furthermore, the possible side-effects of phototoxicity on the scientific conclusions that are drawn from an imaging experiment are often unaccounted for. Previously, controlling for phototoxicity in imaging experiments required additional labels and experiments, limiting its widespread application. Here we provide a proof-of-principle demonstration that the phototoxic effects of an imaging experiment can be identified directly from a single phase-contrast image using deep convolutional neural networks (ConvNets). This lays the groundwork for an automated tool for assessing cell health in a wide range of imaging experiments. Interpretability of such a method is crucial for its adoption. We take steps towards interpreting the classification mechanism of the trained ConvNet by visualizing salient features of images that contribute to accurate classification. |
2108.08089 | Siyuan Dong | Siyuan Dong, Jun Cao, Zhong Fan | A Review on Cybersecurity in Smart Local Energy Systems: Requirements,
Challenges, and Standards | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The smart local energy system (SLES) is considered a promising pathway
to more effective and localised operation, benefiting from complex
information and communication technology (ICT) infrastructures and Internet
of Things (IoT) technologies. As part of the critical infrastructure, it is
important not only to put effective detection and management in place to
tackle potential cybersecurity issues, but also to adopt a considerable
number of standards to ensure the security of the IoT system and minimise the
risks. This study aims to review the existing standards, investigate their
compatibility with SLES development, and identify the areas to focus on in
the future. Although existing standards and protocols are highly fragmented,
our findings suggest that many of them can meet the requirements of the
applications and infrastructures of SLES. Additionally, many standards have
been introduced to protect information security and personal privacy due to
their increasing importance. The research also suggests that the industry
needs to produce more affordable and cyber-secured devices and services. For
the government and regulators, relevant guidelines on the minimum functional
and security requirements for applications should be provided. Additionally,
compliance testing and certification should be in place and carried out by an
independent third party to ensure that the components of the SLES ecosystem
are secure by design.
| [
{
"created": "Wed, 18 Aug 2021 11:12:57 GMT",
"version": "v1"
}
] | 2021-08-19 | [
[
"Dong",
"Siyuan",
""
],
[
"Cao",
"Jun",
""
],
[
"Fan",
"Zhong",
""
]
] | The smart local energy system (SLES) is considered a promising pathway to more effective and localised operation, benefiting from complex information and communication technology (ICT) infrastructures and Internet of Things (IoT) technologies. As part of the critical infrastructure, it is important not only to put effective detection and management in place to tackle potential cybersecurity issues, but also to adopt a considerable number of standards to ensure the security of the IoT system and minimise the risks. This study aims to review the existing standards, investigate their compatibility with SLES development, and identify the areas to focus on in the future. Although existing standards and protocols are highly fragmented, our findings suggest that many of them can meet the requirements of the applications and infrastructures of SLES. Additionally, many standards have been introduced to protect information security and personal privacy due to their increasing importance. The research also suggests that the industry needs to produce more affordable and cyber-secured devices and services. For the government and regulators, relevant guidelines on the minimum functional and security requirements for applications should be provided. Additionally, compliance testing and certification should be in place and carried out by an independent third party to ensure that the components of the SLES ecosystem are secure by design. |
2011.04919 | Chunchi Liu | Chunchi Liu, Minghui Xu, Hechuan Guo, Xiuzhen Cheng, Yinhao Xiao,
Dongxiao Yu, Bei Gong, Arkady Yerukhimovich, Shengling Wang and Weifeng Lv | Tokoin: A Coin-Based Accountable Access Control Scheme for Internet of
Things | null | null | null | null | cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the prevalence of Internet of Things (IoT) applications, IoT devices
interact closely with our surrounding environments, bringing us unparalleled
smartness and convenience. However, the development of secure IoT solutions
lags a long way behind, leaving us exposed to common unauthorized accesses
that may bring malicious attacks and unprecedented danger to our daily life.
The overprivilege attack, a widely reported phenomenon in IoT in which
unauthorized or excessive resources are accessed, is notoriously hard to
prevent, trace, and mitigate. To tackle this challenge, we propose
Tokoin-Based Access Control (TBAC), an accountable access control model
enabled by blockchain and Trusted Execution Environment (TEE) technologies, to
offer fine granularity, strong auditability, and access procedure control for
IoT. TBAC materializes the virtual access power into a secure cryptographic
coin of definite amount termed "tokoin" (token+coin), and manages it using
atomic and accountable state-transition functions in a blockchain. We also
realize access procedure control by attaching to every tokoin a fine-grained
access policy defining who is allowed to do what, when, where, and how. The
tokoin is peer-to-peer transferable, and can be modified only by the resource
owner when necessary. We fully implement TBAC with well-studied cryptographic
primitives and blockchain platforms and present a readily available app for
regular users. We also present a case study to demonstrate how TBAC is
employed to enable autonomous in-home cargo delivery while guaranteeing
access-policy compliance and the homeowner's physical security by regulating
the physical behaviors of the deliveryman.
| [
{
"created": "Tue, 10 Nov 2020 05:56:36 GMT",
"version": "v1"
}
] | 2020-11-11 | [
[
"Liu",
"Chunchi",
""
],
[
"Xu",
"Minghui",
""
],
[
"Guo",
"Hechuan",
""
],
[
"Cheng",
"Xiuzhen",
""
],
[
"Xiao",
"Yinhao",
""
],
[
"Yu",
"Dongxiao",
""
],
[
"Gong",
"Bei",
""
],
[
"Yerukhimovich",
"Arkady",
""
],
[
"Wang",
"Shengling",
""
],
[
"Lv",
"Weifeng",
""
]
] | With the prevalence of Internet of Things (IoT) applications, IoT devices interact closely with our surrounding environments, bringing us unparalleled smartness and convenience. However, the development of secure IoT solutions lags a long way behind, leaving us exposed to common unauthorized accesses that may bring malicious attacks and unprecedented danger to our daily life. The overprivilege attack, a widely reported phenomenon in IoT in which unauthorized or excessive resources are accessed, is notoriously hard to prevent, trace, and mitigate. To tackle this challenge, we propose Tokoin-Based Access Control (TBAC), an accountable access control model enabled by blockchain and Trusted Execution Environment (TEE) technologies, to offer fine granularity, strong auditability, and access procedure control for IoT. TBAC materializes the virtual access power into a secure cryptographic coin of definite amount termed "tokoin" (token+coin), and manages it using atomic and accountable state-transition functions in a blockchain. We also realize access procedure control by attaching to every tokoin a fine-grained access policy defining who is allowed to do what, when, where, and how. The tokoin is peer-to-peer transferable, and can be modified only by the resource owner when necessary. We fully implement TBAC with well-studied cryptographic primitives and blockchain platforms and present a readily available app for regular users. We also present a case study to demonstrate how TBAC is employed to enable autonomous in-home cargo delivery while guaranteeing access-policy compliance and the homeowner's physical security by regulating the physical behaviors of the deliveryman. |
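To make the record's "who is allowed to do what, when, where, and how" policy concrete, a hypothetical sketch of a tokoin-like record with a policy check follows; the field names and check logic are illustrative assumptions and omit the blockchain state transitions and TEE attestation that TBAC actually relies on:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tokoin:
    """Illustrative token carrying a fine-grained access policy."""
    owner: str            # who issued (and may modify) the tokoin
    holder: str           # who currently holds it
    action: str           # what the holder is allowed to do
    valid_from: int       # when: start of validity window (unix time)
    valid_until: int      # when: end of validity window
    location: str         # where the action may be performed
    procedure: str        # how: the required access procedure

def check_access(t: Tokoin, who: str, what: str, when: int,
                 where: str, how: str) -> bool:
    """Grant access only if every policy dimension matches."""
    return (who == t.holder and what == t.action
            and t.valid_from <= when <= t.valid_until
            and where == t.location and how == t.procedure)

t = Tokoin("alice", "courier-42", "open-door", 1_700_000_000,
           1_700_003_600, "front-door", "one-time-unlock")
print(check_access(t, "courier-42", "open-door", 1_700_001_000,
                   "front-door", "one-time-unlock"))  # True
```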
2003.07010 | Jason Gaitonde | Jason Gaitonde, Jon Kleinberg, Eva Tardos | Adversarial Perturbations of Opinion Dynamics in Networks | 28 pages; added new related work, fixed typos | null | null | null | cs.DS cs.GT cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the connections between network structure, opinion dynamics, and an
adversary's power to artificially induce disagreements. We approach these
questions by extending models of opinion formation in the social sciences to
represent scenarios, familiar from recent events, in which external actors seek
to destabilize communities through sophisticated information warfare tactics
via fake news and bots. In many instances, the intrinsic goals of these efforts
are not necessarily to shift the overall sentiment of the network, but rather
to induce discord. These perturbations diffuse via opinion dynamics on the
underlying network, through mechanisms that have been analyzed and abstracted
through work in computer science and the social sciences. We investigate the
properties of such attacks, considering optimal strategies both for the
adversary seeking to create disagreement and for the entities tasked with
defending the network from attack. We show that for different formulations of
these types of objectives, different regimes of the spectral structure of the
network will limit the adversary's capacity to sow discord; this enables us to
qualitatively describe which networks are most vulnerable against these
perturbations. We then consider the algorithmic task of a network defender to
mitigate these sorts of adversarial attacks by insulating nodes
heterogeneously; we show that, by considering the geometry of this problem,
this optimization task can be efficiently solved via convex programming.
Finally, we generalize these results to allow for two network structures, where
the opinion dynamics process and the measurement of disagreement become
uncoupled, and determine how the adversary's power changes; for instance, this
may arise when opinion dynamics are controlled in an online community via social
media, while disagreement is measured along "real-world" connections.
| [
{
"created": "Mon, 16 Mar 2020 04:01:09 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jul 2020 17:00:58 GMT",
"version": "v2"
}
] | 2020-07-14 | [
[
"Gaitonde",
"Jason",
""
],
[
"Kleinberg",
"Jon",
""
],
[
"Tardos",
"Eva",
""
]
] | We study the connections between network structure, opinion dynamics, and an adversary's power to artificially induce disagreements. We approach these questions by extending models of opinion formation in the social sciences to represent scenarios, familiar from recent events, in which external actors seek to destabilize communities through sophisticated information warfare tactics via fake news and bots. In many instances, the intrinsic goals of these efforts are not necessarily to shift the overall sentiment of the network, but rather to induce discord. These perturbations diffuse via opinion dynamics on the underlying network, through mechanisms that have been analyzed and abstracted through work in computer science and the social sciences. We investigate the properties of such attacks, considering optimal strategies both for the adversary seeking to create disagreement and for the entities tasked with defending the network from attack. We show that for different formulations of these types of objectives, different regimes of the spectral structure of the network will limit the adversary's capacity to sow discord; this enables us to qualitatively describe which networks are most vulnerable against these perturbations. We then consider the algorithmic task of a network defender to mitigate these sorts of adversarial attacks by insulating nodes heterogeneously; we show that, by considering the geometry of this problem, this optimization task can be efficiently solved via convex programming. Finally, we generalize these results to allow for two network structures, where the opinion dynamics process and the measurement of disagreement become uncoupled, and determine how the adversary's power changes; for instance, this may arise when opinion dynamics are controlled in an online community via social media, while disagreement is measured along "real-world" connections. |
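The opinion-formation process this line of work builds on is the Friedkin-Johnsen model, whose equilibrium opinions are z = (I + L)^{-1} s for innate opinions s and graph Laplacian L, with disagreement measured by the Laplacian quadratic form. A minimal sketch on a toy graph (the adversary's optimization from the paper is omitted):

```python
import numpy as np

# Adjacency matrix of a small undirected graph (a 4-node path).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

s = np.array([1.0, 0.8, -0.5, -1.0])    # innate opinions
z = np.linalg.solve(np.eye(4) + L, s)   # Friedkin-Johnsen equilibrium

disagreement = z @ L @ z                # sum over edges of (z_i - z_j)^2
print(z, disagreement)
```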
2305.18036 | Ludovic Thomas | Ludovic Thomas (EPFL) and Jean-Yves Le Boudec (EPFL) | Network-Calculus Service Curves of the Interleaved Regulator | 17 pages, 13 figures, 4 tables | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interleaved regulator (implemented by IEEE TSN Asynchronous Traffic
Shaping) is used in time-sensitive networks for reshaping the flows with
per-flow contracts. When applied to an aggregate of flows that come from a FIFO
system, an interleaved regulator that reshapes the flows with their initial
contracts does not increase the worst-case delay of the aggregate. This
shaping-for-free property supports the computation of end-to-end latency bounds
and the validation of the network's timing requirements. A common method to
establish the properties of a network element is to obtain a network-calculus
service-curve model. The existence of such a model for the interleaved
regulator remains an open question. If a service-curve model were found for the
interleaved regulator, then the analysis of this mechanism would no longer be
limited to the situations where the shaping-for-free holds, which would widen
its use in time-sensitive networks. In this paper, we investigate if
network-calculus service curves can capture the behavior of the interleaved
regulator. We find that an interleaved regulator placed outside of the
shaping-for-free requirements (after a non-FIFO system) can yield unbounded
latencies. Consequently, we prove that no network-calculus service curve exists
to explain the interleaved regulator's behavior. It is still possible to find
non-trivial service curves for the interleaved regulator. However, their
long-term rate cannot be large enough to provide any guarantee (specifically,
we prove that for the regulators that process at least four flows with the same
contract, the long-term rate of any service curve is upper bounded by three
times the rate of the per-flow contract).
| [
{
"created": "Mon, 29 May 2023 11:56:37 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Jun 2023 15:03:13 GMT",
"version": "v2"
}
] | 2023-06-05 | [
[
"Thomas",
"Ludovic",
"",
"EPFL"
],
[
"Boudec",
"Jean-Yves Le",
"",
"EPFL"
]
] | The interleaved regulator (implemented by IEEE TSN Asynchronous Traffic Shaping) is used in time-sensitive networks for reshaping the flows with per-flow contracts. When applied to an aggregate of flows that come from a FIFO system, an interleaved regulator that reshapes the flows with their initial contracts does not increase the worst-case delay of the aggregate. This shaping-for-free property supports the computation of end-to-end latency bounds and the validation of the network's timing requirements. A common method to establish the properties of a network element is to obtain a network-calculus service-curve model. The existence of such a model for the interleaved regulator remains an open question. If a service-curve model were found for the interleaved regulator, then the analysis of this mechanism would no longer be limited to the situations where the shaping-for-free holds, which would widen its use in time-sensitive networks. In this paper, we investigate if network-calculus service curves can capture the behavior of the interleaved regulator. We find that an interleaved regulator placed outside of the shaping-for-free requirements (after a non-FIFO system) can yield unbounded latencies. Consequently, we prove that no network-calculus service curve exists to explain the interleaved regulator's behavior. It is still possible to find non-trivial service curves for the interleaved regulator. However, their long-term rate cannot be large enough to provide any guarantee (specifically, we prove that for the regulators that process at least four flows with the same contract, the long-term rate of any service curve is upper bounded by three times the rate of the per-flow contract). |
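For background on the service-curve machinery the record refers to: the textbook network-calculus bound for a token-bucket flow alpha(t) = b + r*t crossing a rate-latency server beta(t) = R*max(t - T, 0) is a delay of at most T + b/R when r <= R. A small sketch of that standard bound (generic background, not the paper's interleaved-regulator analysis):

```python
def delay_bound(b: float, r: float, R: float, T: float) -> float:
    """Worst-case delay of a token-bucket (b, r) flow through a rate-latency
    (R, T) server; valid under the stability condition r <= R."""
    if r > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + b / R  # horizontal deviation between the two curves

# Example: burst 3000 bits, rate 1 Mb/s, server 10 Mb/s with 0.5 ms latency.
print(delay_bound(b=3000, r=1e6, R=10e6, T=0.5e-3))  # 0.0008 s
```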
2211.13227 | Binxin Yang | Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan
Sun, Dong Chen and Fang Wen | Paint by Example: Exemplar-based Image Editing with Diffusion Models | Code: https://github.com/Fantasy-Studio/Paint-by-Example | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language-guided image editing has achieved great success recently. In this
paper, for the first time, we investigate exemplar-guided image editing for
more precise control. We achieve this goal by leveraging self-supervised
training to disentangle and re-organize the source image and the exemplar.
However, the naive approach will cause obvious fusing artifacts. We carefully
analyze it and propose an information bottleneck and strong augmentations to
avoid the trivial solution of directly copying and pasting the exemplar image.
Meanwhile, to ensure the controllability of the editing process, we design an
arbitrary shape mask for the exemplar image and leverage the classifier-free
guidance to increase the similarity to the exemplar image. The whole framework
involves a single forward of the diffusion model without any iterative
optimization. We demonstrate that our method achieves an impressive performance
and enables controllable editing on in-the-wild images with high fidelity.
| [
{
"created": "Wed, 23 Nov 2022 18:59:52 GMT",
"version": "v1"
}
] | 2022-11-24 | [
[
"Yang",
"Binxin",
""
],
[
"Gu",
"Shuyang",
""
],
[
"Zhang",
"Bo",
""
],
[
"Zhang",
"Ting",
""
],
[
"Chen",
"Xuejin",
""
],
[
"Sun",
"Xiaoyan",
""
],
[
"Chen",
"Dong",
""
],
[
"Wen",
"Fang",
""
]
] | Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. |
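The classifier-free guidance mentioned in the record combines conditional and unconditional noise predictions; a minimal sketch of that combination follows, where `eps_model` is a stub standing in for the paper's diffusion network and the guidance scale is a placeholder:

```python
import numpy as np

def eps_model(x, cond):
    """Stub denoiser returning a fake noise prediction.  In the real system
    this is the diffusion U-Net conditioned on the (masked) exemplar image."""
    rng = np.random.default_rng(0 if cond is None else 1)
    return rng.normal(size=x.shape)

def guided_eps(x, cond, scale=5.0):
    """Classifier-free guidance: move the prediction away from the
    unconditional one along the conditional direction, amplified by `scale`."""
    e_uncond = eps_model(x, None)
    e_cond = eps_model(x, cond)
    return e_uncond + scale * (e_cond - e_uncond)

x = np.zeros((4, 4))
print(guided_eps(x, cond="exemplar", scale=5.0).shape)  # (4, 4)
```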
1204.4535 | Jay Shah | Jay Shah and Ayan Mahalanobis | A New Guess-and-Determine Attack on the A5/1 Stream Cipher | 14 pages, 4 figures, 3 tables | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Europe and North America, the most widely used stream cipher to ensure
privacy and confidentiality of conversations in GSM mobile phones is the A5/1.
In this paper, we present a new attack on the A5/1 stream cipher with an
average time complexity of 2^(48.5), which is much less than the brute-force
attack with a complexity of 2^(64). The attack has a 100% success rate and
requires about 5.65GB storage. We provide a detailed description of our new
attack along with its implementation and results.
| [
{
"created": "Fri, 20 Apr 2012 05:53:40 GMT",
"version": "v1"
},
{
"created": "Thu, 3 May 2012 06:54:15 GMT",
"version": "v2"
}
] | 2012-05-04 | [
[
"Shah",
"Jay",
""
],
[
"Mahalanobis",
"Ayan",
""
]
] | In Europe and North America, the most widely used stream cipher to ensure privacy and confidentiality of conversations in GSM mobile phones is the A5/1. In this paper, we present a new attack on the A5/1 stream cipher with an average time complexity of 2^(48.5), which is much less than the brute-force attack with a complexity of 2^(64). The attack has a 100% success rate and requires about 5.65GB storage. We provide a detailed description of our new attack along with its implementation and results. |
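For context, A5/1 generates keystream from three LFSRs with majority-based irregular clocking, the structure that guess-and-determine attacks exploit. A compact sketch of one clocking step, using the commonly published register lengths, feedback taps, and clocking-bit positions (the attack itself is not reproduced):

```python
# Register lengths, feedback taps, and clocking-bit positions of A5/1, per
# the commonly published specification (index 0 is where feedback enters).
LENS  = (19, 22, 23)
TAPS  = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))
CLOCK = (8, 10, 10)

def step(regs):
    """One majority-clocked step; returns updated registers and output bit."""
    maj = int(sum(r[c] for r, c in zip(regs, CLOCK)) >= 2)
    new = []
    for r, taps, c in zip(regs, TAPS, CLOCK):
        if r[c] == maj:                  # clock only the agreeing registers
            fb = 0
            for t in taps:
                fb ^= r[t]               # XOR of the feedback taps
            r = [fb] + r[:-1]            # shift the feedback bit in
        new.append(r)
    out = new[0][-1] ^ new[1][-1] ^ new[2][-1]   # keystream bit
    return new, out

regs = [[1] * n for n in LENS]           # toy state, not a real key setup
regs, bit = step(regs)
print(bit)
```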
1810.03587 | Chinmay Hegde | Chinmay Hegde | Algorithmic Aspects of Inverse Problems Using Generative Models | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The traditional approach of hand-crafting priors (such as sparsity) for
solving inverse problems is slowly being replaced by the use of richer learned
priors (such as those modeled by generative adversarial networks, or GANs). In
this work, we study the algorithmic aspects of such a learning-based approach
from a theoretical perspective. For certain generative network architectures,
we establish a simple non-convex algorithmic approach that (a) theoretically
enjoys linear convergence guarantees for certain inverse problems, and (b)
empirically improves upon conventional techniques such as back-propagation. We
also propose an extension of our approach that can handle model mismatch (i.e.,
situations where the generative network prior is not exactly applicable.)
Together, our contributions serve as building blocks towards a more complete
algorithmic understanding of generative models in inverse problems.
| [
{
"created": "Mon, 8 Oct 2018 17:29:47 GMT",
"version": "v1"
}
] | 2018-10-09 | [
[
"Hegde",
"Chinmay",
""
]
] | The traditional approach of hand-crafting priors (such as sparsity) for solving inverse problems is slowly being replaced by the use of richer learned priors (such as those modeled by generative adversarial networks, or GANs). In this work, we study the algorithmic aspects of such a learning-based approach from a theoretical perspective. For certain generative network architectures, we establish a simple non-convex algorithmic approach that (a) theoretically enjoys linear convergence guarantees for certain inverse problems, and (b) empirically improves upon conventional techniques such as back-propagation. We also propose an extension of our approach that can handle model mismatch (i.e., situations where the generative network prior is not exactly applicable.) Together, our contributions serve as building blocks towards a more complete algorithmic understanding of generative models in inverse problems. |
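A common algorithmic template in this literature is gradient descent in the generator's latent space on the measurement loss ||A G(z) - y||^2; a self-contained toy sketch with a fixed one-layer "generator" follows (the architecture, step size, and iteration count are placeholder assumptions, not the paper's networks or its convergence guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 50, 5, 20                        # signal dim, latent dim, measurements
W = rng.normal(size=(n, k)) / np.sqrt(n)   # toy generator weights
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix

def G(z):
    return np.tanh(W @ z)                  # one-layer "generative prior"

z_true = rng.normal(size=k)
y = A @ G(z_true)                          # noiseless measurements

z, lr = np.zeros(k), 0.02
for _ in range(1000):
    g = G(z)
    resid = A @ g - y
    # Chain rule: grad_z ||A G(z) - y||^2 = W^T [(1 - G(z)^2) * (2 A^T resid)]
    z -= lr * (W.T @ ((1.0 - g**2) * (2.0 * A.T @ resid)))

print(np.linalg.norm(A @ G(z) - y))        # measurement error after descent
```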
1905.12217 | Liwei Wu | Liwei Wu, Hsiang-Fu Yu, Nikhil Rao, James Sharpnack, Cho-Jui Hsieh | Graph DNA: Deep Neighborhood Aware Graph Encoding for Collaborative
Filtering | under review | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider recommender systems with side information in the
form of graphs. Existing collaborative filtering algorithms mainly utilize only
immediate neighborhood information and have a hard time taking advantage of
deeper neighborhoods beyond 1-2 hops. The main caveat of exploiting deeper
graph information is the rapidly growing time and space complexity when
incorporating information from these neighborhoods. In this paper, we propose
using Graph DNA, a novel Deep Neighborhood Aware graph encoding algorithm, for
exploiting deeper neighborhood information. DNA encoding computes approximate
deep neighborhood information in linear time using Bloom filters, a
space-efficient probabilistic data structure and results in a per-node encoding
that is logarithmic in the number of nodes in the graph. It can be used in
conjunction with both feature-based and graph-regularization-based
collaborative filtering algorithms. Graph DNA has the advantages of being
memory and time efficient and providing additional regularization when compared
to directly using higher order graph information. We conduct experiments on
real-world datasets, showing graph DNA can be easily used with 4 popular
collaborative filtering algorithms and consistently leads to a performance
boost with little computational and memory overhead.
| [
{
"created": "Wed, 29 May 2019 04:57:02 GMT",
"version": "v1"
}
] | 2019-05-30 | [
[
"Wu",
"Liwei",
""
],
[
"Yu",
"Hsiang-Fu",
""
],
[
"Rao",
"Nikhil",
""
],
[
"Sharpnack",
"James",
""
],
[
"Hsieh",
"Cho-Jui",
""
]
] | In this paper, we consider recommender systems with side information in the form of graphs. Existing collaborative filtering algorithms mainly utilize only immediate neighborhood information and have a hard time taking advantage of deeper neighborhoods beyond 1-2 hops. The main caveat of exploiting deeper graph information is the rapidly growing time and space complexity when incorporating information from these neighborhoods. In this paper, we propose using Graph DNA, a novel Deep Neighborhood Aware graph encoding algorithm, for exploiting deeper neighborhood information. DNA encoding computes approximate deep neighborhood information in linear time using Bloom filters, a space-efficient probabilistic data structure and results in a per-node encoding that is logarithmic in the number of nodes in the graph. It can be used in conjunction with both feature-based and graph-regularization-based collaborative filtering algorithms. Graph DNA has the advantages of being memory and time efficient and providing additional regularization when compared to directly using higher order graph information. We conduct experiments on real-world datasets, showing graph DNA can be easily used with 4 popular collaborative filtering algorithms and consistently leads to a performance boost with little computational and memory overhead. |
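The encoding idea above — covering multi-hop neighborhoods by iteratively OR-ing Bloom filters along edges — can be sketched compactly. This toy version uses an integer bitmask as the Bloom filter and double hashing; the filter size, hash count, and hop depth are placeholder choices rather than the paper's configuration:

```python
import hashlib

M, K = 256, 3   # Bloom filter size in bits, number of hash functions

def bloom_add(bits: int, item: str) -> int:
    """Set K bit positions derived by double hashing the item."""
    h = hashlib.sha256(item.encode()).digest()
    h1 = int.from_bytes(h[:8], "big")
    h2 = int.from_bytes(h[8:16], "big")
    for i in range(K):
        bits |= 1 << ((h1 + i * h2) % M)
    return bits

def maybe_contains(bits: int, item: str) -> bool:
    """True if all of the item's bits are set (false positives possible)."""
    return bloom_add(bits, item) == bits

def graph_dna(adj: dict, hops: int) -> dict:
    """Per-node Bloom filters covering all nodes within `hops` hops."""
    enc = {v: bloom_add(0, v) for v in adj}   # hop 0: each node's own id
    for _ in range(hops):
        nxt = {}
        for v, nbrs in adj.items():
            bits = enc[v]
            for u in nbrs:
                bits |= enc[u]                # OR in the neighbors' filters
            nxt[v] = bits
        enc = nxt
    return enc

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
enc = graph_dna(adj, hops=2)
print(maybe_contains(enc["c"], "a"))          # True: "a" is 2 hops from "c"
```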
1912.00486 | Ali Bereyhi | Saba Asaad and Ali Bereyhi and Ralf R. M\"uller and Rafael F. Schaefer | Secure Regularized Zero Forcing for Multiuser MIMOME Channels | Presented in the 2019 Asilomar Conference on Signals, Systems, and
Computers. 6 pages, 3 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new linear precoding scheme for downlink transmission
in MIMOME channels, referred to as secure regularized zero forcing. The scheme
modifies regularized zero forcing precoding, such that the beamformers further
suppress the information leakage towards the eavesdroppers. The proposed scheme
is characterized in the large-system limit, and a closed-form expression for
the achievable ergodic secrecy rate per user is derived. Numerical
investigations demonstrate high robustness against the quality of
eavesdroppers' channel.
| [
{
"created": "Sun, 1 Dec 2019 19:29:26 GMT",
"version": "v1"
}
] | 2019-12-03 | [
[
"Asaad",
"Saba",
""
],
[
"Bereyhi",
"Ali",
""
],
[
"Müller",
"Ralf R.",
""
],
[
"Schaefer",
"Rafael F.",
""
]
] | This paper proposes a new linear precoding scheme for downlink transmission in MIMOME channels, referred to as secure regularized zero forcing. The scheme modifies regularized zero forcing precoding, such that the beamformers further suppress the information leakage towards the eavesdroppers. The proposed scheme is characterized in the large-system limit, and a closed-form expression for the achievable ergodic secrecy rate per user is derived. Numerical investigations demonstrate high robustness against the quality of eavesdroppers' channel. |
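The baseline the scheme modifies is standard regularized zero forcing, W proportional to H^H (H H^H + alpha*I)^{-1}; a minimal numpy sketch of that baseline follows (the secrecy-aware modification and the large-system analysis are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 8                                   # users, BS antennas
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

alpha = K / 10.0                              # regularization, e.g. K/SNR at SNR=10
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
W /= np.linalg.norm(W)                        # normalize total transmit power

# Effective channel H @ W: close to diagonal => little inter-user interference.
print(np.round(np.abs(H @ W), 2))
```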
2406.02929 | Zihan Ye | Zihan Ye, Shreyank N. Gowda, Xiaobo Jin, Xiaowei Huang, Haotian Xu,
Yaochu Jin, Kaizhu Huang | Exploring Data Efficiency in Zero-Shot Learning with Diffusion Models | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-Shot Learning (ZSL) aims to enable classifiers to identify unseen
classes by enhancing data efficiency at the class level. This is achieved by
generating image features from pre-defined semantics of unseen classes.
However, most current approaches heavily depend on the number of samples from
seen classes, i.e. they do not consider instance-level effectiveness. In this
paper, we demonstrate that limited seen examples generally result in
deteriorated performance of generative models. To overcome these challenges, we
propose ZeroDiff, a Diffusion-based Generative ZSL model. This unified
framework incorporates diffusion models to improve data efficiency at both the
class and instance levels. Specifically, for instance-level effectiveness,
ZeroDiff utilizes a forward diffusion chain to transform limited data into an
expanded set of noised data. For class-level effectiveness, we design a
two-branch generation structure that consists of a Diffusion-based Feature
Generator (DFG) and a Diffusion-based Representation Generator (DRG). DFG
focuses on learning and sampling the distribution of cross-entropy-based
features, whilst DRG learns the supervised contrastive-based representation to
boost the zero-shot capabilities of DFG. Additionally, we employ three
discriminators to evaluate generated features from various aspects and
introduce a Wasserstein-distance-based mutual learning loss to transfer
knowledge among discriminators, thereby enhancing guidance for generation.
Demonstrated through extensive experiments on three popular ZSL benchmarks, our
ZeroDiff not only achieves significant improvements over existing ZSL methods
but also maintains robust performance even with scarce training data. Code will
be released upon acceptance.
| [
{
"created": "Wed, 5 Jun 2024 04:37:06 GMT",
"version": "v1"
}
] | 2024-06-06 | [
[
"Ye",
"Zihan",
""
],
[
"Gowda",
"Shreyank N.",
""
],
[
"Jin",
"Xiaobo",
""
],
[
"Huang",
"Xiaowei",
""
],
[
"Xu",
"Haotian",
""
],
[
"Jin",
"Yaochu",
""
],
[
"Huang",
"Kaizhu",
""
]
] | Zero-Shot Learning (ZSL) aims to enable classifiers to identify unseen classes by enhancing data efficiency at the class level. This is achieved by generating image features from pre-defined semantics of unseen classes. However, most current approaches heavily depend on the number of samples from seen classes, i.e. they do not consider instance-level effectiveness. In this paper, we demonstrate that limited seen examples generally result in deteriorated performance of generative models. To overcome these challenges, we propose ZeroDiff, a Diffusion-based Generative ZSL model. This unified framework incorporates diffusion models to improve data efficiency at both the class and instance levels. Specifically, for instance-level effectiveness, ZeroDiff utilizes a forward diffusion chain to transform limited data into an expanded set of noised data. For class-level effectiveness, we design a two-branch generation structure that consists of a Diffusion-based Feature Generator (DFG) and a Diffusion-based Representation Generator (DRG). DFG focuses on learning and sampling the distribution of cross-entropy-based features, whilst DRG learns the supervised contrastive-based representation to boost the zero-shot capabilities of DFG. Additionally, we employ three discriminators to evaluate generated features from various aspects and introduce a Wasserstein-distance-based mutual learning loss to transfer knowledge among discriminators, thereby enhancing guidance for generation. Demonstrated through extensive experiments on three popular ZSL benchmarks, our ZeroDiff not only achieves significant improvements over existing ZSL methods but also maintains robust performance even with scarce training data. Code will be released upon acceptance. |
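The forward diffusion chain used above to expand limited data follows the standard DDPM corruption q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I); a minimal sketch with a conventional linear beta schedule (the schedule values are common defaults, not necessarily the paper's):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)            # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)           # \bar{alpha}_t

def q_sample(x0: np.ndarray, t: int, rng) -> np.ndarray:
    """Draw x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(2048,))                 # stand-in for a class feature
print(q_sample(x0, t=250, rng=rng).std())     # noisier than x0 as t grows
```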
2404.14935 | Vincent Albert Wolff | Edmir Xhoxhi, Vincent Albert Wolff | A Data-Driven Analysis of Vulnerable Road User Safety in Interaction
with Connected Automated Vehicles | Accepted for 15th IEEE Vehicular Networking Conference (VNC) 2024 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to the World Health Organization, the involvement of Vulnerable
Road Users (VRUs) in traffic accidents remains a significant concern, with VRUs
accounting for over half of traffic fatalities. The increase of automation and
connectivity levels of vehicles still has an uncertain impact on VRU safety. By
deploying the Collective Perception Service (CPS), vehicles can include
information about VRUs in Vehicle-to-Everything (V2X) messages, thus raising
the general perception of the environment. Although an increased awareness is
considered positive, one could argue that the awareness ratio, the metric used
to measure perception, is only implicitly connected to the VRUs' safety. This
paper introduces a tailored metric, the Risk Factor (RF), to measure the risk
level for the interactions between Connected Automated Vehicles (CAVs) and
VRUs. By evaluating the RF, we assess the impact of V2X communication on VRU
risk mitigation. Our results show that high V2X penetration rates can reduce
mean risk, quantified by our proposed metric, by up to 44%. Although the median
risk value shows a significant decrease, suggesting a reduction in overall
risk, the distribution of risk values reveals that CPS's mitigation
effectiveness is overestimated, which is indicated by the divergence between RF
and awareness ratio. Additionally, by analyzing a real-world traffic dataset,
we pinpoint high-risk locations within a scenario, identifying areas near
intersections and behind parked cars as especially dangerous. Our methodology
can be ported and applied to other scenarios in order to identify high-risk
areas. We value the proposed RF as an insightful metric for quantifying VRU
safety in a highly automated and connected environment.
| [
{
"created": "Tue, 23 Apr 2024 11:24:38 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Xhoxhi",
"Edmir",
""
],
[
"Wolff",
"Vincent Albert",
""
]
] | According to the World Health Organization, the involvement of Vulnerable Road Users (VRUs) in traffic accidents remains a significant concern, with VRUs accounting for over half of traffic fatalities. The increase of automation and connectivity levels of vehicles still has an uncertain impact on VRU safety. By deploying the Collective Perception Service (CPS), vehicles can include information about VRUs in Vehicle-to-Everything (V2X) messages, thus raising the general perception of the environment. Although an increased awareness is considered positive, one could argue that the awareness ratio, the metric used to measure perception, is only implicitly connected to the VRUs' safety. This paper introduces a tailored metric, the Risk Factor (RF), to measure the risk level for the interactions between Connected Automated Vehicles (CAVs) and VRUs. By evaluating the RF, we assess the impact of V2X communication on VRU risk mitigation. Our results show that high V2X penetration rates can reduce mean risk, quantified by our proposed metric, by up to 44%. Although the median risk value shows a significant decrease, suggesting a reduction in overall risk, the distribution of risk values reveals that CPS's mitigation effectiveness is overestimated, which is indicated by the divergence between RF and awareness ratio. Additionally, by analyzing a real-world traffic dataset, we pinpoint high-risk locations within a scenario, identifying areas near intersections and behind parked cars as especially dangerous. Our methodology can be ported and applied to other scenarios in order to identify high-risk areas. We value the proposed RF as an insightful metric for quantifying VRU safety in a highly automated and connected environment. |
1804.05358 | Suzhen Wang | Suzhen Wang, Jingjing Luo, Yuan-Hsun Lo, Wing Shing Wong | Forwarding and Optical Indices in an All-Optical BCube Network | 8 pages, 4 figures, a conference paper | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BCube is a highly scalable and cost-effective networking topology, which has
been widely applied to modular datacenters. Optical technologies based on
Wavelength Division Multiplexing (WDM) are gaining popularity for Data Center
Networks (DCNs) due to their technological strengths such as low communication
latency, low power consumption, and high link bandwidth. Therefore, it is worth
investigating optical techniques into the BCube architecture for future DCNs.
For this purpose, we study the forwarding and optical indices in an all-optical
BCube network. Consider an all-optical BCube network in which every host sets
up a connection with every other host. The optical index is the minimum number
of wavelengths required by the network to support such a host-to-host traffic,
under the restriction that each connection is assigned a wavelength that
remains constant in the network. A routing is a set of directed paths specified
for all host pairs. By defining the maximum link load of a routing as the
maximum number of paths passing through any link, the forwarding index is
measured to be the minimum of maximum link load over all possible routings. The
forwarding index turns out to be a natural lower bound of the optical index. In
this paper, we first compute the forwarding index of an all-optical BCube
network. Then, we derive an upper bound of the optical index by providing an
oblivious routing and wavelength assignment (RWA) scheme, which attains the
lower bound given by the forwarding index in some small cases. Finally, a
tighter upper bound is obtained by means of the chromatic numbers in Graph
Theory.
| [
{
"created": "Sun, 15 Apr 2018 13:44:59 GMT",
"version": "v1"
}
] | 2018-04-17 | [
[
"Wang",
"Suzhen",
""
],
[
"Luo",
"Jingjing",
""
],
[
"Lo",
"Yuan-Hsun",
""
],
[
"Wong",
"Wing Shing",
""
]
] | BCube is a highly scalable and cost-effective networking topology, which has been widely applied to modular datacenters. Optical technologies based on Wavelength Division Multiplexing (WDM) are gaining popularity for Data Center Networks (DCNs) due to their technological strengths such as low communication latency, low power consumption, and high link bandwidth. Therefore, it is worth incorporating optical techniques into the BCube architecture for future DCNs. For this purpose, we study the forwarding and optical indices in an all-optical BCube network. Consider an all-optical BCube network in which every host sets up a connection with every other host. The optical index is the minimum number of wavelengths required by the network to support such a host-to-host traffic, under the restriction that each connection is assigned a wavelength that remains constant in the network. A routing is a set of directed paths specified for all host pairs. By defining the maximum link load of a routing as the maximum number of paths passing through any link, the forwarding index is measured to be the minimum of maximum link load over all possible routings. The forwarding index turns out to be a natural lower bound of the optical index. In this paper, we first compute the forwarding index of an all-optical BCube network. Then, we derive an upper bound of the optical index by providing an oblivious routing and wavelength assignment (RWA) scheme, which attains the lower bound given by the forwarding index in some small cases. Finally, a tighter upper bound is obtained by means of the chromatic numbers in Graph Theory. |
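Once a routing is fixed, its maximum link load is straightforward to compute; a toy sketch using shortest-path routing over all ordered node pairs of a small connected graph follows (an arbitrary ring rather than a BCube, and without the wavelength-assignment step):

```python
from collections import deque
from itertools import permutations

def bfs_path(adj, s, t):
    """One shortest s-t path via breadth-first search (connected graph assumed)."""
    prev = {s: None}
    q = deque([s])
    while q:
        v = q.popleft()
        if v == t:
            break
        for u in adj[v]:
            if u not in prev:
                prev[u] = v
                q.append(u)
    path, v = [], t
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1]

def max_link_load(adj):
    """Maximum link load of the shortest-path routing over all ordered pairs."""
    load = {}
    for s, t in permutations(adj, 2):
        p = bfs_path(adj, s, t)
        for e in zip(p, p[1:]):                  # directed links along the path
            load[e] = load.get(e, 0) + 1
    return max(load.values())

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
print(max_link_load(ring))                            # 3 for this routing
```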
1812.06216 | Sebastian Koch | Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams,
Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, Daniele Panozzo | ABC: A Big CAD Model Dataset For Geometric Deep Learning | 15 pages | null | null | null | cs.GR cs.CG cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce ABC-Dataset, a collection of one million Computer-Aided Design
(CAD) models for research of geometric deep learning methods and applications.
Each model is a collection of explicitly parametrized curves and surfaces,
providing ground truth for differential quantities, patch segmentation,
geometric feature detection, and shape reconstruction. Sampling the parametric
descriptions of surfaces and curves allows generating data in different formats
and resolutions, enabling fair comparisons for a wide range of geometric
learning algorithms. As a use case for our dataset, we perform a large-scale
benchmark for estimation of surface normals, comparing existing data driven
methods and evaluating their performance against both the ground truth and
traditional normal estimation methods.
| [
{
"created": "Sat, 15 Dec 2018 01:21:48 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2019 07:18:44 GMT",
"version": "v2"
}
] | 2019-05-01 | [
[
"Koch",
"Sebastian",
""
],
[
"Matveev",
"Albert",
""
],
[
"Jiang",
"Zhongshi",
""
],
[
"Williams",
"Francis",
""
],
[
"Artemov",
"Alexey",
""
],
[
"Burnaev",
"Evgeny",
""
],
[
"Alexa",
"Marc",
""
],
[
"Zorin",
"Denis",
""
],
[
"Panozzo",
"Daniele",
""
]
] | We introduce ABC-Dataset, a collection of one million Computer-Aided Design (CAD) models for research of geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair comparisons for a wide range of geometric learning algorithms. As a use case for our dataset, we perform a large-scale benchmark for estimation of surface normals, comparing existing data driven methods and evaluating their performance against both the ground truth and traditional normal estimation methods. |
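The traditional normal-estimation baseline referenced in the record is typically local PCA: the normal at a point is the eigenvector with the smallest eigenvalue of the covariance of its k nearest neighbors. A minimal sketch (k and the toy plane data are arbitrary choices):

```python
import numpy as np

def pca_normal(points: np.ndarray, i: int, k: int = 10) -> np.ndarray:
    """Estimate the surface normal at points[i] from its k nearest neighbors."""
    d = np.linalg.norm(points - points[i], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    cov = np.cov(nbrs.T)                     # 3x3 covariance of the patch
    eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvec[:, 0]                      # smallest-variance direction

# Toy data: noisy samples of the plane z = 0; the true normal is (0, 0, 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
print(np.round(pca_normal(pts, 0), 2))      # approximately +/-(0, 0, 1)
```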
1409.2318 | Bernhard Rumpe | Arne Haber, Holger Renel, Bernhard Rumpe, Ina Schaefer, Frank van der
Linden | Hierarchical Variability Modeling for Software Architectures | 10 pages, 9 figures. Proceedings of International Software Product
Lines Conference (SPLC 2011), IEEE Computer Society, August 2011 | null | 10.1109/SPLC.2011.28 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchically decomposed component-based system development reduces design
complexity by supporting distribution of work and component reuse. For product
line development, the variability of the components to be deployed in different
products has to be represented by appropriate means. In this paper, we propose
hierarchical variability modeling which allows specifying component variability
integrated with the component hierarchy and locally to the components.
Components can contain variation points determining where components may vary.
Associated variants define how this variability can be realized in different
component configurations. We present a meta model for hierarchical variability
modeling to formalize the conceptual ideas. In order to obtain an
implementation of the proposed approach together with tool support, we extend
the existing architectural description language MontiArc with hierarchical
variability modeling. We illustrate the presented approach using an example
from the automotive systems domain.
| [
{
"created": "Mon, 8 Sep 2014 12:14:29 GMT",
"version": "v1"
}
] | 2014-09-09 | [
[
"Haber",
"Arne",
""
],
[
"Renel",
"Holger",
""
],
[
"Rumpe",
"Bernhard",
""
],
[
"Schaefer",
"Ina",
""
],
[
"van der Linden",
"Frank",
""
]
] | Hierarchically decomposed component-based system development reduces design complexity by supporting distribution of work and component reuse. For product line development, the variability of the components to be deployed in different products has to be represented by appropriate means. In this paper, we propose hierarchical variability modeling which allows specifying component variability integrated with the component hierarchy and locally to the components. Components can contain variation points determining where components may vary. Associated variants define how this variability can be realized in different component configurations. We present a meta model for hierarchical variability modeling to formalize the conceptual ideas. In order to obtain an implementation of the proposed approach together with tool support, we extend the existing architectural description language MontiArc with hierarchical variability modeling. We illustrate the presented approach using an example from the automotive systems domain. |
2408.02036 | Yujin Ren | Yujin Ren, Jiaxin Zhang, Lianwen Jin | LEGO: Self-Supervised Representation Learning for Scene Text Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, significant progress has been made in scene text recognition
by data-driven methods. However, due to the scarcity of annotated real-world
data, the training of these methods predominantly relies on synthetic data. The
distribution gap between synthetic and real data constrains the further
performance improvement of these methods in real-world applications. To tackle
this problem, a highly promising approach is to utilize massive amounts of
unlabeled real data for self-supervised training, which has been widely proven
effective in many NLP and CV tasks. Nevertheless, generic self-supervised
methods are unsuitable for scene text images due to their sequential nature. To
address this issue, we propose a Local Explicit and Global Order-aware
self-supervised representation learning method (LEGO) that accounts for the
characteristics of scene text images. Inspired by the human cognitive process
of learning words, which involves spelling, reading, and writing, we propose
three novel pre-text tasks for LEGO to model sequential, semantic, and
structural features, respectively. The entire pre-training process is optimized
by using a consistent Text Knowledge Codebook. Extensive experiments validate
that LEGO outperforms previous scene text self-supervised methods. The
recognizer incorporated with our pre-trained model achieves superior or
comparable performance compared to state-of-the-art scene text recognition
methods on six benchmarks. Furthermore, we demonstrate that LEGO can achieve
superior performance in other text-related tasks.
| [
{
"created": "Sun, 4 Aug 2024 14:07:14 GMT",
"version": "v1"
}
] | 2024-08-06 | [
[
"Ren",
"Yujin",
""
],
[
"Zhang",
"Jiaxin",
""
],
[
"Jin",
"Lianwen",
""
]
] | In recent years, significant progress has been made in scene text recognition by data-driven methods. However, due to the scarcity of annotated real-world data, the training of these methods predominantly relies on synthetic data. The distribution gap between synthetic and real data constrains the further performance improvement of these methods in real-world applications. To tackle this problem, a highly promising approach is to utilize massive amounts of unlabeled real data for self-supervised training, which has been widely proven effective in many NLP and CV tasks. Nevertheless, generic self-supervised methods are unsuitable for scene text images due to their sequential nature. To address this issue, we propose a Local Explicit and Global Order-aware self-supervised representation learning method (LEGO) that accounts for the characteristics of scene text images. Inspired by the human cognitive process of learning words, which involves spelling, reading, and writing, we propose three novel pre-text tasks for LEGO to model sequential, semantic, and structural features, respectively. The entire pre-training process is optimized by using a consistent Text Knowledge Codebook. Extensive experiments validate that LEGO outperforms previous scene text self-supervised methods. The recognizer incorporated with our pre-trained model achieves superior or comparable performance compared to state-of-the-art scene text recognition methods on six benchmarks. Furthermore, we demonstrate that LEGO can achieve superior performance in other text-related tasks. |
2305.15358 | Siddhant Garg | Luca Di Liello, Siddhant Garg, Alessandro Moschitti | Context-Aware Transformer Pre-Training for Answer Sentence Selection | Accepted at ACL 2023 | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Answer Sentence Selection (AS2) is a core component for building an accurate
Question Answering pipeline. AS2 models rank a set of candidate sentences based
on how likely they answer a given question. The state of the art in AS2
exploits pre-trained transformers by transferring them on large annotated
datasets, while using local contextual information around the candidate
sentence. In this paper, we propose three pre-training objectives designed to
mimic the downstream fine-tuning task of contextual AS2. This allows for
specializing LMs when fine-tuning for contextual AS2. Our experiments on three
public and two large-scale industrial datasets show that our pre-training
approaches (applied to RoBERTa and ELECTRA) can improve baseline contextual AS2
accuracy by up to 8% on some datasets.
| [
{
"created": "Wed, 24 May 2023 17:10:45 GMT",
"version": "v1"
}
] | 2023-05-25 | [
[
"Di Liello",
"Luca",
""
],
[
"Garg",
"Siddhant",
""
],
[
"Moschitti",
"Alessandro",
""
]
] | Answer Sentence Selection (AS2) is a core component for building an accurate Question Answering pipeline. AS2 models rank a set of candidate sentences based on how likely they answer a given question. The state of the art in AS2 exploits pre-trained transformers by transferring them on large annotated datasets, while using local contextual information around the candidate sentence. In this paper, we propose three pre-training objectives designed to mimic the downstream fine-tuning task of contextual AS2. This allows for specializing LMs when fine-tuning for contextual AS2. Our experiments on three public and two large-scale industrial datasets show that our pre-training approaches (applied to RoBERTa and ELECTRA) can improve baseline contextual AS2 accuracy by up to 8% on some datasets. |
2002.11049 | Zhe Yu | Zhe Yu, Fahmid Morshed Fahid, Huy Tu, Tim Menzies | Identifying Self-Admitted Technical Debts with Jitterbug: A Two-step
Approach | 14 pages, 3 pages for appendix, 6+3 figures, 10 tables. Accepted by
TSE journal | null | 10.1109/TSE.2020.3031401 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keeping track of and managing Self-Admitted Technical Debts (SATDs) are
important to maintaining a healthy software project. This requires much time
and effort from human experts to identify the SATDs manually. The current
automated solutions do not have satisfactory precision and recall in
identifying SATDs to fully automate the process. To solve the above problems,
we propose a two-step framework called Jitterbug for identifying SATDs.
Jitterbug first identifies the "easy to find" SATDs automatically with close to
100% precision using a novel pattern recognition technique. Subsequently,
machine learning techniques are applied to assist human experts in manually
identifying the remaining "hard to find" SATDs with reduced human effort. Our
simulation studies on ten software projects show that Jitterbug can identify
SATDs more efficiently (with less human effort) than the prior state-of-the-art
methods.
| [
{
"created": "Tue, 25 Feb 2020 17:20:05 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jun 2020 23:18:26 GMT",
"version": "v2"
},
{
"created": "Sat, 17 Oct 2020 00:13:06 GMT",
"version": "v3"
}
] | 2020-10-20 | [
[
"Yu",
"Zhe",
""
],
[
"Fahid",
"Fahmid Morshed",
""
],
[
"Tu",
"Huy",
""
],
[
"Menzies",
"Tim",
""
]
] | Keeping track of and managing Self-Admitted Technical Debts (SATDs) are important to maintaining a healthy software project. This requires much time and effort from human experts to identify the SATDs manually. The current automated solutions do not have satisfactory precision and recall in identifying SATDs to fully automate the process. To solve the above problems, we propose a two-step framework called Jitterbug for identifying SATDs. Jitterbug first identifies the "easy to find" SATDs automatically with close to 100% precision using a novel pattern recognition technique. Subsequently, machine learning techniques are applied to assist human experts in manually identifying the remaining "hard to find" SATDs with reduced human effort. Our simulation studies on ten software projects show that Jitterbug can identify SATDs more efficiently (with less human effort) than the prior state-of-the-art methods. |
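The "easy to find" step above is pattern-based; as a rough illustration, here is a hypothetical regex filter over code comments. The pattern list below is an assumption for illustration only — Jitterbug mines its actual high-precision patterns from the labeled projects:

```python
import re

# Illustrative SATD indicators; Jitterbug derives its real pattern set
# from data rather than hard-coding one.
SATD_PATTERN = re.compile(r"\b(todo|fixme|hack|xxx|workaround|temporary fix)\b",
                          re.IGNORECASE)

comments = [
    "// TODO: handle the overflow case properly",
    "# cache invalidation works fine here",
    "/* ugly hack until the parser is rewritten */",
]
for c in comments:
    label = "SATD" if SATD_PATTERN.search(c) else "clean"
    print(f"{label}: {c}")
```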
1805.02258 | Andrey Kutuzov | Andrey Kutuzov | Russian word sense induction by clustering averaged word embeddings | Proceedings of the 24rd International Conference on Computational
Linguistics and Intellectual Technologies (Dialogue-2018) | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | The paper reports our participation in the shared task on word sense
induction and disambiguation for the Russian language (RUSSE-2018). Our team
was ranked 2nd for the wiki-wiki dataset (containing mostly homonyms) and 5th
for the bts-rnc and active-dict datasets (containing mostly polysemous words)
among all 19 participants.
The method we employed was extremely naive. It implied representing contexts
of ambiguous words as averaged word embedding vectors, using off-the-shelf
pre-trained distributional models. Then, these vector representations were
clustered with mainstream clustering techniques, thus producing the groups
corresponding to the ambiguous word senses. As a side result, we show that word
embedding models trained on small but balanced corpora can be superior to those
trained on large but noisy data - not only in intrinsic evaluation, but also in
downstream tasks like word sense induction.
| [
{
"created": "Sun, 6 May 2018 18:25:12 GMT",
"version": "v1"
}
] | 2018-05-08 | [
[
"Kutuzov",
"Andrey",
""
]
] | The paper reports our participation in the shared task on word sense induction and disambiguation for the Russian language (RUSSE-2018). Our team was ranked 2nd for the wiki-wiki dataset (containing mostly homonyms) and 5th for the bts-rnc and active-dict datasets (containing mostly polysemous words) among all 19 participants. The method we employed was extremely naive. It implied representing contexts of ambiguous words as averaged word embedding vectors, using off-the-shelf pre-trained distributional models. Then, these vector representations were clustered with mainstream clustering techniques, thus producing the groups corresponding to the ambiguous word senses. As a side result, we show that word embedding models trained on small but balanced corpora can be superior to those trained on large but noisy data - not only in intrinsic evaluation, but also in downstream tasks like word sense induction. |
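The described method is simple enough to sketch end to end: average the embeddings of an ambiguous word's context words and cluster the resulting vectors. The random embeddings below are placeholders for the pre-trained distributional models used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim = 50
vocab = ["river", "water", "money", "loan", "steep", "cash", "fish"]
emb = {w: rng.normal(size=dim) for w in vocab}   # placeholder embeddings

def context_vector(context_words):
    """Represent a context of an ambiguous word by its averaged embedding."""
    vecs = [emb[w] for w in context_words if w in emb]
    return np.mean(vecs, axis=0)

# Contexts of the ambiguous word "bank" (content words only).
contexts = [["river", "water", "fish"], ["money", "loan", "cash"],
            ["steep", "river", "water"], ["cash", "money"]]
X = np.stack([context_vector(c) for c in contexts])

senses = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(senses)   # contexts grouped into induced senses
```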
2305.02651 | Maciej Wielgosz | Maciej Wielgosz and Stefano Puliti and Phil Wilkes and Rasmus Astrup | Point2Tree(P2T) -- framework for parameter tuning of semantic and
instance segmentation used with mobile laser scanning data in coniferous
forest | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article introduces Point2Tree, a novel framework that incorporates a
three-stage process involving semantic segmentation, instance segmentation,
and optimization analysis of hyperparameter importance. It introduces a
comprehensive and modular approach to processing laser point clouds in
forestry. We tested it on two independent datasets. The first area was located
in an actively managed, boreal-coniferous-dominated forest in V{\aa}ler, Norway,
where 16 circular plots of 400 square meters were selected to cover a range of
forest conditions in terms of species composition and stand density. We trained
a model based on the Pointnet++ architecture, which achieves an F1-score of
0.92 in semantic segmentation. As a second step in our pipeline, we used a
graph-based approach for instance segmentation, which reached an F1-score of
approximately 0.6. The optimization allowed us to further boost the performance
of the pipeline by approximately 4 percentage points.
| [
{
"created": "Thu, 4 May 2023 08:45:17 GMT",
"version": "v1"
}
] | 2023-05-05 | [
[
"Wielgosz",
"Maciej",
""
],
[
"Puliti",
"Stefano",
""
],
[
"Wilkes",
"Phil",
""
],
[
"Astrup",
"Rasmus",
""
]
] | This article introduces Point2Tree, a novel framework that incorporates a three-stage process involving semantic segmentation, instance segmentation, and optimization analysis of hyperparameter importance. It introduces a comprehensive and modular approach to processing laser point clouds in forestry. We tested it on two independent datasets. The first area was located in an actively managed, boreal-coniferous-dominated forest in V{\aa}ler, Norway, where 16 circular plots of 400 square meters were selected to cover a range of forest conditions in terms of species composition and stand density. We trained a model based on the Pointnet++ architecture, which achieves an F1-score of 0.92 in semantic segmentation. As a second step in our pipeline, we used a graph-based approach for instance segmentation, which reached an F1-score of approximately 0.6. The optimization allowed us to further boost the performance of the pipeline by approximately 4 percentage points.
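A minimal sketch of the tuning step the abstract describes, phrased as a generic random search over pipeline hyperparameters that maximizes a downstream F1 score. The parameter names, their ranges, and the quadratic toy objective are hypothetical placeholders for a real semantic and instance segmentation pipeline run.

```python
import random

def random_search(evaluate, space, n_trials=200, seed=0):
    """Generic random search over pipeline hyperparameters.
    `evaluate` maps a params dict to a score (e.g. instance-level F1);
    `space` maps each hyperparameter name to a (low, high) range."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical hyperparameters of a semantic + instance segmentation
# pipeline; the toy objective stands in for a full pipeline evaluation.
space = {"sem_threshold": (0.3, 0.9), "graph_radius": (0.5, 3.0)}
toy_f1 = lambda p: 1.0 - (p["sem_threshold"] - 0.6) ** 2 - (p["graph_radius"] - 1.5) ** 2
print(random_search(toy_f1, space))
```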
2107.08362 | Bing Sun | Bing Sun, Jun Sun, Ting Dai, Lijun Zhang | Probabilistic Verification of Neural Networks Against Group Fairness | null | null | null | null | cs.LG cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fairness is crucial for neural networks which are used in applications with
important societal implications. Recently, there have been multiple attempts at
improving the fairness of neural networks, with a focus on fairness testing (e.g.,
generating individual discriminatory instances) and fairness training (e.g.,
enhancing fairness through augmented training). In this work, we propose an
approach to formally verify neural networks against fairness, with a focus on
independence-based fairness such as group fairness. Our method is built upon an
approach for learning Markov Chains from a user-provided neural network (i.e.,
a feed-forward neural network or a recurrent neural network) which is
guaranteed to facilitate sound analysis. The learned Markov Chain not only
allows us to verify (with a Probably Approximately Correct guarantee) whether
the neural network is fair or not, but also facilitates sensitivity analysis
which helps to understand why fairness is violated. We demonstrate that with
our analysis results, the neural weights can be optimized to improve fairness.
Our approach has been evaluated with multiple models trained on benchmark
datasets and the experiment results show that our approach is effective and
efficient.
| [
{
"created": "Sun, 18 Jul 2021 04:34:31 GMT",
"version": "v1"
}
] | 2021-07-20 | [
[
"Sun",
"Bing",
""
],
[
"Sun",
"Jun",
""
],
[
"Dai",
"Ting",
""
],
[
"Zhang",
"Lijun",
""
]
] | Fairness is crucial for neural networks which are used in applications with important societal implications. Recently, there have been multiple attempts at improving the fairness of neural networks, with a focus on fairness testing (e.g., generating individual discriminatory instances) and fairness training (e.g., enhancing fairness through augmented training). In this work, we propose an approach to formally verify neural networks against fairness, with a focus on independence-based fairness such as group fairness. Our method is built upon an approach for learning Markov Chains from a user-provided neural network (i.e., a feed-forward neural network or a recurrent neural network) which is guaranteed to facilitate sound analysis. The learned Markov Chain not only allows us to verify (with a Probably Approximately Correct guarantee) whether the neural network is fair or not, but also facilitates sensitivity analysis which helps to understand why fairness is violated. We demonstrate that with our analysis results, the neural weights can be optimized to improve fairness. Our approach has been evaluated with multiple models trained on benchmark datasets and the experiment results show that our approach is effective and efficient.
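To make the verified property concrete, here is a Monte-Carlo estimate of an independence-based fairness gap with a Hoeffding-style confidence half-width, in the spirit of a PAC guarantee. This is not the paper's Markov-chain construction; the toy model and input distribution are assumptions for illustration only.

```python
import math
import numpy as np

def estimate_parity_gap(model, sample_inputs, n=20000, delta=1e-3, seed=0):
    """Monte-Carlo estimate of |P(y=1 | A=0) - P(y=1 | A=1)| with a
    Hoeffding-style half-width for each group's estimated rate."""
    rng = np.random.default_rng(seed)
    X, A = sample_inputs(rng, n)        # features and protected attribute
    y = model(X)                        # binary predictions
    rates = [y[A == a].mean() for a in (0, 1)]
    eps = max(math.sqrt(math.log(2 / delta) / (2 * (A == a).sum()))
              for a in (0, 1))
    return abs(rates[0] - rates[1]), eps

# Toy model and input distribution (assumptions for illustration only).
def toy_model(X):
    return (X[:, 0] + 0.3 * X[:, 1] > 0.6).astype(int)

def toy_inputs(rng, n):
    X = rng.uniform(size=(n, 2))
    A = rng.integers(0, 2, size=n)
    X[:, 1] += 0.2 * A                  # the protected attribute shifts a feature
    return X, A

gap, eps = estimate_parity_gap(toy_model, toy_inputs)
print(f"estimated parity gap = {gap:.3f} (per-group half-width {eps:.3f})")
```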
2207.04914 | Marco Tranzatto | Marco Tranzatto, Mihir Dharmadhikari, Lukas Bernreiter, Marco Camurri,
Shehryar Khattak, Frank Mascarich, Patrick Pfreundschuh, David Wisth, Samuel
Zimmermann, Mihir Kulkarni, Victor Reijgwart, Benoit Casseau, Timon
Homberger, Paolo De Petris, Lionel Ott, Wayne Tubby, Gabriel Waibel, Huan
Nguyen, Cesar Cadena, Russell Buchanan, Lorenz Wellhausen, Nikhil Khedekar,
Olov Andersson, Lintong Zhang, Takahiro Miki, Tung Dang, Matias Mattamala,
Markus Montenegro, Konrad Meyer, Xiangyu Wu, Adrien Briod, Mark Mueller,
Maurice Fallon, Roland Siegwart, Marco Hutter, Kostas Alexis | Team CERBERUS Wins the DARPA Subterranean Challenge: Technical Overview
and Lessons Learned | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents the CERBERUS robotic system-of-systems, which won the
DARPA Subterranean Challenge Final Event in 2021. The Subterranean Challenge
was organized by DARPA with the vision to facilitate the novel technologies
necessary to reliably explore diverse underground environments despite the
grueling challenges they present for robotic autonomy. Due to their geometric
complexity, degraded perceptual conditions combined with lack of GPS support,
austere navigation conditions, and denied communications, subterranean settings
render autonomous operations particularly demanding. In response to this
challenge, we developed the CERBERUS system which exploits the synergy of
legged and flying robots, coupled with robust control especially for overcoming
perilous terrain, multi-modal and multi-robot perception for localization and
mapping in conditions of sensor degradation, and resilient autonomy through
unified exploration path planning and local motion planning that reflects
robot-specific limitations. Based on its ability to explore diverse underground
environments and its high-level command and control by a single human
supervisor, CERBERUS demonstrated efficient exploration, reliable detection of
objects of interest, and accurate mapping. In this article, we report results
from both the preliminary runs and the final Prize Round of the DARPA
Subterranean Challenge, and discuss highlights and challenges faced, alongside
lessons learned for the benefit of the community.
| [
{
"created": "Mon, 11 Jul 2022 14:50:11 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Tranzatto",
"Marco",
""
],
[
"Dharmadhikari",
"Mihir",
""
],
[
"Bernreiter",
"Lukas",
""
],
[
"Camurri",
"Marco",
""
],
[
"Khattak",
"Shehryar",
""
],
[
"Mascarich",
"Frank",
""
],
[
"Pfreundschuh",
"Patrick",
""
],
[
"Wisth",
"David",
""
],
[
"Zimmermann",
"Samuel",
""
],
[
"Kulkarni",
"Mihir",
""
],
[
"Reijgwart",
"Victor",
""
],
[
"Casseau",
"Benoit",
""
],
[
"Homberger",
"Timon",
""
],
[
"De Petris",
"Paolo",
""
],
[
"Ott",
"Lionel",
""
],
[
"Tubby",
"Wayne",
""
],
[
"Waibel",
"Gabriel",
""
],
[
"Nguyen",
"Huan",
""
],
[
"Cadena",
"Cesar",
""
],
[
"Buchanan",
"Russell",
""
],
[
"Wellhausen",
"Lorenz",
""
],
[
"Khedekar",
"Nikhil",
""
],
[
"Andersson",
"Olov",
""
],
[
"Zhang",
"Lintong",
""
],
[
"Miki",
"Takahiro",
""
],
[
"Dang",
"Tung",
""
],
[
"Mattamala",
"Matias",
""
],
[
"Montenegro",
"Markus",
""
],
[
"Meyer",
"Konrad",
""
],
[
"Wu",
"Xiangyu",
""
],
[
"Briod",
"Adrien",
""
],
[
"Mueller",
"Mark",
""
],
[
"Fallon",
"Maurice",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Hutter",
"Marco",
""
],
[
"Alexis",
"Kostas",
""
]
] | This article presents the CERBERUS robotic system-of-systems, which won the DARPA Subterranean Challenge Final Event in 2021. The Subterranean Challenge was organized by DARPA with the vision to facilitate the novel technologies necessary to reliably explore diverse underground environments despite the grueling challenges they present for robotic autonomy. Due to their geometric complexity, degraded perceptual conditions combined with lack of GPS support, austere navigation conditions, and denied communications, subterranean settings render autonomous operations particularly demanding. In response to this challenge, we developed the CERBERUS system which exploits the synergy of legged and flying robots, coupled with robust control especially for overcoming perilous terrain, multi-modal and multi-robot perception for localization and mapping in conditions of sensor degradation, and resilient autonomy through unified exploration path planning and local motion planning that reflects robot-specific limitations. Based on its ability to explore diverse underground environments and its high-level command and control by a single human supervisor, CERBERUS demonstrated efficient exploration, reliable detection of objects of interest, and accurate mapping. In this article, we report results from both the preliminary runs and the final Prize Round of the DARPA Subterranean Challenge, and discuss highlights and challenges faced, alongside lessons learned for the benefit of the community. |
1909.00102 | Alexander Hanbo Li | Alexander Hanbo Li, Abhinav Sethy | Knowledge Enhanced Attention for Robust Natural Language Inference | null | null | null | null | cs.CL cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network models have been very successful at achieving high accuracy on
natural language inference (NLI) tasks. However, as demonstrated in recent
literature, when tested on some simple adversarial examples, most of the models
suffer a significant drop in performance. This raises the concern about the
robustness of NLI models. In this paper, we propose to make NLI models robust
by incorporating external knowledge into the attention mechanism using a simple
transformation. We apply the new attention to two popular types of NLI models:
one is a Transformer encoder, and the other is a decomposable model, and show
that our method can significantly improve their robustness. Moreover, when
combined with BERT pretraining, our method achieves human-level performance
on the adversarial SNLI data set.
| [
{
"created": "Sat, 31 Aug 2019 01:04:58 GMT",
"version": "v1"
}
] | 2019-09-04 | [
[
"Li",
"Alexander Hanbo",
""
],
[
"Sethy",
"Abhinav",
""
]
] | Neural network models have been very successful at achieving high accuracy on natural language inference (NLI) tasks. However, as demonstrated in recent literature, when tested on some simple adversarial examples, most of the models suffer a significant drop in performance. This raises the concern about the robustness of NLI models. In this paper, we propose to make NLI models robust by incorporating external knowledge into the attention mechanism using a simple transformation. We apply the new attention to two popular types of NLI models: one is a Transformer encoder, and the other is a decomposable model, and show that our method can significantly improve their robustness. Moreover, when combined with BERT pretraining, our method achieves human-level performance on the adversarial SNLI data set.
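One simple way to fold external knowledge into attention, consistent with the abstract's description, is an additive bias on the attention scores taken from a knowledge-relatedness matrix. The additive form and the coefficient `lam` are illustrative assumptions, not necessarily the paper's exact transformation.

```python
import numpy as np

def knowledge_attention(Q, K, V, R, lam=1.0):
    """Scaled dot-product attention whose scores receive an additive
    bias from an external knowledge-relatedness matrix R (same shape as
    the score matrix). The additive form and `lam` are assumptions."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + lam * R
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 4, 8
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
R = rng.uniform(size=(n, n))     # e.g. WordNet-style relatedness scores
print(knowledge_attention(Q, K, V, R).shape)   # (4, 8)
```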
1809.00898 | Marie-Morgane Paumard | M.-M. Paumard, D. Picard, H. Tabia | Image Reassembly Combining Deep Learning and Shortest Path Problem | ECCV 2018 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of reassembling images from disjointed
fragments. More specifically, given an unordered set of fragments, we aim at
reassembling one or several possibly incomplete images. The main contributions
of this work are: 1) several deep neural architectures to predict the relative
position of image fragments that outperform the previous state of the art; 2)
casting the reassembly problem into the shortest path in a graph problem for
which we provide several construction algorithms depending on available
information; 3) a new dataset of images taken from the Metropolitan Museum of
Art (MET) dedicated to image reassembly for which we provide a clear setup and
a strong baseline.
| [
{
"created": "Tue, 4 Sep 2018 11:39:03 GMT",
"version": "v1"
}
] | 2018-09-05 | [
[
"Paumard",
"M. -M.",
""
],
[
"Picard",
"D.",
""
],
[
"Tabia",
"H.",
""
]
] | This paper addresses the problem of reassembling images from disjointed fragments. More specifically, given an unordered set of fragments, we aim at reassembling one or several possibly incomplete images. The main contributions of this work are: 1) several deep neural architectures to predict the relative position of image fragments that outperform the previous state of the art; 2) casting the reassembly problem into the shortest path in a graph problem for which we provide several construction algorithms depending on available information; 3) a new dataset of images taken from the Metropolitan Museum of Art (MET) dedicated to image reassembly for which we provide a clear setup and a strong baseline. |
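The shortest-path formulation can be made concrete with plain Dijkstra over a graph whose edge weights are negative log-probabilities from the pairwise placement predictor, so the shortest path is the most probable assembly. The toy graph and probabilities below are assumptions; the paper's actual graph constructions depend on the available information.

```python
import heapq
import math

def dijkstra(graph, source):
    """Shortest-path distances on a weighted digraph given as
    {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy reassembly graph: edge weights are -log of assumed placement
# probabilities, so the shortest path is the most probable assembly.
p = {("a", "b"): 0.9, ("b", "c"): 0.7, ("a", "c"): 0.2}
graph = {u: [] for edge in p for u in edge}
for (u, v), prob in p.items():
    graph[u].append((v, -math.log(prob)))
print(dijkstra(graph, "a"))
```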
2006.06527 | Zhennan Wang | Zhennan Wang, Canqun Xiang, Wenbin Zou, Chen Xu | MMA Regularization: Decorrelating Weights of Neural Networks by
Maximizing the Minimal Angles | NeurIPS2020 | https://proceedings.neurips.cc/paper/2020/file/dcd2f3f312b6705fb06f4f9f1b55b55c-Paper.pdf | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The strong correlation between neurons or filters can significantly weaken
the generalization ability of neural networks. Inspired by the well-known
Tammes problem, we propose a novel diversity regularization method to address
this issue, which makes the normalized weight vectors of neurons or filters
distributed on a hypersphere as uniformly as possible, through maximizing the
minimal pairwise angles (MMA). This method can easily exert its effect by
plugging the MMA regularization term into the loss function with negligible
computational overhead. The MMA regularization is simple, efficient, and
effective. Therefore, it can be used as a basic regularization method in neural
network training. Extensive experiments demonstrate that MMA regularization is
able to enhance the generalization ability of various modern models and
achieves considerable performance improvements on CIFAR100 and TinyImageNet
datasets. In addition, experiments on face verification show that MMA
regularization is also effective for feature learning. Code is available at:
https://github.com/wznpub/MMA_Regularization.
| [
{
"created": "Sat, 6 Jun 2020 14:03:16 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Mar 2021 09:14:49 GMT",
"version": "v2"
}
] | 2021-03-24 | [
[
"Wang",
"Zhennan",
""
],
[
"Xiang",
"Canqun",
""
],
[
"Zou",
"Wenbin",
""
],
[
"Xu",
"Chen",
""
]
] | The strong correlation between neurons or filters can significantly weaken the generalization ability of neural networks. Inspired by the well-known Tammes problem, we propose a novel diversity regularization method to address this issue, which makes the normalized weight vectors of neurons or filters distributed on a hypersphere as uniformly as possible, through maximizing the minimal pairwise angles (MMA). This method can easily exert its effect by plugging the MMA regularization term into the loss function with negligible computational overhead. The MMA regularization is simple, efficient, and effective. Therefore, it can be used as a basic regularization method in neural network training. Extensive experiments demonstrate that MMA regularization is able to enhance the generalization ability of various modern models and achieves considerable performance improvements on CIFAR100 and TinyImageNet datasets. In addition, experiments on face verification show that MMA regularization is also effective for feature learning. Code is available at: https://github.com/wznpub/MMA_Regularization. |
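A sketch of the regularization term as described: normalize the weight vectors, then penalize each vector's largest cosine similarity to any other vector, which maximizes the minimal pairwise angle. The coefficient 0.07 below is an assumed value, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def mma_regularizer(weight):
    """Penalize, for each normalized weight vector, its largest cosine
    similarity to any other vector; minimizing the max cosine maximizes
    the minimal pairwise angle."""
    w = F.normalize(weight.view(weight.size(0), -1), dim=1)
    cos = w @ w.t()
    cos = cos - 2.0 * torch.eye(w.size(0), device=w.device)  # drop self-pairs
    return cos.max(dim=1).values.mean()

# Usage: add the term to the task loss with a small coefficient
# (0.07 is an assumed value, not the paper's setting).
layer = torch.nn.Linear(128, 64)
task_loss = F.cross_entropy(torch.randn(8, 64), torch.randint(0, 64, (8,)))
loss = task_loss + 0.07 * mma_regularizer(layer.weight)
loss.backward()
```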
1004.1198 | Dung Nguyen | Dung Viet Nguyen, Bane Vasic, Michael Marcellin, Shashi Kiran
Chilappagari | Structured LDPC Codes from Permutation Matrices Free of Small Trapping
Sets | 5 pages, 3 figures, submitted to ITW Dublin 2010 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a class of structured low-density parity-check (LDPC)
codes whose parity check matrices are arrays of permutation matrices. The
permutation matrices are obtained from Latin squares and form a finite field
under some matrix operations. They are chosen so that the Tanner graphs do not
contain subgraphs harmful to iterative decoding algorithms. The construction of
column-weight-three codes is presented. Although the codes are optimized for
the Gallager A/B algorithm over the binary symmetric channel (BSC), their error
performance is very good on the additive white Gaussian noise channel (AWGNC)
as well.
| [
{
"created": "Wed, 7 Apr 2010 22:06:06 GMT",
"version": "v1"
}
] | 2010-04-09 | [
[
"Nguyen",
"Dung Viet",
""
],
[
"Vasic",
"Bane",
""
],
[
"Marcellin",
"Michael",
""
],
[
"Chilappagari",
"Shashi Kiran",
""
]
] | This paper introduces a class of structured low-density parity-check (LDPC) codes whose parity check matrices are arrays of permutation matrices. The permutation matrices are obtained from Latin squares and form a finite field under some matrix operations. They are chosen so that the Tanner graphs do not contain subgraphs harmful to iterative decoding algorithms. The construction of column-weight-three codes is presented. Although the codes are optimized for the Gallager A/B algorithm over the binary symmetric channel (BSC), their error performance is very good on the additive white Gaussian noise channel (AWGNC) as well.
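A small sketch of the array-of-permutation-matrices structure: take a cyclic Latin square over Z_n and map each entry to a circulant permutation matrix. The selection rules that avoid subgraphs harmful to iterative decoding are not reproduced here; picking the first three block rows is an illustrative choice that only guarantees column weight three.

```python
import numpy as np

def cyclic_latin_square(n):
    """L[i, j] = (i + j) mod n is a Latin square over Z_n."""
    i, j = np.indices((n, n))
    return (i + j) % n

def permutation_matrix(shift, n):
    """Circulant permutation matrix: the identity with columns rolled."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

n = 5
L = cyclic_latin_square(n)
# Stack permutation blocks chosen by Latin-square entries; taking the
# first three block rows gives column weight three.
H = np.block([[permutation_matrix(L[r, c], n) for c in range(n)]
              for r in range(3)])
print(H.shape, set(H.sum(axis=0)))   # (15, 25) {3}
```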
2007.02802 | David Carrera | \'Alvaro Villalba, David Carrera | Multi-tenant Pub/Sub Processing for Real-time Data Streams | Partially funded by European Research Council (ERC) under the
European Union's Horizon 2020 research and innovation programme (grant
agreement No 639595) - HiEST Project Published Euro-Par 2018: Euro-Par 2018:
Parallel Processing Workshops pp 251-262 | Euro-Par 2018: Parallel Processing Workshops | 10.1007/978-3-030-10549-5_20 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Devices and sensors generate streams of data across a diversity of locations
and protocols. That data usually reaches a central platform that is used to
store and process the streams. Processing can be done in real time, with
transformations and enrichment happening on-the-fly, but it can also happen
after data is stored and organized in repositories. In the former case, stream
processing technologies are required to operate on the data; in the latter
batch analytics and queries are of common use.
This paper introduces a runtime to dynamically construct data stream
processing topologies based on user-supplied code. These dynamic topologies are
built on-the-fly using a data subscription model defined by the applications
that consume data. Each user-defined processing unit is called a Service
Object. Every Service Object consumes input data streams and may produce output
streams that others can consume. The subscription-based programming model
enables multiple users to deploy their own data-processing services. The
runtime does the dynamic forwarding of data and execution of Service Objects
from different users. Data streams can originate in real-world devices or they
can be the outputs of Service Objects.
| [
{
"created": "Mon, 6 Jul 2020 15:05:21 GMT",
"version": "v1"
}
] | 2020-07-07 | [
[
"Villalba",
"Álvaro",
""
],
[
"Carrera",
"David",
""
]
] | Devices and sensors generate streams of data across a diversity of locations and protocols. That data usually reaches a central platform that is used to store and process the streams. Processing can be done in real time, with transformations and enrichment happening on-the-fly, but it can also happen after data is stored and organized in repositories. In the former case, stream processing technologies are required to operate on the data; in the latter batch analytics and queries are of common use. This paper introduces a runtime to dynamically construct data stream processing topologies based on user-supplied code. These dynamic topologies are built on-the-fly using a data subscription model defined by the applications that consume data. Each user-defined processing unit is called a Service Object. Every Service Object consumes input data streams and may produce output streams that others can consume. The subscription-based programing model enables multiple users to deploy their own data-processing services. The runtime does the dynamic forwarding of data and execution of Service Objects from different users. Data streams can originate in real-world devices or they can be the outputs of Service Objects. |
1807.03299 | Aurelien Garivier | Gr\'egoire Jauvion, Nicolas Grislain, Pascal Sielenou Dkengne (IMT),
Aur\'elien Garivier (IMT), S\'ebastien Gerchinovitz (IMT) | Optimization of a SSP's Header Bidding Strategy using Thompson Sampling | null | The 24th ACM SIGKDD International Conference on Knowledge
Discovery & Data Mining, Aug 2018, London, United Kingdom | 10.1145/3219819.3219917 | null | cs.LG cs.GT stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last decade, digital media (web or app publishers) generalized the
use of real time ad auctions to sell their ad spaces. Multiple auction
platforms, also called Supply-Side Platforms (SSP), were created. Because of
this multiplicity, publishers started to create competition between SSPs. In
this setting, there are two successive auctions: a second price auction in each
SSP and a secondary, first price auction, called header bidding auction,
between SSPs. In this paper, we consider an SSP competing with other SSPs for ad
spaces. The SSP acts as an intermediary between an advertiser wanting to buy ad
spaces and a web publisher wanting to sell its ad spaces, and needs to define a
bidding strategy to be able to deliver to the advertisers as many ads as
possible while spending as little as possible. The revenue optimization of this
SSP can be written as a contextual bandit problem, where the context consists
of the information available about the ad opportunity, such as properties of
the internet user or of the ad placement. Using classical multi-armed bandit
strategies (such as the original versions of UCB and EXP3) is inefficient in
this setting and yields a low convergence speed, as the arms are very
correlated. In this paper, we design and experiment with a version of the Thompson
Sampling algorithm that easily takes this correlation into account. We combine
this Bayesian algorithm with a particle filter, which makes it possible to handle
non-stationarity by sequentially estimating the distribution of the highest bid
to beat in order to win an auction. We apply this methodology to two real
auction datasets, and show that it significantly outperforms more classical
approaches. The strategy defined in this paper is being developed to be deployed
on thousands of publishers worldwide.
| [
{
"created": "Mon, 9 Jul 2018 08:47:19 GMT",
"version": "v1"
}
] | 2018-07-11 | [
[
"Jauvion",
"Grégoire",
"",
"IMT"
],
[
"Grislain",
"Nicolas",
"",
"IMT"
],
[
"Dkengne",
"Pascal Sielenou",
"",
"IMT"
],
[
"Garivier",
"Aurélien",
"",
"IMT"
],
[
"Gerchinovitz",
"Sébastien",
"",
"IMT"
]
] | Over the last decade, digital media (web or app publishers) generalized the use of real time ad auctions to sell their ad spaces. Multiple auction platforms, also called Supply-Side Platforms (SSP), were created. Because of this multiplicity, publishers started to create competition between SSPs. In this setting, there are two successive auctions: a second price auction in each SSP and a secondary, first price auction, called header bidding auction, between SSPs. In this paper, we consider an SSP competing with other SSPs for ad spaces. The SSP acts as an intermediary between an advertiser wanting to buy ad spaces and a web publisher wanting to sell its ad spaces, and needs to define a bidding strategy to be able to deliver to the advertisers as many ads as possible while spending as little as possible. The revenue optimization of this SSP can be written as a contextual bandit problem, where the context consists of the information available about the ad opportunity, such as properties of the internet user or of the ad placement. Using classical multi-armed bandit strategies (such as the original versions of UCB and EXP3) is inefficient in this setting and yields a low convergence speed, as the arms are very correlated. In this paper, we design and experiment with a version of the Thompson Sampling algorithm that easily takes this correlation into account. We combine this Bayesian algorithm with a particle filter, which makes it possible to handle non-stationarity by sequentially estimating the distribution of the highest bid to beat in order to win an auction. We apply this methodology to two real auction datasets, and show that it significantly outperforms more classical approaches. The strategy defined in this paper is being developed to be deployed on thousands of publishers worldwide.
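A minimal sketch of Thompson Sampling for bid selection: keep a Beta posterior over the win probability of each bid level, sample from the posteriors, and play the bid with the best sampled expected surplus. The bid levels, the impression value, and the surplus objective are assumptions; the paper additionally models the correlation between arms and tracks the highest competing bid with a particle filter, which is not reproduced here.

```python
import numpy as np

def thompson_bid(wins, losses, bid_levels, value, rng):
    """Sample win probabilities from Beta posteriors and play the bid
    level with the best sampled expected surplus (value - bid)."""
    theta = rng.beta(wins + 1, losses + 1)
    return int(np.argmax(theta * (value - bid_levels)))

rng = np.random.default_rng(0)
bid_levels = np.array([0.5, 1.0, 1.5, 2.0])
value = 2.5                                      # assumed impression value
wins = np.zeros(4)
losses = np.zeros(4)
true_win_prob = np.array([0.1, 0.4, 0.7, 0.9])   # hidden environment

for _ in range(5000):
    k = thompson_bid(wins, losses, bid_levels, value, rng)
    if rng.random() < true_win_prob[k]:
        wins[k] += 1
    else:
        losses[k] += 1
print("plays per bid level:", (wins + losses).astype(int))
```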
1803.10648 | Luis A. Pineda | Luis A. Pineda | A Distributed Extension of the Turing Machine | 37 pages, 15 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Turing Machine has two implicit properties that depend on its underlying
notion of computing: the format is fully determinate and computations are
information preserving. Distributed representations lack these properties and
cannot be fully captured by Turing's standard model. To address this limitation
a distributed extension of the Turing Machine is introduced in this paper. In
the extended machine, functions and abstractions are expressed extensionally
and computations are entropic. The machine is applied to the definition of an
associative memory, with its corresponding memory register, recognition and
retrieval operations. The memory is tested with an experiment for storing and
recognizing hand written digits with satisfactory results. The experiment can
be seen as a proof of concept that information can be stored and processed
effectively in a highly distributed fashion using a symbolic but not fully
determinate format. The new machine augments the symbolic mode of computing
with consequences for the way Church's Thesis is understood. The paper is
concluded with a discussion of some implications of the extended machine for
Artificial Intelligence and Cognition.
| [
{
"created": "Wed, 28 Mar 2018 14:36:54 GMT",
"version": "v1"
}
] | 2018-03-29 | [
[
"Pineda",
"Luis A.",
""
]
] | The Turing Machine has two implicit properties that depend on its underlying notion of computing: the format is fully determinate and computations are information preserving. Distributed representations lack these properties and cannot be fully captured by Turing's standard model. To address this limitation a distributed extension of the Turing Machine is introduced in this paper. In the extended machine, functions and abstractions are expressed extensionally and computations are entropic. The machine is applied to the definition of an associative memory, with its corresponding memory register, recognition and retrieval operations. The memory is tested with an experiment for storing and recognizing hand written digits with satisfactory results. The experiment can be seen as a proof of concept that information can be stored and processed effectively in a highly distributed fashion using a symbolic but not fully determinate format. The new machine augments the symbolic mode of computing with consequences for the way Church's Thesis is understood. The paper is concluded with a discussion of some implications of the extended machine for Artificial Intelligence and Cognition.
2302.14024 | Subhankar Banerjee | Subhankar Banerjee and Sennur Ulukus | The Freshness Game: Timely Communications in the Presence of an
Adversary | arXiv admin note: substantial text overlap with arXiv:2202.04050,
arXiv:2202.05233 | null | null | null | cs.IT cs.GT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a communication system where a base station (BS) transmits update
packets to $N$ users, one user at a time, over a wireless channel. We
investigate the age of this status updating system with an adversary that jams
the update packets in the downlink. We consider two system models: with
diversity and without diversity. In the model without diversity, we show that
if the BS schedules the users with a stationary randomized policy, then the
optimal choice for the adversary is to block the user which has the lowest
probability of getting scheduled by the BS, at the middle of the time horizon,
consecutively for $\alpha T$ time slots. In the model with diversity, we show
that for large $T$, the uniform user scheduling algorithm together with the
uniform sub-carrier choosing algorithm is $\frac{2 N_{sub}}{N_{sub}-1}$
optimal. Next, we investigate the game theoretic equilibrium points of this
status updating system. For the model without diversity, we show that a Nash
equilibrium does not exist, however, a Stackelberg equilibrium exists when the
scheduling algorithm of the BS acts as the leader and the adversary acts as the
follower. For the model with diversity, we show that a Nash equilibrium exists
and identify the Nash equilibrium. Finally, we extend the model without
diversity to the case where the BS can serve multiple users and the adversary
can jam multiple users at a time.
| [
{
"created": "Mon, 27 Feb 2023 18:30:24 GMT",
"version": "v1"
}
] | 2023-02-28 | [
[
"Banerjee",
"Subhankar",
""
],
[
"Ulukus",
"Sennur",
""
]
] | We consider a communication system where a base station (BS) transmits update packets to $N$ users, one user at a time, over a wireless channel. We investigate the age of this status updating system with an adversary that jams the update packets in the downlink. We consider two system models: with diversity and without diversity. In the model without diversity, we show that if the BS schedules the users with a stationary randomized policy, then the optimal choice for the adversary is to block the user which has the lowest probability of getting scheduled by the BS, at the middle of the time horizon, consecutively for $\alpha T$ time slots. In the model with diversity, we show that for large $T$, the uniform user scheduling algorithm together with the uniform sub-carrier choosing algorithm is $\frac{2 N_{sub}}{N_{sub}-1}$ optimal. Next, we investigate the game theoretic equilibrium points of this status updating system. For the model without diversity, we show that a Nash equilibrium does not exist, however, a Stackelberg equilibrium exists when the scheduling algorithm of the BS acts as the leader and the adversary acts as the follower. For the model with diversity, we show that a Nash equilibrium exists and identify the Nash equilibrium. Finally, we extend the model without diversity to the case where the BS can serve multiple users and the adversary can jam multiple users at a time.
2205.06427 | Xingchen Zhao | Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu | Test-time Fourier Style Calibration for Domain Generalization | 31st International Joint Conference on Artificial Intelligence
(IJCAI) 2022 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The topic of generalizing machine learning models learned on a collection of
source domains to unknown target domains is challenging. While many domain
generalization (DG) methods have achieved promising results, they primarily
rely on the source domains at train-time without manipulating the target
domains at test-time. Thus, it is still possible that those methods can overfit
to source domains and perform poorly on target domains. Driven by the
observation that domains are strongly related to styles, we argue that reducing
the gap between source and target styles can boost models' generalizability. To
solve the dilemma of having no access to the target domain during training, we
introduce Test-time Fourier Style Calibration (TF-Cal) for calibrating the
target domain style on the fly during testing. To access styles, we utilize
Fourier transformation to decompose features into amplitude (style) features
and phase (semantic) features. Furthermore, we present an effective technique
to Augment Amplitude Features (AAF) to complement TF-Cal. Extensive experiments
on several popular DG benchmarks and a segmentation dataset for medical images
demonstrate that our method outperforms state-of-the-art methods.
| [
{
"created": "Fri, 13 May 2022 02:43:03 GMT",
"version": "v1"
},
{
"created": "Wed, 18 May 2022 20:01:59 GMT",
"version": "v2"
}
] | 2022-05-20 | [
[
"Zhao",
"Xingchen",
""
],
[
"Liu",
"Chang",
""
],
[
"Sicilia",
"Anthony",
""
],
[
"Hwang",
"Seong Jae",
""
],
[
"Fu",
"Yun",
""
]
] | The topic of generalizing machine learning models learned on a collection of source domains to unknown target domains is challenging. While many domain generalization (DG) methods have achieved promising results, they primarily rely on the source domains at train-time without manipulating the target domains at test-time. Thus, it is still possible that those methods can overfit to source domains and perform poorly on target domains. Driven by the observation that domains are strongly related to styles, we argue that reducing the gap between source and target styles can boost models' generalizability. To solve the dilemma of having no access to the target domain during training, we introduce Test-time Fourier Style Calibration (TF-Cal) for calibrating the target domain style on the fly during testing. To access styles, we utilize Fourier transformation to decompose features into amplitude (style) features and phase (semantic) features. Furthermore, we present an effective technique to Augment Amplitude Features (AAF) to complement TF-Cal. Extensive experiments on several popular DG benchmarks and a segmentation dataset for medical images demonstrate that our method outperforms state-of-the-art methods. |
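The amplitude/phase decomposition is easy to make concrete: take the 2D Fourier transform of a feature map, treat the magnitude as style and the phase as semantics, and calibrate by mixing in a reference amplitude before inverting. The interpolation rule with `alpha` is an illustrative calibration, not necessarily the exact TF-Cal update.

```python
import numpy as np

def decompose(feat):
    """Split a feature map into amplitude (style) and phase (semantics)."""
    f = np.fft.fft2(feat)
    return np.abs(f), np.angle(f)

def calibrate_style(feat, ref_amplitude, alpha=0.5):
    """Mix a reference amplitude into the target's amplitude while
    keeping the phase, then invert the transform."""
    amp, phase = decompose(feat)
    amp_cal = (1 - alpha) * amp + alpha * ref_amplitude
    return np.real(np.fft.ifft2(amp_cal * np.exp(1j * phase)))

rng = np.random.default_rng(0)
target_feat = rng.normal(size=(16, 16))
source_amp, _ = decompose(rng.normal(size=(16, 16)))
calibrated = calibrate_style(target_feat, source_amp, alpha=0.5)
print(calibrated.shape)   # (16, 16), phase (semantics) preserved
```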
2104.05750 | Georgios Kampanos | Georgios Kampanos and Siamak F. Shahandashti | Accept All: The Landscape of Cookie Banners in Greece and the UK | 15 pages, 6 figures, 4 tables | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cookie banners are devices implemented by websites to allow users to manage
their privacy settings with respect to the use of cookies. They are part of a
user's daily web browsing experience since legislation in Europe requires
websites to show such notices. In this paper, we carry out a large-scale study
of more than 17,000 websites including more than 7,500 cookie banners in Greece
and the UK to determine compliance and tracking transparency levels. Our
analysis shows that although more than 60% of websites store third-party
cookies in both countries, fewer than 50% show a cookie notice and hence a
substantial proportion do not comply with the law even at the very basic level.
We find only a small proportion of the surveyed websites providing a direct
opt-out option, with an overwhelming majority either nudging users towards
privacy-intrusive choices or making cookie rejection much harder than consent.
Our results differ significantly in some cases from previous smaller-scale
studies and hence underline the importance of large-scale studies for a better
understanding of the big picture in cookie practices.
| [
{
"created": "Mon, 12 Apr 2021 18:23:08 GMT",
"version": "v1"
}
] | 2021-04-14 | [
[
"Kampanos",
"Georgios",
""
],
[
"Shahandashti",
"Siamak F.",
""
]
] | Cookie banners are devices implemented by websites to allow users to manage their privacy settings with respect to the use of cookies. They are part of a user's daily web browsing experience since legislation in Europe requires websites to show such notices. In this paper, we carry out a large-scale study of more than 17,000 websites including more than 7,500 cookie banners in Greece and the UK to determine compliance and tracking transparency levels. Our analysis shows that although more than 60% of websites store third-party cookies in both countries, fewer than 50% show a cookie notice and hence a substantial proportion do not comply with the law even at the very basic level. We find only a small proportion of the surveyed websites providing a direct opt-out option, with an overwhelming majority either nudging users towards privacy-intrusive choices or making cookie rejection much harder than consent. Our results differ significantly in some cases from previous smaller-scale studies and hence underline the importance of large-scale studies for a better understanding of the big picture in cookie practices.
2202.08221 | Luca Mariot | Luca Mariot, Stjepan Picek, Domagoj Jakobovic, Marko Djurasevic,
Alberto Leporati | Evolutionary Construction of Perfectly Balanced Boolean Functions | 19 pages, 2 figures, 3 tables | null | null | null | cs.NE cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding Boolean functions suitable for cryptographic primitives is a complex
combinatorial optimization problem, since they must satisfy several properties
to resist cryptanalytic attacks, and the search space is very large, growing
super-exponentially with the number of input variables. Recent research has focused
on the study of Boolean functions that satisfy properties on restricted sets of
inputs due to their importance in the development of the FLIP stream cipher. In
this paper, we consider one such property, perfect balancedness, and
investigate the use of Genetic Programming (GP) and Genetic Algorithms (GA) to
construct Boolean functions that satisfy this property along with a good
nonlinearity profile. We formulate the related optimization problem and define
two encodings for the candidate solutions, namely the truth table and the
weightwise balanced representations. Somewhat surprisingly, the results show
that GA with the weightwise balanced representation outperforms GP with the
classical truth table phenotype in finding highly nonlinear weightwise perfectly balanced (WPB) functions. This
finding is in stark contrast to previous findings on the evolution of globally
balanced Boolean functions, where GP always performs best.
| [
{
"created": "Wed, 16 Feb 2022 18:03:04 GMT",
"version": "v1"
}
] | 2022-02-17 | [
[
"Mariot",
"Luca",
""
],
[
"Picek",
"Stjepan",
""
],
[
"Jakobovic",
"Domagoj",
""
],
[
"Djurasevic",
"Marko",
""
],
[
"Leporati",
"Alberto",
""
]
] | Finding Boolean functions suitable for cryptographic primitives is a complex combinatorial optimization problem, since they must satisfy several properties to resist cryptanalytic attacks, and the search space is very large, growing super-exponentially with the number of input variables. Recent research has focused on the study of Boolean functions that satisfy properties on restricted sets of inputs due to their importance in the development of the FLIP stream cipher. In this paper, we consider one such property, perfect balancedness, and investigate the use of Genetic Programming (GP) and Genetic Algorithms (GA) to construct Boolean functions that satisfy this property along with a good nonlinearity profile. We formulate the related optimization problem and define two encodings for the candidate solutions, namely the truth table and the weightwise balanced representations. Somewhat surprisingly, the results show that GA with the weightwise balanced representation outperforms GP with the classical truth table phenotype in finding highly nonlinear weightwise perfectly balanced (WPB) functions. This finding is in stark contrast to previous findings on the evolution of globally balanced Boolean functions, where GP always performs best.
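A sketch of the weightwise-balanced encoding idea: keep the truth table balanced within each Hamming-weight slice by construction (mutations swap a 0 and a 1 inside one slice), and score candidates by nonlinearity via a fast Walsh-Hadamard transform. The simple hill climb stands in for the GA, and the initialization only approximates the formal perfect-balancedness definition.

```python
import random

def walsh_hadamard(signs):
    """Fast Walsh-Hadamard transform of a +/-1 truth table."""
    f = list(signs)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

def nonlinearity(bits):
    """NL(f) = 2^(n-1) - max|W_f| / 2."""
    spectrum = walsh_hadamard([1 - 2 * b for b in bits])
    return len(bits) // 2 - max(abs(w) for w in spectrum) // 2

def mutate_within_slice(bits, n, rng):
    """Swap a 0 and a 1 among inputs of one Hamming weight, preserving
    the per-slice balance counts by construction."""
    k = rng.randrange(1, n)
    sl = [x for x in range(2 ** n) if bin(x).count("1") == k]
    ones = [x for x in sl if bits[x] == 1]
    zeros = [x for x in sl if bits[x] == 0]
    if ones and zeros:
        i, j = rng.choice(ones), rng.choice(zeros)
        bits[i], bits[j] = 0, 1
    return bits

n = 4
rng = random.Random(0)
bits = [0] * 2 ** n
for k in range(1, n):                      # roughly balance each slice
    sl = [x for x in range(2 ** n) if bin(x).count("1") == k]
    for x in sl[: len(sl) // 2]:
        bits[x] = 1
for _ in range(300):                       # trivial hill climb
    cand = mutate_within_slice(list(bits), n, rng)
    if nonlinearity(cand) >= nonlinearity(bits):
        bits = cand
print("nonlinearity:", nonlinearity(bits))
```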
1910.05057 | Elahe Arani | Elahe Arani, Fahad Sarfraz, Bahram Zonooz | Noise as a Resource for Learning in Knowledge Distillation | Accepted at IEEE Winter Conference on Applications of Computer Vision
(WACV, 2021) | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While noise is commonly considered a nuisance in computing systems, a number
of studies in neuroscience have shown several benefits of noise in the nervous
system from enabling the brain to carry out computations such as probabilistic
inference as well as carrying additional information about the stimuli.
Similarly, noise has been shown to improve the performance of deep neural
networks. In this study, we further investigate the effect of adding noise in
the knowledge distillation framework because of its resemblance to
collaborative subnetworks in the brain regions. We empirically show that
injecting constructive noise at different levels in the collaborative learning
framework enables us to train the model effectively and distill desirable
characteristics in the student model. In doing so, we propose three different
methods that target the common challenges in deep neural networks: minimizing
the performance gap between a compact model and a large model (Fickle Teacher),
training high performance compact adversarially robust models (Soft
Randomization), and training models efficiently under label noise (Messy
Collaboration). Our findings motivate further study in the role of noise as a
resource for learning in a collaborative learning framework.
| [
{
"created": "Fri, 11 Oct 2019 09:58:50 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Dec 2020 11:52:30 GMT",
"version": "v2"
}
] | 2020-12-16 | [
[
"Arani",
"Elahe",
""
],
[
"Sarfraz",
"Fahad",
""
],
[
"Zonooz",
"Bahram",
""
]
] | While noise is commonly considered a nuisance in computing systems, a number of studies in neuroscience have shown several benefits of noise in the nervous system from enabling the brain to carry out computations such as probabilistic inference as well as carrying additional information about the stimuli. Similarly, noise has been shown to improve the performance of deep neural networks. In this study, we further investigate the effect of adding noise in the knowledge distillation framework because of its resemblance to collaborative subnetworks in the brain regions. We empirically show that injecting constructive noise at different levels in the collaborative learning framework enables us to train the model effectively and distill desirable characteristics in the student model. In doing so, we propose three different methods that target the common challenges in deep neural networks: minimizing the performance gap between a compact model and a large model (Fickle Teacher), training high performance compact adversarially robust models (Soft Randomization), and training models efficiently under label noise (Messy Collaboration). Our findings motivate further study in the role of noise as a resource for learning in a collaborative learning framework.
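One concrete way to inject constructive noise into distillation, as a hedged illustration: perturb the teacher logits with Gaussian noise before softening them. The temperature, mixing weight, and noise scale below are assumed hyperparameters, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def noisy_distillation_loss(student_logits, teacher_logits, targets,
                            T=4.0, alpha=0.9, sigma=0.5):
    """Standard distillation loss where Gaussian noise perturbs the
    teacher logits before softening (T, alpha, sigma are assumed)."""
    noisy_teacher = teacher_logits + sigma * torch.randn_like(teacher_logits)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(noisy_teacher / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
noisy_distillation_loss(student_logits, teacher_logits, targets).backward()
```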
1904.05240 | Zhimin Hou | Jing Xu and Zhimin Hou and Zhi Liu and Hong Qiao | Compare Contact Model-based Control and Contact Model-free Learning: A
Survey of Robotic Peg-in-hole Assembly Strategies | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an overview of robotic peg-in-hole assembly and
analyze two main strategies: contact model-based and contact model-free
strategies. More specifically, we first introduce the contact model control
approaches, including two steps: contact state recognition and compliant
control. Additionally, we focus on a comprehensive analysis of the whole robotic
assembly system. Second, without the contact state recognition process, we
decompose the contact model-free learning algorithms into two main subfields:
learning from demonstrations and learning from environments (mainly based on
reinforcement learning). For each subfield, we survey the landmark studies and
ongoing research to compare the different categories. We hope to strengthen the
relation between these two research communities by revealing the underlying
links. Ultimately, the remaining challenges and open questions in the
robotic peg-in-hole assembly community are discussed. The promising directions
and potential future work are also considered.
| [
{
"created": "Wed, 10 Apr 2019 15:20:23 GMT",
"version": "v1"
}
] | 2019-04-11 | [
[
"Xu",
"Jing",
""
],
[
"Hou",
"Zhimin",
""
],
[
"Liu",
"Zhi",
""
],
[
"Qiao",
"Hong",
""
]
] | In this paper, we present an overview of robotic peg-in-hole assembly and analyze two main strategies: contact model-based and contact model-free strategies. More specifically, we first introduce the contact model control approaches, including two steps: contact state recognition and compliant control. Additionally, we focus on a comprehensive analysis of the whole robotic assembly system. Second, without the contact state recognition process, we decompose the contact model-free learning algorithms into two main subfields: learning from demonstrations and learning from environments (mainly based on reinforcement learning). For each subfield, we survey the landmark studies and ongoing research to compare the different categories. We hope to strengthen the relation between these two research communities by revealing the underlying links. Ultimately, the remaining challenges and open questions in the robotic peg-in-hole assembly community are discussed. The promising directions and potential future work are also considered.
2111.15259 | Kavya Govindarajan | Kavya Govindarajan, Dhinakaran Vinayagamurthy, Praveen Jayachandran
and Chester Rebeiro | Privacy-Preserving Decentralized Exchange Marketplaces | 17 pages, 7 figures | null | null | null | cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized exchange markets leveraging blockchain have been proposed
recently to provide open and equal access to traders, improve transparency and
reduce systemic risk of centralized exchanges. However, they compromise on the
privacy of traders with respect to their asset ownership, account balance,
order details and their identity. In this paper, we present Rialto, a fully
decentralized privacy-preserving exchange marketplace with support for matching
trade orders, on-chain settlement and market price discovery. Rialto provides
confidentiality of order rates and account balances and unlinkability between
traders and their trade orders, while retaining the desirable properties of a
traditional marketplace like front-running resilience and market fairness. We
define formal security notions and present a security analysis of the
marketplace. We perform a detailed evaluation of our solution, demonstrate that
it scales well and is suitable for a large class of goods and financial
instruments traded in modern exchange markets.
| [
{
"created": "Tue, 30 Nov 2021 10:18:47 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Dec 2021 05:10:02 GMT",
"version": "v2"
}
] | 2021-12-21 | [
[
"Govindarajan",
"Kavya",
""
],
[
"Vinayagamurthy",
"Dhinakaran",
""
],
[
"Jayachandran",
"Praveen",
""
],
[
"Rebeiro",
"Chester",
""
]
] | Decentralized exchange markets leveraging blockchain have been proposed recently to provide open and equal access to traders, improve transparency and reduce systemic risk of centralized exchanges. However, they compromise on the privacy of traders with respect to their asset ownership, account balance, order details and their identity. In this paper, we present Rialto, a fully decentralized privacy-preserving exchange marketplace with support for matching trade orders, on-chain settlement and market price discovery. Rialto provides confidentiality of order rates and account balances and unlinkability between traders and their trade orders, while retaining the desirable properties of a traditional marketplace like front-running resilience and market fairness. We define formal security notions and present a security analysis of the marketplace. We perform a detailed evaluation of our solution, demonstrate that it scales well and is suitable for a large class of goods and financial instruments traded in modern exchange markets. |
2004.00207 | Xin Yang | Chaoyu Chen, Xin Yang, Ruobing Huang, Wenlong Shi, Shengfeng Liu,
Mingrong Lin, Yuhao Huang, Yong Yang, Yuanji Zhang, Huanjia Luo, Yankai
Huang, Yi Xiong, Dong Ni | Region Proposal Network with Graph Prior and IoU-Balance Loss for
Landmark Detection in 3D Ultrasound | IEEE International Symposium on Biomedical Imaging (IEEE ISBI 2020) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D ultrasound (US) can facilitate detailed prenatal examinations for fetal
growth monitoring. To analyze a 3D US volume, it is fundamental to identify
anatomical landmarks of the evaluated organs accurately. Typical deep learning
methods usually regress the coordinates directly or involve heatmap-matching.
However, these methods struggle to deal with volumes with large sizes and the
highly-varying positions and orientations of fetuses. In this work, we exploit
an object detection framework to detect landmarks in 3D fetal facial US
volumes. By regressing multiple parameters of the landmark-centered bounding
box (B-box) with strict criteria, the proposed model is able to pinpoint the
exact location of the targeted landmarks. Specifically, the model uses a 3D
region proposal network (RPN) to generate 3D candidate regions, followed by
several 3D classification branches to select the best candidate. It also adopts
an IoU-balance loss to improve communications between branches that benefits
the learning process. Furthermore, it leverages a distance-based graph prior to
regularize the training and helps to reduce false positive predictions. The
performance of the proposed framework is evaluated on a 3D US dataset to detect
five key fetal facial landmarks. Results showed the proposed method outperforms
some of the state-of-the-art methods in efficacy and efficiency.
| [
{
"created": "Wed, 1 Apr 2020 03:00:03 GMT",
"version": "v1"
}
] | 2020-04-02 | [
[
"Chen",
"Chaoyu",
""
],
[
"Yang",
"Xin",
""
],
[
"Huang",
"Ruobing",
""
],
[
"Shi",
"Wenlong",
""
],
[
"Liu",
"Shengfeng",
""
],
[
"Lin",
"Mingrong",
""
],
[
"Huang",
"Yuhao",
""
],
[
"Yang",
"Yong",
""
],
[
"Zhang",
"Yuanji",
""
],
[
"Luo",
"Huanjia",
""
],
[
"Huang",
"Yankai",
""
],
[
"Xiong",
"Yi",
""
],
[
"Ni",
"Dong",
""
]
] | 3D ultrasound (US) can facilitate detailed prenatal examinations for fetal growth monitoring. To analyze a 3D US volume, it is fundamental to identify anatomical landmarks of the evaluated organs accurately. Typical deep learning methods usually regress the coordinates directly or involve heatmap-matching. However, these methods struggle to deal with volumes with large sizes and the highly-varying positions and orientations of fetuses. In this work, we exploit an object detection framework to detect landmarks in 3D fetal facial US volumes. By regressing multiple parameters of the landmark-centered bounding box (B-box) with strict criteria, the proposed model is able to pinpoint the exact location of the targeted landmarks. Specifically, the model uses a 3D region proposal network (RPN) to generate 3D candidate regions, followed by several 3D classification branches to select the best candidate. It also adopts an IoU-balance loss to improve communications between branches that benefits the learning process. Furthermore, it leverages a distance-based graph prior to regularize the training and helps to reduce false positive predictions. The performance of the proposed framework is evaluated on a 3D US dataset to detect five key fetal facial landmarks. Results showed the proposed method outperforms some of the state-of-the-art methods in efficacy and efficiency.
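Regressing landmark-centered B-boxes implies scoring candidates by 3D overlap; a minimal axis-aligned 3D IoU looks as follows (axis alignment is an assumption made for this sketch).

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes given as
    (x1, y1, z1, x2, y2, z2), min corner first."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol = lambda x: np.prod(x[3:] - x[:3])
    return inter / (vol(a) + vol(b) - inter)

print(iou_3d((0, 0, 0, 2, 2, 2), (1, 1, 1, 3, 3, 3)))   # 1/15 ~ 0.067
```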
1611.06538 | Mohammad Ali Tahmasbi Nejad | Mohammad Ali Tahmasbi Nejad, Seyed Pooya Shariatpanahi, Babak Hossein
Khalaj | On Storage Allocation in Cache-Enabled Interference Channels with Mixed
CSIT | arXiv admin note: text overlap with arXiv:1209.5807 by other authors | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, it has been shown that in a cache-enabled interference channel, the
storage at the transmit and receive sides are of equal value in terms of
Degrees of Freedom (DoF). This is derived by assuming full Channel State
Information at the Transmitter (CSIT). In this paper, we consider a more
practical scenario, where a training/feedback phase should exist for obtaining
CSIT, during which instantaneous channel state is not known to the
transmitters. This results in a combination of delayed and current CSIT
availability, called mixed CSIT. In this setup, we derive DoF of a
cache-enabled interference channel with mixed CSIT, which depends on the memory
available at transmit and receive sides as well as the training/feedback phase
duration. In contrast to the case of having full CSIT, we prove that, in our
setup, the storage at the receive side is more valuable than the one at the
transmit side. This is due to the fact that cooperation opportunities granted
by transmitters' caches are strongly based on instantaneous CSIT availability.
However, multi-casting opportunities provided by receivers' caches are robust
to such imperfection.
| [
{
"created": "Sun, 20 Nov 2016 16:03:54 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Jan 2017 12:57:45 GMT",
"version": "v2"
}
] | 2017-01-24 | [
[
"Nejad",
"Mohammad Ali Tahmasbi",
""
],
[
"Shariatpanahi",
"Seyed Pooya",
""
],
[
"Khalaj",
"Babak Hossein",
""
]
] | Recently, it has been shown that in a cache-enabled interference channel, the storage at the transmit and receive sides are of equal value in terms of Degrees of Freedom (DoF). This is derived by assuming full Channel State Information at the Transmitter (CSIT). In this paper, we consider a more practical scenario, where a training/feedback phase should exist for obtaining CSIT, during which instantaneous channel state is not known to the transmitters. This results in a combination of delayed and current CSIT availability, called mixed CSIT. In this setup, we derive DoF of a cache-enabled interference channel with mixed CSIT, which depends on the memory available at transmit and receive sides as well as the training/feedback phase duration. In contrast to the case of having full CSIT, we prove that, in our setup, the storage at the receive side is more valuable than the one at the transmit side. This is due to the fact that cooperation opportunities granted by transmitters' caches are strongly based on instantaneous CSIT availability. However, multi-casting opportunities provided by receivers' caches are robust to such imperfection. |
1907.06206 | Thanh Huy Nguyen | Thanh Huy Nguyen, Sylvie Daniel, Didier Gueriot, Christophe Sintes,
Jean-Marc Le Caillec | Unsupervised Automatic Building Extraction Using Active Contour Model on
Unregistered Optical Imagery and Airborne LiDAR Data | PIA19 - Photogrammetric Image Analysis 2019 which will be held in
conjunction with MRSS19 - Munich Remote Sensing Symposium 2019 on September
18th-20th, 2019 in Munich, Germany. Proceeding: The International Archives of
the Photogrammetry, Remote Sensing and Spatial Information Sciences | Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W16
(2019) 181-188 | 10.5194/isprs-archives-XLII-2-W16-181-2019 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic extraction of buildings in urban scenes has become a subject of
growing interest in the domain of photogrammetry and remote sensing,
particularly with the emergence of LiDAR systems since the mid-1990s. However,
in reality, this task is still very challenging due to the complexity of
building sizes and shapes, as well as their surrounding environment. The active
contour model, colloquially called the snake model, which has been extensively
used in many applications in computer vision and image processing, is also
applied to extract buildings from aerial/satellite imagery. Motivated by the
limitations of existing snake models addressing building extraction, this paper
presents an unsupervised and fully automatic snake model to extract buildings
using optical imagery and an unregistered airborne LiDAR dataset, without
manual initial points or training data. The proposed method is shown to be
capable of extracting buildings with varying color from complex environments,
and yielding high overall accuracy.
| [
{
"created": "Sun, 14 Jul 2019 11:18:56 GMT",
"version": "v1"
}
] | 2019-09-19 | [
[
"Nguyen",
"Thanh Huy",
""
],
[
"Daniel",
"Sylvie",
""
],
[
"Gueriot",
"Didier",
""
],
[
"Sintes",
"Christophe",
""
],
[
"Caillec",
"Jean-Marc Le",
""
]
] | Automatic extraction of buildings in urban scenes has become a subject of growing interest in the domain of photogrammetry and remote sensing, particularly with the emergence of LiDAR systems since the mid-1990s. However, in practice, this task remains very challenging due to the variety of building sizes and shapes, as well as their surrounding environments. The active contour model, colloquially called the snake model, which has been extensively used in many computer vision and image processing applications, has also been applied to extract buildings from aerial/satellite imagery. Motivated by the limitations of existing snake models for building extraction, this paper presents an unsupervised and fully automatic snake model to extract buildings using optical imagery and an unregistered airborne LiDAR dataset, without manual initial points or training data. The proposed method is shown to be capable of extracting buildings with varying color from complex environments, and yielding high overall accuracy.
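The snake-model machinery the paper above builds on can be exercised with off-the-shelf tools. Below is a minimal sketch using scikit-image's `active_contour`; the sample image, circular initial contour, and parameter values are illustrative stand-ins, not the paper's unsupervised, LiDAR-driven initialization.

```python
# Illustrative only: a generic snake-model segmentation with scikit-image,
# not the paper's LiDAR-initialized, fully automatic pipeline.
import numpy as np
from skimage import data, filters
from skimage.segmentation import active_contour

image = data.coins()                       # stand-in for an optical image patch
smoothed = filters.gaussian(image, sigma=2)

# Hypothetical initial contour: a circle around a presumed object footprint.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([150 + 60 * np.sin(s), 200 + 60 * np.cos(s)])  # (row, col)

snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)                         # (200, 2) refined contour coordinates
```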
1503.00309 | Xin Luna Dong | Xian Li and Xin Luna Dong and Kenneth B. Lyons and Weiyi Meng and
Divesh Srivastava | Scaling up Copy Detection | ICDE 2015 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research shows that copying is prevalent for Deep-Web data and
considering copying can significantly improve truth finding from conflicting
values. However, existing copy detection techniques do not scale to large
sizes and numbers of data sources, so truth finding can be slowed down by one
to two orders of magnitude compared with the corresponding techniques that do
not consider copying. In this paper, we study {\em how to improve scalability
of copy detection on structured data}.
Our algorithm builds an inverted index for each \emph{shared} value and
processes the index entries in decreasing order of how much the shared value
can contribute to the conclusion of copying. We show how we use the index to
prune the data items we consider for each pair of sources, and to incrementally
refine our results in iterative copy detection. We also apply a sampling
strategy with which we are able to further reduce copy-detection time while
still obtaining results very similar to those on the whole data set. Experiments on
various real data sets show that our algorithm can reduce the time for copy
detection by two to three orders of magnitude; in other words, truth finding
can benefit from copy detection with very little overhead.
| [
{
"created": "Sun, 1 Mar 2015 17:00:29 GMT",
"version": "v1"
}
] | 2015-03-03 | [
[
"Li",
"Xian",
""
],
[
"Dong",
"Xin Luna",
""
],
[
"Lyons",
"Kenneth B.",
""
],
[
"Meng",
"Weiyi",
""
],
[
"Srivastava",
"Divesh",
""
]
] | Recent research shows that copying is prevalent for Deep-Web data and considering copying can significantly improve truth finding from conflicting values. However, existing copy detection techniques do not scale to large sizes and numbers of data sources, so truth finding can be slowed down by one to two orders of magnitude compared with the corresponding techniques that do not consider copying. In this paper, we study {\em how to improve scalability of copy detection on structured data}. Our algorithm builds an inverted index for each \emph{shared} value and processes the index entries in decreasing order of how much the shared value can contribute to the conclusion of copying. We show how we use the index to prune the data items we consider for each pair of sources, and to incrementally refine our results in iterative copy detection. We also apply a sampling strategy with which we are able to further reduce copy-detection time while still obtaining results very similar to those on the whole data set. Experiments on various real data sets show that our algorithm can reduce the time for copy detection by two to three orders of magnitude; in other words, truth finding can benefit from copy detection with very little overhead.
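To make the inverted-index idea above concrete, here is a small sketch: index each (item, value) claim by its providing sources and visit shared values in increasing provider count, since a value shared by few sources contributes more to a copying conclusion. The toy data and ordering heuristic are illustrative, not the paper's exact contribution measure.

```python
# Minimal sketch of the inverted-index idea: index each shared value, then
# visit entries in decreasing order of how strongly the shared value suggests
# copying (rare values shared by few sources are strong evidence).
from collections import defaultdict
from itertools import combinations

sources = {
    "S1": {("item1", "v_a"), ("item2", "v_b")},
    "S2": {("item1", "v_a"), ("item2", "v_c")},
    "S3": {("item1", "v_a"), ("item2", "v_b")},
}

index = defaultdict(set)                     # (item, value) -> providing sources
for src, claims in sources.items():
    for claim in claims:
        index[claim].add(src)

shared = {k: v for k, v in index.items() if len(v) > 1}
# Fewer providers => the shared value contributes more to a copying conclusion.
for claim in sorted(shared, key=lambda c: len(shared[c])):
    for a, b in combinations(sorted(shared[claim]), 2):
        print(f"{a},{b} share {claim} with {len(shared[claim])} providers")
```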
1504.01552 | Ahmed Douik | Ahmed Douik, Hayssam Dahrouj, Tareq Y. Al-Naffouri, Mohamed-Slim
Alouini | Hybrid Scheduling/Signal-Level Coordination in the Downlink of
Multi-Cloud Radio-Access Networks | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of resource allocation in cloud-radio access networks, recent
studies assume either signal-level or scheduling-level coordination. This
paper, instead, considers a hybrid level of coordination for the scheduling
problem in the downlink of a multi-cloud radio-access network, as a means to
benefit from both scheduling policies. Consider a multi-cloud radio access
network, where each cloud is connected to several base stations (BSs) via
high-capacity links, and therefore allows joint signal processing between them.
Across the multiple clouds, however, only scheduling-level coordination is
permitted, as it requires a lower level of backhaul communication. The frame
structure of every BS is composed of various time/frequency blocks, called
power-zones (PZs), each kept at a fixed power level. The paper addresses the
problem of maximizing a network-wide utility by associating users to clouds and
scheduling them to the PZs, under the practical constraints that each user is
scheduled, at most, to a single cloud, but possibly to many BSs within the
cloud, and can be served by one or more distinct PZs within the BSs' frame. The
paper solves the problem using graph theory techniques by constructing the
conflict graph. The scheduling problem is, then, shown to be equivalent to a
maximum-weight independent set problem in the constructed graph, in which each
vertex symbolizes an association of cloud, user, BS and PZ, with a weight
representing the utility of that association. Simulation results suggest that
the proposed hybrid scheduling strategy provides appreciable gain as compared
to scheduling-level coordinated networks, with a negligible degradation
relative to signal-level coordination.
| [
{
"created": "Tue, 7 Apr 2015 11:23:39 GMT",
"version": "v1"
}
] | 2015-04-08 | [
[
"Douik",
"Ahmed",
""
],
[
"Dahrouj",
"Hayssam",
""
],
[
"Al-Naffouri",
"Tareq Y.",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] | In the context of resource allocation in cloud-radio access networks, recent studies assume either signal-level or scheduling-level coordination. This paper, instead, considers a hybrid level of coordination for the scheduling problem in the downlink of a multi-cloud radio-access network, as a means to benefit from both scheduling policies. Consider a multi-cloud radio access network, where each cloud is connected to several base stations (BSs) via high-capacity links, and therefore allows joint signal processing between them. Across the multiple clouds, however, only scheduling-level coordination is permitted, as it requires a lower level of backhaul communication. The frame structure of every BS is composed of various time/frequency blocks, called power-zones (PZs), each kept at a fixed power level. The paper addresses the problem of maximizing a network-wide utility by associating users to clouds and scheduling them to the PZs, under the practical constraints that each user is scheduled, at most, to a single cloud, but possibly to many BSs within the cloud, and can be served by one or more distinct PZs within the BSs' frame. The paper solves the problem using graph theory techniques by constructing the conflict graph. The scheduling problem is then shown to be equivalent to a maximum-weight independent set problem in the constructed graph, in which each vertex symbolizes an association of cloud, user, BS and PZ, with a weight representing the utility of that association. Simulation results suggest that the proposed hybrid scheduling strategy provides appreciable gain as compared to scheduling-level coordinated networks, with a negligible degradation relative to signal-level coordination.
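The conflict-graph reduction above can be sketched in a few lines: build one vertex per (cloud, user, BS, PZ) association, connect vertices whose associations violate the constraints, and approximate the maximum-weight independent set. The weights, conflict rules, and greedy heuristic below are toy stand-ins for the paper's formulation.

```python
# Sketch: conflict graph over (cloud, user, bs, pz) associations plus a greedy
# approximation of the maximum-weight independent set.
import networkx as nx

associations = [  # (cloud, user, bs, pz, weight) -- toy utilities
    ("c1", "u1", "b1", "pz1", 5.0),
    ("c1", "u2", "b1", "pz1", 4.0),   # conflicts with the first (same BS/PZ)
    ("c2", "u1", "b2", "pz1", 3.0),   # conflicts: u1 would use two clouds
    ("c1", "u2", "b2", "pz2", 2.5),
]

G = nx.Graph()
for i, a in enumerate(associations):
    G.add_node(i, w=a[4])
for i, a in enumerate(associations):
    for j, b in enumerate(associations):
        if i < j:
            same_slot = a[2] == b[2] and a[3] == b[3]       # one user per BS/PZ
            multi_cloud = a[1] == b[1] and a[0] != b[0]     # one cloud per user
            if same_slot or multi_cloud:
                G.add_edge(i, j)

chosen, banned = [], set()
for i in sorted(G, key=lambda n: -G.nodes[n]["w"]):         # greedy by weight
    if i not in banned:
        chosen.append(i)
        banned.update(G[i])                                 # ban all neighbors
print([associations[i] for i in chosen])
```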
2210.13827 | Li Yu | Li Yu, Wenshuai Chang, Shiyu Wu and Moncef Gabbouj | End-to-end Transformer for Compressed Video Quality Enhancement | null | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks have achieved excellent results in compressed
video quality enhancement tasks in recent years. State-of-the-art methods
explore the spatiotemporal information of adjacent frames mainly by deformable
convolution. However, offset fields in deformable convolution are difficult to
train, and their instability in training often leads to offset overflow, which
reduces the efficiency of correlation modeling. In this work, we propose a
transformer-based compressed video quality enhancement (TVQE) method,
consisting of Swin-AutoEncoder based Spatio-Temporal feature Fusion (SSTF)
module and Channel-wise Attention based Quality Enhancement (CAQE) module. The
proposed SSTF module learns both local and global features with the help of
Swin-AutoEncoder, which improves the ability of correlation modeling.
Meanwhile, the window mechanism-based Swin Transformer and the encoder-decoder
structure greatly improve the execution efficiency. On the other hand, the
proposed CAQE module calculates the channel attention, which aggregates the
temporal information between channels in the feature map, and finally achieves
the efficient fusion of inter-frame information. Extensive experimental results
on the JCT-VT test sequences show that the proposed method achieves better
performance on average for both subjective and objective quality. Meanwhile,
our proposed method outperforms existing ones in terms of both inference speed
and GPU consumption.
| [
{
"created": "Tue, 25 Oct 2022 08:12:05 GMT",
"version": "v1"
}
] | 2022-10-26 | [
[
"Yu",
"Li",
""
],
[
"Chang",
"Wenshuai",
""
],
[
"Wu",
"Shiyu",
""
],
[
"Gabbouj",
"Moncef",
""
]
] | Convolutional neural networks have achieved excellent results in compressed video quality enhancement tasks in recent years. State-of-the-art methods explore the spatiotemporal information of adjacent frames mainly by deformable convolution. However, offset fields in deformable convolution are difficult to train, and their instability in training often leads to offset overflow, which reduces the efficiency of correlation modeling. In this work, we propose a transformer-based compressed video quality enhancement (TVQE) method, consisting of Swin-AutoEncoder based Spatio-Temporal feature Fusion (SSTF) module and Channel-wise Attention based Quality Enhancement (CAQE) module. The proposed SSTF module learns both local and global features with the help of Swin-AutoEncoder, which improves the ability of correlation modeling. Meanwhile, the window mechanism-based Swin Transformer and the encoder-decoder structure greatly improve the execution efficiency. On the other hand, the proposed CAQE module calculates the channel attention, which aggregates the temporal information between channels in the feature map, and finally achieves the efficient fusion of inter-frame information. Extensive experimental results on the JCT-VT test sequences show that the proposed method achieves better performance on average for both subjective and objective quality. Meanwhile, our proposed method outperforms existing ones in terms of both inference speed and GPU consumption.
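A minimal channel-wise attention block in the spirit of the CAQE module described above, written in PyTorch; the squeeze-and-excitation layout and the reduction ratio are assumptions for illustration, not the paper's exact design.

```python
# Channel attention sketch: squeeze spatial dims, learn per-channel gates,
# and reweight the fused feature map channel by channel.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # squeeze: global average
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                            # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                        # reweight channels

feat = torch.randn(1, 64, 32, 32)                    # fused multi-frame features
print(ChannelAttention(64)(feat).shape)              # torch.Size([1, 64, 32, 32])
```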
1808.00529 | Si Liu | Si Liu, Risheek Garrepalli, Thomas G. Dietterich, Alan Fern, Dan
Hendrycks | Open Category Detection with PAC Guarantees | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Open category detection is the problem of detecting "alien" test instances
that belong to categories or classes that were not present in the training
data. In many applications, reliably detecting such aliens is central to
ensuring the safety and accuracy of test set predictions. Unfortunately, there
are no algorithms that provide theoretical guarantees on their ability to
detect aliens under general assumptions. Further, while there are algorithms
for open category detection, there are few empirical results that directly
report alien detection rates. Thus, there are significant theoretical and
empirical gaps in our understanding of open category detection. In this paper,
we take a step toward addressing this gap by studying a simple, but
practically-relevant variant of open category detection. In our setting, we are
provided with a "clean" training set that contains only the target categories
of interest and an unlabeled "contaminated" training set that contains a
fraction $\alpha$ of alien examples. Under the assumption that we know an upper
bound on $\alpha$, we develop an algorithm with PAC-style guarantees on the
alien detection rate, while aiming to minimize false alarms. Empirical results
on synthetic and standard benchmark datasets demonstrate the regimes in which
the algorithm can be effective and provide a baseline for further advancements.
| [
{
"created": "Wed, 1 Aug 2018 19:41:04 GMT",
"version": "v1"
}
] | 2018-08-03 | [
[
"Liu",
"Si",
""
],
[
"Garrepalli",
"Risheek",
""
],
[
"Dietterich",
"Thomas G.",
""
],
[
"Fern",
"Alan",
""
],
[
"Hendrycks",
"Dan",
""
]
] | Open category detection is the problem of detecting "alien" test instances that belong to categories or classes that were not present in the training data. In many applications, reliably detecting such aliens is central to ensuring the safety and accuracy of test set predictions. Unfortunately, there are no algorithms that provide theoretical guarantees on their ability to detect aliens under general assumptions. Further, while there are algorithms for open category detection, there are few empirical results that directly report alien detection rates. Thus, there are significant theoretical and empirical gaps in our understanding of open category detection. In this paper, we take a step toward addressing this gap by studying a simple, but practically-relevant variant of open category detection. In our setting, we are provided with a "clean" training set that contains only the target categories of interest and an unlabeled "contaminated" training set that contains a fraction $\alpha$ of alien examples. Under the assumption that we know an upper bound on $\alpha$, we develop an algorithm with PAC-style guarantees on the alien detection rate, while aiming to minimize false alarms. Empirical results on synthetic and standard benchmark datasets demonstrate the regimes in which the algorithm can be effective and provide a baseline for further advancements. |
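The mixture view behind the setting above can be sketched directly: with F_mix = (1 - alpha) F_clean + alpha F_alien, one can estimate the alien score CDF from the clean and contaminated sets and pick a detection threshold for a target alien detection rate. The sketch below uses synthetic Gaussian scores and omits the finite-sample (PAC) corrections that the paper's guarantees rest on.

```python
# Sketch: estimate the alien score CDF from a clean set and a contaminated
# mixture with alien fraction at most alpha, then threshold for a target
# detection rate. Simplified; no finite-sample correction.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 5000)                   # scores of known categories
alien = rng.normal(3.0, 1.0, 500)                    # unseen-category scores
mix = np.concatenate([rng.normal(0.0, 1.0, 4500), alien])
alpha = 0.1

grid = np.linspace(-4, 8, 400)
F_mix = np.searchsorted(np.sort(mix), grid) / len(mix)     # empirical CDFs
F_clean = np.searchsorted(np.sort(clean), grid) / len(clean)
F_alien = np.clip((F_mix - (1 - alpha) * F_clean) / alpha, 0, 1)

target = 0.95                                        # desired alien detection rate
tau = grid[np.argmax(F_alien >= 1 - target)]         # flag scores above tau
print(f"threshold={tau:.2f}, est. detection={np.mean(alien > tau):.2f}")
```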
2104.02103 | David Fern\'andez-Baca | Ghazaleh Parvini and David Fern\'andez-Baca | Exact Algorithms for No-Rainbow Coloring and Phylogenetic Decisiveness | null | null | null | null | cs.DS cs.DM | http://creativecommons.org/licenses/by/4.0/ | The input to the no-rainbow hypergraph coloring problem is a hypergraph $H$
where every hyperedge has $r$ nodes. The question is whether there exists an
$r$-coloring of the nodes of $H$ such that all $r$ colors are used and there is
no rainbow hyperedge -- i.e., no hyperedge uses all $r$ colors. The no-rainbow
hypergraph $r$-coloring problem is known to be NP-complete for $r \geq 3$. The
special case of $r=4$ is the complement of the phylogenetic decisiveness
problem. Here we present a deterministic algorithm that solves the no-rainbow
$r$-coloring problem in $O^*((r-1)^{(r-1)n/r})$ time and a randomized algorithm
that solves the problem in $O^*((\frac{r}{2})^n)$ time.
| [
{
"created": "Mon, 5 Apr 2021 18:19:18 GMT",
"version": "v1"
}
] | 2021-04-07 | [
[
"Parvini",
"Ghazaleh",
""
],
[
"Fernández-Baca",
"David",
""
]
] | The input to the no-rainbow hypergraph coloring problem is a hypergraph $H$ where every hyperedge has $r$ nodes. The question is whether there exists an $r$-coloring of the nodes of $H$ such that all $r$ colors are used and there is no rainbow hyperedge -- i.e., no hyperedge uses all $r$ colors. The no-rainbow hypergraph $r$-coloring problem is known to be NP-complete for $r \geq 3$. The special case of $r=4$ is the complement of the phylogenetic decisiveness problem. Here we present a deterministic algorithm that solves the no-rainbow $r$-coloring problem in $O^*((r-1)^{(r-1)n/r})$ time and a randomized algorithm that solves the problem in $O^*((\frac{r}{2})^n)$ time. |
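As a concrete reference point for the definition above, the following brute force tries all surjective r-colorings and rejects any with a rainbow hyperedge; it runs in O*(r^n) time, far worse than the paper's bounds, and is meant only to make the decision problem explicit.

```python
# Brute-force reference for the no-rainbow r-coloring decision problem.
from itertools import product

def no_rainbow_coloring(n, r, hyperedges):
    for coloring in product(range(r), repeat=n):
        if len(set(coloring)) < r:                   # must use all r colors
            continue
        if all(len({coloring[v] for v in e}) < r for e in hyperedges):
            return coloring                          # no hyperedge is rainbow
    return None

# 3-uniform example on 4 nodes; e.g. (0, 1, 1, 2) is a valid answer.
print(no_rainbow_coloring(4, 3, [(0, 1, 2), (1, 2, 3)]))
```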
2006.12972 | Daniel DiPietro | Daniel M. DiPietro, Shiying Xiong and Bo Zhu | Sparse Symplectically Integrated Neural Networks | Accepted as a conference paper to NeurIPS 2020. Main paper has 9
pages and 4 figures | null | null | null | cs.LG physics.comp-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Sparse Symplectically Integrated Neural Networks (SSINNs), a
novel model for learning Hamiltonian dynamical systems from data. SSINNs
combine fourth-order symplectic integration with a learned parameterization of
the Hamiltonian obtained using sparse regression through a mathematically
elegant function space. This allows for interpretable models that incorporate
symplectic inductive biases and have low memory requirements. We evaluate
SSINNs on four classical Hamiltonian dynamical problems: the H\'enon-Heiles
system, nonlinearly coupled oscillators, a multi-particle mass-spring system,
and a pendulum system. Our results demonstrate promise in both system
prediction and conservation of energy, often outperforming the current
state-of-the-art black-box prediction techniques by an order of magnitude.
Further, SSINNs successfully converge to true governing equations from highly
limited and noisy data, demonstrating potential applicability in the discovery
of new physical governing equations.
| [
{
"created": "Wed, 10 Jun 2020 03:33:37 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Oct 2020 05:11:31 GMT",
"version": "v2"
}
] | 2020-10-29 | [
[
"DiPietro",
"Daniel M.",
""
],
[
"Xiong",
"Shiying",
""
],
[
"Zhu",
"Bo",
""
]
] | We introduce Sparse Symplectically Integrated Neural Networks (SSINNs), a novel model for learning Hamiltonian dynamical systems from data. SSINNs combine fourth-order symplectic integration with a learned parameterization of the Hamiltonian obtained using sparse regression through a mathematically elegant function space. This allows for interpretable models that incorporate symplectic inductive biases and have low memory requirements. We evaluate SSINNs on four classical Hamiltonian dynamical problems: the H\'enon-Heiles system, nonlinearly coupled oscillators, a multi-particle mass-spring system, and a pendulum system. Our results demonstrate promise in both system prediction and conservation of energy, often outperforming the current state-of-the-art black-box prediction techniques by an order of magnitude. Further, SSINNs successfully converge to true governing equations from highly limited and noisy data, demonstrating potential applicability in the discovery of new physical governing equations. |
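The fourth-order symplectic integration that SSINNs build on can be illustrated with a classical Forest-Ruth step for a separable Hamiltonian H(q, p) = p^2/2 + V(q); here V is a fixed pendulum potential, whereas in an SSINN the Hamiltonian would be a learned sparse model.

```python
# Fourth-order symplectic (Forest-Ruth) integrator for H(q, p) = p^2/2 + V(q).
import numpy as np

W = 2.0 ** (1.0 / 3.0)
C = np.array([0.5, 0.5 * (1 - W), 0.5 * (1 - W), 0.5]) / (2 - W)  # drift coeffs
D = np.array([1.0, -W, 1.0, 0.0]) / (2 - W)                        # kick coeffs

def forest_ruth_step(q, p, grad_V, dt):
    for c, d in zip(C, D):
        q = q + c * dt * p            # drift: dq/dt = dH/dp = p
        p = p - d * dt * grad_V(q)    # kick:  dp/dt = -dH/dq = -V'(q)
    return q, p

grad_V = np.sin                        # pendulum: V(q) = 1 - cos(q)
q, p = 1.0, 0.0
for _ in range(10000):
    q, p = forest_ruth_step(q, p, grad_V, dt=0.01)
print(q, p, 0.5 * p**2 + 1 - np.cos(q))   # energy stays near 1 - cos(1)
```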
2208.13000 | David Axelrod | David S. Axelrod, Brian P. Harper, John C. Paolillo | YouTube COVID-19 Vaccine Misinformation on Twitter: Platform
Interactions and Moderation Blind Spots | null | null | null | null | cs.SI cs.CY | http://creativecommons.org/licenses/by/4.0/ | While most social media companies have attempted to address the challenge of
COVID-19 misinformation, the success of those policies is difficult to assess,
especially when focusing on individual platforms. This study explores the
relationship between Twitter and YouTube in spreading COVID-19 vaccine-related
misinformation through a mixed-methods approach to analyzing a collection of
tweets in 2021 sharing YouTube videos where those Twitter accounts had also
linked to deleted YouTube videos. Principal components, cluster and network
analyses are used to group the videos and tweets into interpretable groups by
shared tweet dates, terms and sharing patterns; content analysis is employed to
assess the orientation of tweets and videos to COVID-19 messages. From this we
observe that a preponderance of anti-vaccine messaging remains among users who
previously shared suspect information, in which a dissident political framing
dominates, and which suggests moderation policy inefficacy where the platforms
interact.
| [
{
"created": "Sat, 27 Aug 2022 12:55:58 GMT",
"version": "v1"
}
] | 2022-08-30 | [
[
"Axelrod",
"David S.",
""
],
[
"Harper",
"Brian P.",
""
],
[
"Paolillo",
"John C.",
""
]
] | While most social media companies have attempted to address the challenge of COVID-19 misinformation, the success of those policies is difficult to assess, especially when focusing on individual platforms. This study explores the relationship between Twitter and YouTube in spreading COVID-19 vaccine-related misinformation through a mixed-methods approach to analyzing a collection of tweets in 2021 sharing YouTube videos where those Twitter accounts had also linked to deleted YouTube videos. Principal components, cluster and network analyses are used to group the videos and tweets into interpretable groups by shared tweet dates, terms and sharing patterns; content analysis is employed to assess the orientation of tweets and videos to COVID-19 messages. From this we observe that a preponderance of anti-vaccine messaging remains among users who previously shared suspect information, in which a dissident political framing dominates, and which suggests moderation policy inefficacy where the platforms interact. |
2206.08297 | Prateek Verma | Prateek Verma | A Language Model With Million Sample Context For Raw Audio Using
Transformer Architectures | 12 pages, 1 figure. Technical Report at Stanford University | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling long-term dependencies for audio signals is a particularly
challenging problem, as even small-time scales yield on the order of a hundred
thousand samples. With the recent advent of Transformers, neural architectures
became good at modeling dependencies over longer time scales, but they suffered
from quadratic constraints to scale them. We propose a generative
auto-regressive architecture that can model audio waveforms over quite a large
context, greater than 500,000 samples. Our model learns time dependencies by
first extracting a latent representation with a CNN front-end, and then
modeling dependencies over these representations using Transformer encoders,
trained fully end-to-end: this allows it to learn representations as it deems
fit for the next sample. Unlike previous works that compared different time
scales to show improvement, we use a standard dataset, with the same number of
parameters/context to show improvements. We achieve state-of-the-art
performance compared to other approaches such as Wavenet, SaSHMI, and
Sample-RNN on a standard dataset for modeling long-term structure. This work
gives a very exciting direction for the field, given improvements in context
modeling that can be scaled with more data, as well as potentially better
results by using billions/trillions of parameters.
| [
{
"created": "Thu, 16 Jun 2022 16:57:43 GMT",
"version": "v1"
},
{
"created": "Tue, 16 May 2023 20:50:56 GMT",
"version": "v2"
}
] | 2023-05-18 | [
[
"Verma",
"Prateek",
""
]
] | Modeling long-term dependencies for audio signals is a particularly challenging problem, as even small-time scales yield on the order of a hundred thousand samples. With the recent advent of Transformers, neural architectures became good at modeling dependencies over longer time scales, but they suffered from quadratic constraints to scale them. We propose a generative auto-regressive architecture that can model audio waveforms over quite a large context, greater than 500,000 samples. Our model learns time dependencies by first extracting a latent representation with a CNN front-end, and then modeling dependencies over these representations using Transformer encoders, trained fully end-to-end: this allows it to learn representations as it deems fit for the next sample. Unlike previous works that compared different time scales to show improvement, we use a standard dataset, with the same number of parameters/context to show improvements. We achieve state-of-the-art performance compared to other approaches such as Wavenet, SaSHMI, and Sample-RNN on a standard dataset for modeling long-term structure. This work gives a very exciting direction for the field, given improvements in context modeling that can be scaled with more data, as well as potentially better results by using billions/trillions of parameters.
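A skeleton of the architecture described above: a strided CNN front-end compresses the raw waveform into latent frames, and causal Transformer encoder layers model dependencies across them. All layer sizes, strides, and the 8-bit output head are illustrative assumptions.

```python
# Sketch: CNN front-end (temporal downsampling) + causal Transformer encoder.
import torch
import torch.nn as nn

class WaveformLM(nn.Module):
    def __init__(self, dim=256, heads=4, layers=4):
        super().__init__()
        # Two strided convs give a ~256x temporal reduction of the waveform.
        self.frontend = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=16, stride=16), nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=16, stride=16), nn.GELU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                         batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, 256)      # e.g. 8-bit mu-law sample logits

    def forward(self, wav):                  # wav: (batch, samples)
        z = self.frontend(wav.unsqueeze(1)).transpose(1, 2)        # (B, T', dim)
        T = z.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), 1)   # causal mask
        return self.head(self.encoder(z, mask=mask))

print(WaveformLM()(torch.randn(2, 65536)).shape)   # torch.Size([2, 256, 256])
```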
1310.8040 | Angsheng Li | Angsheng Li, Wei Zhang, Yicheng Pan, Xuechen Li | Homophyly and Randomness Resist Cascading Failure in Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The universal properties of power law and small world phenomenon of networks
seem to be unavoidable obstacles to the security of networking systems, and
existing models do not yield secure networks. We found that the essence of
security is security against cascading failures caused by attacks, and that
nature achieves security through structural mechanisms. We proposed a model of
networks based on the natural mechanisms of homophyly, randomness and
preferential attachment. It was shown that homophyly creates a community
structure, that homophyly and randomness introduce ordering in the networks,
and that homophyly creates inclusiveness and introduces rules of infection.
These principles allow us to provably guarantee the security of the networks
against any attacks. Our results show that security can be achieved provably
by structures, that there is a tradeoff between the roles of structures and of
thresholds in security engineering, and that the power law and small world
properties are not obstacles to the security of networks.
| [
{
"created": "Wed, 30 Oct 2013 06:47:04 GMT",
"version": "v1"
}
] | 2013-10-31 | [
[
"Li",
"Angsheng",
""
],
[
"Zhang",
"Wei",
""
],
[
"Pan",
"Yicheng",
""
],
[
"Li",
"Xuechen",
""
]
] | The universal properties of power law and small world phenomenon of networks seem to be unavoidable obstacles to the security of networking systems, and existing models do not yield secure networks. We found that the essence of security is security against cascading failures caused by attacks, and that nature achieves security through structural mechanisms. We proposed a model of networks based on the natural mechanisms of homophyly, randomness and preferential attachment. It was shown that homophyly creates a community structure, that homophyly and randomness introduce ordering in the networks, and that homophyly creates inclusiveness and introduces rules of infection. These principles allow us to provably guarantee the security of the networks against any attacks. Our results show that security can be achieved provably by structures, that there is a tradeoff between the roles of structures and of thresholds in security engineering, and that the power law and small world properties are not obstacles to the security of networks.
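A toy generator combining the three mechanisms named above (homophyly, randomness, and preferential attachment) is sketched below; the attachment probabilities and community assignment are illustrative and do not reproduce the authors' exact model or its security guarantees.

```python
# Toy network generator: with probability p_homophyly a new node links inside
# its community, otherwise it attaches by degree (preferential attachment);
# community labels themselves are assigned at random.
import random

def homophyly_network(n, m=2, p_homophyly=0.8, n_communities=5, seed=0):
    rng = random.Random(seed)
    community = {v: rng.randrange(n_communities) for v in range(n)}
    edges, targets = set(), list(range(m))          # degree-biased endpoint pool
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            if rng.random() < p_homophyly:          # homophyly: same community
                cands = [u for u in range(v) if community[u] == community[v]]
                u = rng.choice(cands) if cands else rng.choice(targets)
            else:                                   # preferential attachment:
                u = rng.choice(targets)             # targets repeats endpoints,
            chosen.add(u)                           # so sampling is degree-biased
        for u in chosen:
            edges.add((u, v))
            targets += [u, v]
    return edges, community

edges, comm = homophyly_network(200)
print(len(edges))
```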
1709.02738 | Panayotis Mertikopoulos | Panayotis Mertikopoulos, Christos Papadimitriou and Georgios Piliouras | Cycles in adversarial regularized learning | 22 pages, 4 figures | null | null | null | cs.GT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regularized learning is a fundamental technique in online optimization,
machine learning and many other fields of computer science. A natural question
that arises in these settings is how regularized learning algorithms behave
when faced against each other. We study a natural formulation of this problem
by coupling regularized learning dynamics in zero-sum games. We show that the
system's behavior is Poincar\'e recurrent, implying that almost every
trajectory revisits any (arbitrarily small) neighborhood of its starting point
infinitely often. This cycling behavior is robust to the agents' choice of
regularization mechanism (each agent could be using a different regularizer),
to positive-affine transformations of the agents' utilities, and it also
persists in the case of networked competition, i.e., for zero-sum polymatrix
games.
| [
{
"created": "Fri, 8 Sep 2017 15:16:54 GMT",
"version": "v1"
}
] | 2017-09-11 | [
[
"Mertikopoulos",
"Panayotis",
""
],
[
"Papadimitriou",
"Christos",
""
],
[
"Piliouras",
"Georgios",
""
]
] | Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science. A natural question that arises in these settings is how regularized learning algorithms behave when matched against each other. We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games. We show that the system's behavior is Poincar\'e recurrent, implying that almost every trajectory revisits any (arbitrarily small) neighborhood of its starting point infinitely often. This cycling behavior is robust to the agents' choice of regularization mechanism (each agent could be using a different regularizer), to positive-affine transformations of the agents' utilities, and it also persists in the case of networked competition, i.e., for zero-sum polymatrix games.
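The recurrence result above is easy to observe numerically. The sketch below runs multiplicative-weights (entropic-regularizer) updates in Matching Pennies; the exact Poincare recurrence holds for the continuous-time dynamics, and this small-step discretization approximately traces the same cycles around the equilibrium (0.5, 0.5).

```python
# Multiplicative-weights dynamics in Matching Pennies: strategies orbit the
# equilibrium instead of converging (approximate cycling for small steps).
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])    # row player's zero-sum payoffs
x = np.array([0.7, 0.3])                    # row mixed strategy
y = np.array([0.5, 0.5])                    # column mixed strategy
eta = 0.05                                  # small step size

for t in range(2000):
    gx, gy = A @ y, -A.T @ x                # each player's payoff gradients
    x = x * np.exp(eta * gx); x /= x.sum()  # multiplicative weights update
    y = y * np.exp(eta * gy); y /= y.sum()
    if t % 500 == 0:
        print(f"t={t}: x1={x[0]:.3f}, y1={y[0]:.3f}")
# x1 and y1 keep cycling around 0.5, revisiting earlier neighborhoods.
```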
2404.09964 | Masato Tamura | Masato Tamura | Design and Analysis of Efficient Attention in Transformers for Social
Group Activity Recognition | Accepted to IJCV, preprint version | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social group activity recognition is a challenging task extended from group
activity recognition, where social groups must be recognized with their
activities and group members. Existing methods tackle this task by leveraging
region features of individuals following existing group activity recognition
methods. However, the effectiveness of region features is susceptible to person
localization and variable semantics of individual actions. To overcome these
issues, we propose leveraging attention modules in transformers to generate
social group features. In this method, multiple embeddings are used to
aggregate features for a social group, each of which is assigned to a group
member without duplication. Due to this non-duplicated assignment, the number
of embeddings must be significant to avoid missing group members and thus
renders attention in transformers ineffective. To find optimal attention
designs with a large number of embeddings, we explore several design choices of
queries for feature aggregation and self-attention modules in transformer
decoders. Extensive experimental results show that the proposed method achieves
state-of-the-art performance and verify that the proposed attention designs are
highly effective on social group activity recognition.
| [
{
"created": "Mon, 15 Apr 2024 17:40:23 GMT",
"version": "v1"
}
] | 2024-04-16 | [
[
"Tamura",
"Masato",
""
]
] | Social group activity recognition is a challenging task extended from group activity recognition, where social groups must be recognized with their activities and group members. Existing methods tackle this task by leveraging region features of individuals following existing group activity recognition methods. However, the effectiveness of region features is susceptible to person localization and variable semantics of individual actions. To overcome these issues, we propose leveraging attention modules in transformers to generate social group features. In this method, multiple embeddings are used to aggregate features for a social group, each of which is assigned to a group member without duplication. Due to this non-duplicated assignment, the number of embeddings must be significant to avoid missing group members, which in turn renders attention in transformers ineffective. To find optimal attention designs with a large number of embeddings, we explore several design choices of queries for feature aggregation and self-attention modules in transformer decoders. Extensive experimental results show that the proposed method achieves state-of-the-art performance and verify that the proposed attention designs are highly effective on social group activity recognition.
2210.11242 | Jenny Schmalfuss | Jenny Schmalfuss and Lukas Mehl and Andr\'es Bruhn | Attacking Motion Estimation with Adversarial Snow | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current adversarial attacks for motion estimation (optical flow) optimize
small per-pixel perturbations, which are unlikely to appear in the real world.
In contrast, we exploit a real-world weather phenomenon for a novel attack with
adversarially optimized snow. At the core of our attack is a differentiable
renderer that consistently integrates photorealistic snowflakes with realistic
motion into the 3D scene. Through optimization we obtain adversarial snow that
significantly impacts the optical flow while being indistinguishable from
ordinary snow. Surprisingly, the impact of our novel attack is largest on
methods that previously showed a high robustness to small L_p perturbations.
| [
{
"created": "Thu, 20 Oct 2022 13:14:19 GMT",
"version": "v1"
}
] | 2022-10-21 | [
[
"Schmalfuss",
"Jenny",
""
],
[
"Mehl",
"Lukas",
""
],
[
"Bruhn",
"Andrés",
""
]
] | Current adversarial attacks for motion estimation (optical flow) optimize small per-pixel perturbations, which are unlikely to appear in the real world. In contrast, we exploit a real-world weather phenomenon for a novel attack with adversarially optimized snow. At the core of our attack is a differentiable renderer that consistently integrates photorealistic snowflakes with realistic motion into the 3D scene. Through optimization we obtain adversarial snow that significantly impacts the optical flow while being indistinguishable from ordinary snow. Surprisingly, the impact of our novel attack is largest on methods that previously showed a high robustness to small L_p perturbations. |
2002.00848 | Xudong Wang | Liang Zhang, Xudong Wang, Hongsheng Li, Guangming Zhu, Peiyi Shen,
Ping Li, Xiaoyuan Lu, Syed Afaq Ali Shah, Mohammed Bennamoun | Structure-Feature based Graph Self-adaptive Pooling | 7 pages, 4 figures, The Web Conference 2020 | null | 10.1145/3366423.3380083 | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various methods to deal with graph data have been proposed in recent years.
However, most of these methods focus on graph feature aggregation rather than
graph pooling. Besides, the existing top-k selection graph pooling methods have
a few problems. First, to construct the pooled graph topology, current top-k
selection methods evaluate the importance of the node from a single perspective
only, which is simplistic and unobjective. Second, the feature information of
unselected nodes is directly lost during the pooling process, which inevitably
leads to a massive loss of graph feature information. To solve these problems
mentioned above, we propose a novel graph self-adaptive pooling method with the
following objectives: (1) to construct a reasonable pooled graph topology,
structure and feature information of the graph are considered simultaneously,
which provides additional veracity and objectivity in node selection; and (2) to
make the pooled nodes contain sufficiently effective graph information, node
feature information is aggregated before discarding the unimportant nodes;
thus, the selected nodes contain information from neighbor nodes, which can
enhance the use of features of the unselected nodes. Experimental results on
four different datasets demonstrate that our method is effective in graph
classification and outperforms state-of-the-art graph pooling methods.
| [
{
"created": "Thu, 30 Jan 2020 13:58:49 GMT",
"version": "v1"
}
] | 2020-02-06 | [
[
"Zhang",
"Liang",
""
],
[
"Wang",
"Xudong",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Zhu",
"Guangming",
""
],
[
"Shen",
"Peiyi",
""
],
[
"Li",
"Ping",
""
],
[
"Lu",
"Xiaoyuan",
""
],
[
"Shah",
"Syed Afaq Ali",
""
],
[
"Bennamoun",
"Mohammed",
""
]
] | Various methods to deal with graph data have been proposed in recent years. However, most of these methods focus on graph feature aggregation rather than graph pooling. Besides, the existing top-k selection graph pooling methods have a few problems. First, to construct the pooled graph topology, current top-k selection methods evaluate the importance of the node from a single perspective only, which is simplistic and unobjective. Second, the feature information of unselected nodes is directly lost during the pooling process, which inevitably leads to a massive loss of graph feature information. To solve these problems mentioned above, we propose a novel graph self-adaptive pooling method with the following objectives: (1) to construct a reasonable pooled graph topology, structure and feature information of the graph are considered simultaneously, which provides additional veracity and objectivity in node selection; and (2) to make the pooled nodes contain sufficiently effective graph information, node feature information is aggregated before discarding the unimportant nodes; thus, the selected nodes contain information from neighbor nodes, which can enhance the use of features of the unselected nodes. Experimental results on four different datasets demonstrate that our method is effective in graph classification and outperforms state-of-the-art graph pooling methods.
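A minimal sketch of the two objectives above: score nodes using both structure (degree) and features, and mean-aggregate neighbor features into every node before discarding the unselected ones. The particular scoring blend is an illustrative assumption.

```python
# Top-k pooling sketch: structure+feature scoring, with neighbor aggregation
# performed before unselected nodes are dropped.
import numpy as np

def self_adaptive_pool(adj, X, ratio=0.5):
    deg = adj.sum(1)
    A_hat = adj + np.eye(len(X))                       # add self-loops
    X_agg = (A_hat / A_hat.sum(1, keepdims=True)) @ X  # mean-aggregate neighbors
    score = 0.5 * deg / deg.max() + 0.5 * np.abs(X_agg).mean(1)  # blend
    k = max(1, int(ratio * len(X)))
    idx = np.argsort(-score)[:k]                       # keep top-k nodes
    return adj[np.ix_(idx, idx)], X_agg[idx]

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], float)
X = np.random.default_rng(0).normal(size=(4, 8))
A_p, X_p = self_adaptive_pool(adj, X)
print(A_p.shape, X_p.shape)                            # (2, 2) (2, 8)
```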
1912.13050 | Michael Mogessie Ashenafi | Michael Mogessie Ashenafi | Online Peer-Assessment Datasets | 16 pages | null | null | null | cs.CY | http://creativecommons.org/licenses/by-sa/4.0/ | Peer-assessment experiments were conducted among first and second year
students at the University of Trento. The experiments spanned an entire
semester and were conducted in five computer science courses between 2013 and
2016. Peer-assessment tasks included question and answer submission as well as
answer evaluation tasks. The peer-assessment datasets are complemented by the
final scores of participating students for each course. Teachers were involved
in filtering out questions submitted by students on a weekly basis. Selected
questions were then used in subsequent peer-assessment tasks. However, expert
ratings are not included in the dataset. A major reason for this decision was
that peer-assessment tasks were designed with minimal teacher supervision in
mind. Arguments in favour of this approach are presented. The datasets are
designed in a manner that would allow their utilization in a variety of
experiments. They are reported as parsable data structures that, with
intermediate processing, can be moulded into NLP or ML-ready datasets.
Potential applications of interest include performance prediction and text
similarity tasks.
| [
{
"created": "Mon, 30 Dec 2019 18:48:55 GMT",
"version": "v1"
}
] | 2020-01-01 | [
[
"Ashenafi",
"Michael Mogessie",
""
]
] | Peer-assessment experiments were conducted among first and second year students at the University of Trento. The experiments spanned an entire semester and were conducted in five computer science courses between 2013 and 2016. Peer-assessment tasks included question and answer submission as well as answer evaluation tasks. The peer-assessment datasets are complemented by the final scores of participating students for each course. Teachers were involved in filtering out questions submitted by students on a weekly basis. Selected questions were then used in subsequent peer-assessment tasks. However, expert ratings are not included in the dataset. A major reason for this decision was that peer-assessment tasks were designed with minimal teacher supervision in mind. Arguments in favour of this approach are presented. The datasets are designed in a manner that would allow their utilization in a variety of experiments. They are reported as parsable data structures that, with intermediate processing, can be moulded into NLP or ML-ready datasets. Potential applications of interest include performance prediction and text similarity tasks.
2307.09279 | Xiaoqi Wang | Xiaoqi Wang, Jian Xiong, Hao Gao, and Weisi Lin | Regression-free Blind Image Quality Assessment with Content-Distortion
Consistency | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The optimization objective of regression-based blind image quality assessment
(IQA) models is to minimize the mean prediction error across the training
dataset, which can lead to biased parameter estimation due to potential
training data biases. To mitigate this issue, we propose a regression-free
framework for image quality evaluation, which is based upon retrieving locally
similar instances by incorporating semantic and distortion feature spaces. The
approach is motivated by the observation that the human visual system (HVS)
exhibits analogous perceptual responses to semantically similar image contents
impaired by identical distortions, which we term as content-distortion
consistency. The proposed method constructs a hierarchical k-nearest neighbor
(k-NN) algorithm for instance retrieval through two classification modules:
semantic classification (SC) module and distortion classification (DC) module.
Given a test image and an IQA database, the SC module retrieves multiple
pristine images semantically similar to the test image. The DC module then
retrieves instances based on distortion similarity from the distorted images
that correspond to each retrieved pristine image. Finally, quality prediction
is obtained by aggregating the subjective scores of the retrieved instances.
Without training on subjective quality scores, the proposed regression-free
method achieves competitive, even superior performance compared to
state-of-the-art regression-based methods on authentic and synthetic distortion
IQA benchmarks.
| [
{
"created": "Tue, 18 Jul 2023 14:19:28 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Oct 2023 07:50:38 GMT",
"version": "v2"
}
] | 2023-10-24 | [
[
"Wang",
"Xiaoqi",
""
],
[
"Xiong",
"Jian",
""
],
[
"Gao",
"Hao",
""
],
[
"Lin",
"Weisi",
""
]
] | The optimization objective of regression-based blind image quality assessment (IQA) models is to minimize the mean prediction error across the training dataset, which can lead to biased parameter estimation due to potential training data biases. To mitigate this issue, we propose a regression-free framework for image quality evaluation, which is based upon retrieving locally similar instances by incorporating semantic and distortion feature spaces. The approach is motivated by the observation that the human visual system (HVS) exhibits analogous perceptual responses to semantically similar image contents impaired by identical distortions, which we term as content-distortion consistency. The proposed method constructs a hierarchical k-nearest neighbor (k-NN) algorithm for instance retrieval through two classification modules: semantic classification (SC) module and distortion classification (DC) module. Given a test image and an IQA database, the SC module retrieves multiple pristine images semantically similar to the test image. The DC module then retrieves instances based on distortion similarity from the distorted images that correspond to each retrieved pristine image. Finally, quality prediction is obtained by aggregating the subjective scores of the retrieved instances. Without training on subjective quality scores, the proposed regression-free method achieves competitive, even superior performance compared to state-of-the-art regression-based methods on authentic and synthetic distortion IQA benchmarks. |
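The hierarchical retrieval pipeline above can be sketched end to end: a semantic k-NN over pristine images (SC module), a distortion k-NN over their distorted versions (DC module), and an average of the retrieved subjective scores. The random features below are stand-ins for real semantic and distortion embeddings.

```python
# Regression-free quality prediction: retrieve semantically then
# distortion-wise similar instances and aggregate their subjective scores.
import numpy as np

rng = np.random.default_rng(0)
n_ref, n_dist, d = 50, 5, 128
sem_ref = rng.normal(size=(n_ref, d))            # semantic vec per pristine image
dist_feat = rng.normal(size=(n_ref, n_dist, d))  # distortion vecs per copy
mos = rng.uniform(1, 5, size=(n_ref, n_dist))    # subjective instance scores

def predict_quality(sem_q, dist_q, k_sem=5, k_dist=3):
    near = np.argsort(((sem_ref - sem_q) ** 2).sum(1))[:k_sem]   # SC module
    scores = []
    for r in near:                                               # DC module
        d2 = ((dist_feat[r] - dist_q) ** 2).sum(1)
        scores.extend(mos[r, np.argsort(d2)[:k_dist]])
    return float(np.mean(scores))                                # aggregate

print(predict_quality(rng.normal(size=d), rng.normal(size=d)))
```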
2004.08270 | Matteo Bustreo | Avik Hati, Matteo Bustreo, Diego Sona, Vittorio Murino, Alessio Del
Bue | Weakly Supervised Geodesic Segmentation of Egyptian Mummy CT Scans | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle the task of automatically analyzing 3D volumetric
scans obtained from computed tomography (CT) devices. In particular, we
address a task for which data is very limited: the segmentation of ancient
Egyptian mummy CT scans. We aim at digitally unwrapping the mummy and
identifying different segments such as body, bandages and jewelry. The problem is
complex because of the lack of annotated data for the different semantic
regions to segment, thus discouraging the use of strongly supervised
approaches. We, therefore, propose a weakly supervised and efficient
interactive segmentation method to solve this challenging problem. After
segmenting the wrapped mummy from its exterior region using histogram analysis
and template matching, we first design a voxel distance measure to find an
approximate solution for the body and bandage segments. Here, we use geodesic
distances, since voxel features as well as the spatial relationships among voxels are
incorporated in this measure. Next, we refine the solution using a GrabCut
based segmentation together with a tracking method on the slices of the scan
that assigns labels to different regions in the volume, using limited
supervision in the form of scribbles drawn by the user. The efficiency of the
proposed method is demonstrated using visualizations and validated through
quantitative measures and qualitative unwrapping of the mummy.
| [
{
"created": "Fri, 17 Apr 2020 14:35:00 GMT",
"version": "v1"
}
] | 2020-04-20 | [
[
"Hati",
"Avik",
""
],
[
"Bustreo",
"Matteo",
""
],
[
"Sona",
"Diego",
""
],
[
"Murino",
"Vittorio",
""
],
[
"Del Bue",
"Alessio",
""
]
] | In this paper, we tackle the task of automatically analyzing 3D volumetric scans obtained from computed tomography (CT) devices. In particular, we address a task for which data is very limited: the segmentation of ancient Egyptian mummy CT scans. We aim at digitally unwrapping the mummy and identifying different segments such as body, bandages and jewelry. The problem is complex because of the lack of annotated data for the different semantic regions to segment, thus discouraging the use of strongly supervised approaches. We, therefore, propose a weakly supervised and efficient interactive segmentation method to solve this challenging problem. After segmenting the wrapped mummy from its exterior region using histogram analysis and template matching, we first design a voxel distance measure to find an approximate solution for the body and bandage segments. Here, we use geodesic distances, since voxel features as well as the spatial relationships among voxels are incorporated in this measure. Next, we refine the solution using a GrabCut based segmentation together with a tracking method on the slices of the scan that assigns labels to different regions in the volume, using limited supervision in the form of scribbles drawn by the user. The efficiency of the proposed method is demonstrated using visualizations and validated through quantitative measures and qualitative unwrapping of the mummy.
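A simplified stand-in for the geodesic voxel measure above: Dijkstra on a voxel grid where each step costs one spatial unit plus a weighted intensity difference, so distances grow quickly across tissue boundaries. The intensity weight and the toy volume are illustrative.

```python
# Dijkstra-style geodesic distance on a tiny voxel grid: path cost mixes
# spatial steps with intensity changes across neighboring voxels.
import heapq
import numpy as np

def geodesic_distances(vol, seed, w_int=5.0):
    dist = np.full(vol.shape, np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (x, y, z) = heapq.heappop(heap)
        if d > dist[x, y, z]:
            continue                                  # stale heap entry
        for dx, dy, dz in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)):
                step = 1.0 + w_int * abs(vol[n] - vol[x, y, z])
                if d + step < dist[n]:
                    dist[n] = d + step
                    heapq.heappush(heap, (d + step, n))
    return dist

vol = np.zeros((8, 8, 8)); vol[:, :, 4:] = 1.0        # two "tissues"
D = geodesic_distances(vol, (0, 0, 0))
print(D[0, 0, 3], D[0, 0, 4])                         # jump across the boundary
```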
2312.16250 | Nantheera Anantrasirichai | Anqi Yi and Nantheera Anantrasirichai | A Comprehensive Study of Object Tracking in Low-Light Environments | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate object tracking in low-light environments is crucial, particularly
in surveillance and ethology applications. However, achieving this is
significantly challenging due to the poor quality of captured sequences.
Factors such as noise, color imbalance, and low contrast contribute to these
challenges. This paper presents a comprehensive study examining the impact of
these distortions on automatic object trackers. Additionally, we propose a
solution to enhance tracking performance by integrating denoising and low-light
enhancement methods into the transformer-based object tracking system.
Experimental results show that the proposed tracker, trained with low-light
synthetic datasets, outperforms both the vanilla MixFormer and Siam R-CNN.
| [
{
"created": "Mon, 25 Dec 2023 17:20:57 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2024 13:59:14 GMT",
"version": "v2"
}
] | 2024-01-04 | [
[
"Yi",
"Anqi",
""
],
[
"Anantrasirichai",
"Nantheera",
""
]
] | Accurate object tracking in low-light environments is crucial, particularly in surveillance and ethology applications. However, achieving this is significantly challenging due to the poor quality of captured sequences. Factors such as noise, color imbalance, and low contrast contribute to these challenges. This paper presents a comprehensive study examining the impact of these distortions on automatic object trackers. Additionally, we propose a solution to enhance tracking performance by integrating denoising and low-light enhancement methods into the transformer-based object tracking system. Experimental results show that the proposed tracker, trained with low-light synthetic datasets, outperforms both the vanilla MixFormer and Siam R-CNN. |
2312.03248 | Haowen Wang | Haowen Wang, Tao Sun, Cong Fan, Jinjie Gu | Customizable Combination of Parameter-Efficient Modules for Multi-Task
Learning | 22 pages, 9 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modular and composable transfer learning is an emerging direction in the
field of Parameter Efficient Fine-Tuning, as it enables neural networks to
better organize various aspects of knowledge, leading to improved cross-task
generalization. In this paper, we introduce a novel approach, Customized
Polytropon (C-Poly), that combines task-common skills and task-specific skills,
with the skill parameters highly parameterized using low-rank
techniques. Each task is associated with a customizable number of exclusive
specialized skills and also benefits from skills shared with peer tasks. A
skill assignment matrix is jointly learned. To evaluate our approach, we
conducted extensive experiments on the Super-NaturalInstructions and the
SuperGLUE benchmarks. Our findings demonstrate that C-Poly outperforms
fully-shared, task-specific, and skill-indistinguishable baselines,
significantly enhancing the sample efficiency in multi-task learning scenarios.
| [
{
"created": "Wed, 6 Dec 2023 02:47:56 GMT",
"version": "v1"
}
] | 2023-12-07 | [
[
"Wang",
"Haowen",
""
],
[
"Sun",
"Tao",
""
],
[
"Fan",
"Cong",
""
],
[
"Gu",
"Jinjie",
""
]
] | Modular and composable transfer learning is an emerging direction in the field of Parameter Efficient Fine-Tuning, as it enables neural networks to better organize various aspects of knowledge, leading to improved cross-task generalization. In this paper, we introduce a novel approach, Customized Polytropon (C-Poly), that combines task-common skills and task-specific skills, with the skill parameters highly parameterized using low-rank techniques. Each task is associated with a customizable number of exclusive specialized skills and also benefits from skills shared with peer tasks. A skill assignment matrix is jointly learned. To evaluate our approach, we conducted extensive experiments on the Super-NaturalInstructions and the SuperGLUE benchmarks. Our findings demonstrate that C-Poly outperforms fully-shared, task-specific, and skill-indistinguishable baselines, significantly enhancing the sample efficiency in multi-task learning scenarios.
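A sketch of the skill-mixing idea above: each skill is a LoRA-style low-rank update, and a jointly learned task-by-skill assignment matrix mixes shared and task-specific skills into a single delta for the base weight. Shapes, skill counts, and the softmax mixing are illustrative assumptions, not the paper's exact configuration.

```python
# Low-rank skill mixing: per-skill (A_s, B_s) factors combined by a learned
# task-by-skill assignment matrix into one delta-weight per task.
import torch
import torch.nn as nn

class SkillMixedLinear(nn.Module):
    def __init__(self, d_in, d_out, n_tasks, n_shared=4, n_private=2, rank=8):
        super().__init__()
        n_skills = n_shared + n_tasks * n_private
        self.base = nn.Linear(d_in, d_out)
        self.A = nn.Parameter(torch.randn(n_skills, d_in, rank) * 0.02)
        self.B = nn.Parameter(torch.zeros(n_skills, rank, d_out))  # zero-init
        self.assign = nn.Parameter(torch.zeros(n_tasks, n_skills)) # jointly learned

    def forward(self, x, task_id):
        w = torch.softmax(self.assign[task_id], dim=-1)            # skill weights
        delta = torch.einsum("s,sir,sro->io", w, self.A, self.B)   # mixed low-rank
        return self.base(x) + x @ delta

layer = SkillMixedLinear(64, 64, n_tasks=3)
print(layer(torch.randn(5, 64), task_id=1).shape)   # torch.Size([5, 64])
```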
1310.7158 | Mingyi Hong | Shuai Ma, Mingyi Hong, Enbin Song, Xiangfeng Wang, Dechun Sun | Outage Constrained Robust Secure Transmission for MISO Wiretap Channels | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the robust secure beamformer design for MISO
wiretap channels. Assuming that the eavesdroppers' channels are only partially
available at the transmitter, we seek to maximize the secrecy rate under the
transmit power and secrecy rate outage probability constraint. The outage
probability constraint requires that the secrecy rate exceeds a certain
threshold with high probability. Therefore, including such a constraint in the design
naturally ensures the desired robustness. Unfortunately, the presence of the
probabilistic constraints makes the problem non-convex and hence difficult to
solve. In this paper, we investigate the outage probability constrained secrecy
rate maximization problem using a novel two-step approach. Under a wide range
of uncertainty models, our developed algorithms can obtain high-quality
solutions, sometimes even exact global solutions, for the robust secure
beamformer design problem. Simulation results are presented to verify the
effectiveness and robustness of the proposed algorithms.
| [
{
"created": "Sun, 27 Oct 2013 04:09:56 GMT",
"version": "v1"
}
] | 2013-10-29 | [
[
"Ma",
"Shuai",
""
],
[
"Hong",
"Mingyi",
""
],
[
"Song",
"Enbin",
""
],
[
"Wang",
"Xiangfeng",
""
],
[
"Sun",
"Dechun",
""
]
] | In this paper we consider the robust secure beamformer design for MISO wiretap channels. Assuming that the eavesdroppers' channels are only partially available at the transmitter, we seek to maximize the secrecy rate under the transmit power and secrecy rate outage probability constraint. The outage probability constraint requires that the secrecy rate exceeds a certain threshold with high probability. Therefore, including such a constraint in the design naturally ensures the desired robustness. Unfortunately, the presence of the probabilistic constraints makes the problem non-convex and hence difficult to solve. In this paper, we investigate the outage probability constrained secrecy rate maximization problem using a novel two-step approach. Under a wide range of uncertainty models, our developed algorithms can obtain high-quality solutions, sometimes even exact global solutions, for the robust secure beamformer design problem. Simulation results are presented to verify the effectiveness and robustness of the proposed algorithms.
2210.10973 | Xinran Zhu | Xinran Zhu, Leo Huang, Cameron Ibrahim, Eric Hans Lee, David Bindel | Scalable Bayesian Transformed Gaussian Processes | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The Bayesian transformed Gaussian process (BTG) model, proposed by Kedem and
Oliveira, is a fully Bayesian counterpart to the warped Gaussian process (WGP)
and marginalizes out a joint prior over input warping and kernel
hyperparameters. This fully Bayesian treatment of hyperparameters often
provides more accurate regression estimates and superior uncertainty
propagation, but is prohibitively expensive. The BTG posterior predictive
distribution, itself estimated through high-dimensional integration, must be
inverted in order to perform model prediction. To make the Bayesian approach
practical and comparable in speed to maximum-likelihood estimation (MLE), we
propose principled and fast techniques for computing with BTG. Our framework
uses doubly sparse quadrature rules, tight quantile bounds, and rank-one matrix
algebra to enable both fast model prediction and model selection. These
scalable methods allow us to regress over higher-dimensional datasets and apply
BTG with layered transformations that greatly improve its expressibility. We
demonstrate that BTG achieves superior empirical performance over MLE-based
models.
| [
{
"created": "Thu, 20 Oct 2022 02:45:10 GMT",
"version": "v1"
}
] | 2022-10-21 | [
[
"Zhu",
"Xinran",
""
],
[
"Huang",
"Leo",
""
],
[
"Ibrahim",
"Cameron",
""
],
[
"Lee",
"Eric Hans",
""
],
[
"Bindel",
"David",
""
]
] | The Bayesian transformed Gaussian process (BTG) model, proposed by Kedem and Oliveira, is a fully Bayesian counterpart to the warped Gaussian process (WGP) and marginalizes out a joint prior over input warping and kernel hyperparameters. This fully Bayesian treatment of hyperparameters often provides more accurate regression estimates and superior uncertainty propagation, but is prohibitively expensive. The BTG posterior predictive distribution, itself estimated through high-dimensional integration, must be inverted in order to perform model prediction. To make the Bayesian approach practical and comparable in speed to maximum-likelihood estimation (MLE), we propose principled and fast techniques for computing with BTG. Our framework uses doubly sparse quadrature rules, tight quantile bounds, and rank-one matrix algebra to enable both fast model prediction and model selection. These scalable methods allow us to regress over higher-dimensional datasets and apply BTG with layered transformations that greatly improve its expressibility. We demonstrate that BTG achieves superior empirical performance over MLE-based models.
2208.13414 | Peng Wu | Peng Wu, Lipeng Gu, Xuefeng Yan, Haoran Xie, Fu Lee Wang, Gary Cheng,
Mingqiang Wei | PV-RCNN++: Semantical Point-Voxel Feature Interaction for 3D Object
Detection | 18 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large imbalance often exists between the foreground points (i.e., objects)
and the background points in outdoor LiDAR point clouds. It hinders
cutting-edge detectors from focusing on informative areas to produce accurate
3D object detection results. This paper proposes a novel object detection
network by semantical point-voxel feature interaction, dubbed PV-RCNN++. Unlike
most existing methods, PV-RCNN++ explores the semantic information to
enhance the quality of object detection. First, a semantic segmentation module
is proposed to retain more discriminative foreground keypoints. Such a module
will guide our PV-RCNN++ to integrate more object-related point-wise and
voxel-wise features in the pivotal areas. Then, to make points and voxels
interact efficiently, we utilize voxel query based on Manhattan distance to
quickly sample voxel-wise features around keypoints. Such a voxel query reduces
the time complexity from O(N) to O(K) compared to the ball query.
Further, to avoid being stuck in learning only local features, an
attention-based residual PointNet module is designed to expand the receptive
field to adaptively aggregate the neighboring voxel-wise features into
keypoints. Extensive experiments on the KITTI dataset show that PV-RCNN++
achieves 81.60$\%$, 40.18$\%$, and 68.21$\%$ 3D mAP on Car, Pedestrian, and
Cyclist, respectively, achieving performance comparable to or even better than
the state of the art.
| [
{
"created": "Mon, 29 Aug 2022 08:14:00 GMT",
"version": "v1"
}
] | 2022-08-30 | [
[
"Wu",
"Peng",
""
],
[
"Gu",
"Lipeng",
""
],
[
"Yan",
"Xuefeng",
""
],
[
"Xie",
"Haoran",
""
],
[
"Wang",
"Fu Lee",
""
],
[
"Cheng",
"Gary",
""
],
[
"Wei",
"Mingqiang",
""
]
] | A large imbalance often exists between the foreground points (i.e., objects) and the background points in outdoor LiDAR point clouds. It hinders cutting-edge detectors from focusing on informative areas to produce accurate 3D object detection results. This paper proposes a novel object detection network by semantical point-voxel feature interaction, dubbed PV-RCNN++. Unlike most existing methods, PV-RCNN++ explores the semantic information to enhance the quality of object detection. First, a semantic segmentation module is proposed to retain more discriminative foreground keypoints. Such a module will guide our PV-RCNN++ to integrate more object-related point-wise and voxel-wise features in the pivotal areas. Then, to make points and voxels interact efficiently, we utilize voxel query based on Manhattan distance to quickly sample voxel-wise features around keypoints. Such a voxel query reduces the time complexity from O(N) to O(K) compared to the ball query. Further, to avoid being stuck in learning only local features, an attention-based residual PointNet module is designed to expand the receptive field to adaptively aggregate the neighboring voxel-wise features into keypoints. Extensive experiments on the KITTI dataset show that PV-RCNN++ achieves 81.60$\%$, 40.18$\%$, and 68.21$\%$ 3D mAP on Car, Pedestrian, and Cyclist, respectively, achieving performance comparable to or even better than the state of the art.
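To make the Manhattan-distance voxel query above concrete, here is a minimal Python sketch of an O(K) hash-map lookup around a keypoint; the voxel size, radius, and toy features are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the O(K) Manhattan-distance voxel query: voxels are stored in a
# hash map keyed by integer coordinates, so only the K cells within a fixed
# Manhattan radius are probed, instead of scanning all N voxels.
import itertools
import numpy as np

def voxel_query(keypoint, voxel_features, voxel_size=0.2, radius=2):
    cx, cy, cz = (int(c // voxel_size) for c in keypoint)
    hits = []
    for dx, dy, dz in itertools.product(range(-radius, radius + 1), repeat=3):
        if abs(dx) + abs(dy) + abs(dz) > radius:   # Manhattan ball
            continue
        feat = voxel_features.get((cx + dx, cy + dy, cz + dz))
        if feat is not None:
            hits.append(feat)
    return np.stack(hits) if hits else np.empty((0, 3))

# Toy usage: two occupied voxels carrying 3-d feature vectors.
voxels = {(0, 0, 0): np.ones(3), (1, 0, 0): np.zeros(3)}
print(voxel_query((0.1, 0.1, 0.1), voxels).shape)   # (2, 3)
```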
2207.07836 | Suman Banerjee | Mayank Singhal and Suman Banerjee | Envy\mbox{-}free Trip Planning in Group Trip Planning Query Problem | Accepted as a Full Paper @ 25th International Conference on
Network-Based Information Systems (NBiS-2022). 12 Pages. 6 Figures | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | In recent times, Group Trip Planning Query (henceforth referred to as GTP
Query) is one of the well\mbox{-}studied problems in Spatial Databases. The
inputs to the problem are a road network where the vertices represent the
Points-of-Interest (mentioned as POIs henceforth) and are grouped into
different categories, the edges represent road segments, and the edge weights
represent distances, together with a group of users along with their source and
destination locations. This problem asks to return one POI from every category
such that the aggregated distance traveled by the group is minimized. As the
objective is to minimize the aggregated distance, the existing solution
methodologies do not consider the individual distances traveled by the group
members. To address this issue, we introduce and study the \textsc{Envy Free
Group Trip Planning Query} Problem. Along with the inputs of the GTP Query
Problem, in this variant, we also have a threshold distance $D$ such that
the aggregated distance traveled by the group is minimized and, for any pair of
members, the difference between their individual distances traveled is less
than or equal to $D$. However, it may happen that for a given $D$ value no such
set of POIs exists. To tackle this issue, we introduce the surrogate problem \textsc{Envy
Free Group Trip Planning Query with Minimum Additional Distance} Problem, which
asks what is the minimum distance to be added to $D$ to obtain at least one
solution. For these problems, we design efficient solution approaches and
experiment with real-world datasets. From the experiments, we observe that the
proposed solution approaches lead to less aggregated distance compared to
baseline methods with reasonable computational overhead.
| [
{
"created": "Sat, 16 Jul 2022 04:59:55 GMT",
"version": "v1"
}
] | 2022-07-19 | [
[
"Singhal",
"Mayank",
""
],
[
"Banerjee",
"Suman",
""
]
] | In recent times, Group Trip Planning Query (henceforth referred to as GTP Query) is one of the well\mbox{-}studied problems in Spatial Databases. The inputs to the problem are a road network where the vertices represent the Points-of-Interest (mentioned as POIs henceforth) and are grouped into different categories, the edges represent road segments, and the edge weights represent distances, together with a group of users along with their source and destination locations. This problem asks to return one POI from every category such that the aggregated distance traveled by the group is minimized. As the objective is to minimize the aggregated distance, the existing solution methodologies do not consider the individual distances traveled by the group members. To address this issue, we introduce and study the \textsc{Envy Free Group Trip Planning Query} Problem. Along with the inputs of the GTP Query Problem, in this variant, we also have a threshold distance $D$ such that the aggregated distance traveled by the group is minimized and, for any pair of members, the difference between their individual distances traveled is less than or equal to $D$. However, it may happen that for a given $D$ value no such set of POIs exists. To tackle this issue, we introduce the surrogate problem \textsc{Envy Free Group Trip Planning Query with Minimum Additional Distance} Problem, which asks what is the minimum distance to be added to $D$ to obtain at least one solution. For these problems, we design efficient solution approaches and experiment with real-world datasets. From the experiments, we observe that the proposed solution approaches lead to less aggregated distance compared to baseline methods with reasonable computational overhead.
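A brute-force sketch of the envy-free condition described above, for intuition only; the one-dimensional distance function below is a stand-in for real road-network shortest paths:

```python
# Choose one POI per category; accept a combination only if every pair of
# members' travel distances differs by at most D, and keep the combination
# with minimum aggregated distance.
import itertools

def member_distance(member, combo):
    # Placeholder: in the real problem this is the shortest-path distance of
    # member's source -> chosen POIs -> member's destination.
    return sum(abs(member - poi) for poi in combo)

def envy_free_plan(categories, members, D):
    best, best_cost = None, float("inf")
    for combo in itertools.product(*categories):
        dists = [member_distance(m, combo) for m in members]
        if max(dists) - min(dists) <= D and sum(dists) < best_cost:
            best, best_cost = combo, sum(dists)
    return best, best_cost

# Toy instance: POIs and members are points on a line.
print(envy_free_plan([[1, 5], [2, 8]], members=[0, 4], D=6))  # ((1, 2), 8)
```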
2402.09588 | Yanshan Wang | David Oniani, Jordan Hilsman, Chengxi Zang, Junmei Wang, Lianjin Cai,
Jan Zawala, Yanshan Wang | Emerging Opportunities of Using Large Language Models for Translation
Between Drug Molecules and Indications | null | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A drug molecule is a substance that changes the organism's mental or physical
state. Every approved drug has an indication, which refers to the therapeutic
use of that drug for treating a particular medical condition. While the Large
Language Model (LLM), a generative Artificial Intelligence (AI) technique, has
recently demonstrated effectiveness in translating between molecules and their
textual descriptions, there remains a gap in research regarding their
application in facilitating the translation between drug molecules and
indications, or vice versa, which could greatly benefit the drug discovery
process. The capability of generating a drug from a given indication would
allow for the discovery of drugs targeting specific diseases or targets and
ultimately provide patients with better treatments. In this paper, we first
propose a new task, which is the translation between drug molecules and
corresponding indications, and then test existing LLMs on this new task.
Specifically, we consider nine variations of the T5 LLM and evaluate them on
two public datasets obtained from ChEMBL and DrugBank. Our experiments show the
early results of using LLMs for this task and provide a perspective on the
state-of-the-art. We also emphasize the current limitations and discuss future
work that has the potential to improve the performance on this task. The
creation of molecules from indications, or vice versa, will allow for more
efficient targeting of diseases and significantly reduce the cost of drug
discovery, with the potential to revolutionize the field of drug discovery in
the era of generative AI.
| [
{
"created": "Wed, 14 Feb 2024 21:33:13 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 20:55:08 GMT",
"version": "v2"
}
] | 2024-02-20 | [
[
"Oniani",
"David",
""
],
[
"Hilsman",
"Jordan",
""
],
[
"Zang",
"Chengxi",
""
],
[
"Wang",
"Junmei",
""
],
[
"Cai",
"Lianjin",
""
],
[
"Zawala",
"Jan",
""
],
[
"Wang",
"Yanshan",
""
]
] | A drug molecule is a substance that changes the organism's mental or physical state. Every approved drug has an indication, which refers to the therapeutic use of that drug for treating a particular medical condition. While the Large Language Model (LLM), a generative Artificial Intelligence (AI) technique, has recently demonstrated effectiveness in translating between molecules and their textual descriptions, there remains a gap in research regarding their application in facilitating the translation between drug molecules and indications, or vice versa, which could greatly benefit the drug discovery process. The capability of generating a drug from a given indication would allow for the discovery of drugs targeting specific diseases or targets and ultimately provide patients with better treatments. In this paper, we first propose a new task, which is the translation between drug molecules and corresponding indications, and then test existing LLMs on this new task. Specifically, we consider nine variations of the T5 LLM and evaluate them on two public datasets obtained from ChEMBL and DrugBank. Our experiments show the early results of using LLMs for this task and provide a perspective on the state-of-the-art. We also emphasize the current limitations and discuss future work that has the potential to improve the performance on this task. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field of drug discovery in the era of generative AI. |
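As a hedged illustration of the evaluated setup (not the paper's released models or exact prompts), a T5-style seq2seq call for indication-to-molecule translation might look like this; the checkpoint and the prompt format are placeholders:

```python
# Hypothetical indication -> molecule translation with a T5-style model.
# "t5-small" is a stand-in; the paper fine-tunes nine T5 variants on
# ChEMBL- and DrugBank-derived pairs, so an untuned checkpoint will not
# emit meaningful SMILES.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "translate indication to molecule: hypertension"  # assumed format
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))
```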
1403.1863 | Hanie Sedghi | Hanie Sedghi and Edmond Jonckheere | Statistical Structure Learning, Towards a Robust Smart Grid | null | null | null | null | cs.LG cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust control and maintenance of the grid relies on accurate data. Both PMUs
and state estimators are prone to false data injection attacks. Thus, it is
crucial to have a mechanism for fast and accurate detection of an agent
maliciously tampering with the data---for both preventing attacks that may lead
to blackouts, and for routine monitoring and control tasks of current and
future grids. We propose a decentralized false data injection detection scheme
based on the Markov graph of the bus phase angles. We utilize the Conditional
Covariance Test (CCT) to learn the structure of the grid. Using the DC power
flow model, we show that under normal circumstances, and because of
walk-summability of the grid graph, the Markov graph of the voltage angles can
be determined by the power grid graph. Therefore, a discrepancy between
the calculated Markov graph and the learned structure should trigger the alarm. Local
grid topology is available online from the protection system and we exploit it
to check for a mismatch. Should a mismatch be detected, we use a correlation
anomaly score to detect the set of attacked nodes. Our method can detect the
most recent stealthy deception attack on the power grid that assumes knowledge
of the bus-branch model of the system and is capable of deceiving the state
estimator, damaging power network observatory, control, monitoring, demand
response and pricing schemes. Specifically, under the stealthy deception
attack, the Markov graph of phase angles changes. In addition to detecting a
state of attack, our method can identify the set of attacked nodes. To the best of our
knowledge, our remedy is the first to comprehensively detect this sophisticated
attack and it does not need additional hardware. Moreover, our detection scheme
is successful no matter the size of the attacked subset. Simulation of various
power networks confirms our claims.
| [
{
"created": "Fri, 7 Mar 2014 20:26:09 GMT",
"version": "v1"
}
] | 2014-03-10 | [
[
"Sedghi",
"Hanie",
""
],
[
"Jonckheere",
"Edmond",
""
]
] | Robust control and maintenance of the grid relies on accurate data. Both PMUs and state estimators are prone to false data injection attacks. Thus, it is crucial to have a mechanism for fast and accurate detection of an agent maliciously tampering with the data---for both preventing attacks that may lead to blackouts, and for routine monitoring and control tasks of current and future grids. We propose a decentralized false data injection detection scheme based on the Markov graph of the bus phase angles. We utilize the Conditional Covariance Test (CCT) to learn the structure of the grid. Using the DC power flow model, we show that under normal circumstances, and because of walk-summability of the grid graph, the Markov graph of the voltage angles can be determined by the power grid graph. Therefore, a discrepancy between the calculated Markov graph and the learned structure should trigger the alarm. Local grid topology is available online from the protection system and we exploit it to check for a mismatch. Should a mismatch be detected, we use a correlation anomaly score to detect the set of attacked nodes. Our method can detect the most recent stealthy deception attack on the power grid that assumes knowledge of the bus-branch model of the system and is capable of deceiving the state estimator, damaging power network observatory, control, monitoring, demand response and pricing schemes. Specifically, under the stealthy deception attack, the Markov graph of phase angles changes. In addition to detecting a state of attack, our method can identify the set of attacked nodes. To the best of our knowledge, our remedy is the first to comprehensively detect this sophisticated attack and it does not need additional hardware. Moreover, our detection scheme is successful no matter the size of the attacked subset. Simulation of various power networks confirms our claims.
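For intuition, here is a sketch of the detection idea with scikit-learn's GraphicalLasso standing in for the paper's Conditional Covariance Test; thresholds and inputs are illustrative assumptions:

```python
# Learn a sparse conditional-independence (Markov) graph from phase-angle
# samples, then flag entries where it disagrees with the known bus topology.
import numpy as np
from sklearn.covariance import GraphicalLasso

def learned_adjacency(angle_samples, alpha=0.05):
    # angle_samples: (num_samples, num_buses) matrix of phase angles.
    prec = GraphicalLasso(alpha=alpha).fit(angle_samples).precision_
    adj = (np.abs(prec) > 1e-6).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def mismatch(angle_samples, grid_adj):
    # Any disagreement between the learned Markov graph and the known grid
    # topology should trigger the alarm.
    return np.argwhere(learned_adjacency(angle_samples) != grid_adj)
```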
1412.1591 | Travis Gagie | Travis Gagie and Simon J. Puglisi | Searching and Indexing Genomic Databases via Kernelization | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advance of DNA sequencing technologies has yielded databases of
thousands of genomes. To search and index these databases effectively, it is
important that we take advantage of the similarity between those genomes.
Several authors have recently suggested searching or indexing only one
reference genome and the parts of the other genomes where they differ. In this
paper we survey the twenty-year history of this idea and discuss its relation
to kernelization in parameterized complexity.
| [
{
"created": "Thu, 4 Dec 2014 09:11:46 GMT",
"version": "v1"
}
] | 2014-12-05 | [
[
"Gagie",
"Travis",
""
],
[
"Puglisi",
"Simon J.",
""
]
] | The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity. |
1409.2313 | Bernhard Rumpe | Shahar Maoz, Jan Oliver Ringert, Bernhard Rumpe | Semantically Configurable Consistency Analysis for Class and Object
Diagrams | 15 pages, 7 figures. Received Best Paper Award and ACM Distinguished
Paper Award at the MODELS 2011 Conference | Model Driven Engineering Languages and Systems (MODELS 2011),
Wellington, New Zealand. pp. 153-167, LNCS 6981, 2011. | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Checking consistency between an object diagram (OD) and a class diagram (CD)
is an important analysis problem. However, several variations in the semantics
of CDs and ODs, as used in different contexts and for different purposes,
create a challenge for analysis tools. To address this challenge, in this paper
we investigate semantically configurable model analysis. We formalize the
variability in the languages' semantics using a feature model: each
configuration that the model permits induces a different semantics. Moreover,
we develop a parametrized analysis that can be instantiated to comply with
every legal configuration of the feature model. Thus, the analysis is
semantically configured and its results change according to the semantics induced
by the selected feature configuration. The ideas are implemented using a
parametrized transformation to Alloy. The work can be viewed as a case study
example for a formal and automated approach to handling semantic variability in
modeling languages.
| [
{
"created": "Mon, 8 Sep 2014 12:07:43 GMT",
"version": "v1"
}
] | 2014-09-09 | [
[
"Maoz",
"Shahar",
""
],
[
"Ringert",
"Jan Oliver",
""
],
[
"Rumpe",
"Bernhard",
""
]
] | Checking consistency between an object diagram (OD) and a class diagram (CD) is an important analysis problem. However, several variations in the semantics of CDs and ODs, as used in different contexts and for different purposes, create a challenge for analysis tools. To address this challenge, in this paper we investigate semantically configurable model analysis. We formalize the variability in the languages' semantics using a feature model: each configuration that the model permits induces a different semantics. Moreover, we develop a parametrized analysis that can be instantiated to comply with every legal configuration of the feature model. Thus, the analysis is semantically configured and its results change according to the semantics induced by the selected feature configuration. The ideas are implemented using a parametrized transformation to Alloy. The work can be viewed as a case study example for a formal and automated approach to handling semantic variability in modeling languages.
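As a toy illustration of semantic configurability (the paper itself works with a feature model and a parametrized transformation to Alloy), the same OD/CD consistency question can flip under different feature choices; the encoding below is invented for illustration:

```python
# The same OD/CD pair is consistent or not depending on feature flags that
# select a semantics (both flags are invented examples).
def consistent(cd, od, features):
    for obj, cls in od.items():
        if cls not in cd["classes"]:
            return False
        if cd["abstract"].get(cls) and not features["allow_abstract_instances"]:
            return False
    if features["complete_od"]:
        # Under a "complete" OD semantics, every class must be instantiated.
        if set(cd["classes"]) - set(od.values()):
            return False
    return True

cd = {"classes": ["Shape", "Circle"], "abstract": {"Shape": True}}
od = {"c1": "Circle"}
print(consistent(cd, od, {"allow_abstract_instances": False, "complete_od": False}))  # True
print(consistent(cd, od, {"allow_abstract_instances": False, "complete_od": True}))   # False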
1803.03354 | Tharindu Fernando | Tharindu Fernando, Simon Denman, Sridha Sridharan and Clinton Fookes | Task Specific Visual Saliency Prediction with Memory Augmented
Conditional Generative Adversarial Networks | To appear in IEEE Winter Conference on Applications of Computer
Vision (WACV), 2018 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual saliency patterns are the result of a variety of factors aside from
the image being parsed; however, existing approaches have ignored these. To
address this limitation, we propose a novel saliency estimation model which
leverages the semantic modelling power of conditional generative adversarial
networks together with memory architectures which capture the subject's
behavioural patterns and task dependent factors. We make contributions aiming
to bridge the gap between bottom-up feature learning capabilities in modern
deep learning architectures and traditional top-down hand-crafted features
based methods for task specific saliency modelling. The conditional nature of
the proposed framework enables us to learn contextual semantics and
relationships among different tasks together, instead of learning them
separately for each task. Our studies not only shed light on a novel
application area for generative adversarial networks, but also emphasise the
importance of task specific saliency modelling and demonstrate the plausibility
of fully capturing this context via an augmented memory architecture.
| [
{
"created": "Fri, 9 Mar 2018 02:08:09 GMT",
"version": "v1"
}
] | 2018-03-12 | [
[
"Fernando",
"Tharindu",
""
],
[
"Denman",
"Simon",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Fookes",
"Clinton",
""
]
] | Visual saliency patterns are the result of a variety of factors aside from the image being parsed; however, existing approaches have ignored these. To address this limitation, we propose a novel saliency estimation model which leverages the semantic modelling power of conditional generative adversarial networks together with memory architectures which capture the subject's behavioural patterns and task dependent factors. We make contributions aiming to bridge the gap between bottom-up feature learning capabilities in modern deep learning architectures and traditional top-down hand-crafted features based methods for task specific saliency modelling. The conditional nature of the proposed framework enables us to learn contextual semantics and relationships among different tasks together, instead of learning them separately for each task. Our studies not only shed light on a novel application area for generative adversarial networks, but also emphasise the importance of task specific saliency modelling and demonstrate the plausibility of fully capturing this context via an augmented memory architecture.
1902.08874 | Bargav Jayaraman | Bargav Jayaraman and David Evans | Evaluating Differentially Private Machine Learning in Practice | Revised version of a paper in USENIX Security 2019 | null | null | null | cs.LG cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy is a strong notion for privacy that can be used to prove
formal guarantees, in terms of a privacy budget, $\epsilon$, about how much
information is leaked by a mechanism. However, implementations of
privacy-preserving machine learning often select large values of $\epsilon$ in
order to get acceptable utility of the model, with little understanding of the
impact of such choices on meaningful privacy. Moreover, in scenarios where
iterative learning procedures are used, differential privacy variants that
offer tighter analyses are used which appear to reduce the needed privacy
budget but present poorly understood trade-offs between privacy and utility. In
this paper, we quantify the impact of these choices on privacy in experiments
with logistic regression and neural network models. Our main finding is that
there is a huge gap between the upper bounds on privacy loss that can be
guaranteed, even with advanced mechanisms, and the effective privacy loss that
can be measured using current inference attacks. Current mechanisms for
differentially private machine learning rarely offer acceptable utility-privacy
trade-offs with guarantees for complex learning tasks: settings that provide
limited accuracy loss provide meaningless privacy guarantees, and settings that
provide strong privacy guarantees result in useless models. Code for the
experiments can be found here: https://github.com/bargavj/EvaluatingDPML
| [
{
"created": "Sun, 24 Feb 2019 01:48:53 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Mar 2019 22:16:03 GMT",
"version": "v2"
},
{
"created": "Sat, 1 Jun 2019 17:16:24 GMT",
"version": "v3"
},
{
"created": "Mon, 12 Aug 2019 23:18:20 GMT",
"version": "v4"
}
] | 2019-08-14 | [
[
"Jayaraman",
"Bargav",
""
],
[
"Evans",
"David",
""
]
] | Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget, $\epsilon$, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of $\epsilon$ in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models. Code for the experiments can be found here: https://github.com/bargavj/EvaluatingDPML |
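For concreteness, one DP-SGD-style private gradient step of the kind such implementations use is sketched below; the clipping norm and noise multiplier are free parameters, and mapping the noise to an (epsilon, delta) guarantee requires a separate accounting step, which is exactly where the relaxations studied here differ:

```python
# One private gradient step: clip per-example gradients, average, add noise.
import numpy as np

def private_gradient(per_example_grads, clip=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng(0)):
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / norms)
    noise = rng.normal(0.0, noise_multiplier * clip, clipped.shape[1])
    # Equivalent to adding noise to the clipped sum and dividing by n.
    return clipped.mean(axis=0) + noise / len(clipped)

grads = np.random.default_rng(1).normal(size=(32, 10))  # 32 examples, 10 params
print(private_gradient(grads).shape)                    # (10,)
```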
1606.02615 | David Darmon | David Darmon | Specific Differential Entropy Rate Estimation for Continuous-Valued Time
Series | null | Entropy 18.5 (2016): 190 | null | null | cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a method for quantifying the inherent unpredictability of a
continuous-valued time series via an extension of the differential Shannon
entropy rate. Our extension, the specific entropy rate, quantifies the amount
of predictive uncertainty associated with a specific state, rather than
averaged over all states. We relate the specific entropy rate to popular
`complexity' measures such as Approximate and Sample Entropies. We provide a
data-driven approach for estimating the specific entropy rate of an observed
time series. Finally, we consider three case studies of estimating specific
entropy rate from synthetic and physiological data relevant to the analysis of
heart rate variability.
| [
{
"created": "Wed, 8 Jun 2016 15:57:35 GMT",
"version": "v1"
}
] | 2016-06-09 | [
[
"Darmon",
"David",
""
]
] | We introduce a method for quantifying the inherent unpredictability of a continuous-valued time series via an extension of the differential Shannon entropy rate. Our extension, the specific entropy rate, quantifies the amount of predictive uncertainty associated with a specific state, rather than averaged over all states. We relate the specific entropy rate to popular `complexity' measures such as Approximate and Sample Entropies. We provide a data-driven approach for estimating the specific entropy rate of an observed time series. Finally, we consider three case studies of estimating specific entropy rate from synthetic and physiological data relevant to the analysis of heart rate variability. |
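A rough sketch of the state-conditional idea: estimate the conditional density of the next value given one particular history with a kernel density estimator and take its entropy. The history length, bandwidth, and grid are placeholder choices rather than the paper's data-driven procedure:

```python
# Entropy of p(x_t | past = history), estimated with Gaussian KDEs.
import numpy as np
from scipy.stats import gaussian_kde

def specific_entropy(series, history, p=1, grid=None):
    past = np.array([series[i - p:i] for i in range(p, len(series))]).ravel()
    nxt = series[p:]
    joint = gaussian_kde(np.vstack([past, nxt]))
    marg = gaussian_kde(past)
    grid = np.linspace(nxt.min(), nxt.max(), 200) if grid is None else grid
    cond = joint(np.vstack([np.full_like(grid, history), grid])) / marg(history)
    cond /= np.trapz(cond, grid)                    # renormalize on the grid
    return -np.trapz(cond * np.log(cond + 1e-12), grid)

x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(0).normal(size=500)
print(specific_entropy(x, history=0.5))
```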
1707.01766 | Thomas Studer | Kai Br\"unnler, Dandolo Flumini, Thomas Studer | A Logic of Blockchain Updates | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blockchains are distributed data structures that are used to achieve
consensus in systems for cryptocurrencies (like Bitcoin) or smart contracts
(like Ethereum). Although blockchains gained a lot of popularity recently,
there is no logic-based model for blockchains available. We introduce BCL, a
dynamic logic to reason about blockchain updates, and show that BCL is sound
and complete with respect to a simple blockchain model.
| [
{
"created": "Thu, 6 Jul 2017 13:03:04 GMT",
"version": "v1"
}
] | 2017-07-07 | [
[
"Brünnler",
"Kai",
""
],
[
"Flumini",
"Dandolo",
""
],
[
"Studer",
"Thomas",
""
]
] | Blockchains are distributed data structures that are used to achieve consensus in systems for cryptocurrencies (like Bitcoin) or smart contracts (like Ethereum). Although blockchains gained a lot of popularity recently, there is no logic-based model for blockchains available. We introduce BCL, a dynamic logic to reason about blockchain updates, and show that BCL is sound and complete with respect to a simple blockchain model. |
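A toy version of the kind of simple blockchain model such a logic can be interpreted over, with append updates and a longest-chain adoption rule; the encoding is invented here for illustration:

```python
# Agents hold local chains; updates append blocks, and a strictly longer
# incoming chain replaces the local one (longest-chain rule).
class Agent:
    def __init__(self):
        self.chain = []                 # list of block payloads

    def mine(self, data):
        self.chain.append(data)         # local append-update

    def receive(self, other_chain):
        if len(other_chain) > len(self.chain):
            self.chain = list(other_chain)

a, b = Agent(), Agent()
a.mine("tx1"); a.mine("tx2")
b.receive(a.chain)
assert b.chain == ["tx1", "tx2"]
```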
2202.01322 | Rahul Yedida | Rahul Yedida, Tim Menzies | How to Improve Deep Learning for Software Analytics (a case study with
code smell detection) | Accepted to MSR 2022 | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | To reduce technical debt and make code more maintainable, it is important to
be able to warn programmers about code smells. State-of-the-art code smell
detectors use deep learners, without much exploration of alternatives within
that technology.
One promising alternative for software analytics and deep learning is GHOST
(from TSE'21) that relies on a combination of hyper-parameter optimization of
feedforward neural networks and a novel oversampling technique to deal with
class imbalance.
The prior study from TSE'21 proposing this novel "fuzzy sampling" was
somewhat limited in that the method was tested on defect prediction, but
nothing else. Like defect prediction, code smell detection datasets have a
class imbalance (which motivated "fuzzy sampling"). Hence, in this work we test
if fuzzy sampling is useful for code smell detection.
The results of this paper show that we can achieve better than
state-of-the-art results on code smell detection with fuzzy oversampling. For
example, for "feature envy", we were able to achieve 99+\% AUC across all our
datasets, and on 8/10 datasets for "misplaced class". While our specific
results refer to code smell detection, they do suggest other lessons for other
kinds of analytics. For example: (a) try better preprocessing before trying
complex learners (b) include simpler learners as a baseline in software
analytics (c) try "fuzzy sampling" as one such baseline.
| [
{
"created": "Wed, 2 Feb 2022 23:07:16 GMT",
"version": "v1"
},
{
"created": "Sun, 27 Mar 2022 20:53:54 GMT",
"version": "v2"
}
] | 2022-03-29 | [
[
"Yedida",
"Rahul",
""
],
[
"Menzies",
"Tim",
""
]
] | To reduce technical debt and make code more maintainable, it is important to be able to warn programmers about code smells. State-of-the-art code smell detectors use deep learners, without much exploration of alternatives within that technology. One promising alternative for software analytics and deep learning is GHOST (from TSE'21) that relies on a combination of hyper-parameter optimization of feedforward neural networks and a novel oversampling technique to deal with class imbalance. The prior study from TSE'21 proposing this novel "fuzzy sampling" was somewhat limited in that the method was tested on defect prediction, but nothing else. Like defect prediction, code smell detection datasets have a class imbalance (which motivated "fuzzy sampling"). Hence, in this work we test if fuzzy sampling is useful for code smell detection. The results of this paper show that we can achieve better than state-of-the-art results on code smell detection with fuzzy oversampling. For example, for "feature envy", we were able to achieve 99+\% AUC across all our datasets, and on 8/10 datasets for "misplaced class". While our specific results refer to code smell detection, they do suggest other lessons for other kinds of analytics. For example: (a) try better preprocessing before trying complex learners (b) include simpler learners as a baseline in software analytics (c) try "fuzzy sampling" as one such baseline.
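A hedged sketch of fuzzy-sampling-style oversampling, for intuition: minority examples are replicated at several randomized distances from the originals. The offsets and counts are placeholder choices, not GHOST's exact schedule:

```python
# Replicate minority-class points at increasing random offsets, pushing the
# decision boundary away from the minority class.
import numpy as np

def fuzzy_oversample(X_min, steps=(0.05, 0.1, 0.2), copies=2,
                     rng=np.random.default_rng(0)):
    out = [X_min]
    for r in steps:
        for _ in range(copies):
            out.append(X_min + rng.normal(0.0, r, X_min.shape))
    return np.vstack(out)

X_min = np.random.default_rng(1).normal(size=(10, 4))   # minority class
print(fuzzy_oversample(X_min).shape)                    # (70, 4)
```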
2405.17780 | Edith Cohen | Sara Ahmadian and Edith Cohen | Unmasking Vulnerabilities: Cardinality Sketches under Adaptive Inputs | null | ICML 2024 | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | Cardinality sketches are popular data structures that enhance the efficiency
of working with large data sets. The sketches are randomized representations of
sets that are only of logarithmic size but can support set merges and
approximate cardinality (i.e., distinct count) queries. When queries are not
adaptive, that is, they do not depend on preceding query responses, the design
provides strong guarantees of correctly answering a number of queries
exponential in the sketch size $k$.
In this work, we investigate the performance of cardinality sketches in
adaptive settings and unveil inherent vulnerabilities. We design an attack
against the ``standard'' estimators that constructs an adversarial input by
post-processing responses to a set of simple non-adaptive queries of size
linear in the sketch size $k$. Empirically, our attack used only $4k$ queries
with the widely used HyperLogLog
(HLL++)~\citep{hyperloglog:2007,hyperloglogpractice:EDBT2013} sketch. The
simple attack technique suggests it can be effective with post-processed
natural workloads. Finally and importantly, we demonstrate that the
vulnerability is inherent as \emph{any} estimator applied to known sketch
structures can be attacked using a number of queries that is quadratic in $k$,
matching a generic upper bound.
| [
{
"created": "Tue, 28 May 2024 03:20:05 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Ahmadian",
"Sara",
""
],
[
"Cohen",
"Edith",
""
]
] | Cardinality sketches are popular data structures that enhance the efficiency of working with large data sets. The sketches are randomized representations of sets that are only of logarithmic size but can support set merges and approximate cardinality (i.e., distinct count) queries. When queries are not adaptive, that is, they do not depend on preceding query responses, the design provides strong guarantees of correctly answering a number of queries exponential in the sketch size $k$. In this work, we investigate the performance of cardinality sketches in adaptive settings and unveil inherent vulnerabilities. We design an attack against the ``standard'' estimators that constructs an adversarial input by post-processing responses to a set of simple non-adaptive queries of size linear in the sketch size $k$. Empirically, our attack used only $4k$ queries with the widely used HyperLogLog (HLL++)~\citep{hyperloglog:2007,hyperloglogpractice:EDBT2013} sketch. The simple attack technique suggests it can be effective with post-processed natural workloads. Finally and importantly, we demonstrate that the vulnerability is inherent as \emph{any} estimator applied to known sketch structures can be attacked using a number of queries that is quadratic in $k$, matching a generic upper bound. |
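To make the attacked structure concrete, a minimal HyperLogLog-style estimator is sketched below; the small- and large-range corrections of production HLL are omitted:

```python
# The sketch keeps only per-bucket maxima of a "first 1-bit position"
# statistic, which is exactly the state an adaptive adversary can probe.
import hashlib

def hll_estimate(items, b=4):
    m = 1 << b
    regs = [0] * m
    for it in items:
        h = int(hashlib.sha1(str(it).encode()).hexdigest(), 16)
        j = h & (m - 1)            # low b bits choose the bucket
        w = h >> b                 # remaining bits feed the rank statistic
        rho = 1                    # position of the least-significant 1-bit
        while w & 1 == 0 and rho < 160:
            w >>= 1
            rho += 1
        regs[j] = max(regs[j], rho)
    alpha = 0.673                  # bias-correction constant for m = 16
    return alpha * m * m / sum(2.0 ** -r for r in regs)

print(round(hll_estimate(range(1000))))  # rough distinct-count estimate
```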
2306.04566 | Bassine Fatima Zahra | Fatima Zahra Bassine, Terence Epule Epule, Ayoub Kechchour, Abdelghani
Chehbouni | Recent applications of machine learning, remote sensing, and iot
approaches in yield prediction: a critical review | 35 pages, 12 figures, 14 tables | null | null | null | cs.LG cs.AI cs.NI cs.SE | http://creativecommons.org/licenses/by/4.0/ | The integration of remote sensing and machine learning in agriculture is
transforming the industry by providing insights and predictions through data
analysis. This combination leads to improved yield prediction and water
management, resulting in increased efficiency, better yields, and more
sustainable agricultural practices. Achieving the United Nations' Sustainable
Development Goals, especially "zero hunger," requires the investigation of crop
yield and precipitation gaps, which can be accomplished through the usage of
artificial intelligence (AI), machine learning (ML), remote sensing (RS), and
the internet of things (IoT). By integrating these technologies, a robust
agricultural mobile or web application can be developed, providing farmers and
decision-makers with valuable information and tools for improving crop
management and increasing efficiency. Several studies have investigated these
new technologies and their potential for diverse tasks such as crop monitoring,
yield prediction, irrigation management, etc. Through a critical review, this
paper reviews relevant articles that have used RS, ML, cloud computing, and IoT
in crop yield prediction. It reviews the current state-of-the-art in this field
by critically evaluating different machine-learning approaches proposed in the
literature for crop yield prediction and water management. It provides insights
into how these methods can improve decision-making in agricultural production
systems. This work will serve as a compendium for those interested in yield
prediction in terms of primary literature but, most importantly, what
approaches can be used for real-time and robust prediction.
| [
{
"created": "Wed, 7 Jun 2023 16:13:16 GMT",
"version": "v1"
}
] | 2023-06-08 | [
[
"Bassine",
"Fatima Zahra",
""
],
[
"Epule",
"Terence Epule",
""
],
[
"Kechchour",
"Ayoub",
""
],
[
"Chehbouni",
"Abdelghani",
""
]
] | The integration of remote sensing and machine learning in agriculture is transforming the industry by providing insights and predictions through data analysis. This combination leads to improved yield prediction and water management, resulting in increased efficiency, better yields, and more sustainable agricultural practices. Achieving the United Nations' Sustainable Development Goals, especially "zero hunger," requires the investigation of crop yield and precipitation gaps, which can be accomplished through the usage of artificial intelligence (AI), machine learning (ML), remote sensing (RS), and the internet of things (IoT). By integrating these technologies, a robust agricultural mobile or web application can be developed, providing farmers and decision-makers with valuable information and tools for improving crop management and increasing efficiency. Several studies have investigated these new technologies and their potential for diverse tasks such as crop monitoring, yield prediction, irrigation management, etc. Through a critical review, this paper reviews relevant articles that have used RS, ML, cloud computing, and IoT in crop yield prediction. It reviews the current state-of-the-art in this field by critically evaluating different machine-learning approaches proposed in the literature for crop yield prediction and water management. It provides insights into how these methods can improve decision-making in agricultural production systems. This work will serve as a compendium for those interested in yield prediction in terms of primary literature but, most importantly, what approaches can be used for real-time and robust prediction.
2308.15097 | Lucien Tisserand | Lucien Tisserand (ICAR), Fr\'ed\'eric Armetta (SyCoSMA, LIRIS), Heike
Baldauf-Quilliatre (ICAR), Antoine Bouquin (SyCoSMA, LIRIS), Salima Hassas
(SyCoSMA, LIRIS), Mathieu Lefort (LIRIS, SyCoSMA) | Sequential annotations for naturally-occurring HRI: first insights | Peer-reviewed workshop paper accepted for the ''Human-Robot
Conversational Interaction'' workshop that took place at the ''ACM/IEEE
International Conference on Human-Robot Interaction'' 2023 Conference in
Stockholm, Sweden | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explain the methodology we developed for improving the interactions
accomplished by an embedded conversational agent, drawing from Conversation
Analytic sequential and multimodal analysis. The use case is a Pepper robot
that is expected to inform and orient users in a library. In order to propose
and learn better interactive schemas, we are creating a corpus of
naturally-occurring interactions that will be made available to the community.
To do so, we propose an annotation practice based on some theoretical
underpinnings about the use of language and multimodal resources in human-robot
interaction. CCS CONCEPTS $\bullet$ Computing methodologies $\rightarrow$
Discourse, dialogue and pragmatics; $\bullet$ Human-centered computing
$\rightarrow$ Text input; HCI theory, concepts and models; Field studies.
| [
{
"created": "Tue, 29 Aug 2023 08:07:26 GMT",
"version": "v1"
}
] | 2023-08-30 | [
[
"Tisserand",
"Lucien",
"",
"ICAR"
],
[
"Armetta",
"Frédéric",
"",
"SyCoSMA, LIRIS"
],
[
"Baldauf-Quilliatre",
"Heike",
"",
"ICAR"
],
[
"Bouquin",
"Antoine",
"",
"SyCoSMA, LIRIS"
],
[
"Hassas",
"Salima",
"",
"SyCoSMA, LIRIS"
],
[
"Lefort",
"Mathieu",
"",
"LIRIS, SyCoSMA"
]
] | We explain the methodology we developed for improving the interactions accomplished by an embedded conversational agent, drawing from Conversation Analytic sequential and multimodal analysis. The use case is a Pepper robot that is expected to inform and orient users in a library. In order to propose and learn better interactive schemas, we are creating a corpus of naturally-occurring interactions that will be made available to the community. To do so, we propose an annotation practice based on some theoretical underpinnings about the use of language and multimodal resources in human-robot interaction. CCS CONCEPTS $\bullet$ Computing methodologies $\rightarrow$ Discourse, dialogue and pragmatics; $\bullet$ Human-centered computing $\rightarrow$ Text input; HCI theory, concepts and models; Field studies.
2209.01527 | Suraj Mishra | Suraj Mishra, Yizhe Zhang, Li Zhang, Tianyu Zhang, X. Sharon Hu, Danny
Z. Chen | Data-Driven Deep Supervision for Skin Lesion Classification | MICCAI 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automatic classification of pigmented, non-pigmented, and depigmented
non-melanocytic skin lesions has garnered lots of attention in recent years.
However, imaging variations in skin texture, lesion shape, depigmentation
contrast, lighting condition, etc. hinder robust feature extraction, affecting
classification accuracy. In this paper, we propose a new deep neural network
that exploits input data for robust feature extraction. Specifically, we
analyze the convolutional network's behavior (field-of-view) to find the
location of deep supervision for improved feature extraction. To achieve this,
first, we perform activation mapping to generate an object mask, highlighting
the input regions most critical for classification output generation. Then the
network layer whose layer-wise effective receptive field matches the
approximated object shape in the object mask is selected as our focus for deep
supervision. Utilizing different types of convolutional feature extractors and
classifiers on three melanoma detection datasets and two vitiligo detection
datasets, we verify the effectiveness of our new method.
| [
{
"created": "Sun, 4 Sep 2022 03:57:08 GMT",
"version": "v1"
}
] | 2022-09-07 | [
[
"Mishra",
"Suraj",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Zhang",
"Li",
""
],
[
"Zhang",
"Tianyu",
""
],
[
"Hu",
"X. Sharon",
""
],
[
"Chen",
"Danny Z.",
""
]
] | Automatic classification of pigmented, non-pigmented, and depigmented non-melanocytic skin lesions has garnered lots of attention in recent years. However, imaging variations in skin texture, lesion shape, depigmentation contrast, lighting condition, etc. hinder robust feature extraction, affecting classification accuracy. In this paper, we propose a new deep neural network that exploits input data for robust feature extraction. Specifically, we analyze the convolutional network's behavior (field-of-view) to find the location of deep supervision for improved feature extraction. To achieve this, first, we perform activation mapping to generate an object mask, highlighting the input regions most critical for classification output generation. Then the network layer whose layer-wise effective receptive field matches the approximated object shape in the object mask is selected as our focus for deep supervision. Utilizing different types of convolutional feature extractors and classifiers on three melanoma detection datasets and two vitiligo detection datasets, we verify the effectiveness of our new method.
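A sketch of the layer-selection heuristic implied above: compute each layer's theoretical receptive field from kernel sizes and strides, then pick the layer whose receptive field best matches the object size estimated from the activation-derived mask. The architecture spec and object size are placeholders:

```python
# Theoretical receptive field per layer: r_l = r_{l-1} + (k-1) * jump,
# with the jump (input stride) multiplying by each layer's stride.
def receptive_fields(layers):
    rf, jump, out = 1, 1, []
    for k, s in layers:                 # (kernel, stride) per conv/pool layer
        rf = rf + (k - 1) * jump
        jump *= s
        out.append(rf)
    return out

layers = [(3, 1), (3, 2), (3, 1), (3, 2), (3, 1)]   # toy VGG-like stack
rfs = receptive_fields(layers)
object_size = 24                                    # pixels, from the mask
best = min(range(len(rfs)), key=lambda i: abs(rfs[i] - object_size))
print(rfs, "-> supervise layer", best)              # [3, 5, 9, 13, 21] -> 4
```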
2209.09124 | Payam Nikdel | Payam Nikdel, Mohammad Mahdavian, Mo Chen | DMMGAN: Diverse Multi Motion Prediction of 3D Human Joints using
Attention-Based Generative Adverserial Network | null | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human motion prediction is a fundamental part of many human-robot
applications. Despite the recent progress in human motion prediction, most
studies simplify the problem by predicting the human motion relative to a fixed
joint and/or only limit their model to predict one possible future motion.
However, due to the complex nature of human motion, a single output cannot reflect
all the possible actions one can take. Also, for any robotics application, we
need the full human motion, including the user trajectory, not a 3D pose relative
to the hip joint.
In this paper, we try to address these two issues by proposing a
transformer-based generative model for forecasting multiple diverse human
motions. Our model generates \textit{N} possible future motions by querying a
history of human motion. Our model first predicts the pose of the body relative
to the hip joint. Then the \textit{Hip Prediction Module} predicts the
trajectory of the hip movement for each predicted pose frame. To emphasize
diverse future motions, we introduce a similarity loss that penalizes the
pairwise sample distance. We show that our system outperforms the
state-of-the-art in human motion prediction while it can predict diverse
multi-motion future trajectories with hip movements.
| [
{
"created": "Tue, 13 Sep 2022 23:22:33 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Oct 2022 23:19:32 GMT",
"version": "v2"
}
] | 2022-10-04 | [
[
"Nikdel",
"Payam",
""
],
[
"Mahdavian",
"Mohammad",
""
],
[
"Chen",
"Mo",
""
]
] | Human motion prediction is a fundamental part of many human-robot applications. Despite the recent progress in human motion prediction, most studies simplify the problem by predicting the human motion relative to a fixed joint and/or only limit their model to predict one possible future motion. However, due to the complex nature of human motion, a single output cannot reflect all the possible actions one can take. Also, for any robotics application, we need the full human motion, including the user trajectory, not a 3D pose relative to the hip joint. In this paper, we try to address these two issues by proposing a transformer-based generative model for forecasting multiple diverse human motions. Our model generates \textit{N} possible future motions by querying a history of human motion. Our model first predicts the pose of the body relative to the hip joint. Then the \textit{Hip Prediction Module} predicts the trajectory of the hip movement for each predicted pose frame. To emphasize diverse future motions, we introduce a similarity loss that penalizes the pairwise sample distance. We show that our system outperforms the state-of-the-art in human motion prediction while it can predict diverse multi-motion future trajectories with hip movements.
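One plausible form of a pairwise similarity loss of the kind described (the paper's exact formulation may differ): penalize generated motion sets whose mean pairwise distance is small:

```python
# Diversity-promoting loss: decays with the mean pairwise L2 distance
# between the N generated motions, so collapsed samples are penalized.
import torch

def diversity_loss(samples):
    # samples: (N, T, J) -- N motions, T frames, J joint coordinates.
    flat = samples.flatten(1)
    d = torch.cdist(flat, flat, p=2)        # (N, N) pairwise distances
    n = samples.shape[0]
    mean_d = d.sum() / (n * (n - 1))        # off-diagonal mean
    return torch.exp(-mean_d)               # small when samples differ

print(diversity_loss(torch.randn(5, 25, 48)))
```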
2402.04889 | Sebastian Schmidt | Sebastian Schmidt, Ines Zelch, Janek Bevendorff, Benno Stein, Matthias
Hagen, Martin Potthast | Detecting Generated Native Ads in Conversational Search | WWW'24 Short Papers Track; 4 pages | null | 10.1145/3589335.3651489 | null | cs.IR cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Conversational search engines such as YouChat and Microsoft Copilot use large
language models (LLMs) to generate responses to queries. It is only a small
step to also let the same technology insert ads within the generated responses
- instead of separately placing ads next to a response. Inserted ads would be
reminiscent of native advertising and product placement, both of which are very
effective forms of subtle and manipulative advertising. Considering the high
computational costs associated with LLMs, for which providers need to develop
sustainable business models, users of conversational search engines may very
well be confronted with generated native ads in the near future. In this paper,
we thus take a first step to investigate whether LLMs can also be used as a
countermeasure, i.e., to block generated native ads. We compile the Webis
Generated Native Ads 2024 dataset of queries and generated responses with
automatically inserted ads, and evaluate whether LLMs or fine-tuned sentence
transformers can detect the ads. In our experiments, the investigated LLMs
struggle with the task but sentence transformers achieve precision and recall
values above 0.9.
| [
{
"created": "Wed, 7 Feb 2024 14:22:51 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2024 09:15:42 GMT",
"version": "v2"
}
] | 2024-05-01 | [
[
"Schmidt",
"Sebastian",
""
],
[
"Zelch",
"Ines",
""
],
[
"Bevendorff",
"Janek",
""
],
[
"Stein",
"Benno",
""
],
[
"Hagen",
"Matthias",
""
],
[
"Potthast",
"Martin",
""
]
] | Conversational search engines such as YouChat and Microsoft Copilot use large language models (LLMs) to generate responses to queries. It is only a small step to also let the same technology insert ads within the generated responses - instead of separately placing ads next to a response. Inserted ads would be reminiscent of native advertising and product placement, both of which are very effective forms of subtle and manipulative advertising. Considering the high computational costs associated with LLMs, for which providers need to develop sustainable business models, users of conversational search engines may very well be confronted with generated native ads in the near future. In this paper, we thus take a first step to investigate whether LLMs can also be used as a countermeasure, i.e., to block generated native ads. We compile the Webis Generated Native Ads 2024 dataset of queries and generated responses with automatically inserted ads, and evaluate whether LLMs or fine-tuned sentence transformers can detect the ads. In our experiments, the investigated LLMs struggle with the task but sentence transformers achieve precision and recall values above 0.9. |
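A minimal sketch of the sentence-transformer baseline: embed responses and fit a linear ad/no-ad classifier. The checkpoint name is a common public model, not necessarily the one fine-tuned in the paper, and the two-example training set is a toy:

```python
# Embed responses with a sentence transformer, then train a linear
# classifier on ad vs. no-ad labels.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = ["Try BrandX today for the best coffee!",
         "Espresso uses roughly 9 bars of pressure."]
labels = [1, 0]                               # 1 = contains a native ad

enc = SentenceTransformer("all-MiniLM-L6-v2") # assumed public checkpoint
X = enc.encode(texts)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(enc.encode(["BrandY makes searching fun and easy!"])))
```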
1607.02937 | Gabriel Gon\c{c}alves | Gabriel Resende Gon\c{c}alves, Sirlene Pio Gomes da Silva, David
Menotti, William Robson Schwartz | Benchmark for License Plate Character Segmentation | 32 pages, single column | J. Electron. Imaging. 25(5), 053034 (Oct 24, 2016) | 10.1117/1.JEI.25.5.053034 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic License Plate Recognition (ALPR) has been the focus of much
research in the past years. In general, ALPR is divided into the following
problems: detection of on-track vehicles, license plate detection, segmentation
of license plate characters and optical character recognition (OCR). Even
though commercial solutions are available for controlled acquisition
conditions, e.g., the entrance of a parking lot, ALPR is still an open problem
when dealing with data acquired from uncontrolled environments, such as roads
and highways when relying only on imaging sensors. Due to the multiple
orientations and scales of the license plates captured by the camera, a very
challenging task of the ALPR is the License Plate Character Segmentation (LPCS)
step, whose effectiveness is required to be (near) optimal to achieve a high
recognition rate by the OCR. To tackle the LPCS problem, this work proposes a
novel benchmark composed of a dataset designed to focus specifically on the
character segmentation step of the ALPR within an evaluation protocol.
Furthermore, we propose the Jaccard-Centroid coefficient, a new evaluation
measure more suitable than the Jaccard coefficient regarding the location of
the bounding box within the ground-truth annotation. The dataset is composed of
2,000 Brazilian license plates consisting of 14,000 alphanumeric symbols and
their corresponding bounding box annotations. We also present a new
straightforward approach to perform LPCS efficiently. Finally, we provide an
experimental evaluation for the dataset based on four LPCS approaches and
demonstrate the importance of character segmentation for achieving an accurate
OCR.
| [
{
"created": "Mon, 11 Jul 2016 13:32:19 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Oct 2016 16:11:21 GMT",
"version": "v2"
}
] | 2016-11-01 | [
[
"Gonçalves",
"Gabriel Resende",
""
],
[
"da Silva",
"Sirlene Pio Gomes",
""
],
[
"Menotti",
"David",
""
],
[
"Schwartz",
"William Robson",
""
]
] | Automatic License Plate Recognition (ALPR) has been the focus of much research in the past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of the ALPR is the License Plate Character Segmentation (LPCS) step, whose effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a novel benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-Centroid coefficient, a new evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a new straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on four LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
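For reference, the plain Jaccard coefficient (IoU) between two boxes and a centroid-distance term are computed below; the paper's Jaccard-Centroid coefficient combines overlap with bounding-box location, and its exact formula is not reproduced here, so this pairing is only illustrative:

```python
# Standard Jaccard (IoU) and centroid distance for axis-aligned boxes
# given as (x1, y1, x2, y2).
def jaccard(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter)
    return inter / union if union else 0.0

def centroid_dist(a, b):
    ca = ((a[0]+a[2])/2, (a[1]+a[3])/2)
    cb = ((b[0]+b[2])/2, (b[1]+b[3])/2)
    return ((ca[0]-cb[0])**2 + (ca[1]-cb[1])**2) ** 0.5

print(jaccard((0,0,4,4), (2,0,6,4)), centroid_dist((0,0,4,4), (2,0,6,4)))
```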
1201.5360 | Serdar Y\"uksel | Serdar Y\"uksel | Characterization of Information Channels for Asymptotic Mean
Stationarity and Stochastic Stability of Non-stationary/Unstable Linear
Systems | To appear in IEEE Transactions on Information Theory | null | null | null | cs.IT cs.SY math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stabilization of non-stationary linear systems over noisy communication
channels is considered. Stochastically stable sources, and unstable but
noise-free or bounded-noise systems have been extensively studied in
information theory and control theory literature since 1970s, with a renewed
interest in the past decade. There have also been studies on non-causal and
causal coding of unstable/non-stationary linear Gaussian sources. In this
paper, tight necessary and sufficient conditions for stochastic stabilizability
of unstable (non-stationary) possibly multi-dimensional linear systems driven
by Gaussian noise over discrete channels (possibly with memory and feedback)
are presented. Stochastic stability notions include recurrence, asymptotic mean
stationarity and sample path ergodicity, and the existence of finite second
moments. Our constructive proof uses random-time state-dependent stochastic
drift criteria for stabilization of Markov chains. For asymptotic mean
stationarity (and thus sample path ergodicity), it is sufficient that the
capacity of a channel is (strictly) greater than the sum of the logarithms of
the unstable pole magnitudes for memoryless channels and a class of channels
with memory. This condition is also necessary under a mild technical condition.
Sufficient conditions for the existence of finite average second moments for
such systems driven by unbounded noise are provided.
| [
{
"created": "Wed, 25 Jan 2012 20:25:26 GMT",
"version": "v1"
},
{
"created": "Tue, 1 May 2012 22:54:40 GMT",
"version": "v2"
},
{
"created": "Fri, 4 May 2012 17:08:20 GMT",
"version": "v3"
}
] | 2012-05-07 | [
[
"Yüksel",
"Serdar",
""
]
] | Stabilization of non-stationary linear systems over noisy communication channels is considered. Stochastically stable sources, and unstable but noise-free or bounded-noise systems have been extensively studied in the information theory and control theory literature since the 1970s, with a renewed interest in the past decade. There have also been studies on non-causal and causal coding of unstable/non-stationary linear Gaussian sources. In this paper, tight necessary and sufficient conditions for stochastic stabilizability of unstable (non-stationary) possibly multi-dimensional linear systems driven by Gaussian noise over discrete channels (possibly with memory and feedback) are presented. Stochastic stability notions include recurrence, asymptotic mean stationarity and sample path ergodicity, and the existence of finite second moments. Our constructive proof uses random-time state-dependent stochastic drift criteria for stabilization of Markov chains. For asymptotic mean stationarity (and thus sample path ergodicity), it is sufficient that the capacity of a channel is (strictly) greater than the sum of the logarithms of the unstable pole magnitudes for memoryless channels and a class of channels with memory. This condition is also necessary under a mild technical condition. Sufficient conditions for the existence of finite average second moments for such systems driven by unbounded noise are provided.
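Written out, the capacity threshold stated above takes the familiar data-rate-theorem form (notation assumed here, not copied from the paper):

```latex
% With \lambda_1,\dots,\lambda_k the open-loop eigenvalues of the system
% matrix, stabilizability requires the channel capacity C to exceed the
% sum of the logarithms of the unstable eigenvalue magnitudes:
\[
  C \;>\; \sum_{i:\,|\lambda_i| \ge 1} \log_2 |\lambda_i| .
\]
% The paper shows this is sufficient for asymptotic mean stationarity and,
% under a mild technical condition, also necessary.
```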
1704.02281 | Subhojyoti Mukherjee | Subhojyoti Mukherjee, K. P. Naveen, Nandan Sudarsanam, Balaraman
Ravindran | Thresholding Bandits with Augmented UCB | 7 pages, Accepted at Proceedings of the 26th International Joint
Conference on Artificial Intelligence, 2017, 2515-2521 | Proceedings of the 26th International Joint Conference on
Artificial Intelligence, 2017, 2515-2521 | 10.24963/ijcai.2017/350 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose the Augmented-UCB (AugUCB) algorithm for a
fixed-budget version of the thresholding bandit problem (TBP), where the
objective is to identify a set of arms whose quality is above a threshold. A
key feature of AugUCB is that it uses both mean and variance estimates to
eliminate arms that have been sufficiently explored; to the best of our
knowledge this is the first algorithm to employ such an approach for the
considered TBP. Theoretically, we obtain an upper bound on the loss
(probability of mis-classification) incurred by AugUCB. Although UCBEV in
literature provides a better guarantee, it is important to emphasize that UCBEV
has access to problem complexity (whose computation requires arms' mean and
variances), and hence is not realistic in practice; this is in contrast to
AugUCB whose implementation does not require any such complexity inputs. We
conduct extensive simulation experiments to validate the performance of AugUCB.
Through our simulation work, we establish that AugUCB, owing to its utilization
of variance estimates, performs significantly better than the state-of-the-art
APT, CSAR and other non variance-based algorithms.
| [
{
"created": "Fri, 7 Apr 2017 16:31:13 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2017 12:19:48 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Jun 2019 21:43:30 GMT",
"version": "v3"
}
] | 2019-06-11 | [
[
"Mukherjee",
"Subhojyoti",
""
],
[
"Naveen",
"K. P.",
""
],
[
"Sudarsanam",
"Nandan",
""
],
[
"Ravindran",
"Balaraman",
""
]
] | In this paper we propose the Augmented-UCB (AugUCB) algorithm for a fixed-budget version of the thresholding bandit problem (TBP), where the objective is to identify a set of arms whose quality is above a threshold. A key feature of AugUCB is that it uses both mean and variance estimates to eliminate arms that have been sufficiently explored; to the best of our knowledge, this is the first algorithm to employ such an approach for the considered TBP. Theoretically, we obtain an upper bound on the loss (probability of mis-classification) incurred by AugUCB. Although UCBEV in the literature provides a better guarantee, it is important to emphasize that UCBEV has access to the problem complexity (whose computation requires arms' means and variances), and hence is not realistic in practice; this is in contrast to AugUCB whose implementation does not require any such complexity inputs. We conduct extensive simulation experiments to validate the performance of AugUCB. Through our simulation work, we establish that AugUCB, owing to its utilization of variance estimates, performs significantly better than the state-of-the-art APT, CSAR and other non variance-based algorithms.
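A sketch of an AugUCB-style, variance-aware elimination step for the thresholding problem; the confidence radius below is Bernstein-flavored and the constants are illustrative, not the paper's exact schedule:

```python
# Keep only arms whose variance-aware confidence interval still straddles
# the threshold tau; unambiguous arms are classified and removed.
import numpy as np

def elimination_round(pulls, rewards, sq_rewards, tau, budget, active):
    keep = []
    for a in active:
        n = pulls[a]
        mean = rewards[a] / n
        var = max(sq_rewards[a] / n - mean**2, 0.0)
        # Variance-based radius, shrinking as the arm is pulled more.
        rad = np.sqrt(2 * var * np.log(budget) / n) + 3 * np.log(budget) / n
        if abs(mean - tau) <= rad:       # still ambiguous w.r.t. tau
            keep.append(a)
    return keep

# Toy usage with Bernoulli rewards (x^2 = x, so sq_rewards = rewards).
pulls = {0: 50, 1: 50}; rewards = {0: 30.0, 1: 25.5}
print(elimination_round(pulls, rewards, dict(rewards), tau=0.5,
                        budget=100, active=[0, 1]))
```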