id stringlengths 9 10 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 20.7k | title stringlengths 4 246 | comments stringlengths 1 523 ⌀ | journal-ref stringlengths 4 404 ⌀ | doi stringlengths 11 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 98 | license stringclasses 9 values | orig_abstract stringlengths 14 3.35k | versions listlengths 1 60 | update_date stringlengths 10 10 | authors_parsed listlengths 1 1.35k | abstract stringlengths 11 3.34k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1903.10092 | Arya D. McCarthy | Arya D. McCarthy, Tongfei Chen, Seth Ebner | An Exact No Free Lunch Theorem for Community Detection | null | Complex Networks and Their Applications VIII. COMPLEX NETWORKS
2019. Studies in Computational Intelligence, vol 881 | 10.1007/978-3-030-36687-2_15 | null | cs.SI cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A precondition for a No Free Lunch theorem is evaluation with a loss function
which does not assume a priori superiority of some outputs over others. A
previous result for community detection by Peel et al. (2017) relies on a
mismatch between the loss function and the problem domain. The loss function
computes an expectation over only a subset of the universe of possible outputs;
thus, it is only asymptotically appropriate with respect to the problem size.
By using the correct random model for the problem domain, we provide a
stronger, exact No Free Lunch theorem for community detection. The claim
generalizes to other set-partitioning tasks including core/periphery
separation, $k$-clustering, and graph partitioning. Finally, we review the
literature of proposed evaluation functions and identify functions which
(perhaps with slight modifications) are compatible with an exact No Free Lunch
theorem.
| [
{
"created": "Mon, 25 Mar 2019 01:03:28 GMT",
"version": "v1"
}
] | 2020-05-22 | [
[
"McCarthy",
"Arya D.",
""
],
[
"Chen",
"Tongfei",
""
],
[
"Ebner",
"Seth",
""
]
] | A precondition for a No Free Lunch theorem is evaluation with a loss function which does not assume a priori superiority of some outputs over others. A previous result for community detection by Peel et al. (2017) relies on a mismatch between the loss function and the problem domain. The loss function computes an expectation over only a subset of the universe of possible outputs; thus, it is only asymptotically appropriate with respect to the problem size. By using the correct random model for the problem domain, we provide a stronger, exact No Free Lunch theorem for community detection. The claim generalizes to other set-partitioning tasks including core/periphery separation, $k$-clustering, and graph partitioning. Finally, we review the literature of proposed evaluation functions and identify functions which (perhaps with slight modifications) are compatible with an exact No Free Lunch theorem. |
2407.21220 | Camilo Andr\'es Mart\'inez Mej\'ia | C. A. Mart\'inez-Mej\'ia, J. Solano, J. Breier, D. Bucko, X. Hou | DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers | null | null | null | null | cs.LG cs.CR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Machine Learning using neural networks has received prominent attention
recently because of its success in solving a wide variety of computational
tasks, in particular in the field of computer vision. However, several works
have drawn attention to potential security risks involved with the training and
implementation of such networks. In this work, we introduce DeepBaR, a novel
approach that implants backdoors on neural networks by faulting their behavior
at training, especially during fine-tuning. Our technique aims to generate
adversarial samples by optimizing a custom loss function that mimics the
implanted backdoors while adding an almost non-visible trigger in the image. We
attack three popular convolutional neural network architectures and show that
DeepBaR attacks have a success rate of up to 98.30\%. Furthermore, DeepBaR does
not significantly affect the accuracy of the attacked networks after deployment
when non-malicious inputs are given. Remarkably, DeepBaR allows attackers to
choose an input that looks similar to a given class, from a human perspective,
but that will be classified as belonging to an arbitrary target class.
| [
{
"created": "Tue, 30 Jul 2024 22:14:47 GMT",
"version": "v1"
}
] | 2024-08-01 | [
[
"Martínez-Mejía",
"C. A.",
""
],
[
"Solano",
"J.",
""
],
[
"Breier",
"J.",
""
],
[
"Bucko",
"D.",
""
],
[
"Hou",
"X.",
""
]
] | Machine Learning using neural networks has received prominent attention recently because of its success in solving a wide variety of computational tasks, in particular in the field of computer vision. However, several works have drawn attention to potential security risks involved with the training and implementation of such networks. In this work, we introduce DeepBaR, a novel approach that implants backdoors on neural networks by faulting their behavior at training, especially during fine-tuning. Our technique aims to generate adversarial samples by optimizing a custom loss function that mimics the implanted backdoors while adding an almost non-visible trigger in the image. We attack three popular convolutional neural network architectures and show that DeepBaR attacks have a success rate of up to 98.30\%. Furthermore, DeepBaR does not significantly affect the accuracy of the attacked networks after deployment when non-malicious inputs are given. Remarkably, DeepBaR allows attackers to choose an input that looks similar to a given class, from a human perspective, but that will be classified as belonging to an arbitrary target class. |
2311.03608 | Burkhard Schipper | Gaia Belardinelli, Burkhard C. Schipper | Implicit Knowledge in Unawareness Structures | 46 pages. arXiv admin note: substantial text overlap with
arXiv:2307.05041 author note: This is the full version of the
arXiv:2307.05041 | null | null | null | cs.LO cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Awareness structures by Fagin and Halpern (1988) (FH) feature a syntactic
awareness correspondence and accessibility relations modeling implicit
knowledge. They are a flexible model of unawareness, and best interpreted from
an outside modeler's perspective. Unawareness structures by Heifetz, Meier, and
Schipper (2006, 2008) (HMS) model awareness by a lattice of state spaces and
explicit knowledge via possibility correspondences. Sublattices thereof can be
interpreted as subjective views of agents. Open questions include (1) how
implicit knowledge can be defined in HMS structures, and (2) in which way FH
structures can be extended to model the agents' subjective views. In this
paper, we address (1) by defining implicit knowledge such that it is consistent
with explicit knowledge in HMS models. We also introduce a variant of HMS
models that, instead of explicit knowledge, takes implicit knowledge and
awareness as primitives. Further, we address (2) by introducing a category of
FH models that are modally equivalent relative to sublanguages and can be
interpreted as agents' subjective views depending on their awareness. These
constructions allow us to show an equivalence between HMS and FH models. As a
corollary, we obtain soundness and completeness of HMS models with respect to
the Logic of Propositional Awareness, based on a language featuring both
implicit and explicit knowledge.
| [
{
"created": "Mon, 6 Nov 2023 23:26:21 GMT",
"version": "v1"
},
{
"created": "Fri, 17 May 2024 17:57:22 GMT",
"version": "v2"
}
] | 2024-05-21 | [
[
"Belardinelli",
"Gaia",
""
],
[
"Schipper",
"Burkhard C.",
""
]
] | Awareness structures by Fagin and Halpern (1988) (FH) feature a syntactic awareness correspondence and accessibility relations modeling implicit knowledge. They are a flexible model of unawareness, and best interpreted from an outside modeler's perspective. Unawareness structures by Heifetz, Meier, and Schipper (2006, 2008) (HMS) model awareness by a lattice of state spaces and explicit knowledge via possibility correspondences. Sublattices thereof can be interpreted as subjective views of agents. Open questions include (1) how implicit knowledge can be defined in HMS structures, and (2) in which way FH structures can be extended to model the agents' subjective views. In this paper, we address (1) by defining implicit knowledge such that it is consistent with explicit knowledge in HMS models. We also introduce a variant of HMS models that, instead of explicit knowledge, takes implicit knowledge and awareness as primitives. Further, we address (2) by introducing a category of FH models that are modally equivalent relative to sublanguages and can be interpreted as agents' subjective views depending on their awareness. These constructions allow us to show an equivalence between HMS and FH models. As a corollary, we obtain soundness and completeness of HMS models with respect to the Logic of Propositional Awareness, based on a language featuring both implicit and explicit knowledge. |
1910.06079 | Tomek Korbak | Tomasz Korbak and Julian Zubek and {\L}ukasz Kuci\'nski and Piotr
Mi{\l}o\'s and Joanna R\k{a}czaszek-Leonardi | Developmentally motivated emergence of compositional communication via
template transfer | Accepted for NeurIPS 2019 workshop Emergent Communication: Towards
Natural Language | null | null | null | cs.LG cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores a novel approach to achieving emergent compositional
communication in multi-agent systems. We propose a training regime implementing
template transfer, the idea of carrying over learned biases across contexts. In
our method, a sender-receiver pair is first trained with disentangled loss
functions and then the receiver is transferred to train a new sender with a
standard loss. Unlike other methods (e.g. the obverter algorithm), our approach
does not require imposing inductive biases on the architecture of the agents.
We experimentally show the emergence of compositional communication using
topographical similarity, zero-shot generalization and context independence as
evaluation metrics. The presented approach is connected to an important line of
work in semiotics and developmental psycholinguistics: it supports a conjecture
that compositional communication is scaffolded on simpler communication
protocols.
| [
{
"created": "Fri, 4 Oct 2019 16:04:53 GMT",
"version": "v1"
}
] | 2019-10-15 | [
[
"Korbak",
"Tomasz",
""
],
[
"Zubek",
"Julian",
""
],
[
"Kuciński",
"Łukasz",
""
],
[
"Miłoś",
"Piotr",
""
],
[
"Rączaszek-Leonardi",
"Joanna",
""
]
] | This paper explores a novel approach to achieving emergent compositional communication in multi-agent systems. We propose a training regime implementing template transfer, the idea of carrying over learned biases across contexts. In our method, a sender-receiver pair is first trained with disentangled loss functions and then the receiver is transferred to train a new sender with a standard loss. Unlike other methods (e.g. the obverter algorithm), our approach does not require imposing inductive biases on the architecture of the agents. We experimentally show the emergence of compositional communication using topographical similarity, zero-shot generalization and context independence as evaluation metrics. The presented approach is connected to an important line of work in semiotics and developmental psycholinguistics: it supports a conjecture that compositional communication is scaffolded on simpler communication protocols. |
1903.00153 | J\'er\'emy Dubut | Juraj Kol\v{c}\'ak, Ichiro Hasuo, J\'er\'emy Dubut, Shin-ya Katsumata,
David Sprunger and Akihisa Yamada | Relational Differential Dynamic Logic | null | null | null | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | In the field of quality assurance of hybrid systems (that combine continuous
physical dynamics and discrete digital control), Platzer's differential dynamic
logic (dL) is widely recognized as a deductive verification method with solid
mathematical foundations and sophisticated tool support. Motivated by
benchmarks provided by our industry partner, we study a relational extension of
dL, aiming to formally prove statements such as "an earlier deployment of the
emergency brake decreases the collision speed." A main technical challenge here
is to relate two states of two dynamics at different time points. Our main
contribution is a theory of suitable simulations (a relational extension of
differential invariants that are central proof methods in dL), and a derived
technique of time stretching. The latter features particularly high
applicability, since the user does not have to synthesize a simulation out of
the air. We derive new inference rules for dL from these notions, and
demonstrate their use over a couple of automotive case studies.
| [
{
"created": "Fri, 1 Mar 2019 04:42:35 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Mar 2020 07:01:31 GMT",
"version": "v2"
}
] | 2020-03-13 | [
[
"Kolčák",
"Juraj",
""
],
[
"Hasuo",
"Ichiro",
""
],
[
"Dubut",
"Jérémy",
""
],
[
"Katsumata",
"Shin-ya",
""
],
[
"Sprunger",
"David",
""
],
[
"Yamada",
"Akihisa",
""
]
] | In the field of quality assurance of hybrid systems (that combine continuous physical dynamics and discrete digital control), Platzer's differential dynamic logic (dL) is widely recognized as a deductive verification method with solid mathematical foundations and sophisticated tool support. Motivated by benchmarks provided by our industry partner, we study a relational extension of dL, aiming to formally prove statements such as "an earlier deployment of the emergency brake decreases the collision speed." A main technical challenge here is to relate two states of two dynamics at different time points. Our main contribution is a theory of suitable simulations (a relational extension of differential invariants that are central proof methods in dL), and a derived technique of time stretching. The latter features particularly high applicability, since the user does not have to synthesize a simulation out of the air. We derive new inference rules for dL from these notions, and demonstrate their use over a couple of automotive case studies. |
2210.00715 | Chahat Singh | Chahat Deep Singh, Riya Kumari, Cornelia Ferm\"uller, Nitin J. Sanket,
Yiannis Aloimonos | WorldGen: A Large Scale Generative Simulator | null | Under review in ICRA 2023 | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | In the era of deep learning, data is the critical determining factor in the
performance of neural network models. Generating large datasets suffers from
various difficulties such as scalability, cost efficiency and photorealism. To
avoid expensive and strenuous dataset collection and annotations, researchers
have turned towards computer-generated datasets. However, a lack of
photorealism and a limited amount of computer-aided data has bounded the
accuracy of network predictions.
To this end, we present WorldGen -- an open source framework to autonomously
generate countless structured and unstructured 3D photorealistic scenes such as
city view, object collection, and object fragmentation along with its rich
ground truth annotation data. WorldGen being a generative model gives the user
full access and control to features such as texture, object structure, motion,
camera and lens properties for better generalizability by diminishing the data
bias in the network. We demonstrate the effectiveness of WorldGen by presenting
an evaluation on deep optical flow. We hope such a tool can open doors for
future research in a myriad of domains related to robotics and computer vision
by reducing manual labor and the cost of acquiring rich and high-quality data.
| [
{
"created": "Mon, 3 Oct 2022 05:07:42 GMT",
"version": "v1"
}
] | 2022-10-04 | [
[
"Singh",
"Chahat Deep",
""
],
[
"Kumari",
"Riya",
""
],
[
"Fermüller",
"Cornelia",
""
],
[
"Sanket",
"Nitin J.",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] | In the era of deep learning, data is the critical determining factor in the performance of neural network models. Generating large datasets suffers from various difficulties such as scalability, cost efficiency and photorealism. To avoid expensive and strenuous dataset collection and annotations, researchers have turned towards computer-generated datasets. However, a lack of photorealism and a limited amount of computer-aided data has bounded the accuracy of network predictions. To this end, we present WorldGen -- an open source framework to autonomously generate countless structured and unstructured 3D photorealistic scenes such as city view, object collection, and object fragmentation along with its rich ground truth annotation data. WorldGen being a generative model gives the user full access and control to features such as texture, object structure, motion, camera and lens properties for better generalizability by diminishing the data bias in the network. We demonstrate the effectiveness of WorldGen by presenting an evaluation on deep optical flow. We hope such a tool can open doors for future research in a myriad of domains related to robotics and computer vision by reducing manual labor and the cost of acquiring rich and high-quality data. |
2106.15965 | Michael Yuhas | Michael Yuhas, Yeli Feng, Daniel Jun Xian Ng, Zahra Rahiminasab,
Arvind Easwaran | Embedded out-of-distribution detection on an autonomous robot platform | 6 pages, 8 figures | Yuhas, M., Feng, Y., Ng, D. J. X., Rahiminasab, Z., & Easwaran, A.
(2021, May). Embedded out-of-distribution detection on an autonomous robot
platform. In Proceedings of the Workshop on Design Automation for CPS and IoT
(pp. 13-18) | 10.1145/3445034.3460509 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning (ML) is actively finding its way into modern cyber-physical
systems (CPS), many of which are safety-critical real-time systems. It is well
known that ML outputs are not reliable when testing data are novel with regards
to model training and validation data, i.e., out-of-distribution (OOD) test
data. We implement an unsupervised deep neural network-based OOD detector on a
real-time embedded autonomous Duckiebot and evaluate detection performance. Our
OOD detector produces a success rate of 87.5% for emergency stopping a
Duckiebot on a braking test bed we designed. We also provide case analysis on
computing resource challenges specific to the Robot Operating System (ROS)
middleware on the Duckiebot.
| [
{
"created": "Wed, 30 Jun 2021 10:25:19 GMT",
"version": "v1"
}
] | 2021-07-01 | [
[
"Yuhas",
"Michael",
""
],
[
"Feng",
"Yeli",
""
],
[
"Ng",
"Daniel Jun Xian",
""
],
[
"Rahiminasab",
"Zahra",
""
],
[
"Easwaran",
"Arvind",
""
]
] | Machine learning (ML) is actively finding its way into modern cyber-physical systems (CPS), many of which are safety-critical real-time systems. It is well known that ML outputs are not reliable when testing data are novel with regards to model training and validation data, i.e., out-of-distribution (OOD) test data. We implement an unsupervised deep neural network-based OOD detector on a real-time embedded autonomous Duckiebot and evaluate detection performance. Our OOD detector produces a success rate of 87.5% for emergency stopping a Duckiebot on a braking test bed we designed. We also provide case analysis on computing resource challenges specific to the Robot Operating System (ROS) middleware on the Duckiebot. |
1908.10870 | Shuo Zhang | Jason Shuo Zhang, Chenhao Tan, and Qin Lv | Intergroup Contact in the Wild: Characterizing Language Differences
between Intergroup and Single-group Members in NBA-related Discussion Forums | ACM Conference on Computer-Supported Cooperative Work and Social
Computing (CSCW), 2019 | null | 10.1145/3359295 | null | cs.CY cs.HC cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intergroup contact has long been considered as an effective strategy to
reduce prejudice between groups. However, recent studies suggest that exposure
to opposing groups in online platforms can exacerbate polarization. To further
understand the behavior of individuals who actively engage in intergroup
contact in practice, we provide a large-scale observational study of intragroup
behavioral differences between members with and without intergroup contact. We
leverage the existing structure of NBA-related discussion forums on Reddit to
study the context of professional sports. We identify fans of each NBA team as
members of a group and trace whether they have intergroup contact. Our results
show that members with intergroup contact use more negative and abusive
language in their affiliated group than those without such contact, after
controlling for activity levels. We further quantify different levels of
intergroup contact and show that there may exist nonlinear mechanisms regarding
how intergroup contact relates to intragroup behavior. Our findings provide
complementary evidence to experimental studies in a novel context and also shed
light on possible reasons for the different outcomes in prior studies.
| [
{
"created": "Wed, 28 Aug 2019 18:00:03 GMT",
"version": "v1"
}
] | 2019-08-30 | [
[
"Zhang",
"Jason Shuo",
""
],
[
"Tan",
"Chenhao",
""
],
[
"Lv",
"Qin",
""
]
] | Intergroup contact has long been considered as an effective strategy to reduce prejudice between groups. However, recent studies suggest that exposure to opposing groups in online platforms can exacerbate polarization. To further understand the behavior of individuals who actively engage in intergroup contact in practice, we provide a large-scale observational study of intragroup behavioral differences between members with and without intergroup contact. We leverage the existing structure of NBA-related discussion forums on Reddit to study the context of professional sports. We identify fans of each NBA team as members of a group and trace whether they have intergroup contact. Our results show that members with intergroup contact use more negative and abusive language in their affiliated group than those without such contact, after controlling for activity levels. We further quantify different levels of intergroup contact and show that there may exist nonlinear mechanisms regarding how intergroup contact relates to intragroup behavior. Our findings provide complementary evidence to experimental studies in a novel context and also shed light on possible reasons for the different outcomes in prior studies. |
2009.09351 | Benjamin Plaut | Ashish Goel and Benjamin Plaut | Counteracting Inequality in Markets via Convex Pricing | Accepted to WINE 2020 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study market mechanisms for allocating divisible goods to competing agents
with quasilinear utilities. For \emph{linear} pricing (i.e., the cost of a good
is proportional to the quantity purchased), the First Welfare Theorem states
that Walrasian equilibria maximize the sum of agent valuations. This ensures
efficiency, but can lead to extreme inequality across individuals. Many
real-world markets -- especially for water -- use \emph{convex} pricing
instead, often known as increasing block tariffs (IBTs). IBTs are thought to
promote equality, but there is a dearth of theoretical support for this claim.
In this paper, we study a simple convex pricing rule and show that the
resulting equilibria are guaranteed to maximize a CES welfare function.
Furthermore, a parameter of the pricing rule directly determines which CES
welfare function is implemented; by tweaking this parameter, the social planner
can precisely control the tradeoff between equality and efficiency. Our result
holds for any valuations that are homogeneous, differentiable, and concave. We
also give an iterative algorithm for computing these pricing rules, derive a
truthful mechanism for the case of a single good, and discuss Sybil attacks.
| [
{
"created": "Sun, 20 Sep 2020 05:10:01 GMT",
"version": "v1"
}
] | 2020-09-22 | [
[
"Goel",
"Ashish",
""
],
[
"Plaut",
"Benjamin",
""
]
] | We study market mechanisms for allocating divisible goods to competing agents with quasilinear utilities. For \emph{linear} pricing (i.e., the cost of a good is proportional to the quantity purchased), the First Welfare Theorem states that Walrasian equilibria maximize the sum of agent valuations. This ensures efficiency, but can lead to extreme inequality across individuals. Many real-world markets -- especially for water -- use \emph{convex} pricing instead, often known as increasing block tariffs (IBTs). IBTs are thought to promote equality, but there is a dearth of theoretical support for this claim. In this paper, we study a simple convex pricing rule and show that the resulting equilibria are guaranteed to maximize a CES welfare function. Furthermore, a parameter of the pricing rule directly determines which CES welfare function is implemented; by tweaking this parameter, the social planner can precisely control the tradeoff between equality and efficiency. Our result holds for any valuations that are homogeneous, differentiable, and concave. We also give an iterative algorithm for computing these pricing rules, derive a truthful mechanism for the case of a single good, and discuss Sybil attacks. |
1903.01372 | Andrea Tassi | Ioannis Mavromatis, Andrea Tassi, Robert J. Piechocki, Andrew Nix | Efficient Millimeter-Wave Infrastructure Placement for City-Scale ITS | To appear in IEEE VTC-Spring 2019 | null | 10.1109/VTCSpring.2019.8746518 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Millimeter Waves (mmWaves) will play a pivotal role in the next generation of
Intelligent Transportation Systems (ITSs). However, in deep urban environments,
sensitivity to blockages creates the need for more sophisticated network
planning. In this paper, we present an agile strategy for deploying road-side
nodes in a dense city scenario. In our system model, we consider strict
Quality-of-Service (QoS) constraints (e.g. high throughput, low latency) that
are typical of ITS applications. Our approach is scalable, insofar as it takes
into account the unique road and building shapes of each city, performing well
for both regular and irregular city layouts. It not only achieves the required
QoS constraints but also provides up to a $50\%$ reduction in the number of
nodes required, compared to existing deployment solutions.
| [
{
"created": "Mon, 4 Mar 2019 17:13:13 GMT",
"version": "v1"
}
] | 2022-09-05 | [
[
"Mavromatis",
"Ioannis",
""
],
[
"Tassi",
"Andrea",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Nix",
"Andrew",
""
]
] | Millimeter Waves (mmWaves) will play a pivotal role in the next generation of Intelligent Transportation Systems (ITSs). However, in deep urban environments, sensitivity to blockages creates the need for more sophisticated network planning. In this paper, we present an agile strategy for deploying road-side nodes in a dense city scenario. In our system model, we consider strict Quality-of-Service (QoS) constraints (e.g. high throughput, low latency) that are typical of ITS applications. Our approach is scalable, insofar as it takes into account the unique road and building shapes of each city, performing well for both regular and irregular city layouts. It not only achieves the required QoS constraints but also provides up to a $50\%$ reduction in the number of nodes required, compared to existing deployment solutions. |
1508.01340 | Marc Boull\'e | Marc Boull\'e | Universal Approximation of Edge Density in Large Graphs | null | null | null | null | cs.SI cs.DB stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel way to summarize the structure of large
graphs, based on non-parametric estimation of edge density in directed
multigraphs. Following a coclustering approach, we use a clustering of the
vertices, with a piecewise constant estimation of the density of the edges
across the clusters, and address the problem of automatically and reliably
inferring the number of clusters, which is the granularity of the coclustering.
We use a model selection technique with a data-dependent prior and obtain an
exact evaluation criterion for the posterior probability of edge density
estimation models. We demonstrate, both theoretically and empirically, that our
data-dependent modeling technique is consistent, resilient to noise, valid
non-asymptotically, and asymptotically behaves as a universal approximator of the
true edge density in directed multigraphs. We evaluate our method using
artificial graphs and demonstrate its practical interest on real-world graphs. The
method is both robust and scalable. It is able to extract insightful patterns
in the unsupervised learning setting and to provide state-of-the-art accuracy
when used as a preparation step for supervised learning.
| [
{
"created": "Thu, 6 Aug 2015 09:40:28 GMT",
"version": "v1"
}
] | 2015-08-07 | [
[
"Boullé",
"Marc",
""
]
] | In this paper, we present a novel way to summarize the structure of large graphs, based on non-parametric estimation of edge density in directed multigraphs. Following a coclustering approach, we use a clustering of the vertices, with a piecewise constant estimation of the density of the edges across the clusters, and address the problem of automatically and reliably inferring the number of clusters, which is the granularity of the coclustering. We use a model selection technique with a data-dependent prior and obtain an exact evaluation criterion for the posterior probability of edge density estimation models. We demonstrate, both theoretically and empirically, that our data-dependent modeling technique is consistent, resilient to noise, valid non-asymptotically, and asymptotically behaves as a universal approximator of the true edge density in directed multigraphs. We evaluate our method using artificial graphs and demonstrate its practical interest on real-world graphs. The method is both robust and scalable. It is able to extract insightful patterns in the unsupervised learning setting and to provide state-of-the-art accuracy when used as a preparation step for supervised learning. |
1608.06794 | Thomas Kober | Thomas Kober, Julie Weeds, Jeremy Reffin and David Weir | Improving Sparse Word Representations with Distributional Inference for
Semantic Composition | To appear at EMNLP 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributional models are derived from co-occurrences in a corpus, where only
a small proportion of all possible plausible co-occurrences will be observed.
This results in a very sparse vector space, requiring a mechanism for inferring
missing knowledge. Most methods face this challenge in ways that render the
resulting word representations uninterpretable, with the consequence that
semantic composition becomes hard to model. In this paper we explore an
alternative which involves explicitly inferring unobserved co-occurrences using
the distributional neighbourhood. We show that distributional inference
improves sparse word representations on several word similarity benchmarks and
demonstrate that our model is competitive with the state-of-the-art for
adjective-noun, noun-noun and verb-object compositions while being fully
interpretable.
| [
{
"created": "Wed, 24 Aug 2016 12:38:45 GMT",
"version": "v1"
}
] | 2016-08-25 | [
[
"Kober",
"Thomas",
""
],
[
"Weeds",
"Julie",
""
],
[
"Reffin",
"Jeremy",
""
],
[
"Weir",
"David",
""
]
] | Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all possible plausible co-occurrences will be observed. This results in a very sparse vector space, requiring a mechanism for inferring missing knowledge. Most methods face this challenge in ways that render the resulting word representations uninterpretable, with the consequence that semantic composition becomes hard to model. In this paper we explore an alternative which involves explicitly inferring unobserved co-occurrences using the distributional neighbourhood. We show that distributional inference improves sparse word representations on several word similarity benchmarks and demonstrate that our model is competitive with the state-of-the-art for adjective-noun, noun-noun and verb-object compositions while being fully interpretable. |
1802.03821 | Ibrahim Riza Hallac | Betul Karakus, Ibrahim Riza Hallac, Galip Aydin | Distributed Readability Analysis Of Turkish Elementary School Textbooks | Proceedings of International Conference on Information Technology and
Computer Science July 11-12, 2015, ISBN:9788193137307 | null | null | null | cs.DC cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The readability assessment deals with estimating the level of difficulty in
reading texts. Many readability tests, which do not indicate execution
efficiency, have been applied on specific texts to measure the reading grade
level in science textbooks. In this paper, we analyze the content covered in
elementary school Turkish textbooks by employing a distributed parallel
processing framework based on popular MapReduce paradigm. We outline the
architecture of a distributed Big Data processing system which uses Hadoop for
full-text readability analysis. The readability scores of the textbooks and
system performance measurements are also given in the paper.
| [
{
"created": "Sun, 11 Feb 2018 21:18:45 GMT",
"version": "v1"
}
] | 2018-02-13 | [
[
"Karakus",
"Betul",
""
],
[
"Hallac",
"Ibrahim Riza",
""
],
[
"Aydin",
"Galip",
""
]
] | The readability assessment deals with estimating the level of difficulty in reading texts. Many readability tests, which do not indicate execution efficiency, have been applied on specific texts to measure the reading grade level in science textbooks. In this paper, we analyze the content covered in elementary school Turkish textbooks by employing a distributed parallel processing framework based on popular MapReduce paradigm. We outline the architecture of a distributed Big Data processing system which uses Hadoop for full-text readability analysis. The readability scores of the textbooks and system performance measurements are also given in the paper. |
2110.13363 | Bicheng Ying | Bicheng Ying, Kun Yuan, Yiming Chen, Hanbin Hu, Pan Pan, Wotao Yin | Exponential Graph is Provably Efficient for Decentralized Deep Training | null | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | Decentralized SGD is an emerging training method for deep learning known for
its much less (thus faster) communication per iteration, which relaxes the
averaging step in parallel SGD to inexact averaging. The less exact the
averaging is, however, the more the total iterations the training needs to
take. Therefore, the key to making decentralized SGD efficient is to realize
nearly-exact averaging using little communication. This requires a skillful
choice of communication topology, which is an under-studied topic in
decentralized optimization.
In this paper, we study so-called exponential graphs where every node is
connected to $O(\log(n))$ neighbors and $n$ is the total number of nodes. This
work proves such graphs can lead to both fast communication and effective
averaging simultaneously. We also discover that a sequence of $\log(n)$
one-peer exponential graphs, in which each node communicates to one single
neighbor per iteration, can together achieve exact averaging. This favorable
property enables one-peer exponential graph to average as effective as its
static counterpart but communicates more efficiently. We apply these
exponential graphs in decentralized (momentum) SGD to obtain the
state-of-the-art balance between per-iteration communication and iteration
complexity among all commonly-used topologies. Experimental results on a
variety of tasks and models demonstrate that decentralized (momentum) SGD over
exponential graphs promises both fast and high-quality training. Our code is
implemented through BlueFog and available at
https://github.com/Bluefog-Lib/NeurIPS2021-Exponential-Graph.
| [
{
"created": "Tue, 26 Oct 2021 02:33:39 GMT",
"version": "v1"
}
] | 2021-10-27 | [
[
"Ying",
"Bicheng",
""
],
[
"Yuan",
"Kun",
""
],
[
"Chen",
"Yiming",
""
],
[
"Hu",
"Hanbin",
""
],
[
"Pan",
"Pan",
""
],
[
"Yin",
"Wotao",
""
]
] | Decentralized SGD is an emerging training method for deep learning known for its much less (thus faster) communication per iteration, which relaxes the averaging step in parallel SGD to inexact averaging. The less exact the averaging is, however, the more the total iterations the training needs to take. Therefore, the key to making decentralized SGD efficient is to realize nearly-exact averaging using little communication. This requires a skillful choice of communication topology, which is an under-studied topic in decentralized optimization. In this paper, we study so-called exponential graphs where every node is connected to $O(\log(n))$ neighbors and $n$ is the total number of nodes. This work proves such graphs can lead to both fast communication and effective averaging simultaneously. We also discover that a sequence of $\log(n)$ one-peer exponential graphs, in which each node communicates to one single neighbor per iteration, can together achieve exact averaging. This favorable property enables one-peer exponential graph to average as effective as its static counterpart but communicates more efficiently. We apply these exponential graphs in decentralized (momentum) SGD to obtain the state-of-the-art balance between per-iteration communication and iteration complexity among all commonly-used topologies. Experimental results on a variety of tasks and models demonstrate that decentralized (momentum) SGD over exponential graphs promises both fast and high-quality training. Our code is implemented through BlueFog and available at https://github.com/Bluefog-Lib/NeurIPS2021-Exponential-Graph. |
1109.6494 | Yolande Vieceli | Emmanuelle Darles (XLIM), Beno\^it Crespin (XLIM), Djamchid
Ghazanfarpour (XLIM), Jean-Christophe Gonzato (INRIA Bordeaux - Sud-Ouest,
LaBRI) | A Survey of Ocean Simulation and Rendering Techniques in Computer
Graphics | null | Computer Graphics Forum 30, 1 (2011) 43-60 | 10.1111/j.1467-8659.2010.01828.x | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a survey of ocean simulation and rendering methods in
computer graphics. To model and animate the ocean's surface, these methods
mainly rely on two main approaches: on the one hand, those which approximate
ocean dynamics with parametric, spectral or hybrid models and use empirical
laws from oceanographic research. We will see that this type of methods
essentially allows the simulation of ocean scenes in the deep water domain,
without breaking waves. On the other hand, physically-based methods use
Navier-Stokes Equations (NSE) to represent breaking waves and more generally
ocean surface near the shore. We also describe ocean rendering methods in
computer graphics, with a special interest in the simulation of phenomena such
as foam and spray, and light's interaction with the ocean surface.
| [
{
"created": "Thu, 29 Sep 2011 11:50:29 GMT",
"version": "v1"
}
] | 2011-09-30 | [
[
"Darles",
"Emmanuelle",
"",
"XLIM"
],
[
"Crespin",
"Benoît",
"",
"XLIM"
],
[
"Ghazanfarpour",
"Djamchid",
"",
"XLIM"
],
[
"Gonzato",
"Jean-Christophe",
"",
"INRIA Bordeaux - Sud-Ouest,\n LaBRI"
]
] | This paper presents a survey of ocean simulation and rendering methods in computer graphics. To model and animate the ocean's surface, these methods mainly rely on two main approaches: on the one hand, those which approximate ocean dynamics with parametric, spectral or hybrid models and use empirical laws from oceanographic research. We will see that this type of methods essentially allows the simulation of ocean scenes in the deep water domain, without breaking waves. On the other hand, physically-based methods use Navier-Stokes Equations (NSE) to represent breaking waves and more generally ocean surface near the shore. We also describe ocean rendering methods in computer graphics, with a special interest in the simulation of phenomena such as foam and spray, and light's interaction with the ocean surface. |
2311.18769 | Lei Xin | Lei Xin, George Chiu, Shreyas Sundaram | Online Change Points Detection for Linear Dynamical Systems with Finite
Sample Guarantees | 11 pages, 3 figures | null | null | null | cs.LG cs.SY eess.SY stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of online change point detection is to detect abrupt changes in
properties of time series, ideally as soon as possible after those changes
occur. Existing work on online change point detection either assumes i.i.d.
data, focuses on asymptotic analysis, does not present theoretical guarantees
on the trade-off between detection accuracy and detection delay, or is only
suitable for detecting single change points. In this work, we study the online
change point detection problem for linear dynamical systems with unknown
dynamics, where the data exhibits temporal correlations and the system could
have multiple change points. We develop a data-dependent threshold that can be
used in our test that allows one to achieve a pre-specified upper bound on the
probability of making a false alarm. We further provide a finite-sample-based
bound for the probability of detecting a change point. Our bound demonstrates
how parameters used in our algorithm affect the detection probability and
delay, and provides guidance on the minimum required time between changes to
guarantee detection.
| [
{
"created": "Thu, 30 Nov 2023 18:08:16 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Xin",
"Lei",
""
],
[
"Chiu",
"George",
""
],
[
"Sundaram",
"Shreyas",
""
]
] | The problem of online change point detection is to detect abrupt changes in properties of time series, ideally as soon as possible after those changes occur. Existing work on online change point detection either assumes i.i.d. data, focuses on asymptotic analysis, does not present theoretical guarantees on the trade-off between detection accuracy and detection delay, or is only suitable for detecting single change points. In this work, we study the online change point detection problem for linear dynamical systems with unknown dynamics, where the data exhibits temporal correlations and the system could have multiple change points. We develop a data-dependent threshold that can be used in our test that allows one to achieve a pre-specified upper bound on the probability of making a false alarm. We further provide a finite-sample-based bound for the probability of detecting a change point. Our bound demonstrates how parameters used in our algorithm affect the detection probability and delay, and provides guidance on the minimum required time between changes to guarantee detection. |
1108.2237 | Lalitha Sankar | Lalitha Sankar, Soummya Kar, Ravi Tandon, H. Vincent Poor | Competitive Privacy in the Smart Grid: An Information-theoretic Approach | Accepted for publication and presentation at the IEEE SmartGridComm
2011 | null | 10.1109/SmartGridComm.2011.6102322 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in sensing and communication capabilities as well as power industry
deregulation are driving the need for distributed state estimation in the smart
grid at the level of the regional transmission organizations (RTOs). This leads
to a new competitive privacy problem amongst the RTOs since there is a tension
between sharing data to ensure network reliability (utility/benefit to all
RTOs) and withholding data for profitability and privacy reasons. The resulting
tradeoff between utility, quantified via fidelity of its state estimate at each
RTO, and privacy, quantified via the leakage of the state of one RTO at other
RTOs, is captured precisely using a lossy source coding problem formulation for
a two RTO network. For a two-RTO model, it is shown that the set of all
feasible utility-privacy pairs can be achieved via a single round of
communication when each RTO communicates taking into account the correlation
between the measured data at both RTOs. The lossy source coding problem and
solution developed here is also of independent interest.
| [
{
"created": "Wed, 10 Aug 2011 18:19:55 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Sankar",
"Lalitha",
""
],
[
"Kar",
"Soummya",
""
],
[
"Tandon",
"Ravi",
""
],
[
"Poor",
"H. Vincent",
""
]
] | Advances in sensing and communication capabilities as well as power industry deregulation are driving the need for distributed state estimation in the smart grid at the level of the regional transmission organizations (RTOs). This leads to a new competitive privacy problem amongst the RTOs since there is a tension between sharing data to ensure network reliability (utility/benefit to all RTOs) and withholding data for profitability and privacy reasons. The resulting tradeoff between utility, quantified via fidelity of its state estimate at each RTO, and privacy, quantified via the leakage of the state of one RTO at other RTOs, is captured precisely using a lossy source coding problem formulation for a two RTO network. For a two-RTO model, it is shown that the set of all feasible utility-privacy pairs can be achieved via a single round of communication when each RTO communicates taking into account the correlation between the measured data at both RTOs. The lossy source coding problem and solution developed here is also of independent interest. |
2305.19240 | Ulyana Piterbarg | Ulyana Piterbarg, Lerrel Pinto, Rob Fergus | NetHack is Hard to Hack | NeurIPS 2023 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural policy learning methods have achieved remarkable results in various
control problems, ranging from Atari games to simulated locomotion. However,
these methods struggle in long-horizon tasks, especially in open-ended
environments with multi-modal observations, such as the popular dungeon-crawler
game, NetHack. Intriguingly, the NeurIPS 2021 NetHack Challenge revealed that
symbolic agents outperformed neural approaches by over four times in median
game score. In this paper, we delve into the reasons behind this performance
gap and present an extensive study on neural policy learning for NetHack. To
conduct this study, we analyze the winning symbolic agent, extending its
codebase to track internal strategy selection in order to generate one of the
largest available demonstration datasets. Utilizing this dataset, we examine
(i) the advantages of an action hierarchy; (ii) enhancements in neural
architecture; and (iii) the integration of reinforcement learning with
imitation learning. Our investigations produce a state-of-the-art neural agent
that surpasses previous fully neural policies by 127% in offline settings and
25% in online settings on median game score. However, we also demonstrate that
mere scaling is insufficient to bridge the performance gap with the best
symbolic models or even the top human players.
| [
{
"created": "Tue, 30 May 2023 17:30:17 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Oct 2023 14:23:33 GMT",
"version": "v2"
}
] | 2023-10-31 | [
[
"Piterbarg",
"Ulyana",
""
],
[
"Pinto",
"Lerrel",
""
],
[
"Fergus",
"Rob",
""
]
] | Neural policy learning methods have achieved remarkable results in various control problems, ranging from Atari games to simulated locomotion. However, these methods struggle in long-horizon tasks, especially in open-ended environments with multi-modal observations, such as the popular dungeon-crawler game, NetHack. Intriguingly, the NeurIPS 2021 NetHack Challenge revealed that symbolic agents outperformed neural approaches by over four times in median game score. In this paper, we delve into the reasons behind this performance gap and present an extensive study on neural policy learning for NetHack. To conduct this study, we analyze the winning symbolic agent, extending its codebase to track internal strategy selection in order to generate one of the largest available demonstration datasets. Utilizing this dataset, we examine (i) the advantages of an action hierarchy; (ii) enhancements in neural architecture; and (iii) the integration of reinforcement learning with imitation learning. Our investigations produce a state-of-the-art neural agent that surpasses previous fully neural policies by 127% in offline settings and 25% in online settings on median game score. However, we also demonstrate that mere scaling is insufficient to bridge the performance gap with the best symbolic models or even the top human players. |
2111.04887 | Andrej (Andy) Brodnik | Andrej Brodnik, Andrew Csizmadia, Gerald Futschek, Lidija Kralj,
Violetta Lonati, Peter Micheuz, Mattia Monga | Programming for All: Understanding the Nature of Programs | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer programs are part of our daily life, we use them, we provide them
with data, they support our decisions, they help us remember, they control
machines, etc. Programs are made by people, but in most cases we are not their
authors, so we have to decide if we can trust them. Programs enable computers
and computer-controlled machines to behave in a large variety of ways. They
bring the intrinsic power of computers to life. Programs have a variety of
properties that all citizens must be aware of. Due to the intangible nature of
programs, most of these properties are very unusual, but important to
understand the digital world. In this position paper, we describe the Nature of
Programs in the form of knowledge statements, accompanied by examples from
everyday life to clarify their meaning. Everything is formulated in an easily
understandable manner and avoids obscure technical language. We suggest that
these knowledge statements must be imparted to all teachers and school
students.
A great way to learn and experience the nature of programs is to develop
programs yourself.
| [
{
"created": "Tue, 9 Nov 2021 00:14:49 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Dec 2021 17:42:35 GMT",
"version": "v2"
}
] | 2021-12-07 | [
[
"Brodnik",
"Andrej",
""
],
[
"Csizmadia",
"Andrew",
""
],
[
"Futschek",
"Gerald",
""
],
[
"Kralj",
"Lidija",
""
],
[
"Lonati",
"Violetta",
""
],
[
"Micheuz",
"Peter",
""
],
[
"Monga",
"Mattia",
""
]
] | Computer programs are part of our daily life, we use them, we provide them with data, they support our decisions, they help us remember, they control machines, etc. Programs are made by people, but in most cases we are not their authors, so we have to decide if we can trust them. Programs enable computers and computer-controlled machines to behave in a large variety of ways. They bring the intrinsic power of computers to life. Programs have a variety of properties that all citizens must be aware of. Due to the intangible nature of programs, most of these properties are very unusual, but important to understand the digital world. In this position paper, we describe the Nature of Programs in the form of knowledge statements, accompanied by examples from everyday life to clarify their meaning. Everything is formulated in an easily understandable manner and avoids obscure technical language. We suggest that these knowledge statements must be imparted to all teachers and school students. A great way to learn and experience the nature of programs is to develop programs yourself. |
cs/0607109 | Marko Samer | Marko Samer and Stefan Szeider | Complexity and Applications of Edge-Induced Vertex-Cuts | 17 pages, 5 figures, 2 tables | null | null | null | cs.DM cs.CC | null | Motivated by hypergraph decomposition algorithms, we introduce the notion of
edge-induced vertex-cuts and compare it with the well-known notions of
edge-cuts and vertex-cuts. We investigate the complexity of computing minimum
edge-induced vertex-cuts and demonstrate the usefulness of our notion by
applications in network reliability and constraint satisfaction.
| [
{
"created": "Tue, 25 Jul 2006 16:17:22 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jul 2006 09:23:22 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Samer",
"Marko",
""
],
[
"Szeider",
"Stefan",
""
]
] | Motivated by hypergraph decomposition algorithms, we introduce the notion of edge-induced vertex-cuts and compare it with the well-known notions of edge-cuts and vertex-cuts. We investigate the complexity of computing minimum edge-induced vertex-cuts and demonstrate the usefulness of our notion by applications in network reliability and constraint satisfaction. |
1508.06583 | Avery Miller | Kokouvi Hounkanli, Avery Miller, Andrzej Pelc | Global Synchronization and Consensus Using Beeps in a Fault-Prone MAC | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consensus is one of the fundamental tasks studied in distributed computing.
Processors have input values from some set $V$ and they have to decide the same
value from this set. If all processors have the same input value, then they
must all decide this value. We study the task of consensus in a Multiple Access
Channel (MAC) prone to faults, under a very weak communication model called the
$\mathit{beeping\ model}$. Communication proceeds in synchronous rounds. Some
processors wake up spontaneously, in possibly different rounds decided by an
adversary. In each round, an awake processor can either listen, i.e., stay
silent, or beep, i.e., emit a signal. In each round, a fault can occur in the
channel independently with constant probability $0<p<1$. In a fault-free round,
an awake processor hears a beep if it listens in this round and if one or more
other processors beep in this round. A processor still dormant in a fault-free
round in which some other processor beeps is woken up by this beep and hears
it. In a faulty round nothing is heard, regardless of the behaviour of the
processors.
An algorithm working with error probability at most $\epsilon$, for a given
$\epsilon>0$, is called $\epsilon$-$\mathit{safe}$. Our main result is the
design and analysis, for any constant $\epsilon>0$, of a deterministic
$\epsilon$-safe consensus algorithm that works in time $O(\log w)$ in a
fault-prone MAC, where $w$ is the smallest input value of all participating
processors. We show that this time cannot be improved, even when the MAC is
fault-free. The main algorithmic tool that we develop to achieve our goal, and
that might be of independent interest, is a deterministic algorithm that, with
arbitrarily small constant error probability, establishes a global clock in a
fault-prone MAC in constant time.
| [
{
"created": "Wed, 26 Aug 2015 17:42:44 GMT",
"version": "v1"
}
] | 2015-08-27 | [
[
"Hounkanli",
"Kokouvi",
""
],
[
"Miller",
"Avery",
""
],
[
"Pelc",
"Andrzej",
""
]
] | Consensus is one of the fundamental tasks studied in distributed computing. Processors have input values from some set $V$ and they have to decide the same value from this set. If all processors have the same input value, then they must all decide this value. We study the task of consensus in a Multiple Access Channel (MAC) prone to faults, under a very weak communication model called the $\mathit{beeping\ model}$. Communication proceeds in synchronous rounds. Some processors wake up spontaneously, in possibly different rounds decided by an adversary. In each round, an awake processor can either listen, i.e., stay silent, or beep, i.e., emit a signal. In each round, a fault can occur in the channel independently with constant probability $0<p<1$. In a fault-free round, an awake processor hears a beep if it listens in this round and if one or more other processors beep in this round. A processor still dormant in a fault-free round in which some other processor beeps is woken up by this beep and hears it. In a faulty round nothing is heard, regardless of the behaviour of the processors. An algorithm working with error probability at most $\epsilon$, for a given $\epsilon>0$, is called $\epsilon$-$\mathit{safe}$. Our main result is the design and analysis, for any constant $\epsilon>0$, of a deterministic $\epsilon$-safe consensus algorithm that works in time $O(\log w)$ in a fault-prone MAC, where $w$ is the smallest input value of all participating processors. We show that this time cannot be improved, even when the MAC is fault-free. The main algorithmic tool that we develop to achieve our goal, and that might be of independent interest, is a deterministic algorithm that, with arbitrarily small constant error probability, establishes a global clock in a fault-prone MAC in constant time. |
2209.11390 | Zheng Shi | Zheng Shi, Hong Wang, Yaru Fu, Guanghua Yang, Shaodan Ma, and Xinrong
Ye | Outage Performance and Optimal Design of MIMO-NOMA Enhanced Small Cell
Networks With Imperfect Channel-State Information | null | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | This paper focuses on boosting the performance of small cell networks (SCNs)
by integrating multiple-input multiple-output (MIMO) and non-orthogonal
multiple access (NOMA) in consideration of imperfect channel-state information
(CSI). The estimation error and the spatial randomness of base stations (BSs)
are characterized by using Kronecker model and Poisson point process (PPP),
respectively. The outage probabilities of MIMO-NOMA enhanced SCNs are first
derived in closed-form by taking into account two grouping policies, including
random grouping and distance-based grouping. It is revealed that the average
outage probabilities are irrelevant to the intensity of BSs in the
interference-limited regime, while the outage performance deteriorates if the
intensity is sufficiently low. Besides, as the channel uncertainty lessens, the
asymptotic analyses manifest that the target rates must be restricted up to a
bound to achieve an arbitrarily low outage probability in the absence of the
inter-cell interference. Moreover, highly correlated estimation error
ameliorates the outage performance under a low quality of CSI, otherwise it
behaves oppositely. Afterwards, the goodput is maximized by choosing
appropriate precoding matrix, receiver filters and transmission rates. In the
end, the numerical results verify our analysis and corroborate the superiority
of our proposed algorithm.
| [
{
"created": "Fri, 23 Sep 2022 03:54:06 GMT",
"version": "v1"
}
] | 2022-09-26 | [
[
"Shi",
"Zheng",
""
],
[
"Wang",
"Hong",
""
],
[
"Fu",
"Yaru",
""
],
[
"Yang",
"Guanghua",
""
],
[
"Ma",
"Shaodan",
""
],
[
"Ye",
"Xinrong",
""
]
] | This paper focuses on boosting the performance of small cell networks (SCNs) by integrating multiple-input multiple-output (MIMO) and non-orthogonal multiple access (NOMA) in consideration of imperfect channel-state information (CSI). The estimation error and the spatial randomness of base stations (BSs) are characterized by using Kronecker model and Poisson point process (PPP), respectively. The outage probabilities of MIMO-NOMA enhanced SCNs are first derived in closed-form by taking into account two grouping policies, including random grouping and distance-based grouping. It is revealed that the average outage probabilities are irrelevant to the intensity of BSs in the interference-limited regime, while the outage performance deteriorates if the intensity is sufficiently low. Besides, as the channel uncertainty lessens, the asymptotic analyses manifest that the target rates must be restricted up to a bound to achieve an arbitrarily low outage probability in the absence of the inter-cell interference. Moreover, highly correlated estimation error ameliorates the outage performance under a low quality of CSI, otherwise it behaves oppositely. Afterwards, the goodput is maximized by choosing appropriate precoding matrix, receiver filters and transmission rates. In the end, the numerical results verify our analysis and corroborate the superiority of our proposed algorithm. |
2104.08038 | Ekta Prashnani | Ekta Prashnani, Orazio Gallo, Joohwan Kim, Josef Spjut, Pradeep Sen,
Iuri Frosio | Noise-Aware Video Saliency Prediction | 10 pages, 3 figures, 7 tables | British Machine Vision Conference (BMVC) 2021 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of predicting saliency maps for videos of dynamic
scenes. We note that the accuracy of the maps reconstructed from the gaze data
of a fixed number of observers varies with the frame, as it depends on the
content of the scene. This issue is particularly pressing when a limited number
of observers are available. In such cases, directly minimizing the discrepancy
between the predicted and measured saliency maps, as traditional deep-learning
methods do, results in overfitting to the noisy data. We propose a noise-aware
training (NAT) paradigm that quantifies and accounts for the uncertainty
arising from frame-specific gaze data inaccuracy. We show that NAT is
especially advantageous when limited training data is available, with
experiments across different models, loss functions, and datasets. We also
introduce a video game-based saliency dataset, with rich temporal semantics,
and multiple gaze attractors per frame. The dataset and source code are
available at https://github.com/NVlabs/NAT-saliency.
| [
{
"created": "Fri, 16 Apr 2021 11:32:46 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Nov 2021 05:38:36 GMT",
"version": "v2"
}
] | 2021-11-23 | [
[
"Prashnani",
"Ekta",
""
],
[
"Gallo",
"Orazio",
""
],
[
"Kim",
"Joohwan",
""
],
[
"Spjut",
"Josef",
""
],
[
"Sen",
"Pradeep",
""
],
[
"Frosio",
"Iuri",
""
]
] | We tackle the problem of predicting saliency maps for videos of dynamic scenes. We note that the accuracy of the maps reconstructed from the gaze data of a fixed number of observers varies with the frame, as it depends on the content of the scene. This issue is particularly pressing when a limited number of observers are available. In such cases, directly minimizing the discrepancy between the predicted and measured saliency maps, as traditional deep-learning methods do, results in overfitting to the noisy data. We propose a noise-aware training (NAT) paradigm that quantifies and accounts for the uncertainty arising from frame-specific gaze data inaccuracy. We show that NAT is especially advantageous when limited training data is available, with experiments across different models, loss functions, and datasets. We also introduce a video game-based saliency dataset, with rich temporal semantics, and multiple gaze attractors per frame. The dataset and source code are available at https://github.com/NVlabs/NAT-saliency. |
2210.11475 | Zineb Garroussi Dr | Zineb Garroussi, Abdoul Wassi Badirou, Mathieu D'amours, Andr\'e
Girard, Brunilde Sans\`o | On the economic viability of solar energy when upgrading cellular
networks | 25 pages, 12 figures, 51 references, journal paper to ieee
transaciton green communications and networks | null | null | null | cs.CE cs.NI math.OC | http://creativecommons.org/licenses/by/4.0/ | The massive increase of data traffic, the widespread proliferation of
wireless applications and the full-scale deployment of 5G and the IoT, imply a
steep increase in cellular networks energy use, resulting in a significant
carbon footprint. This paper presents a comprehensive model to show the
interaction between the networking and energy features of the problem and study
the economical and technical viability of green networking. Solar equipment,
cell zooming, energy management and dynamic user allocation are considered in
the upgrading network planning process. We propose a mixed-integer optimization
model to minimize long-term capital costs and operational energy expenditures
in a heterogeneous on-grid cellular network with different types of base
station, including solar. Based on eight scenarios where realistic costs of
solar panels, batteries, and inverters were considered, we first found that
solar base stations are currently not economically interesting for cellular
operators. We next studied the impact of a significant and progressive carbon
tax on reducing greenhouse gas emissions (GHG). We found that, at current
energy and equipment prices, a carbon tax ten-fold the current value is the
only element that could make green base stations economically viable.
| [
{
"created": "Mon, 17 Oct 2022 20:36:34 GMT",
"version": "v1"
}
] | 2022-10-24 | [
[
"Garroussi",
"Zineb",
""
],
[
"Badirou",
"Abdoul Wassi",
""
],
[
"D'amours",
"Mathieu",
""
],
[
"Girard",
"André",
""
],
[
"Sansò",
"Brunilde",
""
]
] | The massive increase of data traffic, the widespread proliferation of wireless applications and the full-scale deployment of 5G and the IoT, imply a steep increase in cellular networks' energy use, resulting in a significant carbon footprint. This paper presents a comprehensive model to show the interaction between the networking and energy features of the problem and study the economic and technical viability of green networking. Solar equipment, cell zooming, energy management and dynamic user allocation are considered in the upgrading network planning process. We propose a mixed-integer optimization model to minimize long-term capital costs and operational energy expenditures in a heterogeneous on-grid cellular network with different types of base station, including solar. Based on eight scenarios where realistic costs of solar panels, batteries, and inverters were considered, we first found that solar base stations are currently not economically interesting for cellular operators. We next studied the impact of a significant and progressive carbon tax on reducing greenhouse gas emissions (GHG). We found that, at current energy and equipment prices, a carbon tax ten-fold the current value is the only element that could make green base stations economically viable. |
2203.16784 | Dohwan Ko | Dohwan Ko, Joonmyung Choi, Juyeon Ko, Shinyeong Noh, Kyoung-Woon On,
Eun-Sol Kim, Hyunwoo J. Kim | Video-Text Representation Learning via Differentiable Weak Temporal
Alignment | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning generic joint representations for video and text by a supervised
method requires a prohibitively substantial amount of manually annotated video
datasets. As a practical alternative, a large-scale but uncurated and narrated
video dataset, HowTo100M, has recently been introduced. But it is still
challenging to learn joint embeddings of video and text in a self-supervised
manner, due to its ambiguity and non-sequential alignment. In this paper, we
propose a novel multi-modal self-supervised framework Video-Text Temporally
Weak Alignment-based Contrastive Learning (VT-TWINS) to capture significant
information from noisy and weakly correlated data using a variant of Dynamic
Time Warping (DTW). We observe that the standard DTW inherently cannot handle
weakly correlated data and only considers the globally optimal alignment path.
To address these problems, we develop a differentiable DTW which also reflects
local information with weak temporal alignment. Moreover, our proposed model
applies a contrastive learning scheme to learn feature representations on
weakly correlated data. Our extensive experiments demonstrate that VT-TWINS
attains significant improvements in multi-modal representation learning and
outperforms various challenging downstream tasks. Code is available at
https://github.com/mlvlab/VT-TWINS.
| [
{
"created": "Thu, 31 Mar 2022 04:13:16 GMT",
"version": "v1"
}
] | 2022-04-01 | [
[
"Ko",
"Dohwan",
""
],
[
"Choi",
"Joonmyung",
""
],
[
"Ko",
"Juyeon",
""
],
[
"Noh",
"Shinyeong",
""
],
[
"On",
"Kyoung-Woon",
""
],
[
"Kim",
"Eun-Sol",
""
],
[
"Kim",
"Hyunwoo J.",
""
]
] | Learning generic joint representations for video and text by a supervised method requires a prohibitively substantial amount of manually annotated video datasets. As a practical alternative, a large-scale but uncurated and narrated video dataset, HowTo100M, has recently been introduced. But it is still challenging to learn joint embeddings of video and text in a self-supervised manner, due to its ambiguity and non-sequential alignment. In this paper, we propose a novel multi-modal self-supervised framework Video-Text Temporally Weak Alignment-based Contrastive Learning (VT-TWINS) to capture significant information from noisy and weakly correlated data using a variant of Dynamic Time Warping (DTW). We observe that the standard DTW inherently cannot handle weakly correlated data and only considers the globally optimal alignment path. To address these problems, we develop a differentiable DTW which also reflects local information with weak temporal alignment. Moreover, our proposed model applies a contrastive learning scheme to learn feature representations on weakly correlated data. Our extensive experiments demonstrate that VT-TWINS attains significant improvements in multi-modal representation learning and outperforms various challenging downstream tasks. Code is available at https://github.com/mlvlab/VT-TWINS. |
2306.07684 | Jiangtao Zhang | Jiangtao Zhang, Shunyu Liu, Jie Song, Tongtian Zhu, Zhengqi Xu, Mingli
Song | Lookaround Optimizer: $k$ steps around, 1 step average | Accepted to NeurIPS 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Weight Average (WA) is an active research topic due to its simplicity in
ensembling deep networks and the effectiveness in promoting generalization.
Existing weight average approaches, however, are often carried out along only
one training trajectory in a post-hoc manner (i.e., the weights are averaged
after the entire training process is finished), which significantly degrades
the diversity between networks and thus impairs the effectiveness. In this
paper, inspired by weight average, we propose Lookaround, a straightforward yet
effective SGD-based optimizer leading to flatter minima with better
generalization. Specifically, Lookaround iterates two steps during the whole
training period: the around step and the average step. In each iteration, 1)
the around step starts from a common point and trains multiple networks
simultaneously, each on transformed data by a different data augmentation, and
2) the average step averages these trained networks to get the averaged
network, which serves as the starting point for the next iteration. The around
step improves the functionality diversity while the average step guarantees the
weight locality of these networks during the whole training, which is essential
for WA to work. We theoretically explain the superiority of Lookaround by
convergence analysis, and make extensive experiments to evaluate Lookaround on
popular benchmarks including CIFAR and ImageNet with both CNNs and ViTs,
demonstrating clear superiority over state-of-the-arts. Our code is available
at https://github.com/Ardcy/Lookaround.
| [
{
"created": "Tue, 13 Jun 2023 10:55:20 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Oct 2023 06:41:12 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Nov 2023 15:24:29 GMT",
"version": "v3"
}
] | 2023-11-03 | [
[
"Zhang",
"Jiangtao",
""
],
[
"Liu",
"Shunyu",
""
],
[
"Song",
"Jie",
""
],
[
"Zhu",
"Tongtian",
""
],
[
"Xu",
"Zhengqi",
""
],
[
"Song",
"Mingli",
""
]
] | Weight Average (WA) is an active research topic due to its simplicity in ensembling deep networks and the effectiveness in promoting generalization. Existing weight average approaches, however, are often carried out along only one training trajectory in a post-hoc manner (i.e., the weights are averaged after the entire training process is finished), which significantly degrades the diversity between networks and thus impairs the effectiveness. In this paper, inspired by weight average, we propose Lookaround, a straightforward yet effective SGD-based optimizer leading to flatter minima with better generalization. Specifically, Lookaround iterates two steps during the whole training period: the around step and the average step. In each iteration, 1) the around step starts from a common point and trains multiple networks simultaneously, each on transformed data by a different data augmentation, and 2) the average step averages these trained networks to get the averaged network, which serves as the starting point for the next iteration. The around step improves the functionality diversity while the average step guarantees the weight locality of these networks during the whole training, which is essential for WA to work. We theoretically explain the superiority of Lookaround by convergence analysis, and make extensive experiments to evaluate Lookaround on popular benchmarks including CIFAR and ImageNet with both CNNs and ViTs, demonstrating clear superiority over state-of-the-arts. Our code is available at https://github.com/Ardcy/Lookaround. |
2305.13286 | Rochelle Choenni | Rochelle Choenni, Dan Garrette, Ekaterina Shutova | How do languages influence each other? Studying cross-lingual data
sharing during LM fine-tuning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilingual large language models (MLLMs) are jointly trained on data from
many different languages such that representation of individual languages can
benefit from other languages' data. Impressive performance on zero-shot
cross-lingual transfer shows that these models are capable of exploiting data
from other languages. Yet, it remains unclear to what extent, and under which
conditions, languages rely on each other's data. In this study, we use TracIn
(Pruthi et al., 2020), a training data attribution (TDA) method, to retrieve
the most influential training samples seen during multilingual fine-tuning for
a particular test language. This allows us to analyse cross-lingual sharing
mechanisms of MLLMs from a new perspective. While previous work studied
cross-lingual sharing at the level of model parameters, we present the first
approach to study cross-lingual sharing at the data level. We find that MLLMs
rely on data from multiple languages from the early stages of fine-tuning and
that this reliance gradually increases as fine-tuning progresses. We further
study how different fine-tuning languages influence model performance on a
given test language and find that they can both reinforce and complement the
knowledge acquired from data of the test language itself.
| [
{
"created": "Mon, 22 May 2023 17:47:41 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 11:47:13 GMT",
"version": "v2"
}
] | 2024-05-22 | [
[
"Choenni",
"Rochelle",
""
],
[
"Garrette",
"Dan",
""
],
[
"Shutova",
"Ekaterina",
""
]
] | Multilingual large language models (MLLMs) are jointly trained on data from many different languages such that representation of individual languages can benefit from other languages' data. Impressive performance on zero-shot cross-lingual transfer shows that these models are capable of exploiting data from other languages. Yet, it remains unclear to what extent, and under which conditions, languages rely on each other's data. In this study, we use TracIn (Pruthi et al., 2020), a training data attribution (TDA) method, to retrieve the most influential training samples seen during multilingual fine-tuning for a particular test language. This allows us to analyse cross-lingual sharing mechanisms of MLLMs from a new perspective. While previous work studied cross-lingual sharing at the level of model parameters, we present the first approach to study cross-lingual sharing at the data level. We find that MLLMs rely on data from multiple languages from the early stages of fine-tuning and that this reliance gradually increases as fine-tuning progresses. We further study how different fine-tuning languages influence model performance on a given test language and find that they can both reinforce and complement the knowledge acquired from data of the test language itself. |
1708.05905 | Lesandro Ponciano | Lesandro Ponciano and Pedro Barbosa and Francisco Brasileiro and
Andrey Brito and Nazareno Andrade | Designing for Pragmatists and Fundamentalists: Privacy Concerns and
Attitudes on the Internet of Things | XVI Brazilian Symposium on Human Factors in Computing Systems
(IHC'17), October 23-27, 2017, Joinville, SC, Brazil | In Proceedings of IHC 2017. ACM, US, Article 21, 10 pages (2017) | 10.1145/3160504.3160545 | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Internet of Things (IoT) systems have aroused enthusiasm and concerns.
Enthusiasm comes from their usefulness in people's daily lives, and concerns may be
associated with privacy issues. By using two IoT systems as case-studies, we
examine users' privacy beliefs, concerns and attitudes. We focus on four major
dimensions: the collection of personal data, the inference of new information,
the exchange of information to third parties, and the risk-utility trade-off
posed by the features of the system. Altogether, 113 Brazilian individuals
answered a survey about such dimensions. Although their perceptions seem to be
dependent on the context, there are recurrent patterns. Our results suggest
that IoT users can be classified into unconcerned, fundamentalists and
pragmatists. Most of them exhibit a pragmatist profile and believe in privacy
as a right guaranteed by law. One of the most privacy-concerning aspects is the
exchange of personal information to third parties. Individuals' perceived risk
is negatively correlated with their perceived utility in the features of the
system. We discuss practical implications of these results and suggest
heuristics to cope with privacy concerns when designing IoT systems.
| [
{
"created": "Sat, 19 Aug 2017 22:15:58 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jan 2018 16:05:41 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jul 2019 15:07:58 GMT",
"version": "v3"
}
] | 2019-07-31 | [
[
"Ponciano",
"Lesandro",
""
],
[
"Barbosa",
"Pedro",
""
],
[
"Brasileiro",
"Francisco",
""
],
[
"Brito",
"Andrey",
""
],
[
"Andrade",
"Nazareno",
""
]
] | Internet of Things (IoT) systems have aroused enthusiasm and concerns. Enthusiasm comes from their usefulness in people's daily lives, and concerns may be associated with privacy issues. By using two IoT systems as case-studies, we examine users' privacy beliefs, concerns and attitudes. We focus on four major dimensions: the collection of personal data, the inference of new information, the exchange of information to third parties, and the risk-utility trade-off posed by the features of the system. Altogether, 113 Brazilian individuals answered a survey about such dimensions. Although their perceptions seem to be dependent on the context, there are recurrent patterns. Our results suggest that IoT users can be classified into unconcerned, fundamentalists and pragmatists. Most of them exhibit a pragmatist profile and believe in privacy as a right guaranteed by law. One of the most privacy-concerning aspects is the exchange of personal information to third parties. Individuals' perceived risk is negatively correlated with their perceived utility in the features of the system. We discuss practical implications of these results and suggest heuristics to cope with privacy concerns when designing IoT systems. |
2101.02434 | Michael Gundall | Michael Gundall, Christopher Huber, Sergiy Melnyk | Integration of IEEE 802.1AS-based Time Synchronization in IEEE 802.11 as
an Enabler for Novel Industrial Use Cases | arXiv admin note: text overlap with arXiv:2011.06313 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Industry 4.0 introduces new use cases, with more and more mobile devices
appearing in the industrial landscape. These applications require both new
technologies and smooth integration into existing brownfield deployments.
Emerging mobile use cases can be divided into optional mobile and mandatory
mobile, where the first point considers the use of wireless communications due
to soft criteria such as cost savings and the second means use cases that
cannot be covered by wireline technologies due to their movement, such as AGVs.
For most industrial applications, high determinism, E2E latency and
synchronicity are most important. Therefore, we provide a common table, based
on these requirements, listing both existing and emerging mobile use cases.
Since time synchronization is particularly demanding for wireless use cases, we
propose a concept for a simple but precise synchronization in IEEE 802.11 WLAN
and a suitable integration using TSN in combination with OPC UA technology as
examples. Furthermore, the concept is evaluated with the help of a testbed
utilizing state-of-the-art hardware. This means that this concept can be
directly applied in existing industry solutions. It can be shown that the
concept is already suitable for a wide range of the mandatory mobile
applications.
| [
{
"created": "Thu, 7 Jan 2021 08:54:43 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Gundall",
"Michael",
""
],
[
"Huber",
"Christopher",
""
],
[
"Melnyk",
"Sergiy",
""
]
] | Industry 4.0 introduces new use cases, with more and more mobile devices appearing in the industrial landscape. These applications require both new technologies and smooth integration into existing brownfield deployments. Emerging mobile use cases can be divided into optional mobile and mandatory mobile, where the first point considers the use of wireless communications due to soft criteria such as cost savings and the second means use cases that cannot be covered by wireline technologies due to their movement, such as AGVs. For most industrial applications, high determinism, E2E latency and synchronicity are most important. Therefore, we provide a common table, based on these requirements, listing both existing and emerging mobile use cases. Since time synchronization is particularly demanding for wireless use cases, we propose a concept for a simple but precise synchronization in IEEE 802.11 WLAN and a suitable integration using TSN in combination with OPC UA technology as examples. Furthermore, the concept is evaluated with the help of a testbed utilizing state-of-the-art hardware. This means that this concept can be directly applied in existing industry solutions. It can be shown that the concept is already suitable for a wide range of the mandatory mobile applications. |
1906.05413 | Joshua Robinson | Joshua Robinson, Suvrit Sra, Stefanie Jegelka | Flexible Modeling of Diversity with Strongly Log-Concave Distributions | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Strongly log-concave (SLC) distributions are a rich class of discrete
probability distributions over subsets of some ground set. They are strictly
more general than strongly Rayleigh (SR) distributions such as the well-known
determinantal point process. While SR distributions offer elegant models of
diversity, they lack an easy control over how they express diversity. We
propose SLC as the right extension of SR that enables easier, more intuitive
control over diversity, illustrating this via examples of practical importance.
We develop two fundamental tools needed to apply SLC distributions to learning
and inference: sampling and mode finding. For sampling we develop an MCMC
sampler and give theoretical mixing time bounds. For mode finding, we establish
a weak log-submodularity property for SLC functions and derive optimization
guarantees for a distorted greedy algorithm.
| [
{
"created": "Wed, 12 Jun 2019 22:44:17 GMT",
"version": "v1"
}
] | 2019-06-14 | [
[
"Robinson",
"Joshua",
""
],
[
"Sra",
"Suvrit",
""
],
[
"Jegelka",
"Stefanie",
""
]
] | Strongly log-concave (SLC) distributions are a rich class of discrete probability distributions over subsets of some ground set. They are strictly more general than strongly Rayleigh (SR) distributions such as the well-known determinantal point process. While SR distributions offer elegant models of diversity, they lack an easy control over how they express diversity. We propose SLC as the right extension of SR that enables easier, more intuitive control over diversity, illustrating this via examples of practical importance. We develop two fundamental tools needed to apply SLC distributions to learning and inference: sampling and mode finding. For sampling we develop an MCMC sampler and give theoretical mixing time bounds. For mode finding, we establish a weak log-submodularity property for SLC functions and derive optimization guarantees for a distorted greedy algorithm. |
2103.11525 | Gordon Watts | Gordon Watts | hep_tables: Heterogeneous Array Programming for HEP | 10 pages, 5 figures, submission for vCHEP 2021 | null | 10.1051/epjconf/202125103061 | null | cs.DB physics.data-an | http://creativecommons.org/licenses/by/4.0/ | Array operations are one of the most concise ways of expressing common
filtering and simple aggregation operations that is the hallmark of the first
step of a particle physics analysis: selection, filtering, basic vector
operations, and filling histograms. The High Luminosity run of the Large Hadron
Collider (HL-LHC), scheduled to start in 2026, will require physicists to
regularly skim datasets that are over a PB in size, and repeatedly run over
datasets that are 100's of TB's - too big to fit in memory. Declarative
programming techniques are a way of separating the intent of the physicist from
the mechanics of finding the data, processing the data, and using distributed
computing to process it efficiently that is required to extract the plot or
data desired in a timely fashion. This paper describes a prototype library that
provides a framework for different sub-systems to cooperate in producing this
data, using an array-programming declarative interface. This prototype has a
ServiceX data-delivery sub-system and an awkward array sub-system cooperating
to generate requested data. The ServiceX system runs against ATLAS xAOD data.
| [
{
"created": "Mon, 22 Mar 2021 00:49:45 GMT",
"version": "v1"
}
] | 2021-09-08 | [
[
"Watts",
"Gordon",
""
]
] | Array operations are one of the most concise ways of expressing common filtering and simple aggregation operations that is the hallmark of the first step of a particle physics analysis: selection, filtering, basic vector operations, and filling histograms. The High Luminosity run of the Large Hadron Collider (HL-LHC), scheduled to start in 2026, will require physicists to regularly skim datasets that are over a PB in size, and repeatedly run over datasets that are 100's of TB's - too big to fit in memory. Declarative programming techniques are a way of separating the intent of the physicist from the mechanics of finding the data, processing the data, and using distributed computing to process it efficiently that is required to extract the plot or data desired in a timely fashion. This paper describes a prototype library that provides a framework for different sub-systems to cooperate in producing this data, using an array-programming declarative interface. This prototype has a ServiceX data-delivery sub-system and an awkward array sub-system cooperating to generate requested data. The ServiceX system runs against ATLAS xAOD data. |
1510.01871 | Peter Van Den Besselaar | Peter van den Besselaar and Ulf Sandstrom | Does Quantity Make a Difference? The importance of publishing many
papers | Presented at ISSI 2015, Istanbul | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Do highly productive researchers have significantly higher probability to
produce top cited papers? Or does the increased productivity in science only
result in a sea of irrelevant papers as a perverse effect of competition and
the increased use of indicators for research evaluation and accountability
focus? We use a Swedish author disambiguated data set consisting of 48,000
researchers and their WoS-publications during the period 2008-2011 with
citations until 2014 to investigate the relation between productivity and
production of highly cited papers. As the analysis shows, quantity does make a
difference.
| [
{
"created": "Wed, 7 Oct 2015 09:39:50 GMT",
"version": "v1"
}
] | 2015-10-08 | [
[
"Besselaar",
"Peter van den",
""
],
[
"Sandstrom",
"Ulf",
""
]
] | Do highly productive researchers have significantly higher probability to produce top cited papers? Or does the increased productivity in science only result in a sea of irrelevant papers as a perverse effect of competition and the increased use of indicators for research evaluation and accountability focus? We use a Swedish author disambiguated data set consisting of 48,000 researchers and their WoS-publications during the period 2008-2011 with citations until 2014 to investigate the relation between productivity and production of highly cited papers. As the analysis shows, quantity does make a difference. |
1504.04244 | Pedro Henrique Juliano Nardelli | Pedro H. J. Nardelli, Hirley Alves, Carlos H. M. de Lima, Matti
Latva-aho | Throughput Maximization in Multi-Hop Wireless Networks under Secrecy
Constraint | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper analyzes the throughput of industrial communication networks under
a secrecy constraint. The proposed scenario is composed of sensors that measure
some relevant information of the plant that is first processed by an aggregator
node and then sent to the control unit. The sensor measurements, their
communication with the aggregator and the information processing are all
assumed perfect. To reach the control unit, the message may travel through
relay nodes, forming a multi-hop, wireless link. At every hop, eavesdropper
nodes attempt to acquire the messages transmitted through the legitimate link.
The communication design problem posed here is how to maximize the multi-hop
throughput from the aggregator to the control unit by finding the best
combination of relay positions (i.e. hop length: short or long) and coding
rates (i.e. high or low spectral efficiency) so that the secrecy constraint is
satisfied. Using a stochastic-geometry approach, we show that the optimal
choice of coding rate depends only on the path-loss exponent and is normally
high, while a greater number of shorter hops is preferable to a smaller number
of longer hops. For the scenarios of interest, we prove that the optimal
throughput subject to the secrecy constraint achieves the unconstrained optimal
performance if a feasible solution exists.
| [
{
"created": "Thu, 16 Apr 2015 14:12:54 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Dec 2015 11:16:23 GMT",
"version": "v2"
}
] | 2015-12-18 | [
[
"Nardelli",
"Pedro H. J.",
""
],
[
"Alves",
"Hirley",
""
],
[
"de Lima",
"Carlos H. M.",
""
],
[
"Latva-aho",
"Matti",
""
]
] | This paper analyzes the throughput of industrial communication networks under a secrecy constraint. The proposed scenario is composed of sensors that measure some relevant information of the plant that is first processed by an aggregator node and then sent to the control unit. The sensor measurements, their communication with the aggregator and the information processing are all assumed perfect. To reach the control unit, the message may travel through relay nodes, forming a multi-hop, wireless link. At every hop, eavesdropper nodes attempt to acquire the messages transmitted through the legitimate link. The communication design problem posed here is how to maximize the multi-hop throughput from the aggregator to the control unit by finding the best combination of relay positions (i.e. hop length: short or long) and coding rates (i.e. high or low spectral efficiency) so that the secrecy constraint is satisfied. Using a stochastic-geometry approach, we show that the optimal choice of coding rate depends only on the path-loss exponent and is normally high, while a greater number of shorter hops is preferable to a smaller number of longer hops. For the scenarios of interest, we prove that the optimal throughput subject to the secrecy constraint achieves the unconstrained optimal performance if a feasible solution exists. |
1910.11117 | Shubham Dokania | Shubham Dokania, Vasudev Singh | Graph Representation learning for Audio & Music genre Classification | null | null | null | null | cs.SD cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Music genre is arguably one of the most important and discriminative
information for music and audio content. Visual representation based approaches
have been explored on spectrograms for music genre classification. However,
lack of quality data and augmentation techniques makes it difficult to employ
deep learning techniques successfully. We discuss the application of graph
neural networks on such task due to their strong inductive bias, and show that
combination of CNN and GNN is able to achieve state-of-the-art results on
GTZAN, and AudioSet (Imbalanced Music) datasets. We also discuss the role of
Siamese Neural Networks as an analogue to GNNs for learning edge similarity
weights. Furthermore, we also perform visual analysis to understand the
field-of-view of our model into the spectrogram based on genre labels.
| [
{
"created": "Wed, 23 Oct 2019 13:59:23 GMT",
"version": "v1"
}
] | 2019-10-25 | [
[
"Dokania",
"Shubham",
""
],
[
"Singh",
"Vasudev",
""
]
] | Music genre is arguably one of the most important and discriminative pieces of information for music and audio content. Visual representation based approaches have been explored on spectrograms for music genre classification. However, lack of quality data and augmentation techniques makes it difficult to employ deep learning techniques successfully. We discuss the application of graph neural networks on such task due to their strong inductive bias, and show that combination of CNN and GNN is able to achieve state-of-the-art results on GTZAN, and AudioSet (Imbalanced Music) datasets. We also discuss the role of Siamese Neural Networks as an analogue to GNNs for learning edge similarity weights. Furthermore, we also perform visual analysis to understand the field-of-view of our model into the spectrogram based on genre labels. |
1811.12161 | Robert Kent | Christian Neuss and Robert E. Kent | Conceptual Analysis of Resource Meta-information | 17 pages, 3 figures, 8 tables, Third International World Wide Web
(WWW) Conference 1995. arXiv admin note: text overlap with arXiv:1810.07232 | Computer Networks and ISDN Systems 27(6): 973-984 (1995) | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It's ease of use and the availability of browsers for various platforms have
paved the way for the enormous popularity that the World Wide Web currently
enjoys. In the near future, by providing not only easy access to information,
but also means for conducting business transactions, the Web could form the
base technology for the information superhighway. In such a large distributed
information system, resource discovery becomes a critical problem.
Recent developments in resource discovery systems, such as Harvest and
Whois++, provide scalable mechanisms for the identification, location and
characterization of networked information resources based upon resource
meta-information. However, the Web's vast information space can only be handled
effectively, when resources are meaningfully classified into coherent
conceptual structures.
The automatic classification of resource meta-information is at the heart of
the WAVE system, which employs methods from the mathematical theory of concept
analysis to analyze and interactively explore the vast information space
defined by wide area resource discovery services. In this paper we discuss
these methods by interpreting various synoptic and summary interchange formats
for resource meta-information, such as the Harvest SOIF and the Whois++ urc, in
terms of basic ideas from concept analysis. In so doing, we advocate concept
analysis as a principled approach to effective resource discovery.
| [
{
"created": "Mon, 22 Oct 2018 21:33:03 GMT",
"version": "v1"
}
] | 2018-11-30 | [
[
"Neuss",
"Christian",
""
],
[
"Kent",
"Robert E.",
""
]
] | Its ease of use and the availability of browsers for various platforms have paved the way for the enormous popularity that the World Wide Web currently enjoys. In the near future, by providing not only easy access to information, but also means for conducting business transactions, the Web could form the base technology for the information superhighway. In such a large distributed information system, resource discovery becomes a critical problem. Recent developments in resource discovery systems, such as Harvest and Whois++, provide scalable mechanisms for the identification, location and characterization of networked information resources based upon resource meta-information. However, the Web's vast information space can only be handled effectively, when resources are meaningfully classified into coherent conceptual structures. The automatic classification of resource meta-information is at the heart of the WAVE system, which employs methods from the mathematical theory of concept analysis to analyze and interactively explore the vast information space defined by wide area resource discovery services. In this paper we discuss these methods by interpreting various synoptic and summary interchange formats for resource meta-information, such as the Harvest SOIF and the Whois++ urc, in terms of basic ideas from concept analysis. In so doing, we advocate concept analysis as a principled approach to effective resource discovery. |
1906.04982 | \'Etienne Andr\'e | \'Etienne Andr\'e, Beno\^it Delahaye and Paulin Fournier | Consistency in Parametric Interval Probabilistic Timed Automata | This is the author version of the manuscript of the same name
published in the Journal of Logical and Algebraic Methods in Programming.
This work is partially supported by the ANR national research program PACS
(ANR-14-CE28-0002) | null | 10.1016/j.jlamp.2019.04.007 | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new abstract formalism for probabilistic timed systems,
Parametric Interval Probabilistic Timed Automata, based on an extension of
Parametric Timed Automata and Interval Markov Chains. In this context, we
consider the consistency problem that amounts to deciding whether a given
specification admits at least one implementation. In the context of Interval
Probabilistic Timed Automata (with no timing parameters), we show that this
problem is decidable and propose a constructive algorithm for its resolution.
We show that the existence of timing parameter valuations ensuring consistency
is undecidable in the general context, but still exhibit a syntactic condition
on parameters to ensure decidability. We also propose procedures that resolve
both the consistency and the consistent reachability problems when the
parametric probabilistic zone graph is finite.
| [
{
"created": "Wed, 12 Jun 2019 07:40:35 GMT",
"version": "v1"
}
] | 2019-06-13 | [
[
"André",
"Étienne",
""
],
[
"Delahaye",
"Benoît",
""
],
[
"Fournier",
"Paulin",
""
]
] | We propose a new abstract formalism for probabilistic timed systems, Parametric Interval Probabilistic Timed Automata, based on an extension of Parametric Timed Automata and Interval Markov Chains. In this context, we consider the consistency problem that amounts to deciding whether a given specification admits at least one implementation. In the context of Interval Probabilistic Timed Automata (with no timing parameters), we show that this problem is decidable and propose a constructive algorithm for its resolution. We show that the existence of timing parameter valuations ensuring consistency is undecidable in the general context, but still exhibit a syntactic condition on parameters to ensure decidability. We also propose procedures that resolve both the consistency and the consistent reachability problems when the parametric probabilistic zone graph is finite. |
cs/0611076 | Ying Jun Zhang Ph.D. | Soung Chang Liew and Ying Jun Zhang | Proportional Fairness in Multi-channel Multi-rate Wireless Networks-Part
II: The Case of Time-Varying Channels | null | null | null | null | cs.PF cs.IT cs.NI math.IT | null | This is Part II of a two-part paper series that studies the use of the
proportional fairness (PF) utility function as the basis for capacity
allocation and scheduling in multi-channel multi-rate wireless networks. The
contributions of Part II are twofold. (i) First, we extend the problem
formulation, theoretical results, and algorithms to the case of time-varying
channels, where opportunistic capacity allocation and scheduling can be
exploited to improve system performance. We lay down the theoretical foundation
for optimization that "couples" the time-varying characteristic of channels
with the requirements of the underlying applications into one consideration. In
particular, the extent to which opportunistic optimization is possible is not
just a function of how fast the channel characteristics vary, but also a
function of the elasticity of the underlying applications for delayed capacity
allocation. (ii) Second, building upon our theoretical framework and results,
we study subcarrier allocation and scheduling in orthogonal frequency division
multiplexing (OFDM) cellular wireless networks. We introduce the concept of a
W-normalized Doppler frequency to capture the extent to which opportunistic
scheduling can be exploited to achieve throughput-fairness performance gain. We
show that a "look-back PF" scheduling can strike a good balance between system
throughput and fairness while taking the underlying application requirements
into account.
| [
{
"created": "Thu, 16 Nov 2006 03:14:40 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Feb 2008 04:23:32 GMT",
"version": "v2"
}
] | 2008-02-23 | [
[
"Liew",
"Soung Chang",
""
],
[
"Zhang",
"Ying Jun",
""
]
] | This is Part II of a two-part paper series that studies the use of the proportional fairness (PF) utility function as the basis for capacity allocation and scheduling in multi-channel multi-rate wireless networks. The contributions of Part II are twofold. (i) First, we extend the problem formulation, theoretical results, and algorithms to the case of time-varying channels, where opportunistic capacity allocation and scheduling can be exploited to improve system performance. We lay down the theoretical foundation for optimization that "couples" the time-varying characteristic of channels with the requirements of the underlying applications into one consideration. In particular, the extent to which opportunistic optimization is possible is not just a function of how fast the channel characteristics vary, but also a function of the elasticity of the underlying applications for delayed capacity allocation. (ii) Second, building upon our theoretical framework and results, we study subcarrier allocation and scheduling in orthogonal frequency division multiplexing (OFDM) cellular wireless networks. We introduce the concept of a W-normalized Doppler frequency to capture the extent to which opportunistic scheduling can be exploited to achieve throughput-fairness performance gain. We show that a "look-back PF" scheduling can strike a good balance between system throughput and fairness while taking the underlying application requirements into account. |
2009.09279 | Mohamed Wiem Mkaouer | Eman Abdullah AlOmar, Mohamed Wiem Mkaouer, Ali Ouni | Toward the Automatic Classification of Self-Affirmed Refactoring | null | null | null | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concept of Self-Affirmed Refactoring (SAR) was introduced to explore how
developers document their refactoring activities in commit messages, i.e.,
developers' explicit documentation of refactoring operations intentionally
introduced during a code change. In our previous study, we have manually
identified refactoring patterns and defined three main common quality
improvement categories, including internal quality attributes, external quality
attributes, and code smells, by only considering refactoring-related commits.
However, this approach heavily depends on the manual inspection of commit
messages. In this paper, we propose a two-step approach to first identify
whether a commit describes developer-related refactoring events, then to
classify it according to the refactoring common quality improvement categories.
Specifically, we combine the N-Gram TF-IDF feature selection with binary and
multiclass classifiers to build a new model to automate the classification of
refactorings based on their quality improvement categories. We challenge our
model using a total of 2,867 commit messages extracted from well-engineered
open-source Java projects. Our findings show that (1) our model is able to
accurately classify SAR commits, outperforming the pattern-based and random
classifier approaches, and allowing the discovery of 40 more relevant SAR
patterns, and (2) our model reaches an F-measure of up to 90% even with a
relatively small training dataset.
| [
{
"created": "Sat, 19 Sep 2020 18:35:21 GMT",
"version": "v1"
}
] | 2020-09-22 | [
[
"AlOmar",
"Eman Abdullah",
""
],
[
"Mkaouer",
"Mohamed Wiem",
""
],
[
"Ouni",
"Ali",
""
]
] | The concept of Self-Affirmed Refactoring (SAR) was introduced to explore how developers document their refactoring activities in commit messages, i.e., developers' explicit documentation of refactoring operations intentionally introduced during a code change. In our previous study, we have manually identified refactoring patterns and defined three main common quality improvement categories, including internal quality attributes, external quality attributes, and code smells, by only considering refactoring-related commits. However, this approach heavily depends on the manual inspection of commit messages. In this paper, we propose a two-step approach to first identify whether a commit describes developer-related refactoring events, then to classify it according to the refactoring common quality improvement categories. Specifically, we combine the N-Gram TF-IDF feature selection with binary and multiclass classifiers to build a new model to automate the classification of refactorings based on their quality improvement categories. We challenge our model using a total of 2,867 commit messages extracted from well-engineered open-source Java projects. Our findings show that (1) our model is able to accurately classify SAR commits, outperforming the pattern-based and random classifier approaches, and allowing the discovery of 40 more relevant SAR patterns, and (2) our model reaches an F-measure of up to 90% even with a relatively small training dataset. |
1711.07684 | Mukul Bhutani | Mukul Bhutani and Bamdev Mishra | A two-dimensional decomposition approach for matrix completion through
gossip | Appeared in the Emergent Communication Workshop at NIPS 2017 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Factoring a matrix into two low rank matrices is at the heart of many
problems. The problem of matrix completion especially uses it to decompose a
sparse matrix into two non sparse, low rank matrices which can then be used to
predict unknown entries of the original matrix. We present a scalable and
decentralized approach in which instead of learning two factors for the
original input matrix, we decompose the original matrix into a grid of blocks,
each of whose factors can be individually learned just by communicating
(gossiping) with neighboring blocks. This eliminates any need for a central
server. We show that our algorithm performs well on both synthetic and real
datasets.
| [
{
"created": "Tue, 21 Nov 2017 09:21:13 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jan 2018 12:41:33 GMT",
"version": "v2"
}
] | 2018-01-12 | [
[
"Bhutani",
"Mukul",
""
],
[
"Mishra",
"Bamdev",
""
]
] | Factoring a matrix into two low rank matrices is at the heart of many problems. The problem of matrix completion especially uses it to decompose a sparse matrix into two non sparse, low rank matrices which can then be used to predict unknown entries of the original matrix. We present a scalable and decentralized approach in which instead of learning two factors for the original input matrix, we decompose the original matrix into a grid of blocks, each of whose factors can be individually learned just by communicating (gossiping) with neighboring blocks. This eliminates any need for a central server. We show that our algorithm performs well on both synthetic and real datasets. |
2007.14672 | Muzammal Naseer | Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih
Porikli | Stylized Adversarial Defense | IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Convolution Neural Networks (CNNs) can easily be fooled by subtle,
imperceptible changes to the input images. To address this vulnerability,
adversarial training creates perturbation patterns and includes them in the
training set to robustify the model. In contrast to existing adversarial
training methods that only use class-boundary information (e.g., using a
cross-entropy loss), we propose to exploit additional information from the
feature space to craft stronger adversaries that are in turn used to learn a
robust model. Specifically, we use the style and content information of the
target sample from another class, alongside its class-boundary information to
create adversarial perturbations. We apply our proposed multi-task objective in
a deeply supervised manner, extracting multi-scale feature knowledge to create
maximally separating adversaries. Subsequently, we propose a max-margin
adversarial training approach that minimizes the distance between source image
and its adversary and maximizes the distance between the adversary and the
target image. Our adversarial training approach demonstrates strong robustness
compared to state-of-the-art defenses, generalizes well to naturally occurring
corruptions and data distributional shifts, and retains the model accuracy on
clean examples.
| [
{
"created": "Wed, 29 Jul 2020 08:38:10 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Sep 2022 14:37:43 GMT",
"version": "v2"
}
] | 2022-09-19 | [
[
"Naseer",
"Muzammal",
""
],
[
"Khan",
"Salman",
""
],
[
"Hayat",
"Munawar",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Porikli",
"Fatih",
""
]
] | Deep Convolution Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the input images. To address this vulnerability, adversarial training creates perturbation patterns and includes them in the training set to robustify the model. In contrast to existing adversarial training methods that only use class-boundary information (e.g., using a cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries that are in turn used to learn a robust model. Specifically, we use the style and content information of the target sample from another class, alongside its class-boundary information to create adversarial perturbations. We apply our proposed multi-task objective in a deeply supervised manner, extracting multi-scale feature knowledge to create maximally separating adversaries. Subsequently, we propose a max-margin adversarial training approach that minimizes the distance between source image and its adversary and maximizes the distance between the adversary and the target image. Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses, generalizes well to naturally occurring corruptions and data distributional shifts, and retains the model accuracy on clean examples. |
1912.11673 | Wensheng Gan | Wensheng Gan, Jerry Chun-Wei Lin, Jiexiong Zhang and Philip S. Yu | Utility Mining Across Multi-Sequences with Individualized Thresholds | Accepted by ACM Trans. on Data Science, 29 pages, 6 figures | ACM Transactions on Data Science, 2020 | 10.1145/3362070 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Utility-oriented pattern mining has become an emerging topic since it can
reveal high-utility patterns (e.g., itemsets, rules, sequences) from different
types of data, which provides more information than the traditional
frequent/confident-based pattern mining models. The utilities of various items
are not exactly equal in realistic situations; each item has its own utility or
importance. In general, a user considers a uniform minimum utility (minutil)
threshold to identify the set of high-utility sequential patterns (HUSPs). This
fails to find interesting patterns when the minutil is set extremely high or
low. We first design a new utility mining framework, namely USPT, for
mining high-Utility Sequential Patterns across multi-sequences with
individualized Thresholds. Each item in the designed framework has its own
specified minimum utility threshold. Based on the lexicographic-sequential tree
and the utility-array structure, the USPT framework is presented to efficiently
discover the HUSPs. With the upper-bounds on utility, several pruning
strategies are developed to prune the unpromising candidates early in the
search space. Several experiments are conducted on both real-life and synthetic
datasets to show the performance of the designed USPT algorithm, and the
results showed that USPT could achieve good effectiveness and efficiency for
mining HUSPs with individualized minimum utility thresholds.
| [
{
"created": "Wed, 25 Dec 2019 14:06:02 GMT",
"version": "v1"
}
] | 2021-04-01 | [
[
"Gan",
"Wensheng",
""
],
[
"Lin",
"Jerry Chun-Wei",
""
],
[
"Zhang",
"Jiexiong",
""
],
[
"Yu",
"Philip S.",
""
]
] | Utility-oriented pattern mining has become an emerging topic since it can reveal high-utility patterns (e.g., itemsets, rules, sequences) from different types of data, which provides more information than the traditional frequent/confident-based pattern mining models. The utilities of various items are not exactly equal in realistic situations; each item has its own utility or importance. In general, a user considers a uniform minimum utility (minutil) threshold to identify the set of high-utility sequential patterns (HUSPs). This fails to find interesting patterns when the minutil is set extremely high or low. We first design a new utility mining framework, namely USPT, for mining high-Utility Sequential Patterns across multi-sequences with individualized Thresholds. Each item in the designed framework has its own specified minimum utility threshold. Based on the lexicographic-sequential tree and the utility-array structure, the USPT framework is presented to efficiently discover the HUSPs. With the upper-bounds on utility, several pruning strategies are developed to prune the unpromising candidates early in the search space. Several experiments are conducted on both real-life and synthetic datasets to show the performance of the designed USPT algorithm, and the results showed that USPT could achieve good effectiveness and efficiency for mining HUSPs with individualized minimum utility thresholds. |
2204.08069 | Huili Chen | Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman
Avestimehr, Tao Zhang | Self-Aware Personalized Federated Learning | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of personalized federated learning (FL), the critical
challenge is to balance local model improvement and global model tuning when
the personal and global objectives may not be exactly aligned. Inspired by
Bayesian hierarchical models, we develop a self-aware personalized FL method
where each client can automatically balance the training of its local personal
model and the global model that implicitly contributes to other clients'
training. Such a balance is derived from the inter-client and intra-client
uncertainty quantification. A larger inter-client variation implies more
personalization is needed. Correspondingly, our method uses uncertainty-driven
local training steps and aggregation rule instead of conventional local
fine-tuning and sample size-based aggregation. With experimental studies on
synthetic data, Amazon Alexa audio data, and public datasets such as MNIST,
FEMNIST, CIFAR10, and Sent140, we show that our proposed method can achieve
significantly improved personalization performance compared with the existing
counterparts.
| [
{
"created": "Sun, 17 Apr 2022 19:02:25 GMT",
"version": "v1"
}
] | 2022-04-19 | [
[
"Chen",
"Huili",
""
],
[
"Ding",
"Jie",
""
],
[
"Tramel",
"Eric",
""
],
[
"Wu",
"Shuang",
""
],
[
"Sahu",
"Anit Kumar",
""
],
[
"Avestimehr",
"Salman",
""
],
[
"Zhang",
"Tao",
""
]
] | In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned. Inspired by Bayesian hierarchical models, we develop a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients' training. Such a balance is derived from the inter-client and intra-client uncertainty quantification. A larger inter-client variation implies more personalization is needed. Correspondingly, our method uses uncertainty-driven local training steps and aggregation rule instead of conventional local fine-tuning and sample size-based aggregation. With experimental studies on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140, we show that our proposed method can achieve significantly improved personalization performance compared with the existing counterparts. |
1611.09030 | Samin Aref | Samin Aref, Andrew J. Mason, and Mark C. Wilson | A modelling and computational study of the frustration index in signed
networks | 25 pages, 4 figures, 6 tables Old title: An exact method for
computing the frustration index in signed networks using binary programming.
The current authors have published a book chapter under the title "Computing
the Line Index of Balance Using Integer Programming Optimisation" that can be
found in ArXiv:1710.09876. This paper is a continuation of the same line of
research with a different focus | null | null | null | cs.SI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computing the frustration index of a signed graph is a key step toward
solving problems in many fields including social networks, political science,
physics, chemistry, and biology. The frustration index determines the distance
of a network from a state of total structural balance. Although the definition
of the frustration index goes back to the 1950's, its exact algorithmic
computation, which is closely related to classic NP-hard graph problems, has
only become a focus in recent years. We develop three new binary linear
programming models to compute the frustration index exactly and efficiently as
the solution to a global optimisation problem. Solving the models with
prioritised branching and valid inequalities in Gurobi, we can compute the
frustration index of real signed networks with over 15000 edges in less than a
minute on inexpensive hardware. We provide extensive performance analysis for
both random and real signed networks and show that our models outperform all
existing approaches by large factors. Based on solve time, algorithm output,
and effective branching factor we highlight the superiority of our models to
both exact and heuristic methods in the literature.
| [
{
"created": "Mon, 28 Nov 2016 09:10:06 GMT",
"version": "v1"
},
{
"created": "Fri, 26 May 2017 06:54:54 GMT",
"version": "v2"
},
{
"created": "Thu, 3 May 2018 12:18:03 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Aug 2019 17:48:48 GMT",
"version": "v4"
}
] | 2019-08-27 | [
[
"Aref",
"Samin",
""
],
[
"Mason",
"Andrew J.",
""
],
[
"Wilson",
"Mark C.",
""
]
] | Computing the frustration index of a signed graph is a key step toward solving problems in many fields including social networks, political science, physics, chemistry, and biology. The frustration index determines the distance of a network from a state of total structural balance. Although the definition of the frustration index goes back to the 1950's, its exact algorithmic computation, which is closely related to classic NP-hard graph problems, has only become a focus in recent years. We develop three new binary linear programming models to compute the frustration index exactly and efficiently as the solution to a global optimisation problem. Solving the models with prioritised branching and valid inequalities in Gurobi, we can compute the frustration index of real signed networks with over 15000 edges in less than a minute on inexpensive hardware. We provide extensive performance analysis for both random and real signed networks and show that our models outperform all existing approaches by large factors. Based on solve time, algorithm output, and effective branching factor we highlight the superiority of our models to both exact and heuristic methods in the literature. |
2307.02784 | Seyoung Ahn | Seyoung Ahn, Soohyeong Kim, Yongseok Kwon, Joohan Park, Jiseung Youn,
Sunghyun Cho | On the Spatial-Wideband Effects in Millimeter-Wave Cell-Free Massive
MIMO | null | null | null | null | cs.IT cs.NI eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the spatial-wideband effects in cell-free
massive MIMO (CF-mMIMO) systems in mmWave bands. The utilization of mmWave
frequencies brings challenges such as signal attenuation and the need for
denser networks like ultra-dense networks (UDN) to maintain communication
performance. CF-mMIMO is introduced as a solution, where distributed access
points (APs) transmit signals to a central processing unit (CPU) for joint
processing. CF-mMIMO offers advantages in reducing non-line-of-sight (NLOS)
conditions and overcoming signal blockage. We investigate the synchronization
problem in CF-mMIMO due to time delays between APs. We propose a minimum
cyclic prefix length to mitigate inter-symbol interference (ISI) in OFDM
systems. Furthermore, the spatial correlations of channel responses are
analyzed in the frequency-phase domain. The impact of these correlations on
system performance is examined. The findings contribute to improving the
performance of CF-mMIMO systems and enhancing the effective utilization of
mmWave communication.
| [
{
"created": "Thu, 6 Jul 2023 05:22:37 GMT",
"version": "v1"
}
] | 2023-07-07 | [
[
"Ahn",
"Seyoung",
""
],
[
"Kim",
"Soohyeong",
""
],
[
"Kwon",
"Yongseok",
""
],
[
"Park",
"Joohan",
""
],
[
"Youn",
"Jiseung",
""
],
[
"Cho",
"Sunghyun",
""
]
] | In this paper, we investigate the spatial-wideband effects in cell-free massive MIMO (CF-mMIMO) systems in mmWave bands. The utilization of mmWave frequencies brings challenges such as signal attenuation and the need for denser networks like ultra-dense networks (UDN) to maintain communication performance. CF-mMIMO is introduced as a solution, where distributed access points (APs) transmit signals to a central processing unit (CPU) for joint processing. CF-mMIMO offers advantages in reducing non-line-of-sight (NLOS) conditions and overcoming signal blockage. We investigate the synchronization problem in CF-mMIMO due to time delays between APs. We propose a minimum cyclic prefix length to mitigate inter-symbol interference (ISI) in OFDM systems. Furthermore, the spatial correlations of channel responses are analyzed in the frequency-phase domain. The impact of these correlations on system performance is examined. The findings contribute to improving the performance of CF-mMIMO systems and enhancing the effective utilization of mmWave communication. |
2204.09188 | Nirmal Shende | Nirmal V. Shende and Aaron B. Wagner | Functional Covering of Point Processes | null | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | We introduce a new distortion measure for point processes called
functional-covering distortion. It is inspired by intensity theory and is
related to both the covering of point processes and logarithmic loss
distortion. We obtain the distortion-rate function with feedforward under this
distortion measure for a large class of point processes. For Poisson processes,
the rate-distortion function is obtained under a general condition called
constrained functional-covering distortion, of which both covering and
functional-covering are special cases. Also for Poisson processes, we
characterize the rate-distortion region for a two-encoder CEO problem and show
that feedforward does not enlarge this region.
| [
{
"created": "Wed, 20 Apr 2022 02:30:18 GMT",
"version": "v1"
}
] | 2022-04-21 | [
[
"Shende",
"Nirmal V.",
""
],
[
"Wagner",
"Aaron B.",
""
]
] | We introduce a new distortion measure for point processes called functional-covering distortion. It is inspired by intensity theory and is related to both the covering of point processes and logarithmic loss distortion. We obtain the distortion-rate function with feedforward under this distortion measure for a large class of point processes. For Poisson processes, the rate-distortion function is obtained under a general condition called constrained functional-covering distortion, of which both covering and functional-covering are special cases. Also for Poisson processes, we characterize the rate-distortion region for a two-encoder CEO problem and show that feedforward does not enlarge this region. |
2002.02701 | Henry Wilde | Henry Wilde, Vincent Knight, Jonathan Gillard | A novel initialisation based on hospital-resident assignment for the
k-modes algorithm | 23 pages, 11 figures (31 panels) | null | null | null | cs.LG cs.GT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new way of selecting an initial solution for the
k-modes algorithm that allows for a notion of mathematical fairness and a
leverage of the data that the common initialisations from literature do not.
The method, which utilises the Hospital-Resident Assignment Problem to find the
set of initial cluster centroids, is compared with the current initialisations
on both benchmark datasets and a body of newly generated artificial datasets.
Based on this analysis, the proposed method is shown to outperform the other
initialisations in the majority of cases, especially when the number of
clusters is optimised. In addition, we find that our method outperforms the
leading established method specifically for low-density data.
| [
{
"created": "Fri, 7 Feb 2020 10:20:49 GMT",
"version": "v1"
}
] | 2020-02-10 | [
[
"Wilde",
"Henry",
""
],
[
"Knight",
"Vincent",
""
],
[
"Gillard",
"Jonathan",
""
]
] | This paper presents a new way of selecting an initial solution for the k-modes algorithm that allows for a notion of mathematical fairness and a leverage of the data that the common initialisations from literature do not. The method, which utilises the Hospital-Resident Assignment Problem to find the set of initial cluster centroids, is compared with the current initialisations on both benchmark datasets and a body of newly generated artificial datasets. Based on this analysis, the proposed method is shown to outperform the other initialisations in the majority of cases, especially when the number of clusters is optimised. In addition, we find that our method outperforms the leading established method specifically for low-density data. |
2111.10090 | Huashan Chen | Huashan Chen, Richard B. Garcia-Lebron, Zheyuan Sun, Jin-Hee Cho, and
Shouhuai Xu | Quantifying Cybersecurity Effectiveness of Software Diversity | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deployment of monoculture software stacks can cause devastating damage
even by a single exploit against a single vulnerability. Inspired by the
resilience benefit of biological diversity, the concept of software diversity
has been proposed in the security domain. Although it is intuitive that
software diversity may enhance security, its effectiveness has not been
quantitatively investigated. Currently, no theoretical or empirical study has
been conducted to measure the security effectiveness of network diversity. In
this paper, we take a first step towards ultimately tackling the problem. We
propose a systematic framework that can model and quantify the security
effectiveness of network diversity. We conduct simulations to demonstrate the
usefulness of the framework. In contrast to the intuitive belief, we show that
diversity does not necessarily improve security from a whole-network
perspective. The root cause of this phenomenon is that the degree of
vulnerability in diversified software implementations plays a critical role in
determining the security effectiveness of software diversity.
| [
{
"created": "Fri, 19 Nov 2021 08:28:27 GMT",
"version": "v1"
}
] | 2021-11-22 | [
[
"Chen",
"Huashan",
""
],
[
"Garcia-Lebron",
"Richard B.",
""
],
[
"Sun",
"Zheyuan",
""
],
[
"Cho",
"Jin-Hee",
""
],
[
"Xu",
"Shouhuai",
""
]
] | The deployment of monoculture software stacks can cause devastating damage even by a single exploit against a single vulnerability. Inspired by the resilience benefit of biological diversity, the concept of software diversity has been proposed in the security domain. Although it is intuitive that software diversity may enhance security, its effectiveness has not been quantitatively investigated. Currently, no theoretical or empirical study has been conducted to measure the security effectiveness of network diversity. In this paper, we take a first step towards ultimately tackling the problem. We propose a systematic framework that can model and quantify the security effectiveness of network diversity. We conduct simulations to demonstrate the usefulness of the framework. In contrast to the intuitive belief, we show that diversity does not necessarily improve security from a whole-network perspective. The root cause of this phenomenon is that the degree of vulnerability in diversified software implementations plays a critical role in determining the security effectiveness of software diversity. |
2302.14306 | Srikanth Malla | Srikanth Malla, Yi-Ting Chen | CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and
Feature Mapping | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Point cloud data plays an essential role in robotics and self-driving
applications. Yet, annotating point cloud data is time-consuming and
nontrivial, even though such annotations enable learning discriminative 3D
representations that empower downstream tasks, such as classification and
segmentation. Recently,
contrastive learning-based frameworks have shown promising results for learning
3D representations in a self-supervised manner. However, existing contrastive
learning methods cannot precisely encode and associate structural features and
search the higher dimensional augmentation space efficiently. In this paper, we
present CLR-GAM, a novel contrastive learning-based framework with Guided
Augmentation (GA) for efficient dynamic exploration strategy and Guided Feature
Mapping (GFM) for similar structural feature association between augmented
point clouds. We empirically demonstrate that the proposed approach achieves
state-of-the-art performance on both simulated and real-world 3D point cloud
datasets for three different downstream tasks, i.e., 3D point cloud
classification, few-shot learning, and object part segmentation.
| [
{
"created": "Tue, 28 Feb 2023 04:38:52 GMT",
"version": "v1"
}
] | 2023-03-01 | [
[
"Malla",
"Srikanth",
""
],
[
"Chen",
"Yi-Ting",
""
]
] | Point cloud data plays an essential role in robotics and self-driving applications. Yet, annotating point cloud data is time-consuming and nontrivial while they enable learning discriminative 3D representations that empower downstream tasks, such as classification and segmentation. Recently, contrastive learning-based frameworks have shown promising results for learning 3D representations in a self-supervised manner. However, existing contrastive learning methods cannot precisely encode and associate structural features and search the higher dimensional augmentation space efficiently. In this paper, we present CLR-GAM, a novel contrastive learning-based framework with Guided Augmentation (GA) for efficient dynamic exploration strategy and Guided Feature Mapping (GFM) for similar structural feature association between augmented point clouds. We empirically demonstrate that the proposed approach achieves state-of-the-art performance on both simulated and real-world 3D point cloud datasets for three different downstream tasks, i.e., 3D point cloud classification, few-shot learning, and object part segmentation. |
2309.05477 | Tim Bakker | Tim Bakker, Herke van Hoof, Max Welling | Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes | Accepted at ECML 2023 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pool-based active learning (AL) is a promising technology for increasing
data-efficiency of machine learning models. However, surveys show that
performance of recent AL methods is very sensitive to the choice of dataset and
training setting, making them unsuitable for general application. In order to
tackle this problem, the field Learning Active Learning (LAL) suggests
learning the active learning strategy itself, allowing it to adapt to the
given setting.
In this work, we propose a novel LAL method for classification that exploits
symmetry and independence properties of the active learning problem with an
Attentive Conditional Neural Process model. Our approach is based on learning
from a myopic oracle, which gives our model the ability to adapt to
non-standard objectives, such as those that do not equally weight the error on
all data points. We experimentally verify that our Neural Process model
outperforms a variety of baselines in these settings. Finally, our experiments
show that our model exhibits a tendency towards improved stability to changing
datasets. However, performance is sensitive to the choice of classifier, and
more work is necessary to reduce the performance gap with the myopic oracle
and to improve scalability. We present our work as a proof-of-concept for LAL on
nonstandard objectives and hope our analysis and modelling considerations
inspire future LAL work.
| [
{
"created": "Mon, 11 Sep 2023 14:16:37 GMT",
"version": "v1"
}
] | 2023-09-12 | [
[
"Bakker",
"Tim",
""
],
[
"van Hoof",
"Herke",
""
],
[
"Welling",
"Max",
""
]
] | Pool-based active learning (AL) is a promising technology for increasing data-efficiency of machine learning models. However, surveys show that performance of recent AL methods is very sensitive to the choice of dataset and training setting, making them unsuitable for general application. In order to tackle this problem, the field Learning Active Learning (LAL) suggests to learn the active learning strategy itself, allowing it to adapt to the given setting. In this work, we propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem with an Attentive Conditional Neural Process model. Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives, such as those that do not equally weight the error on all data points. We experimentally verify that our Neural Process model outperforms a variety of baselines in these settings. Finally, our experiments show that our model exhibits a tendency towards improved stability to changing datasets. However, performance is sensitive to choice of classifier and more work is necessary to reduce the performance the gap with the myopic oracle and to improve scalability. We present our work as a proof-of-concept for LAL on nonstandard objectives and hope our analysis and modelling considerations inspire future LAL work. |
1808.08349 | Aron Laszka | Aron Laszka, Waseem Abbas, Yevgeniy Vorobeychik, and Xenofon
Koutsoukos | Detection and Mitigation of Attacks on Transportation Networks as a
Multi-Stage Security Game | null | null | null | null | cs.CR cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, state-of-the-art traffic-control devices have evolved from
standalone hardware to networked smart devices. Smart traffic control enables
operators to decrease traffic congestion and environmental impact by acquiring
real-time traffic data and changing traffic signals from fixed to adaptive
schedules. However, these capabilities have inadvertently exposed traffic
control to a wide range of cyber-attacks, which adversaries can easily mount
through wireless networks or even through the Internet. Indeed, recent studies
have found that a large number of traffic signals that are deployed in practice
suffer from exploitable vulnerabilities, which adversaries may use to take
control of the devices. Thanks to the hardware-based failsafes that most
devices employ, adversaries cannot cause traffic accidents directly by setting
compromised signals to dangerous configurations. Nonetheless, an adversary
could cause disastrous traffic congestion by changing the schedule of
compromised traffic signals, thereby effectively crippling the transportation
network. To provide theoretical foundations for the protection of
transportation networks from these attacks, we introduce a game-theoretic model
of launching, detecting, and mitigating attacks that tamper with traffic-signal
schedules. We show that finding optimal strategies is a computationally
challenging problem, and we propose efficient heuristic algorithms for finding
near optimal strategies. We also introduce a Gaussian-process based anomaly
detector, which can alert operators to ongoing attacks. Finally, we evaluate
our algorithms and the proposed detector using numerical experiments based on
the SUMO traffic simulator.
| [
{
"created": "Sat, 25 Aug 2018 03:19:20 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2019 18:03:48 GMT",
"version": "v2"
}
] | 2019-08-06 | [
[
"Laszka",
"Aron",
""
],
[
"Abbas",
"Waseem",
""
],
[
"Vorobeychik",
"Yevgeniy",
""
],
[
"Koutsoukos",
"Xenofon",
""
]
] | In recent years, state-of-the-art traffic-control devices have evolved from standalone hardware to networked smart devices. Smart traffic control enables operators to decrease traffic congestion and environmental impact by acquiring real-time traffic data and changing traffic signals from fixed to adaptive schedules. However, these capabilities have inadvertently exposed traffic control to a wide range of cyber-attacks, which adversaries can easily mount through wireless networks or even through the Internet. Indeed, recent studies have found that a large number of traffic signals that are deployed in practice suffer from exploitable vulnerabilities, which adversaries may use to take control of the devices. Thanks to the hardware-based failsafes that most devices employ, adversaries cannot cause traffic accidents directly by setting compromised signals to dangerous configurations. Nonetheless, an adversary could cause disastrous traffic congestion by changing the schedule of compromised traffic signals, thereby effectively crippling the transportation network. To provide theoretical foundations for the protection of transportation networks from these attacks, we introduce a game-theoretic model of launching, detecting, and mitigating attacks that tamper with traffic-signal schedules. We show that finding optimal strategies is a computationally challenging problem, and we propose efficient heuristic algorithms for finding near optimal strategies. We also introduce a Gaussian-process based anomaly detector, which can alert operators to ongoing attacks. Finally, we evaluate our algorithms and the proposed detector using numerical experiments based on the SUMO traffic simulator. |
1402.5029 | Nicolas Emilio Bordenabe | Nicol\'as E. Bordenabe, Konstantinos Chatzikokolakis, Catuscia
Palamidessi | Optimal Geo-Indistinguishable Mechanisms for Location Privacy | 13 pages | null | 10.1145/2660267.2660345 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the geo-indistinguishability approach to location privacy, and
the trade-off with respect to utility. We show that, given a desired degree of
geo-indistinguishability, it is possible to construct a mechanism that
minimizes the service quality loss, using linear programming techniques. In
addition we show that, under certain conditions, such a mechanism also provides
optimal privacy in the sense of Shokri et al. Furthermore, we propose a method
to reduce the number of constraints of the linear program from cubic to
quadratic, maintaining the privacy guarantees and without affecting
significantly the utility of the generated mechanism. This reduces considerably
the time required to solve the linear program, thus enlarging significantly the
location sets for which the optimal mechanisms can be computed.
| [
{
"created": "Thu, 20 Feb 2014 15:21:42 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Aug 2014 18:43:33 GMT",
"version": "v2"
},
{
"created": "Sun, 24 Aug 2014 22:11:45 GMT",
"version": "v3"
}
] | 2014-08-26 | [
[
"Bordenabe",
"Nicolás E.",
""
],
[
"Chatzikokolakis",
"Konstantinos",
""
],
[
"Palamidessi",
"Catuscia",
""
]
] | We consider the geo-indistinguishability approach to location privacy, and the trade-off with respect to utility. We show that, given a desired degree of geo-indistinguishability, it is possible to construct a mechanism that minimizes the service quality loss, using linear programming techniques. In addition we show that, under certain conditions, such mechanism also provides optimal privacy in the sense of Shokri et al. Furthermore, we propose a method to reduce the number of constraints of the linear program from cubic to quadratic, maintaining the privacy guarantees and without affecting significantly the utility of the generated mechanism. This reduces considerably the time required to solve the linear program, thus enlarging significantly the location sets for which the optimal mechanisms can be computed. |
2308.06657 | Sidong Feng | Sidong Feng, Haochuan Lu, Ting Xiong, Yuetang Deng, Chunyang Chen | Towards Efficient Record and Replay: A Case Study in WeChat | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | WeChat, a widely-used messenger app boasting over 1 billion monthly active
users, requires effective app quality assurance for its complex features.
Record-and-replay tools are crucial in achieving this goal. Despite the
extensive development of these tools, the impact of waiting time between replay
events has been largely overlooked. On one hand, a long waiting time for
executing replay events on fully-rendered GUIs slows down the process. On the
other hand, a short waiting time can lead to events executing on
partially-rendered GUIs, negatively affecting replay effectiveness. An optimal
waiting time should strike a balance between effectiveness and efficiency. We
introduce WeReplay, a lightweight image-based approach that dynamically adjusts
inter-event time based on the GUI rendering state. Given the real-time
streaming on the GUI, WeReplay employs a deep learning model to infer the
rendering state and synchronize with the replaying tool, scheduling the next
event when the GUI is fully rendered. Our evaluation shows that our model
achieves 92.1% precision and 93.3% recall in discerning GUI rendering states in
the WeChat app. Through assessing the performance in replaying 23 common WeChat
usage scenarios, WeReplay successfully replays all scenarios on the same and
different devices more efficiently than the state-of-the-practice baselines.
| [
{
"created": "Sun, 13 Aug 2023 01:02:00 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Aug 2023 09:14:35 GMT",
"version": "v2"
}
] | 2023-08-28 | [
[
"Feng",
"Sidong",
""
],
[
"Lu",
"Haochuan",
""
],
[
"Xiong",
"Ting",
""
],
[
"Deng",
"Yuetang",
""
],
[
"Chen",
"Chunyang",
""
]
] | WeChat, a widely-used messenger app boasting over 1 billion monthly active users, requires effective app quality assurance for its complex features. Record-and-replay tools are crucial in achieving this goal. Despite the extensive development of these tools, the impact of waiting time between replay events has been largely overlooked. On one hand, a long waiting time for executing replay events on fully-rendered GUIs slows down the process. On the other hand, a short waiting time can lead to events executing on partially-rendered GUIs, negatively affecting replay effectiveness. An optimal waiting time should strike a balance between effectiveness and efficiency. We introduce WeReplay, a lightweight image-based approach that dynamically adjusts inter-event time based on the GUI rendering state. Given the real-time streaming on the GUI, WeReplay employs a deep learning model to infer the rendering state and synchronize with the replaying tool, scheduling the next event when the GUI is fully rendered. Our evaluation shows that our model achieves 92.1% precision and 93.3% recall in discerning GUI rendering states in the WeChat app. Through assessing the performance in replaying 23 common WeChat usage scenarios, WeReplay successfully replays all scenarios on the same and different devices more efficiently than the state-of-the-practice baselines. |
2402.12930 | Nils Philipp Walter | Sascha Xu, Nils Philipp Walter, Janis Kalofolias, Jilles Vreeken | Learning Exceptional Subgroups by End-to-End Maximizing KL-divergence | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Finding and describing sub-populations that are exceptional regarding a
target property has important applications in many scientific disciplines, from
identifying disadvantaged demographic groups in census data to finding
conductive molecules within gold nanoparticles. Current approaches to finding
such subgroups require pre-discretized predictive variables, do not permit
non-trivial target distributions, do not scale to large datasets, and struggle
to find diverse results.
To address these limitations, we propose Syflow, an end-to-end optimizable
approach in which we leverage normalizing flows to model arbitrary target
distributions, and introduce a novel neural layer that results in easily
interpretable subgroup descriptions. We demonstrate on synthetic and real-world
data, including a case study, that Syflow reliably finds highly exceptional
subgroups accompanied by insightful descriptions.
| [
{
"created": "Tue, 20 Feb 2024 11:29:57 GMT",
"version": "v1"
}
] | 2024-02-21 | [
[
"Xu",
"Sascha",
""
],
[
"Walter",
"Nils Philipp",
""
],
[
"Kalofolias",
"Janis",
""
],
[
"Vreeken",
"Jilles",
""
]
] | Finding and describing sub-populations that are exceptional regarding a target property has important applications in many scientific disciplines, from identifying disadvantaged demographic groups in census data to finding conductive molecules within gold nanoparticles. Current approaches to finding such subgroups require pre-discretized predictive variables, do not permit non-trivial target distributions, do not scale to large datasets, and struggle to find diverse results. To address these limitations, we propose Syflow, an end-to-end optimizable approach in which we leverage normalizing flows to model arbitrary target distributions, and introduce a novel neural layer that results in easily interpretable subgroup descriptions. We demonstrate on synthetic and real-world data, including a case study, that Syflow reliably finds highly exceptional subgroups accompanied by insightful descriptions. |
cs/0703045 | Mehmet Ak\c{c}akaya | Mehmet Ak\c{c}akaya and Vahid Tarokh | Performance Bounds on Sparse Representations Using Redundant Frames | 8 pages, 1 figure, Submitted to IEEE Transactions on Signal
Processing | null | null | null | cs.IT math.IT | null | We consider approximations of signals by the elements of a frame in a complex
vector space of dimension $N$ and formulate both the noiseless and the noisy
sparse representation problems. The noiseless representation problem is to find
sparse representations of a signal $\mathbf{r}$ given that such representations
exist. In this case, we explicitly construct a frame, referred to as the
Vandermonde frame, for which the noiseless sparse representation problem can be
solved uniquely using $O(N^2)$ operations, as long as the number of non-zero
coefficients in the sparse representation of $\mathbf{r}$ is $\epsilon N$ for
some $0 \le \epsilon \le 0.5$, thus improving on a result of Candes and Tao
\cite{Candes-Tao}. We also show that $\epsilon \le 0.5$ cannot be relaxed
without violating uniqueness.
The noisy sparse representation problem is to find sparse representations of
a signal $\mathbf{r}$ satisfying a distortion criterion. In this case, we
establish a lower bound on the trade-off between the sparsity of the
representation, the underlying distortion and the redundancy of any given
frame.
| [
{
"created": "Fri, 9 Mar 2007 19:28:10 GMT",
"version": "v1"
}
] | 2007-07-13 | [
[
"Akçakaya",
"Mehmet",
""
],
[
"Tarokh",
"Vahid",
""
]
] | We consider approximations of signals by the elements of a frame in a complex vector space of dimension $N$ and formulate both the noiseless and the noisy sparse representation problems. The noiseless representation problem is to find sparse representations of a signal $\mathbf{r}$ given that such representations exist. In this case, we explicitly construct a frame, referred to as the Vandermonde frame, for which the noiseless sparse representation problem can be solved uniquely using $O(N^2)$ operations, as long as the number of non-zero coefficients in the sparse representation of $\mathbf{r}$ is $\epsilon N$ for some $0 \le \epsilon \le 0.5$, thus improving on a result of Candes and Tao \cite{Candes-Tao}. We also show that $\epsilon \le 0.5$ cannot be relaxed without violating uniqueness. The noisy sparse representation problem is to find sparse representations of a signal $\mathbf{r}$ satisfying a distortion criterion. In this case, we establish a lower bound on the trade-off between the sparsity of the representation, the underlying distortion and the redundancy of any given frame. |
2301.13642 | Navdeep Kumar | Navdeep Kumar, Kfir Levy, Kaixin Wang, Shie Mannor | An Efficient Solution to s-Rectangular Robust Markov Decision Processes | arXiv admin note: substantial text overlap with arXiv:2205.14327 | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an efficient robust value iteration for \texttt{s}-rectangular
robust Markov Decision Processes (MDPs) with a time complexity comparable to
standard (non-robust) MDPs which is significantly faster than any existing
method. We do so by deriving the optimal robust Bellman operator in concrete
forms using our $L_p$ water filling lemma. We unveil the exact form of the
optimal policies, which turn out to be novel threshold policies with the
probability of playing an action proportional to its advantage.
| [
{
"created": "Tue, 31 Jan 2023 13:54:23 GMT",
"version": "v1"
}
] | 2023-02-01 | [
[
"Kumar",
"Navdeep",
""
],
[
"Levy",
"Kfir",
""
],
[
"Wang",
"Kaixin",
""
],
[
"Mannor",
"Shie",
""
]
] | We present an efficient robust value iteration for \texttt{s}-rectangular robust Markov Decision Processes (MDPs) with a time complexity comparable to standard (non-robust) MDPs which is significantly faster than any existing method. We do so by deriving the optimal robust Bellman operator in concrete forms using our $L_p$ water filling lemma. We unveil the exact form of the optimal policies, which turn out to be novel threshold policies with the probability of playing an action proportional to its advantage. |
1312.5434 | Xiaochuan Zhao | Xiaochuan Zhao and Ali H. Sayed | Asynchronous Adaptation and Learning over Networks --- Part I: Modeling
and Stability Analysis | 40 pages, 6 figures | null | null | null | cs.SY cs.IT cs.LG math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work and the supporting Parts II [2] and III [3], we provide a rather
detailed analysis of the stability and performance of asynchronous strategies
for solving distributed optimization and adaptation problems over networks. We
examine asynchronous networks that are subject to fairly general sources of
uncertainties, such as changing topologies, random link failures, random data
arrival times, and agents turning on and off randomly. Under this model, agents
in the network may stop updating their solutions or may stop sending or
receiving information in a random manner and without coordination with other
agents. We establish in Part I conditions on the first and second-order moments
of the relevant parameter distributions to ensure mean-square stable behavior.
We derive in Part II expressions that reveal how the various parameters of the
asynchronous behavior influence network performance. We compare in Part III the
performance of asynchronous networks to the performance of both centralized
solutions and synchronous networks. One notable conclusion is that the
mean-square-error performance of asynchronous networks shows a degradation only
of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the
convergence rate remains largely unaltered. The results provide a solid
justification for the remarkable resilience of cooperative networks in the face
of random failures at multiple levels: agents, links, data arrivals, and
topology.
| [
{
"created": "Thu, 19 Dec 2013 08:29:57 GMT",
"version": "v1"
},
{
"created": "Sun, 27 Jul 2014 01:00:16 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Dec 2014 08:16:46 GMT",
"version": "v3"
}
] | 2014-12-17 | [
[
"Zhao",
"Xiaochuan",
""
],
[
"Sayed",
"Ali H.",
""
]
] | In this work and the supporting Parts II [2] and III [3], we provide a rather detailed analysis of the stability and performance of asynchronous strategies for solving distributed optimization and adaptation problems over networks. We examine asynchronous networks that are subject to fairly general sources of uncertainties, such as changing topologies, random link failures, random data arrival times, and agents turning on and off randomly. Under this model, agents in the network may stop updating their solutions or may stop sending or receiving information in a random manner and without coordination with other agents. We establish in Part I conditions on the first and second-order moments of the relevant parameter distributions to ensure mean-square stable behavior. We derive in Part II expressions that reveal how the various parameters of the asynchronous behavior influence network performance. We compare in Part III the performance of asynchronous networks to the performance of both centralized solutions and synchronous networks. One notable conclusion is that the mean-square-error performance of asynchronous networks shows a degradation only of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the convergence rate remains largely unaltered. The results provide a solid justification for the remarkable resilience of cooperative networks in the face of random failures at multiple levels: agents, links, data arrivals, and topology. |
2007.02758 | Omar Sharif | Eftekhar Hossain, Omar Sharif and Mohammed Moshiul Hoque | Sentiment Polarity Detection on Bengali Book Reviews Using Multinomial
Naive Bayes | 12 pages, ICACIE 2020, Will be published by Advances in Intelligent
Systems and Computing (AISC) series of Springer | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recently, sentiment polarity detection has increased attention to NLP
researchers due to the massive availability of customer's opinions or reviews
in the online platform. Due to the continued expansion of e-commerce sites, the
rate of purchase of various products, including books, are growing enormously
among the people. Reader's opinions/reviews affect the buying decision of a
customer in most cases. This work introduces a machine learning-based technique
to determine sentiment polarities (either positive or negative category) from
Bengali book reviews. To assess the effectiveness of the proposed technique, a
corpus with 2000 reviews on Bengali books is developed. A comparative analysis
with various approaches (such as logistic regression, naive Bayes, SVM, and
SGD) also performed by taking into consideration of the unigram, bigram, and
trigram features, respectively. Experimental result reveals that the
multinomial Naive Bayes with unigram feature outperforms the other techniques
with 84% accuracy on the test set.
| [
{
"created": "Mon, 6 Jul 2020 13:58:51 GMT",
"version": "v1"
}
] | 2020-07-07 | [
[
"Hossain",
"Eftekhar",
""
],
[
"Sharif",
"Omar",
""
],
[
"Hoque",
"Mohammed Moshiul",
""
]
] | Recently, sentiment polarity detection has increased attention to NLP researchers due to the massive availability of customer's opinions or reviews in the online platform. Due to the continued expansion of e-commerce sites, the rate of purchase of various products, including books, are growing enormously among the people. Reader's opinions/reviews affect the buying decision of a customer in most cases. This work introduces a machine learning-based technique to determine sentiment polarities (either positive or negative category) from Bengali book reviews. To assess the effectiveness of the proposed technique, a corpus with 2000 reviews on Bengali books is developed. A comparative analysis with various approaches (such as logistic regression, naive Bayes, SVM, and SGD) also performed by taking into consideration of the unigram, bigram, and trigram features, respectively. Experimental result reveals that the multinomial Naive Bayes with unigram feature outperforms the other techniques with 84% accuracy on the test set. |
2008.05782 | Volodymyr Leno | V. Leno, A. Augusto, M. Dumas, M. La Rosa, F. Maggi, A. Polyvyanyy | Identifying candidate routines for Robotic Process Automation from
unsegmented UI logs | International Conference on Process Mining 2020 | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic Process Automation (RPA) is a technology to develop software bots
that automate repetitive sequences of interactions between users and software
applications (a.k.a. routines). To take full advantage of this technology,
organizations need to identify and to scope their routines. This is a
challenging endeavor in large organizations, as routines are usually not
concentrated in a handful of processes, but rather scattered across the process
landscape. Accordingly, the identification of routines from User Interaction
(UI) logs has received significant attention. Existing approaches to this
problem assume that the UI log is segmented, meaning that it consists of traces
of a task that is presupposed to contain one or more routines. However, a UI
log usually takes the form of a single unsegmented sequence of events. This
paper presents an approach to discover candidate routines from unsegmented UI
logs in the presence of noise, i.e. events within or between routine instances
that do not belong to any routine. The approach is implemented as an
open-source tool and evaluated using synthetic and real-life UI logs.
| [
{
"created": "Thu, 13 Aug 2020 09:58:40 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Aug 2020 01:16:10 GMT",
"version": "v2"
}
] | 2020-08-28 | [
[
"Leno",
"V.",
""
],
[
"Augusto",
"A.",
""
],
[
"Dumas",
"M.",
""
],
[
"La Rosa",
"M.",
""
],
[
"Maggi",
"F.",
""
],
[
"Polyvyanyy",
"A.",
""
]
] | Robotic Process Automation (RPA) is a technology to develop software bots that automate repetitive sequences of interactions between users and software applications (a.k.a. routines). To take full advantage of this technology, organizations need to identify and to scope their routines. This is a challenging endeavor in large organizations, as routines are usually not concentrated in a handful of processes, but rather scattered across the process landscape. Accordingly, the identification of routines from User Interaction (UI) logs has received significant attention. Existing approaches to this problem assume that the UI log is segmented, meaning that it consists of traces of a task that is presupposed to contain one or more routines. However, a UI log usually takes the form of a single unsegmented sequence of events. This paper presents an approach to discover candidate routines from unsegmented UI logs in the presence of noise, i.e. events within or between routine instances that do not belong to any routine. The approach is implemented as an open-source tool and evaluated using synthetic and real-life UI logs. |
1708.07808 | Cagdas Ulas | Cagdas Ulas, Christine Preibisch, Jonathan Sperl, Thomas Pyka,
Jayashree Kalpathy-Cramer, Bjoern Menze | Accelerated Reconstruction of Perfusion-Weighted MRI Enforcing Jointly
Local and Nonlocal Spatio-temporal Constraints | Submission to IEEE Transactions on Medical Imaging (August 2017) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perfusion-weighted magnetic resonance imaging (MRI) is an imaging technique
that allows one to measure tissue perfusion in an organ of interest through the
injection of an intravascular paramagnetic contrast agent (CA). Due to a
preference for high temporal and spatial resolution in many applications, this
modality could significantly benefit from accelerated data acquisitions. In
this paper, we specifically address the problem of reconstructing perfusion MR
image series from a subset of k-space data. Our proposed approach is motivated
by the observation that temporal variations (dynamics) in perfusion imaging
often exhibit correlation across different spatial scales. Hence, we propose a
model that jointly penalizes the voxel-wise deviations in temporal gradient
images obtained based on a baseline, and the patch-wise dissimilarities between
the spatio-temporal neighborhoods of the entire image sequence. We validate our
method on dynamic susceptibility contrast (DSC)-MRI and dynamic
contrast-enhanced (DCE)-MRI brain perfusion datasets acquired from 10 tumor
patients in total. We provide extensive analysis of reconstruction performance
and perfusion parameter estimation in comparison to state-of-the-art
reconstruction methods. Experimental results on clinical datasets demonstrate
that our reconstruction model can potentially achieve up to 8-fold acceleration
by enabling accurate estimation of perfusion parameters while preserving
spatial image details and reconstructing the complete perfusion time-intensity
curves (TICs).
| [
{
"created": "Fri, 25 Aug 2017 16:52:04 GMT",
"version": "v1"
}
] | 2017-08-28 | [
[
"Ulas",
"Cagdas",
""
],
[
"Preibisch",
"Christine",
""
],
[
"Sperl",
"Jonathan",
""
],
[
"Pyka",
"Thomas",
""
],
[
"Kalpathy-Cramer",
"Jayashree",
""
],
[
"Menze",
"Bjoern",
""
]
] | Perfusion-weighted magnetic resonance imaging (MRI) is an imaging technique that allows one to measure tissue perfusion in an organ of interest through the injection of an intravascular paramagnetic contrast agent (CA). Due to a preference for high temporal and spatial resolution in many applications, this modality could significantly benefit from accelerated data acquisitions. In this paper, we specifically address the problem of reconstructing perfusion MR image series from a subset of k-space data. Our proposed approach is motivated by the observation that temporal variations (dynamics) in perfusion imaging often exhibit correlation across different spatial scales. Hence, we propose a model that jointly penalizes the voxel-wise deviations in temporal gradient images obtained based on a baseline, and the patch-wise dissimilarities between the spatio-temporal neighborhoods of entire image sequence. We validate our method on dynamic susceptibility contrast (DSC)-MRI and dynamic contrast-enhanced (DCE)-MRI brain perfusion datasets acquired from 10 tumor patients in total. We provide extensive analysis of reconstruction performance and perfusion parameter estimation in comparison to state-of-the-art reconstruction methods. Experimental results on clinical datasets demonstrate that our reconstruction model can potentially achieve up to 8-fold acceleration by enabling accurate estimation of perfusion parameters while preserving spatial image details and reconstructing the complete perfusion time-intensity curves (TICs). |
1906.02729 | Nilesh Kulkarni | Nilesh Kulkarni, Ishan Misra, Shubham Tulsiani, Abhinav Gupta | 3D-RelNet: Joint Object and Relational Network for 3D Prediction | Project page: https://nileshkulkarni.github.io/relative3d/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach to predict the 3D shape and pose for the objects
present in a scene. Existing learning based methods that pursue this goal make
independent predictions per object, and do not leverage the relationships
amongst them. We argue that reasoning about these relationships is crucial, and
present an approach to incorporate these in a 3D prediction framework. In
addition to independent per-object predictions, we predict pairwise relations
in the form of relative 3D pose, and demonstrate that these can be easily
incorporated to improve object level estimates. We report performance across
different datasets (SUNCG, NYUv2), and show that our approach significantly
improves over independent prediction approaches while also outperforming
alternate implicit reasoning methods.
| [
{
"created": "Thu, 6 Jun 2019 17:50:48 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Aug 2019 17:49:01 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Mar 2020 04:10:25 GMT",
"version": "v3"
}
] | 2020-03-06 | [
[
"Kulkarni",
"Nilesh",
""
],
[
"Misra",
"Ishan",
""
],
[
"Tulsiani",
"Shubham",
""
],
[
"Gupta",
"Abhinav",
""
]
] | We propose an approach to predict the 3D shape and pose for the objects present in a scene. Existing learning based methods that pursue this goal make independent predictions per object, and do not leverage the relationships amongst them. We argue that reasoning about these relationships is crucial, and present an approach to incorporate these in a 3D prediction framework. In addition to independent per-object predictions, we predict pairwise relations in the form of relative 3D pose, and demonstrate that these can be easily incorporated to improve object level estimates. We report performance across different datasets (SUNCG, NYUv2), and show that our approach significantly improves over independent prediction approaches while also outperforming alternate implicit reasoning methods. |
2312.10600 | Yixin Zhang | Yixin Zhang, Shen Zhao, Hanxue Gu, Maciej A. Mazurowski | How to Efficiently Annotate Images for Best-Performing Deep Learning
Based Segmentation Models: An Empirical Study with Weak and Noisy Annotations
and Segment Anything Model | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep neural networks (DNNs) have been deployed for many image segmentation
tasks and achieved outstanding performance. However, preparing a dataset for
training segmentation DNNs is laborious and costly since typically pixel-level
annotations are provided for each object of interest. To alleviate this issue,
one can provide only weak labels such as bounding boxes or scribbles, or less
accurate (noisy) annotations of the objects. These are significantly faster to
generate and thus result in more annotated images given the same time budget.
However, the reduction in quality might negatively affect the segmentation
performance of the resulting model. In this study, we perform a thorough
cost-effectiveness evaluation of several weak and noisy labels. We considered
11 variants of annotation strategies and 4 datasets. We conclude that the
common practice of accurately outlining the objects of interest is virtually
never the optimal approach when the annotation time is limited, even if notable
annotation time is available (10s of hours). Annotation approaches that stood
out in such scenarios were (1) contour-based annotation with rough continuous
traces, (2) polygon-based annotation with few vertices, and (3) box annotations
combined with the Segment Anything Model (SAM). In situations where unlimited
annotation time was available, precise annotations still lead to the highest
segmentation model performance.
| [
{
"created": "Sun, 17 Dec 2023 04:26:42 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Dec 2023 22:53:23 GMT",
"version": "v2"
}
] | 2023-12-22 | [
[
"Zhang",
"Yixin",
""
],
[
"Zhao",
"Shen",
""
],
[
"Gu",
"Hanxue",
""
],
[
"Mazurowski",
"Maciej A.",
""
]
] | Deep neural networks (DNNs) have been deployed for many image segmentation tasks and achieved outstanding performance. However, preparing a dataset for training segmentation DNNs is laborious and costly since typically pixel-level annotations are provided for each object of interest. To alleviate this issue, one can provide only weak labels such as bounding boxes or scribbles, or less accurate (noisy) annotations of the objects. These are significantly faster to generate and thus result in more annotated images given the same time budget. However, the reduction in quality might negatively affect the segmentation performance of the resulting model. In this study, we perform a thorough cost-effectiveness evaluation of several weak and noisy labels. We considered 11 variants of annotation strategies and 4 datasets. We conclude that the common practice of accurately outlining the objects of interest is virtually never the optimal approach when the annotation time is limited, even if notable annotation time is available (10s of hours). Annotation approaches that stood out in such scenarios were (1) contour-based annotation with rough continuous traces, (2) polygon-based annotation with few vertices, and (3) box annotations combined with the Segment Anything Model (SAM). In situations where unlimited annotation time was available, precise annotations still lead to the highest segmentation model performance. |
2209.08235 | Xindi Wang | Jinjing Wang, Xindi Wang | Technical Report for Trend Prediction Based Intelligent UAV Trajectory
Planning for Large-scale Dynamic Scenarios | null | null | null | null | cs.NI | http://creativecommons.org/publicdomain/zero/1.0/ | The unmanned aerial vehicle (UAV)-enabled communication technology is
regarded as an efficient and effective solution for some special application
scenarios where existing terrestrial infrastructures are overloaded to provide
reliable services. To maximize the utility of the UAV-enabled system while
meeting the QoS and energy constraints, the UAV needs to plan its trajectory
considering the dynamic characteristics of scenarios, which is formulated as
the Markov Decision Process (MDP). To solve the above problem, a deep
reinforcement learning (DRL)-based scheme is proposed here, which predicts the
trend of the dynamic scenarios to provide a long-term view for the UAV
trajectory planning. Simulation results validate that our proposed scheme
converges more quickly and achieves better performance in dynamic
scenarios.
| [
{
"created": "Sat, 17 Sep 2022 03:46:23 GMT",
"version": "v1"
}
] | 2022-09-20 | [
[
"Wang",
"Jinjing",
""
],
[
"Wang",
"Xindi",
""
]
] | The unmanned aerial vehicle (UAV)-enabled communication technology is regarded as an efficient and effective solution for some special application scenarios where existing terrestrial infrastructures are overloaded to provide reliable services. To maximize the utility of the UAV-enabled system while meeting the QoS and energy constraints, the UAV needs to plan its trajectory considering the dynamic characteristics of scenarios, which is formulated as the Markov Decision Process (MDP). To solve the above problem, a deep reinforcement learning (DRL)-based scheme is proposed here, which predicts the trend of the dynamic scenarios to provide a long-term view for the UAV trajectory planning. Simulation results validate that our proposed scheme converges more quickly and achieves the better performance in dynamic scenarios. |
1709.02557 | EPTCS | Lucas E. R. Fernandes (1), Vinicius Custodio (1), Gleifer V. Alves
(1), Michael Fisher (2) ((1) UTFPR, Ponta Grossa, Parana, Brazil, (2)
University of Liverpool, Liverpool, United Kingdom) | A Rational Agent Controlling an Autonomous Vehicle: Implementation and
Formal Verification | In Proceedings FVAV 2017, arXiv:1709.02126 | EPTCS 257, 2017, pp. 35-42 | 10.4204/EPTCS.257.5 | null | cs.LO cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development and deployment of Autonomous Vehicles (AVs) on our roads is
not only realistic in the near future but can also bring significant benefits.
In particular, it can potentially solve several problems relating to vehicles
and traffic, for instance: (i) possible reduction of traffic congestion, with
the consequence of improved fuel economy and reduced driver inactivity; (ii)
possible reduction in the number of accidents, assuming that an AV can minimise
the human errors that often cause traffic accidents; and (iii) increased ease
of parking, especially when one considers the potential for shared AVs. In
order to deploy an AV there are significant steps that must be completed in
terms of hardware and software. As expected, software components play a key
role in the complex AV system and so, at least for safety, we should assess the
correctness of these components.
In this paper, we are concerned with the high-level software component(s)
responsible for the decisions in an AV. We intend to model an AV capable of
navigation; obstacle avoidance; obstacle selection (when a crash is
unavoidable) and vehicle recovery, etc, using a rational agent. To achieve
this, we have established the following stages. First, the agent plans and
actions have been implemented within the Gwendolen agent programming language.
Second, we have built a simulated automotive environment in the Java language.
Third, we have formally specified some of the required agent properties through
LTL formulae, which are then formally verified with the AJPF verification tool.
Finally, within the MCAPL framework (which comprises all the tools used in
previous stages) we have obtained formal verification of our AV agent in terms
of its specific behaviours. For example, the agent plans responsible for
selecting an obstacle with low potential damage, instead of a higher damage
obstacle (when possible) can be formally verified within MCAPL. We must
emphasise that the major goal (of our present approach) lies in the formal
verification of agent plans, rather than evaluating real-world applications.
For this reason we utilised a simple matrix representation concerning the
environment used by our agent.
| [
{
"created": "Fri, 8 Sep 2017 06:35:30 GMT",
"version": "v1"
}
] | 2017-09-11 | [
[
"Fernandes",
"Lucas E. R.",
""
],
[
"Custodio",
"Vinicius",
""
],
[
"Alves",
"Gleifer V.",
""
],
[
"Fisher",
"Michael",
""
]
] | The development and deployment of Autonomous Vehicles (AVs) on our roads is not only realistic in the near future but can also bring significant benefits. In particular, it can potentially solve several problems relating to vehicles and traffic, for instance: (i) possible reduction of traffic congestion, with the consequence of improved fuel economy and reduced driver inactivity; (ii) possible reduction in the number of accidents, assuming that an AV can minimise the human errors that often cause traffic accidents; and (iii) increased ease of parking, especially when one considers the potential for shared AVs. In order to deploy an AV there are significant steps that must be completed in terms of hardware and software. As expected, software components play a key role in the complex AV system and so, at least for safety, we should assess the correctness of these components. In this paper, we are concerned with the high-level software component(s) responsible for the decisions in an AV. We intend to model an AV capable of navigation; obstacle avoidance; obstacle selection (when a crash is unavoidable) and vehicle recovery, etc, using a rational agent. To achieve this, we have established the following stages. First, the agent plans and actions have been implemented within the Gwendolen agent programming language. Second, we have built a simulated automotive environment in the Java language. Third, we have formally specified some of the required agent properties through LTL formulae, which are then formally verified with the AJPF verification tool. Finally, within the MCAPL framework (which comprises all the tools used in previous stages) we have obtained formal verification of our AV agent in terms of its specific behaviours. For example, the agent plans responsible for selecting an obstacle with low potential damage, instead of a higher damage obstacle (when possible) can be formally verified within MCAPL. 
We must emphasise that the major goal (of our present approach) lies in the formal verification of agent plans, rather than evaluating real-world applications. For this reason we utilised a simple matrix representation concerning the environment used by our agent. |
2009.03268 | Teng Liu | Hong Shu, Teng Liu, Xingyu Mu, Dongpu Cao | Driving Tasks Transfer in Deep Reinforcement Learning for
Decision-making of Autonomous Vehicles | 10 pages 12 figures | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge transfer is a promising concept to achieve real-time
decision-making for autonomous vehicles. This paper constructs a transfer deep
reinforcement learning framework to transform the driving tasks in
intersection environments. The driving missions at the unsignalized
intersection are cast into a left turn, right turn, and running straight for
automated vehicles. The goal of the autonomous ego vehicle (AEV) is to drive
through the intersection situation efficiently and safely. This objective
promotes the studied vehicle to increase its speed and avoid crashing other
vehicles. The decision-making policy learned from one driving task is
transferred and evaluated in another driving mission. Simulation results reveal
that the decision-making strategies related to similar tasks are transferable.
It indicates that the presented control framework could reduce the time
consumption and realize online implementation.
| [
{
"created": "Mon, 7 Sep 2020 17:34:01 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Oct 2020 14:16:31 GMT",
"version": "v2"
}
] | 2020-10-13 | [
[
"Shu",
"Hong",
""
],
[
"Liu",
"Teng",
""
],
[
"Mu",
"Xingyu",
""
],
[
"Cao",
"Dongpu",
""
]
] | Knowledge transfer is a promising concept to achieve real-time decision-making for autonomous vehicles. This paper constructs a transfer deep reinforcement learning framework to transform the driving tasks in intersection environments. The driving missions at the unsignalized intersection are cast into a left turn, right turn, and running straight for automated vehicles. The goal of the autonomous ego vehicle (AEV) is to drive through the intersection situation efficiently and safely. This objective promotes the studied vehicle to increase its speed and avoid crashing other vehicles. The decision-making policy learned from one driving task is transferred and evaluated in another driving mission. Simulation results reveal that the decision-making strategies related to similar tasks are transferable. It indicates that the presented control framework could reduce the time consumption and realize online implementation. |
1102.4241 | Petros Zimourtopoulos E | Nikolitsa Yannopoulou, Petros Zimourtopoulos | Support of Interactive 3D/4D Presentations by the Very First Ever Made
Virtual Laboratories of Antennas | Inadequately justified rejection (with reviewers' grades: [4, 4, 3,
4] + [4, 4, 4, 4] + [2, 4, 4, 4] out of 3 x [5, 5, 5, 5]) from publication to
the Proceedings of 21st International Conference Radioelektronika 2011, April
19-20, Brno, Czech Republic - No changes in the paper since [v1] Mon, 21 Feb
2011 14:52:34 GMT (412kb): [v3] = [v2] = [v1] | FunkTechnikPlus # Journal, Issue 3 - Year 1, 31 January 2014, v1,
7-18, otoiser ftp#j | null | null | cs.OH physics.comp-ph | http://creativecommons.org/licenses/by/3.0/ | Based on the experience we have gained so far, as independent reviewers of
Radioengineering journal, we thought it might prove useful to publicly
share with the interested author, especially the young one, some practical
implementations of our ideas for the interactive representation of data using
3D/4D movement and animation, in an attempt to motivate and support her/him in
the development of similar dynamic presentations, when s/he is looking for a
way to locate the stronger aspects of her/his research results in order to
prepare a clear, most appropriate for publication, static presentation figure.
For this purpose, we selected to demonstrate a number of presentations, from
the simplest to the most complicated, concerning well-known antenna issues with
rather hard to imagine details, as it happens perhaps in cases involving
Spherical Coordinates and Polarization, which we created to enrich the very
first ever made Virtual Laboratories of Antennas, that we distribute over the
Open Internet through our website Virtual Antennas. These presentations were
developed in a general way, without using antenna simulators, to handle output
text and image data from third-party CAS Computer Algebra Systems, such as the
Mathematica commercial software we use or the Maxima FLOSS we track its
evolution.
| [
{
"created": "Mon, 21 Feb 2011 14:52:34 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Mar 2011 17:45:27 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Oct 2012 09:18:07 GMT",
"version": "v3"
}
] | 2015-06-03 | [
[
"Yannopoulou",
"Nikolitsa",
""
],
[
"Zimourtopoulos",
"Petros",
""
]
] | Based on the experience we have gained so far, as independent reviewers of Radioengineering journal, we thought that may be proved useful to publicly share with the interested author, especially the young one, some practical implementations of our ideas for the interactive representation of data using 3D/4D movement and animation, in an attempt to motivate and support her/him in the development of similar dynamic presentations, when s/he is looking for a way to locate the stronger aspects of her/his research results in order to prepare a clear, most appropriate for publication, static presentation figure. For this purpose, we selected to demonstrate a number of presentations, from the simplest to the most complicated, concerning well-known antenna issues with rather hard to imagine details, as it happens perhaps in cases involving Spherical Coordinates and Polarization, which we created to enrich the very first ever made Virtual Laboratories of Antennas, that we distribute over the Open Internet through our website Virtual Antennas. These presentations were developed in a general way, without using antenna simulators, to handle output text and image data from third-party CAS Computer Algebra Systems, such as the Mathematica commercial software we use or the Maxima FLOSS we track its evolution. |
2104.06820 | Utkarsh Ojha | Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli
Shechtman, Richard Zhang | Few-shot Image Generation via Cross-domain Correspondence | CVPR 2021 | null | null | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training generative models, such as GANs, on a target domain containing
limited examples (e.g., 10) can easily result in overfitting. In this work, we
seek to utilize a large source domain for pretraining and transfer the
diversity information from source to target. We propose to preserve the
relative similarities and differences between instances in the source via a
novel cross-domain distance consistency loss. To further reduce overfitting, we
present an anchor-based strategy to encourage different levels of realism over
different regions in the latent space. With extensive results in both
photorealistic and non-photorealistic domains, we demonstrate qualitatively and
quantitatively that our few-shot model automatically discovers correspondences
between source and target domains and generates more diverse and realistic
images than previous methods.
| [
{
"created": "Tue, 13 Apr 2021 17:59:35 GMT",
"version": "v1"
}
] | 2021-04-15 | [
[
"Ojha",
"Utkarsh",
""
],
[
"Li",
"Yijun",
""
],
[
"Lu",
"Jingwan",
""
],
[
"Efros",
"Alexei A.",
""
],
[
"Lee",
"Yong Jae",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Zhang",
"Richard",
""
]
] | Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain for pretraining and transfer the diversity information from source to target. We propose to preserve the relative similarities and differences between instances in the source via a novel cross-domain distance consistency loss. To further reduce overfitting, we present an anchor-based strategy to encourage different levels of realism over different regions in the latent space. With extensive results in both photorealistic and non-photorealistic domains, we demonstrate qualitatively and quantitatively that our few-shot model automatically discovers correspondences between source and target domains and generates more diverse and realistic images than previous methods. |
2305.12173 | Simon Jeanteur | Simon Jeanteur, Laura Kov\'acs, Matteo Maffei and Michael Rawson | CryptoVampire: Automated Reasoning for the Complete Symbolic Attacker
Cryptographic Model | null | null | null | null | cs.CR cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryptographic protocols are hard to design and prove correct, as witnessed by
the ever-growing list of attacks even on protocol standards. Symbolic models of
cryptography enable automated formal security proofs of such protocols against
an idealized model, which abstracts away from the algebraic properties of
cryptographic schemes and thus misses attacks. Computational models yield
rigorous guarantees but support at present only interactive proofs and/or
restricted classes of protocols. A promising approach is given by the
computationally complete symbolic attacker (CCSA), formalized in the BC Logic,
which aims at bridging and getting the best of the two worlds, obtaining
cryptographic guarantees by symbolic analysis. The BC Logic is supported by a
recently developed interactive theorem prover, Squirrel, which enables
machine-checked interactive security proofs, as opposed to automated ones, thus
requiring expert knowledge.
We introduce the CryptoVampire cryptographic protocol verifier, which for the
first time fully automates proofs of trace properties in the BC Logic. The key
technical contribution is a first-order (FO) formalization of protocol
properties with tailored handling of subterm relations. We overcome the burden
of interactive proving in higher-order (HO) logic and automatically establish
soundness of cryptographic protocols using only FO reasoning. On the
theoretical side, we restrict full FO logic with cryptographic axioms to ensure
that, by losing the expressivity of the HO BC Logic, we do not lose soundness.
On the practical side, CryptoVampire integrates dedicated proof techniques
using FO saturation algorithms and heuristics, which enable leveraging the
state-of-the-art Vampire FO theorem prover as the underlying proving engine.
Our experimental results show CryptoVampire's effectiveness as a standalone
verifier and in terms of automation support for Squirrel.
| [
{
"created": "Sat, 20 May 2023 11:26:51 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 14:45:53 GMT",
"version": "v2"
}
] | 2024-04-08 | [
[
"Jeanteur",
"Simon",
""
],
[
"Kovács",
"Laura",
""
],
[
"Maffei",
"Matteo",
""
],
[
"Rawson",
"Michael",
""
]
] | Cryptographic protocols are hard to design and prove correct, as witnessed by the ever-growing list of attacks even on protocol standards. Symbolic models of cryptography enable automated formal security proofs of such protocols against an idealized model, which abstracts away from the algebraic properties of cryptographic schemes and thus misses attacks. Computational models yield rigorous guarantees but support at present only interactive proofs and/or restricted classes of protocols. A promising approach is given by the computationally complete symbolic attacker (CCSA), formalized in the BC Logic, which aims at bridging and getting the best of the two worlds, obtaining cryptographic guarantees by symbolic analysis. The BC Logic is supported by a recently developed interactive theorem prover, Squirrel, which enables machine-checked interactive security proofs, as opposed to automated ones, thus requiring expert knowledge. We introduce the CryptoVampire cryptographic protocol verifier, which for the first time fully automates proofs of trace properties in the BC Logic. The key technical contribution is a first-order (FO) formalization of protocol properties with tailored handling of subterm relations. We overcome the burden of interactive proving in higher-order (HO) logic and automatically establish soundness of cryptographic protocols using only FO reasoning. On the theoretical side, we restrict full FO logic with cryptographic axioms to ensure that, by losing the expressivity of the HO BC Logic, we do not lose soundness. On the practical side, CryptoVampire integrates dedicated proof techniques using FO saturation algorithms and heuristics, which enable leveraging the state-of-the-art Vampire FO theorem prover as the underlying proving engine. Our experimental results show CryptoVampire's effectiveness as a standalone verifier and in terms of automation support for Squirrel. |
2212.10166 | Jade Ma\"i Cock | Jade Ma\"i Cock, Muhammad Bilal, Richard Davis, Mirko Marras, Tanja
K\"aser | Protected Attributes Tell Us Who, Behavior Tells Us How: A Comparison of
Demographic and Behavioral Oversampling for Fair Student Success Modeling | Accepted as a full paper at LAK 2023: The 13th International Learning
Analytics and Knowledge Conference, 13-17 of March 2023, Arlington | null | 10.1145/3576050.3576149 | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms deployed in education can shape the learning experience and
success of a student. It is therefore important to understand whether and how
such algorithms might create inequalities or amplify existing biases. In this
paper, we analyze the fairness of models which use behavioral data to identify
at-risk students and suggest two novel pre-processing approaches for bias
mitigation. Based on the concept of intersectionality, the first approach
involves intelligent oversampling on combinations of demographic attributes.
The second approach does not require any knowledge of demographic attributes
and is based on the assumption that such attributes are a (noisy) proxy for
student behavior. We hence propose to directly oversample different types of
behaviors identified in a cluster analysis. We evaluate our approaches on data
from (i) an open-ended learning environment and (ii) a flipped classroom
course. Our results show that both approaches can mitigate model bias. Directly
oversampling on behavior is a valuable alternative, when demographic metadata
is not available. Source code and extended results are provided in
https://github.com/epfl-ml4ed/behavioral-oversampling.
| [
{
"created": "Tue, 20 Dec 2022 11:09:11 GMT",
"version": "v1"
}
] | 2022-12-21 | [
[
"Cock",
"Jade Maï",
""
],
[
"Bilal",
"Muhammad",
""
],
[
"Davis",
"Richard",
""
],
[
"Marras",
"Mirko",
""
],
[
"Käser",
"Tanja",
""
]
] | Algorithms deployed in education can shape the learning experience and success of a student. It is therefore important to understand whether and how such algorithms might create inequalities or amplify existing biases. In this paper, we analyze the fairness of models which use behavioral data to identify at-risk students and suggest two novel pre-processing approaches for bias mitigation. Based on the concept of intersectionality, the first approach involves intelligent oversampling on combinations of demographic attributes. The second approach does not require any knowledge of demographic attributes and is based on the assumption that such attributes are a (noisy) proxy for student behavior. We hence propose to directly oversample different types of behaviors identified in a cluster analysis. We evaluate our approaches on data from (i) an open-ended learning environment and (ii) a flipped classroom course. Our results show that both approaches can mitigate model bias. Directly oversampling on behavior is a valuable alternative, when demographic metadata is not available. Source code and extended results are provided in https://github.com/epfl-ml4ed/behavioral-oversampling. |
1812.04293 | Hien Truong | Kumar Sharad, Giorgia Azzurra Marson, Hien Thi Thu Truong, Ghassan
Karame | On the Security of Randomized Defenses Against Adversarial Samples | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Deep Learning has been shown to be particularly vulnerable to adversarial
samples. To combat adversarial strategies, numerous defensive techniques have
been proposed. Among these, a promising approach is to use randomness in order
to make the classification process unpredictable and presumably harder for the
adversary to control. In this paper, we study the effectiveness of randomized
defenses against adversarial samples. To this end, we categorize existing
state-of-the-art adversarial strategies into three attacker models of
increasing strength, namely blackbox, graybox, and whitebox (a.k.a.~adaptive)
attackers. We also devise a lightweight randomization strategy for image
classification based on feature squeezing, that consists of pre-processing the
classifier input by embedding randomness within each feature, before applying
feature squeezing. We evaluate the proposed defense and compare it to other
randomized techniques in the literature via thorough experiments. Our results
indeed show that careful integration of randomness can be effective against
both graybox and blackbox attacks without significantly degrading the accuracy
of the underlying classifier. However, our experimental results offer strong
evidence that in the present form such randomization techniques cannot deter a
whitebox adversary that has access to all classifier parameters and has full
knowledge of the defense. Our work thoroughly and empirically analyzes the
impact of randomization techniques against all classes of adversarial
strategies.
| [
{
"created": "Tue, 11 Dec 2018 09:34:29 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2019 13:14:29 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Mar 2020 15:49:38 GMT",
"version": "v3"
},
{
"created": "Mon, 16 Mar 2020 22:03:45 GMT",
"version": "v4"
}
] | 2020-03-18 | [
[
"Sharad",
"Kumar",
""
],
[
"Marson",
"Giorgia Azzurra",
""
],
[
"Truong",
"Hien Thi Thu",
""
],
[
"Karame",
"Ghassan",
""
]
] | Deep Learning has been shown to be particularly vulnerable to adversarial samples. To combat adversarial strategies, numerous defensive techniques have been proposed. Among these, a promising approach is to use randomness in order to make the classification process unpredictable and presumably harder for the adversary to control. In this paper, we study the effectiveness of randomized defenses against adversarial samples. To this end, we categorize existing state-of-the-art adversarial strategies into three attacker models of increasing strength, namely blackbox, graybox, and whitebox (a.k.a.~adaptive) attackers. We also devise a lightweight randomization strategy for image classification based on feature squeezing, that consists of pre-processing the classifier input by embedding randomness within each feature, before applying feature squeezing. We evaluate the proposed defense and compare it to other randomized techniques in the literature via thorough experiments. Our results indeed show that careful integration of randomness can be effective against both graybox and blackbox attacks without significantly degrading the accuracy of the underlying classifier. However, our experimental results offer strong evidence that in the present form such randomization techniques cannot deter a whitebox adversary that has access to all classifier parameters and has full knowledge of the defense. Our work thoroughly and empirically analyzes the impact of randomization techniques against all classes of adversarial strategies. |
1606.06047 | Harmeet Singh | Harmeet Singh | Contravening Esotery: Cryptanalysis of Knapsack Cipher using Genetic
Algorithms | http://www.ijcaonline.org/archives/volume140/number6/24599-2016909333, 2016 | null | null | null | cs.CR cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryptanalysis of knapsack cipher is a fascinating problem which has eluded
the computing fraternity for decades. However, in most of the cases either the
time complexity of the proposed algorithm is colossal or an insufficient number
of samples have been taken for verification. The present work proposes a
Genetic Algorithm based technique for cryptanalysis of knapsack cipher. The
experiments conducted prove the validity of the technique. The results prove
that the technique is better than the existing techniques. An extensive review
has been carried out in order to find the gaps in the existing techniques. The
work paves the way of the application of computational intelligence techniques
to the discipline of cryptanalysis.
| [
{
"created": "Mon, 20 Jun 2016 10:04:05 GMT",
"version": "v1"
}
] | 2016-06-21 | [
[
"Singh",
"Harmeet",
""
]
] | Cryptanalysis of knapsack cipher is a fascinating problem which has eluded the computing fraternity for decades. However, in most of the cases either the time complexity of the proposed algorithm is colossal or an insufficient number of samples have been taken for verification. The present work proposes a Genetic Algorithm based technique for cryptanalysis of knapsack cipher. The experiments conducted prove the validity of the technique. The results prove that the technique is better than the existing techniques. An extensive review has been carried out in order to find the gaps in the existing techniques. The work paves the way of the application of computational intelligence techniques to the discipline of cryptanalysis. |
2205.12520 | Weijun Gao | Chong Han, Weijun Gao, Nan Yang, and Josep M. Jornet | Molecular Absorption Effect: A Double-edged Sword of Terahertz
Communications | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Communications in the terahertz band (THz) (0.1--10~THz) have been regarded
as a promising technology for future 6G and beyond wireless systems, to
overcome the challenges of evergrowing wireless data traffic and crowded
spectrum. As the frequency increases from the microwave band to the THz band,
new spectrum features pose unprecedented challenges to wireless communication
system design. The molecular absorption effect is one of the new THz spectrum
properties, which enlarges the path loss and noise at specific frequencies.
This brings in a double-edged sword for THz wireless communication systems. On
one hand, from the data rate viewpoint, molecular absorption is detrimental,
since it mitigates the received signal power and degrades the channel capacity.
On the other hand, it is worth noticing that for wireless security and
covertness, the molecular absorption effect can be utilized to safeguard THz
communications among users. In this paper, the features of the molecular
absorption effect and their impact on the THz system design are analyzed under
various scenarios, with the ultimate goal of providing guidelines to how better
exploit this unique THz phenomenon. Specifically, since the molecular
absorption greatly depends on the propagation medium, different communication
scenarios consisting of various media are discussed, including terrestrial, air
and space, sea surface and nano-scale communications. Furthermore, two novel
molecular absorption enlightened secure and covert communication schemes are
presented, where the molecular absorption effect is utilized as the key and
unique feature to boost security and covertness.
| [
{
"created": "Wed, 25 May 2022 06:20:58 GMT",
"version": "v1"
}
] | 2022-05-26 | [
[
"Han",
"Chong",
""
],
[
"Gao",
"Weijun",
""
],
[
"Yang",
"Nan",
""
],
[
"Jornet",
"Josep M.",
""
]
] | Communications in the terahertz band (THz) (0.1--10~THz) have been regarded as a promising technology for future 6G and beyond wireless systems, to overcome the challenges of evergrowing wireless data traffic and crowded spectrum. As the frequency increases from the microwave band to the THz band, new spectrum features pose unprecedented challenges to wireless communication system design. The molecular absorption effect is one of the new THz spectrum properties, which enlarges the path loss and noise at specific frequencies. This brings in a double-edged sword for THz wireless communication systems. On one hand, from the data rate viewpoint, molecular absorption is detrimental, since it mitigates the received signal power and degrades the channel capacity. On the other hand, it is worth noticing that for wireless security and covertness, the molecular absorption effect can be utilized to safeguard THz communications among users. In this paper, the features of the molecular absorption effect and their impact on the THz system design are analyzed under various scenarios, with the ultimate goal of providing guidelines on how to better exploit this unique THz phenomenon. Specifically, since the molecular absorption greatly depends on the propagation medium, different communication scenarios consisting of various media are discussed, including terrestrial, air and space, sea surface and nano-scale communications. Furthermore, two novel molecular absorption enlightened secure and covert communication schemes are presented, where the molecular absorption effect is utilized as the key and unique feature to boost security and covertness.
1803.02489 | Vahid Ahmadi | Vahid Ahmadi | Deforestation Prediction Using Neural Networks and Satellite Imagery in
a Spatial Information System | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deforestation, as one of the challenging environmental problems in the world,
has been recorded as the most serious threat to environmental diversity and one
of the main components of land-use change. In this paper, we investigate the
spatial distribution of deforestation using artificial neural networks and
satellite imagery. Modeling deforestation can be conducted considering various
factors in determining the relationship between deforestation and environmental
and socioeconomic factors. Therefore, in order to ascertain this relationship,
we consider the proximity to roads and habitats, fragmentation of the forest,
height above sea level, slope, and soil type. In this research, we modeled land
cover changes (forests) to predict deforestation using an artificial neural
network due to its significant potential for the development of nonlinear
complex models. The procedure involves image registration and error correction,
image classification, preparing deforestation maps, determining layers, and
designing a multi-layer neural network to predict deforestation. The satellite
images for this study are of a region in Hong Kong captured from 2012 to 2016.
The results of the study demonstrate that the neural network approach for
predicting deforestation can be utilized, and its outcomes show the areas that
were destroyed during the research period.
| [
{
"created": "Wed, 7 Mar 2018 00:35:02 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Dec 2018 23:59:45 GMT",
"version": "v2"
}
] | 2018-12-27 | [
[
"Ahmadi",
"Vahid",
""
]
] | Deforestation, as one of the challenging environmental problems in the world, has been recorded as the most serious threat to environmental diversity and one of the main components of land-use change. In this paper, we investigate the spatial distribution of deforestation using artificial neural networks and satellite imagery. Modeling deforestation can be conducted considering various factors in determining the relationship between deforestation and environmental and socioeconomic factors. Therefore, in order to ascertain this relationship, we consider the proximity to roads and habitats, fragmentation of the forest, height above sea level, slope, and soil type. In this research, we modeled land cover changes (forests) to predict deforestation using an artificial neural network due to its significant potential for the development of nonlinear complex models. The procedure involves image registration and error correction, image classification, preparing deforestation maps, determining layers, and designing a multi-layer neural network to predict deforestation. The satellite images for this study are of a region in Hong Kong captured from 2012 to 2016. The results of the study demonstrate that the neural network approach for predicting deforestation can be utilized, and its outcomes show the areas that were destroyed during the research period.
2308.16185 | Antonio Loquercio | Andrea Bajcsy, Antonio Loquercio, Ashish Kumar, Jitendra Malik | Learning Vision-based Pursuit-Evasion Robot Policies | Includes Supplementary. Project webpage at
https://abajcsy.github.io/vision-based-pursuit/ | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Learning strategic robot behavior -- like that required in pursuit-evasion
interactions -- under real-world constraints is extremely challenging. It
requires exploiting the dynamics of the interaction, and planning through both
physical state and latent intent uncertainty. In this paper, we transform this
intractable problem into a supervised learning problem, where a
fully-observable robot policy generates supervision for a partially-observable
one. We find that the quality of the supervision signal for the
partially-observable pursuer policy depends on two key factors: the balance of
diversity and optimality of the evader's behavior and the strength of the
modeling assumptions in the fully-observable policy. We deploy our policy on a
physical quadruped robot with an RGB-D camera on pursuit-evasion interactions
in the wild. Despite all the challenges, the sensing constraints bring about
creativity: the robot is pushed to gather information when uncertain, predict
intent from noisy measurements, and anticipate in order to intercept. Project
webpage: https://abajcsy.github.io/vision-based-pursuit/
| [
{
"created": "Wed, 30 Aug 2023 17:59:05 GMT",
"version": "v1"
}
] | 2023-08-31 | [
[
"Bajcsy",
"Andrea",
""
],
[
"Loquercio",
"Antonio",
""
],
[
"Kumar",
"Ashish",
""
],
[
"Malik",
"Jitendra",
""
]
] | Learning strategic robot behavior -- like that required in pursuit-evasion interactions -- under real-world constraints is extremely challenging. It requires exploiting the dynamics of the interaction, and planning through both physical state and latent intent uncertainty. In this paper, we transform this intractable problem into a supervised learning problem, where a fully-observable robot policy generates supervision for a partially-observable one. We find that the quality of the supervision signal for the partially-observable pursuer policy depends on two key factors: the balance of diversity and optimality of the evader's behavior and the strength of the modeling assumptions in the fully-observable policy. We deploy our policy on a physical quadruped robot with an RGB-D camera on pursuit-evasion interactions in the wild. Despite all the challenges, the sensing constraints bring about creativity: the robot is pushed to gather information when uncertain, predict intent from noisy measurements, and anticipate in order to intercept. Project webpage: https://abajcsy.github.io/vision-based-pursuit/ |
1807.04181 | Martin Wilhelm | Martin Wilhelm | On error representation in exact-decisions number types | null | null | null | null | cs.CG cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accuracy-driven computation is a strategy widely used in exact-decisions
number types for robust geometric algorithms. This work provides an overview on
the usage of error bounds in accuracy-driven computation, compares different
approaches on the representation and computation of these error bounds and
points out some caveats. The stated claims are supported by experiments.
| [
{
"created": "Wed, 11 Jul 2018 15:01:39 GMT",
"version": "v1"
}
] | 2018-07-12 | [
[
"Wilhelm",
"Martin",
""
]
] | Accuracy-driven computation is a strategy widely used in exact-decisions number types for robust geometric algorithms. This work provides an overview on the usage of error bounds in accuracy-driven computation, compares different approaches on the representation and computation of these error bounds and points out some caveats. The stated claims are supported by experiments. |
1809.00066 | Yova Kementchedjhieva | Yova Kementchedjhieva and Adam Lopez | Indicatements that character language models learn English
morpho-syntactic units and regularities | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Character language models have access to surface morphological patterns, but
it is not clear whether or how they learn abstract morphological regularities.
We instrument a character language model with several probes, finding that it
can develop a specific unit to identify word boundaries and, by extension,
morpheme boundaries, which allows it to capture linguistic properties and
regularities of these units. Our language model proves surprisingly good at
identifying the selectional restrictions of English derivational morphemes, a
task that requires both morphological and syntactic awareness. Thus we conclude
that, when morphemes overlap extensively with the words of a language, a
character language model can perform morphological abstraction.
| [
{
"created": "Fri, 31 Aug 2018 21:27:54 GMT",
"version": "v1"
}
] | 2018-09-05 | [
[
"Kementchedjhieva",
"Yova",
""
],
[
"Lopez",
"Adam",
""
]
] | Character language models have access to surface morphological patterns, but it is not clear whether or how they learn abstract morphological regularities. We instrument a character language model with several probes, finding that it can develop a specific unit to identify word boundaries and, by extension, morpheme boundaries, which allows it to capture linguistic properties and regularities of these units. Our language model proves surprisingly good at identifying the selectional restrictions of English derivational morphemes, a task that requires both morphological and syntactic awareness. Thus we conclude that, when morphemes overlap extensively with the words of a language, a character language model can perform morphological abstraction. |
2106.00014 | Mircea Andrecut Dr | M. Andrecut | Diffusion Self-Organizing Map on the Hypersphere | 10 pages, 4 figures | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss a diffusion based implementation of the self-organizing map on the
unit hypersphere. We show that this approach can be efficiently implemented
using just linear algebra methods, we give a python numpy implementation, and
we illustrate the approach using the well known MNIST dataset.
| [
{
"created": "Mon, 31 May 2021 16:27:50 GMT",
"version": "v1"
}
] | 2021-06-02 | [
[
"Andrecut",
"M.",
""
]
] | We discuss a diffusion based implementation of the self-organizing map on the unit hypersphere. We show that this approach can be efficiently implemented using just linear algebra methods, we give a python numpy implementation, and we illustrate the approach using the well known MNIST dataset. |
cs/0703006 | Jingchao Chen | Jing-Chao Chen | XORSAT: An Efficient Algorithm for the DIMACS 32-bit Parity Problem | null | null | null | null | cs.DS | null | The DIMACS 32-bit parity problem is a satisfiability (SAT) problem hard to
solve. So far, EqSatz by Li is the only solver which can solve this problem.
However, this solver is very slow. It is reported that it spent 11855 seconds
to solve a par32-5 instance on a Macintosh G3 300 MHz. The paper introduces a
new solver, XORSAT, which splits the original problem into two parts: a
structured part and a random part, and then solves them separately with WalkSAT
and an XOR equation solver. Based on our empirical observation, XORSAT is
surprisingly fast, approximately 1000 times faster than EqSatz. For a
par32-5 instance, XORSAT took 2.9 seconds, while EqSatz took 2844 seconds on an
Intel Pentium IV 2.66GHz CPU. We believe that this method, significantly
different from traditional methods, is also useful beyond this domain.
| [
{
"created": "Fri, 2 Mar 2007 01:38:16 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Chen",
"Jing-Chao",
""
]
] | The DIMACS 32-bit parity problem is a satisfiability (SAT) problem hard to solve. So far, EqSatz by Li is the only solver which can solve this problem. However, this solver is very slow. It is reported that it spent 11855 seconds to solve a par32-5 instance on a Macintosh G3 300 MHz. The paper introduces a new solver, XORSAT, which splits the original problem into two parts: a structured part and a random part, and then solves them separately with WalkSAT and an XOR equation solver. Based on our empirical observation, XORSAT is surprisingly fast, approximately 1000 times faster than EqSatz. For a par32-5 instance, XORSAT took 2.9 seconds, while EqSatz took 2844 seconds on an Intel Pentium IV 2.66GHz CPU. We believe that this method, significantly different from traditional methods, is also useful beyond this domain.
1912.04042 | John Duchi | Hilal Asi and John Duchi and Omid Javidbakht | Element Level Differential Privacy: The Right Granularity of Privacy | 34 pages, 5 figures | null | null | null | cs.LG cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential Privacy (DP) provides strong guarantees on the risk of
compromising a user's data in statistical learning applications, though these
strong protections make learning challenging and may be too stringent for some
use cases. To address this, we propose element level differential privacy,
which extends differential privacy to provide protection against leaking
information about any particular "element" a user has, allowing better utility
and more robust results than classical DP. By carefully choosing these
"elements," it is possible to provide privacy protections at a desired
granularity. We provide definitions, associated privacy guarantees, and
analysis to identify the tradeoffs with the new definition; we also develop
several private estimation and learning methodologies, providing careful
examples for item frequency and M-estimation (empirical risk minimization) with
concomitant privacy and utility analysis. We complement our theoretical and
methodological advances with several real-world applications, estimating
histograms and fitting several large-scale prediction models, including deep
networks.
| [
{
"created": "Thu, 5 Dec 2019 23:05:54 GMT",
"version": "v1"
}
] | 2019-12-10 | [
[
"Asi",
"Hilal",
""
],
[
"Duchi",
"John",
""
],
[
"Javidbakht",
"Omid",
""
]
] | Differential Privacy (DP) provides strong guarantees on the risk of compromising a user's data in statistical learning applications, though these strong protections make learning challenging and may be too stringent for some use cases. To address this, we propose element level differential privacy, which extends differential privacy to provide protection against leaking information about any particular "element" a user has, allowing better utility and more robust results than classical DP. By carefully choosing these "elements," it is possible to provide privacy protections at a desired granularity. We provide definitions, associated privacy guarantees, and analysis to identify the tradeoffs with the new definition; we also develop several private estimation and learning methodologies, providing careful examples for item frequency and M-estimation (empirical risk minimization) with concomitant privacy and utility analysis. We complement our theoretical and methodological advances with several real-world applications, estimating histograms and fitting several large-scale prediction models, including deep networks. |
2407.02336 | Adrian Rebmann | Adrian Rebmann, Timotheus Kampik, Carl Corea, Han van der Aa | Mining Constraints from Reference Process Models for Detecting
Best-Practice Violations in Event Log | Preprint submitted to Information Systems | null | null | null | cs.SE cs.DB | http://creativecommons.org/licenses/by/4.0/ | Detecting undesired process behavior is one of the main tasks of process
mining and various conformance-checking techniques have been developed to this
end. These techniques typically require a normative process model as input,
specifically designed for the processes to be analyzed. Such models are rarely
available, though, and their creation involves considerable manual
effort. However, reference process models serve as best-practice templates for
organizational processes in a plethora of domains, containing valuable
knowledge about general behavioral relations in well-engineered processes.
These general models can thus mitigate the need for dedicated models by
providing a basis to check for undesired behavior. Still, finding a perfectly
matching reference model for a real-life event log is unrealistic because
organizational needs can vary, despite similarities in process execution.
Furthermore, event logs may encompass behavior related to different reference
models, making traditional conformance checking impractical as it requires
aligning process executions to individual models. To still use reference models
for conformance checking, we propose a framework for mining declarative
best-practice constraints from a reference model collection, automatically
selecting constraints that are relevant for a given event log, and checking for
best-practice violations. We demonstrate the capability of our framework to
detect best-practice violations through an evaluation based on real-world
process model collections and event logs.
| [
{
"created": "Tue, 2 Jul 2024 15:05:37 GMT",
"version": "v1"
}
] | 2024-07-03 | [
[
"Rebmann",
"Adrian",
""
],
[
"Kampik",
"Timotheus",
""
],
[
"Corea",
"Carl",
""
],
[
"van der Aa",
"Han",
""
]
] | Detecting undesired process behavior is one of the main tasks of process mining and various conformance-checking techniques have been developed to this end. These techniques typically require a normative process model as input, specifically designed for the processes to be analyzed. Such models are rarely available, though, and their creation involves considerable manual effort. However, reference process models serve as best-practice templates for organizational processes in a plethora of domains, containing valuable knowledge about general behavioral relations in well-engineered processes. These general models can thus mitigate the need for dedicated models by providing a basis to check for undesired behavior. Still, finding a perfectly matching reference model for a real-life event log is unrealistic because organizational needs can vary, despite similarities in process execution. Furthermore, event logs may encompass behavior related to different reference models, making traditional conformance checking impractical as it requires aligning process executions to individual models. To still use reference models for conformance checking, we propose a framework for mining declarative best-practice constraints from a reference model collection, automatically selecting constraints that are relevant for a given event log, and checking for best-practice violations. We demonstrate the capability of our framework to detect best-practice violations through an evaluation based on real-world process model collections and event logs.
2306.17172 | Bashir Sadiq Mr | Bashir Olaniyi Sadiq, Muhammed Yusuf Abiodun, Sikiru Olayinka
Zakariyya, and Mohammed Dahiru Buhari | FANET Experiment: Real-Time Surveillance Applications Connected to Image
Processing System | KIU Journal of Science, Engineering and Technology (2023) | null | 10.59568/KJSET-2023-2-1-02 | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | The major goal of this paper is to use image enhancement techniques for
enhancing and extracting data in FANET applications to improve the efficiency
of surveillance. The proposed conceptual system design can improve the
likelihood of FANET operations in oil pipeline surveillance, and sports and
media coverage with the ultimate goal of providing efficient services to those
who are interested. The system architecture model is based on current
scientific principles and developing technologies. A FANET, which is capable of
gathering image data from video-enabled drones, and an image processing system
that permits data collection and analysis are the two primary components of the
system. Based on the image processing technique, a proof of concept for
efficient data extraction and enhancement in FANET situations and possible
services is illustrated.
| [
{
"created": "Thu, 15 Jun 2023 10:14:44 GMT",
"version": "v1"
}
] | 2023-07-03 | [
[
"Sadiq",
"Bashir Olaniyi",
""
],
[
"Abiodun",
"Muhammed Yusuf",
""
],
[
"Zakariyya",
"Sikiru Olayinka",
""
],
[
"Buhari",
"Mohammed Dahiru",
""
]
] | The major goal of this paper is to use image enhancement techniques for enhancing and extracting data in FANET applications to improve the efficiency of surveillance. The proposed conceptual system design can improve the likelihood of FANET operations in oil pipeline surveillance, and sports and media coverage with the ultimate goal of providing efficient services to those who are interested. The system architecture model is based on current scientific principles and developing technologies. A FANET, which is capable of gathering image data from video-enabled drones, and an image processing system that permits data collection and analysis are the two primary components of the system. Based on the image processing technique, a proof of concept for efficient data extraction and enhancement in FANET situations and possible services is illustrated. |
1701.02795 | Zeshan Hussain | Hardie Cate and Zeshan Hussain | Bidirectional American Sign Language to English Translation | 7 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We outline a bidirectional translation system that converts sentences from
American Sign Language (ASL) to English, and vice versa. To perform machine
translation between ASL and English, we utilize a generative approach.
Specifically, we employ an adjustment to the IBM word-alignment model 1 (IBM
WAM1), where we define language models for English and ASL, as well as a
translation model, and attempt to generate a translation that maximizes the
posterior distribution defined by these models. Then, using these models, we
are able to quantify the concepts of fluency and faithfulness of a translation
between languages.
| [
{
"created": "Tue, 10 Jan 2017 21:45:56 GMT",
"version": "v1"
}
] | 2017-01-12 | [
[
"Cate",
"Hardie",
""
],
[
"Hussain",
"Zeshan",
""
]
] | We outline a bidirectional translation system that converts sentences from American Sign Language (ASL) to English, and vice versa. To perform machine translation between ASL and English, we utilize a generative approach. Specifically, we employ an adjustment to the IBM word-alignment model 1 (IBM WAM1), where we define language models for English and ASL, as well as a translation model, and attempt to generate a translation that maximizes the posterior distribution defined by these models. Then, using these models, we are able to quantify the concepts of fluency and faithfulness of a translation between languages. |
1707.07548 | Yinghao Huang | Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa,
Peter V. Gehler, Ijaz Akhter, Michael J. Black | Towards Accurate Markerless Human Shape and Pose Estimation over Time | 10 pages, 6 figures, 5 tables, published in 3DV-2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing marker-less motion capture methods often assume known backgrounds,
static cameras, and sequence specific motion priors, which narrows its
application scenarios. Here we propose a fully automatic method that given
multi-view video, estimates 3D human motion and body shape. We take recent
SMPLify \cite{bogo2016keep} as the base method, and extend it in several ways.
First we fit the body to 2D features detected in multi-view images. Second, we
use a CNN method to segment the person in each image and fit the 3D body model
to the contours to further improves accuracy. Third we utilize a generic and
robust DCT temporal prior to handle the left and right side swapping issue
sometimes introduced by the 2D pose estimator. Validation on standard
benchmarks shows our results are comparable to the state of the art and also
provide a realistic 3D shape avatar. We also demonstrate accurate results on
HumanEva and on challenging dance sequences from YouTube in the monocular case.
| [
{
"created": "Mon, 24 Jul 2017 13:31:37 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2017 19:28:39 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Dec 2017 09:57:36 GMT",
"version": "v3"
},
{
"created": "Wed, 20 Dec 2017 13:07:03 GMT",
"version": "v4"
},
{
"created": "Mon, 30 Apr 2018 12:19:54 GMT",
"version": "v5"
}
] | 2018-05-01 | [
[
"Huang",
"Yinghao",
""
],
[
"Bogo",
"Federica",
""
],
[
"Lassner",
"Christoph",
""
],
[
"Kanazawa",
"Angjoo",
""
],
[
"Gehler",
"Peter V.",
""
],
[
"Akhter",
"Ijaz",
""
],
[
"Black",
"Michael J.",
""
]
] ] | Existing marker-less motion capture methods often assume known backgrounds, static cameras, and sequence specific motion priors, which narrows their application scenarios. Here we propose a fully automatic method that given multi-view video, estimates 3D human motion and body shape. We take recent SMPLify \cite{bogo2016keep} as the base method, and extend it in several ways. First we fit the body to 2D features detected in multi-view images. Second, we use a CNN method to segment the person in each image and fit the 3D body model to the contours to further improve accuracy. Third we utilize a generic and robust DCT temporal prior to handle the left and right side swapping issue sometimes introduced by the 2D pose estimator. Validation on standard benchmarks shows our results are comparable to the state of the art and also provide a realistic 3D shape avatar. We also demonstrate accurate results on HumanEva and on challenging dance sequences from YouTube in the monocular case. |
1102.2969 | Gook-Pil Roh | Gook-Pil Roh, Seung-won Hwang, and Byoung-Kee Yi | Efficient and scalable geometric hashing method for searching protein 3D
structures | 9 pages, 1 figure | null | null | null | cs.DB q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the structural databases continue to expand, efficient methods are
required to search the database for structures similar to a query structure.
There are many previous works about comparing protein 3D structures and
scanning the database with a query structure. However, they generally have
limitations on practical use because of large computational and storage
requirements.
We propose two new types of queries for searching similar sub-structures on
the structural database: LSPM (Local Spatial Pattern Matching) and RLSPM
(Reverse LSPM). Of the two query types, we focus on the RLSPM problem,
because it is more practical and general than LSPM. As a baseline, we adapt
geometric hashing techniques to the RLSPM problem, and then propose an
algorithm that improves on this baseline to handle large-scale data and
provide efficient matching. We employ
sub-sampling and Z-ordering to reduce the storage requirement and execution
time, respectively. We conduct our experiments to show the correctness and
reliability of the proposed method. Our experiment shows that the true positive
rate is at least 0.8 using the reliability measure.
| [
{
"created": "Tue, 15 Feb 2011 05:37:34 GMT",
"version": "v1"
}
] | 2011-02-16 | [
[
"Roh",
"Gook-Pil",
""
],
[
"Hwang",
"Seung-won",
""
],
[
"Yi",
"Byoung-Kee",
""
]
] ] | As the structural databases continue to expand, efficient methods are required to search the database for structures similar to a query structure. There are many previous works about comparing protein 3D structures and scanning the database with a query structure. However, they generally have limitations on practical use because of large computational and storage requirements. We propose two new types of queries for searching similar sub-structures on the structural database: LSPM (Local Spatial Pattern Matching) and RLSPM (Reverse LSPM). Of the two query types, we focus on the RLSPM problem, because it is more practical and general than LSPM. As a baseline, we adapt geometric hashing techniques to the RLSPM problem, and then propose an algorithm that improves on this baseline to handle large-scale data and provide efficient matching. We employ sub-sampling and Z-ordering to reduce the storage requirement and execution time, respectively. We conduct our experiments to show the correctness and reliability of the proposed method. Our experiment shows that the true positive rate is at least 0.8 using the reliability measure. |
2402.06892 | Masanari Kimura | Masanari Kimura | Understanding Test-Time Augmentation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Test-Time Augmentation (TTA) is a very powerful heuristic that takes
advantage of data augmentation during testing to produce averaged output.
Despite the experimental effectiveness of TTA, there is insufficient discussion
of its theoretical aspects. In this paper, we aim to give theoretical
guarantees for TTA and clarify its behavior.
| [
{
"created": "Sat, 10 Feb 2024 06:49:08 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Kimura",
"Masanari",
""
]
] | Test-Time Augmentation (TTA) is a very powerful heuristic that takes advantage of data augmentation during testing to produce averaged output. Despite the experimental effectiveness of TTA, there is insufficient discussion of its theoretical aspects. In this paper, we aim to give theoretical guarantees for TTA and clarify its behavior. |
2203.16942 | Xu Chen | Weiqi Shao and Xu Chen and Long Xia and Jiashu Zhao and Dawei Yin | Sequential Recommendation with User Evolving Preference Decomposition | sequential recommendation | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Modeling user sequential behaviors has recently attracted increasing
attention in the recommendation domain. Existing methods mostly assume coherent
preference in the same sequence. However, user personalities are volatile and
easily changed, and there can be multiple mixed preferences underlying user
behaviors. To solve this problem, in this paper, we propose a novel sequential
recommender model via decomposing and modeling user independent preferences. To
achieve this goal, we highlight three practical challenges considering the
inconsistent, evolving and uneven nature of the user behavior, which are seldom
noticed by the previous work. For overcoming these challenges in a unified
framework, we introduce a reinforcement learning module to simulate the
evolution of user preference. More specifically, the action aims to allocate
each item into a sub-sequence or create a new one according to how the previous
items are decomposed as well as the time interval between successive behaviors.
The reward is associated with the final loss of the learning objective, aiming
to generate sub-sequences which can better fit the training data. We conduct
extensive experiments based on six real-world datasets across different
domains. Compared with the state-of-the-art methods, empirical studies manifest
that our model can on average improve the performance by about 8.21%, 10.08%,
10.32%, and 9.82% on the metrics of Precision, Recall, NDCG and MRR,
respectively.
| [
{
"created": "Thu, 31 Mar 2022 10:57:59 GMT",
"version": "v1"
}
] | 2022-04-01 | [
[
"Shao",
"Weiqi",
""
],
[
"Chen",
"Xu",
""
],
[
"Xia",
"Long",
""
],
[
"Zhao",
"Jiashu",
""
],
[
"Yin",
"Dawei",
""
]
] | Modeling user sequential behaviors has recently attracted increasing attention in the recommendation domain. Existing methods mostly assume coherent preference in the same sequence. However, user personalities are volatile and easily changed, and there can be multiple mixed preferences underlying user behaviors. To solve this problem, in this paper, we propose a novel sequential recommender model via decomposing and modeling user independent preferences. To achieve this goal, we highlight three practical challenges considering the inconsistent, evolving and uneven nature of the user behavior, which are seldom noticed by the previous work. For overcoming these challenges in a unified framework, we introduce a reinforcement learning module to simulate the evolution of user preference. More specifically, the action aims to allocate each item into a sub-sequence or create a new one according to how the previous items are decomposed as well as the time interval between successive behaviors. The reward is associated with the final loss of the learning objective, aiming to generate sub-sequences which can better fit the training data. We conduct extensive experiments based on six real-world datasets across different domains. Compared with the state-of-the-art methods, empirical studies manifest that our model can on average improve the performance by about 8.21%, 10.08%, 10.32%, and 9.82% on the metrics of Precision, Recall, NDCG and MRR, respectively. |
1710.03960 | Gabriel Nivasch | David Eppstein, Sariel Har-Peled and Gabriel Nivasch | Grid peeling and the affine curve-shortening flow | 18 pages, 11 figures. A preliminary version appeared in ALENEX 2018 | Experimental Mathematics 29 (3): 306-316, 2020 | 10.1080/10586458.2018.1466379 | null | cs.CG math.DG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study an experimentally-observed connection between two
seemingly unrelated processes, one from computational geometry and the other
from differential geometry. The first one (which we call "grid peeling") is the
convex-layer decomposition of subsets $G\subset \mathbb Z^2$ of the integer
grid, previously studied for the particular case $G=\{1,\ldots,m\}^2$ by
Har-Peled and Lidick\'y (2013). The second one is the affine curve-shortening
flow (ACSF), first studied by Alvarez et al. (1993) and Sapiro and Tannenbaum
(1993). We present empirical evidence that, in a certain well-defined sense,
grid peeling behaves at the limit like ACSF on convex curves. We offer some
theoretical arguments in favor of this conjecture.
We also pay closer attention to the simple case where $G=\mathbb N^2$ is a
quarter-infinite grid. This case corresponds to ACSF starting with an infinite
L-shaped curve, which when transformed using the ACSF becomes a hyperbola for
all times $t>0$. We prove that, in the grid peeling of $\mathbb N^2$, (1) the
number of grid points removed up to iteration $n$ is $\Theta(n^{3/2}\log n)$;
and (2) the boundary at iteration $n$ is sandwiched between two hyperbolas that
are separated from each other by a constant factor.
| [
{
"created": "Wed, 11 Oct 2017 08:37:39 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jan 2018 11:24:55 GMT",
"version": "v2"
}
] | 2020-08-14 | [
[
"Eppstein",
"David",
""
],
[
"Har-Peled",
"Sariel",
""
],
[
"Nivasch",
"Gabriel",
""
]
] | In this paper we study an experimentally-observed connection between two seemingly unrelated processes, one from computational geometry and the other from differential geometry. The first one (which we call "grid peeling") is the convex-layer decomposition of subsets $G\subset \mathbb Z^2$ of the integer grid, previously studied for the particular case $G=\{1,\ldots,m\}^2$ by Har-Peled and Lidick\'y (2013). The second one is the affine curve-shortening flow (ACSF), first studied by Alvarez et al. (1993) and Sapiro and Tannenbaum (1993). We present empirical evidence that, in a certain well-defined sense, grid peeling behaves at the limit like ACSF on convex curves. We offer some theoretical arguments in favor of this conjecture. We also pay closer attention to the simple case where $G=\mathbb N^2$ is a quarter-infinite grid. This case corresponds to ACSF starting with an infinite L-shaped curve, which when transformed using the ACSF becomes a hyperbola for all times $t>0$. We prove that, in the grid peeling of $\mathbb N^2$, (1) the number of grid points removed up to iteration $n$ is $\Theta(n^{3/2}\log n)$; and (2) the boundary at iteration $n$ is sandwiched between two hyperbolas that are separated from each other by a constant factor. |
2302.05832 | Tim Whitaker | Tim Whitaker, Darrell Whitley | Sparse Mutation Decompositions: Fine Tuning Deep Neural Networks with
Subspace Evolution | 8 Pages, 3 Figures | null | null | null | cs.NE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Neuroevolution is a promising area of research that combines evolutionary
algorithms with neural networks. A popular subclass of neuroevolutionary
methods, called evolution strategies, relies on dense noise perturbations to
mutate networks, which can be sample inefficient and challenging for large
models with millions of parameters. We introduce an approach to alleviating
this problem by decomposing dense mutations into low-dimensional subspaces.
Restricting mutations in this way can significantly reduce variance as networks
can handle stronger perturbations while maintaining performance, which enables
a more controlled and targeted evolution of deep networks. This approach is
uniquely effective for the task of fine tuning pre-trained models, which is an
increasingly valuable area of research as networks continue to scale in size
and open source models become more widely available. Furthermore, we show how
this work naturally connects to ensemble learning where sparse mutations
encourage diversity among children such that their combined predictions can
reliably improve performance. We conduct the first large scale exploration of
neuroevolutionary fine tuning and ensembling on the notoriously difficult
ImageNet dataset, where we see small generalization improvements with only a
single evolutionary generation using nearly a dozen different deep neural
network architectures.
| [
{
"created": "Sun, 12 Feb 2023 01:27:26 GMT",
"version": "v1"
}
] | 2023-02-14 | [
[
"Whitaker",
"Tim",
""
],
[
"Whitley",
"Darrell",
""
]
] | Neuroevolution is a promising area of research that combines evolutionary algorithms with neural networks. A popular subclass of neuroevolutionary methods, called evolution strategies, relies on dense noise perturbations to mutate networks, which can be sample inefficient and challenging for large models with millions of parameters. We introduce an approach to alleviating this problem by decomposing dense mutations into low-dimensional subspaces. Restricting mutations in this way can significantly reduce variance as networks can handle stronger perturbations while maintaining performance, which enables a more controlled and targeted evolution of deep networks. This approach is uniquely effective for the task of fine tuning pre-trained models, which is an increasingly valuable area of research as networks continue to scale in size and open source models become more widely available. Furthermore, we show how this work naturally connects to ensemble learning where sparse mutations encourage diversity among children such that their combined predictions can reliably improve performance. We conduct the first large scale exploration of neuroevolutionary fine tuning and ensembling on the notoriously difficult ImageNet dataset, where we see small generalization improvements with only a single evolutionary generation using nearly a dozen different deep neural network architectures. |
1010.2733 | Talbot Hugues | Camille Couprie (LIGM), Leo Grady, Hugues Talbot (LIGM), Laurent
Najman (LIGM) | Combinatorial Continuous Maximal Flows | 26 pages | SIAM Journal on Imaging Sciences 4 (2011) 905-930 | 10.1137/100799186 | null | cs.CV math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximum flow (and minimum cut) algorithms have had a strong impact on
computer vision. In particular, graph cuts algorithms provide a mechanism for
the discrete optimization of an energy functional which has been used in a
variety of applications such as image segmentation, stereo, image stitching and
texture synthesis. Algorithms based on the classical formulation of max-flow
defined on a graph are known to exhibit metrication artefacts in the solution.
Therefore, a recent trend has been to instead employ a spatially continuous
maximum flow (or the dual min-cut problem) in these same applications to
produce solutions with no metrication errors. However, known fast continuous
max-flow algorithms have no stopping criteria or have not been proved to
converge. In this work, we revisit the continuous max-flow problem and show
that the analogous discrete formulation is different from the classical
max-flow problem. We then apply an appropriate combinatorial optimization
technique to this combinatorial continuous max-flow CCMF problem to find a
null-divergence solution that exhibits no metrication artefacts and may be
solved exactly by a fast, efficient algorithm with provable convergence.
Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the
fact, already proved by Nozawa in the continuous setting, that the max-flow and
the total variation problems are not always equivalent.
| [
{
"created": "Wed, 13 Oct 2010 19:08:02 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Dec 2011 07:26:41 GMT",
"version": "v2"
}
] | 2011-12-30 | [
[
"Couprie",
"Camille",
"",
"LIGM"
],
[
"Grady",
"Leo",
"",
"LIGM"
],
[
"Talbot",
"Hugues",
"",
"LIGM"
],
[
"Najman",
"Laurent",
"",
"LIGM"
]
] | Maximum flow (and minimum cut) algorithms have had a strong impact on computer vision. In particular, graph cuts algorithms provide a mechanism for the discrete optimization of an energy functional which has been used in a variety of applications such as image segmentation, stereo, image stitching and texture synthesis. Algorithms based on the classical formulation of max-flow defined on a graph are known to exhibit metrication artefacts in the solution. Therefore, a recent trend has been to instead employ a spatially continuous maximum flow (or the dual min-cut problem) in these same applications to produce solutions with no metrication errors. However, known fast continuous max-flow algorithms have no stopping criteria or have not been proved to converge. In this work, we revisit the continuous max-flow problem and show that the analogous discrete formulation is different from the classical max-flow problem. We then apply an appropriate combinatorial optimization technique to this combinatorial continuous max-flow CCMF problem to find a null-divergence solution that exhibits no metrication artefacts and may be solved exactly by a fast, efficient algorithm with provable convergence. Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the fact, already proved by Nozawa in the continuous setting, that the max-flow and the total variation problems are not always equivalent. |
2207.08426 | Ryann Sim Wei Jian | Georgios Piliouras, Lillian Ratliff, Ryann Sim, Stratis Skoulakis | Fast Convergence of Optimistic Gradient Ascent in Network Zero-Sum
Extensive Form Games | To appear in SAGT 2022 | null | null | null | cs.GT cs.LG cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of learning in games has thus far focused primarily on normal form
games. In contrast, our understanding of learning in extensive form games
(EFGs) and particularly in EFGs with many agents lags far behind, despite them
being closer in nature to many real world applications. We consider the natural
class of Network Zero-Sum Extensive Form Games, which combines the global
zero-sum property of agent payoffs, the efficient representation of graphical
games, as well as the expressive power of EFGs. We examine the convergence
properties of Optimistic Gradient Ascent (OGA) in these games. We prove that
the time-average behavior of such online learning dynamics exhibits $O(1/T)$
rate convergence to the set of Nash Equilibria. Moreover, we show that the
day-to-day behavior also converges to Nash with rate $O(c^{-t})$ for some
game-dependent constant $c>0$.
| [
{
"created": "Mon, 18 Jul 2022 08:21:39 GMT",
"version": "v1"
}
] | 2022-07-19 | [
[
"Piliouras",
"Georgios",
""
],
[
"Ratliff",
"Lillian",
""
],
[
"Sim",
"Ryann",
""
],
[
"Skoulakis",
"Stratis",
""
]
] ] | The study of learning in games has thus far focused primarily on normal form games. In contrast, our understanding of learning in extensive form games (EFGs) and particularly in EFGs with many agents lags far behind, despite them being closer in nature to many real world applications. We consider the natural class of Network Zero-Sum Extensive Form Games, which combines the global zero-sum property of agent payoffs, the efficient representation of graphical games, as well as the expressive power of EFGs. We examine the convergence properties of Optimistic Gradient Ascent (OGA) in these games. We prove that the time-average behavior of such online learning dynamics exhibits $O(1/T)$ rate convergence to the set of Nash Equilibria. Moreover, we show that the day-to-day behavior also converges to Nash with rate $O(c^{-t})$ for some game-dependent constant $c>0$. |
1905.09211 | Berkan Demirel | Berkan Demirel, Omer Ozdil, Yunus Emre Esin, Safak Ozturk | Segmentation-Aware Hyperspectral Image Classification | To appear at International Geoscience and Remote Sensing Symposium
(IGARSS) 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a unified hyperspectral image classification
method which takes three-dimensional hyperspectral data cube as an input and
produces a classification map. In the proposed method, a deep neural network
which uses spectral and spatial information together with residual connections,
and pixel affinity network based segmentation-aware superpixels are used
together. In the architecture, segmentation-aware superpixels run on the
initial classification map of deep residual network, and apply majority voting
on obtained results. Experimental results show that our proposed method yields
state-of-the-art results on two benchmark datasets. Moreover, we also show that
the segmentation-aware superpixels contribute greatly to the success of
hyperspectral image classification methods in cases where training data is
insufficient.
| [
{
"created": "Wed, 22 May 2019 16:03:01 GMT",
"version": "v1"
}
] | 2019-05-23 | [
[
"Demirel",
"Berkan",
""
],
[
"Ozdil",
"Omer",
""
],
[
"Esin",
"Yunus Emre",
""
],
[
"Ozturk",
"Safak",
""
]
] ] | In this paper, we propose a unified hyperspectral image classification method which takes three-dimensional hyperspectral data cube as an input and produces a classification map. In the proposed method, a deep neural network which uses spectral and spatial information together with residual connections, and pixel affinity network based segmentation-aware superpixels are used together. In the architecture, segmentation-aware superpixels run on the initial classification map of deep residual network, and apply majority voting on obtained results. Experimental results show that our proposed method yields state-of-the-art results on two benchmark datasets. Moreover, we also show that the segmentation-aware superpixels contribute greatly to the success of hyperspectral image classification methods in cases where training data is insufficient. |
1501.01202 | Christopher Mattern | Christopher Mattern | On Probability Estimation by Exponential Smoothing | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probability estimation is essential for every statistical data compression
algorithm. In practice, probability estimation should be adaptive: recent
observations should receive a higher weight than older observations. We present
a probability estimation method based on exponential smoothing that satisfies
this requirement and runs in constant time per letter. Our main contribution is
a theoretical analysis in case of a binary alphabet for various smoothing rate
sequences: We show that the redundancy w.r.t. a piecewise stationary model with
$s$ segments is $O\left(s\sqrt n\right)$ for any bit sequence of length $n$, an
improvement over redundancy $O\left(s\sqrt{n\log n}\right)$ of previous
approaches with similar time complexity.
| [
{
"created": "Tue, 6 Jan 2015 15:31:53 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jan 2015 11:53:51 GMT",
"version": "v2"
}
] | 2015-01-12 | [
[
"Mattern",
"Christopher",
""
]
] ] | Probability estimation is essential for every statistical data compression algorithm. In practice, probability estimation should be adaptive: recent observations should receive a higher weight than older observations. We present a probability estimation method based on exponential smoothing that satisfies this requirement and runs in constant time per letter. Our main contribution is a theoretical analysis in case of a binary alphabet for various smoothing rate sequences: We show that the redundancy w.r.t. a piecewise stationary model with $s$ segments is $O\left(s\sqrt n\right)$ for any bit sequence of length $n$, an improvement over redundancy $O\left(s\sqrt{n\log n}\right)$ of previous approaches with similar time complexity. |
2106.14699 | Johan \"Ofverstedt | Johan \"Ofverstedt, Joakim Lindblad, Nata\v{s}a Sladoje | Fast computation of mutual information in the frequency domain with
applications to global multimodal image alignment | 7 pages, 4 figures, 2 tables. The article is under consideration at
Pattern Recognition Letters | Pattern Recognition Letters, Vol. 159, pp. 196-203, 2022 | 10.1016/j.patrec.2022.05.022 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal image alignment is the process of finding spatial correspondences
between images formed by different imaging techniques or under different
conditions, to facilitate heterogeneous data fusion and correlative analysis.
The information-theoretic concept of mutual information (MI) is widely used as
a similarity measure to guide multimodal alignment processes, where most works
have focused on local maximization of MI that typically works well only for
small displacements; this points to a need for global maximization of MI, which
has previously been computationally infeasible due to the high run-time
complexity of existing algorithms. We propose an efficient algorithm for
computing MI for all discrete displacements (formalized as the cross-mutual
information function (CMIF)), which is based on cross-correlation computed in
the frequency domain. We show that the algorithm is equivalent to a direct
method while asymptotically superior in terms of run-time. Furthermore, we
propose a method for multimodal image alignment for transformation models with
few degrees of freedom (e.g. rigid) based on the proposed CMIF-algorithm. We
evaluate the efficacy of the proposed method on three distinct benchmark
datasets, of aerial images, cytological images, and histological images, and we
observe excellent success-rates (in recovering known rigid transformations),
overall outperforming alternative methods, including local optimization of MI
as well as several recent deep learning-based approaches. We also evaluate the
run-times of a GPU implementation of the proposed algorithm and observe
speed-ups from 100 to more than 10,000 times for realistic image sizes compared
to a GPU implementation of a direct method. Code is shared as open-source at
\url{github.com/MIDA-group/globalign}.
| [
{
"created": "Mon, 28 Jun 2021 13:27:05 GMT",
"version": "v1"
}
] | 2022-07-01 | [
[
"Öfverstedt",
"Johan",
""
],
[
"Lindblad",
"Joakim",
""
],
[
"Sladoje",
"Nataša",
""
]
] | Multimodal image alignment is the process of finding spatial correspondences between images formed by different imaging techniques or under different conditions, to facilitate heterogeneous data fusion and correlative analysis. The information-theoretic concept of mutual information (MI) is widely used as a similarity measure to guide multimodal alignment processes, where most works have focused on local maximization of MI that typically works well only for small displacements; this points to a need for global maximization of MI, which has previously been computationally infeasible due to the high run-time complexity of existing algorithms. We propose an efficient algorithm for computing MI for all discrete displacements (formalized as the cross-mutual information function (CMIF)), which is based on cross-correlation computed in the frequency domain. We show that the algorithm is equivalent to a direct method while asymptotically superior in terms of run-time. Furthermore, we propose a method for multimodal image alignment for transformation models with few degrees of freedom (e.g. rigid) based on the proposed CMIF-algorithm. We evaluate the efficacy of the proposed method on three distinct benchmark datasets, of aerial images, cytological images, and histological images, and we observe excellent success-rates (in recovering known rigid transformations), overall outperforming alternative methods, including local optimization of MI as well as several recent deep learning-based approaches. We also evaluate the run-times of a GPU implementation of the proposed algorithm and observe speed-ups from 100 to more than 10,000 times for realistic image sizes compared to a GPU implementation of a direct method. Code is shared as open-source at \url{github.com/MIDA-group/globalign}. |
1910.10853 | Chunlei Liu | Chunlei Liu, Wenrui Ding, Xin Xia, Baochang Zhang, Jiaxin Gu,
Jianzhuang Liu, Rongrong Ji, David Doermann | Circulant Binary Convolutional Networks: Enhancing the Performance of
1-bit DCNNs with Circulant Back Propagation | Published in CVPR2019 | Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition. 2019: 2691-2699 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapidly decreasing computation and memory cost has recently driven the
success of many applications in the field of deep learning. Practical
applications of deep learning in resource-limited hardware, such as embedded
devices and smart phones, however, remain challenging. For binary convolutional
networks, the reason lies in the degraded representation caused by binarizing
full-precision filters. To address this problem, we propose new circulant
filters (CiFs) and a circulant binary convolution (CBConv) to enhance the
capacity of binarized convolutional features via our circulant back propagation
(CBP). The CiFs can be easily incorporated into existing deep convolutional
neural networks (DCNNs), which leads to new Circulant Binary Convolutional
Networks (CBCNs). Extensive experiments confirm that the performance gap
between the 1-bit and full-precision DCNNs is minimized by increasing the
filter diversity, which further increases the representational ability in our
networks. Our experiments on ImageNet show that CBCNs achieve 61.4% top-1
accuracy with ResNet18. Compared to the state-of-the-art such as XNOR, CBCNs
can achieve up to 10% higher top-1 accuracy with more powerful representational
ability.
| [
{
"created": "Thu, 24 Oct 2019 00:24:30 GMT",
"version": "v1"
}
] | 2019-10-25 | [
[
"Liu",
"Chunlei",
""
],
[
"Ding",
"Wenrui",
""
],
[
"Xia",
"Xin",
""
],
[
"Zhang",
"Baochang",
""
],
[
"Gu",
"Jiaxin",
""
],
[
"Liu",
"Jianzhuang",
""
],
[
"Ji",
"Rongrong",
""
],
[
"Doermann",
"David",
""
]
] | The rapidly decreasing computation and memory cost has recently driven the success of many applications in the field of deep learning. Practical applications of deep learning in resource-limited hardware, such as embedded devices and smart phones, however, remain challenging. For binary convolutional networks, the reason lies in the degraded representation caused by binarizing full-precision filters. To address this problem, we propose new circulant filters (CiFs) and a circulant binary convolution (CBConv) to enhance the capacity of binarized convolutional features via our circulant back propagation (CBP). The CiFs can be easily incorporated into existing deep convolutional neural networks (DCNNs), which leads to new Circulant Binary Convolutional Networks (CBCNs). Extensive experiments confirm that the performance gap between the 1-bit and full-precision DCNNs is minimized by increasing the filter diversity, which further increases the representational ability in our networks. Our experiments on ImageNet show that CBCNs achieve 61.4% top-1 accuracy with ResNet18. Compared to the state-of-the-art such as XNOR, CBCNs can achieve up to 10% higher top-1 accuracy with more powerful representational ability. |
2302.03088 | Laura Stegner | David Porfirio, Laura Stegner, Maya Cakmak, Allison Saupp\'e, Aws
Albarghouthi, Bilge Mutlu | Sketching Robot Programs On the Fly | Accepted at HRI '23, March 13-16, 2023, Stockholm, Sweden | null | 10.1145/3568162.3576991 | null | cs.RO cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Service robots for personal use in the home and the workplace require
end-user development solutions for swiftly scripting robot tasks as the need
arises. Many existing solutions preserve ease, efficiency, and convenience
through simple programming interfaces or by restricting task complexity. Others
facilitate meticulous task design but often do so at the expense of simplicity
and efficiency. There is a need for robot programming solutions that reconcile
the complexity of robotics with the on-the-fly goals of end-user development.
In response to this need, we present a novel, multimodal, and on-the-fly
development system, Tabula. Inspired by a formative design study with a
prototype, Tabula leverages a combination of spoken language for specifying the
core of a robot task and sketching for contextualizing the core. The result is
that developers can script partial, sloppy versions of robot programs to be
completed and refined by a program synthesizer. Lastly, we demonstrate our
anticipated use cases of Tabula via a set of application scenarios.
| [
{
"created": "Mon, 6 Feb 2023 19:44:05 GMT",
"version": "v1"
}
] | 2023-02-08 | [
[
"Porfirio",
"David",
""
],
[
"Stegner",
"Laura",
""
],
[
"Cakmak",
"Maya",
""
],
[
"Sauppé",
"Allison",
""
],
[
"Albarghouthi",
"Aws",
""
],
[
"Mutlu",
"Bilge",
""
]
] | Service robots for personal use in the home and the workplace require end-user development solutions for swiftly scripting robot tasks as the need arises. Many existing solutions preserve ease, efficiency, and convenience through simple programming interfaces or by restricting task complexity. Others facilitate meticulous task design but often do so at the expense of simplicity and efficiency. There is a need for robot programming solutions that reconcile the complexity of robotics with the on-the-fly goals of end-user development. In response to this need, we present a novel, multimodal, and on-the-fly development system, Tabula. Inspired by a formative design study with a prototype, Tabula leverages a combination of spoken language for specifying the core of a robot task and sketching for contextualizing the core. The result is that developers can script partial, sloppy versions of robot programs to be completed and refined by a program synthesizer. Lastly, we demonstrate our anticipated use cases of Tabula via a set of application scenarios. |
2112.05340 | Alex Tamkin | Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin | Tradeoffs Between Contrastive and Supervised Learning: An Empirical
Study | NeurIPS 2021 Workshop: Self-Supervised Learning - Theory and Practice | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive learning has made considerable progress in computer vision,
outperforming supervised pretraining on a range of downstream datasets.
However, is contrastive learning the better choice in all situations? We
demonstrate two cases where it is not. First, under sufficiently small
pretraining budgets, supervised pretraining on ImageNet consistently
outperforms a comparable contrastive model on eight diverse image
classification datasets. This suggests that the common practice of comparing
pretraining approaches at hundreds or thousands of epochs may not produce
actionable insights for those with more limited compute budgets. Second, even
with larger pretraining budgets we identify tasks where supervised learning
prevails, perhaps because the object-centric bias of supervised pretraining
makes the model more resilient to common corruptions and spurious
foreground-background correlations. These results underscore the need to
characterize tradeoffs of different pretraining objectives across a wider range
of contexts and training regimes.
| [
{
"created": "Fri, 10 Dec 2021 05:19:32 GMT",
"version": "v1"
}
] | 2021-12-13 | [
[
"Karthik",
"Ananya",
""
],
[
"Wu",
"Mike",
""
],
[
"Goodman",
"Noah",
""
],
[
"Tamkin",
"Alex",
""
]
] | Contrastive learning has made considerable progress in computer vision, outperforming supervised pretraining on a range of downstream datasets. However, is contrastive learning the better choice in all situations? We demonstrate two cases where it is not. First, under sufficiently small pretraining budgets, supervised pretraining on ImageNet consistently outperforms a comparable contrastive model on eight diverse image classification datasets. This suggests that the common practice of comparing pretraining approaches at hundreds or thousands of epochs may not produce actionable insights for those with more limited compute budgets. Second, even with larger pretraining budgets we identify tasks where supervised learning prevails, perhaps because the object-centric bias of supervised pretraining makes the model more resilient to common corruptions and spurious foreground-background correlations. These results underscore the need to characterize tradeoffs of different pretraining objectives across a wider range of contexts and training regimes. |
1404.4887 | Christian Schulz | Yaroslav Akhremtsev, Peter Sanders, Christian Schulz | (Semi-)External Algorithms for Graph Partitioning and Clustering | null | null | null | null | cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop semi-external and external memory algorithms for
graph partitioning and clustering problems. Graph partitioning and clustering
are key tools for processing and analyzing large complex networks. We address
both problems in the (semi-)external model by adapting the size-constrained
label propagation technique. Our (semi-)external size-constrained label
propagation algorithm can be used to compute graph clusterings and is a
prerequisite for the (semi-)external graph partitioning algorithm. The
algorithm is then used for both the coarsening and the refinement phase of a
multilevel algorithm to compute graph partitions. Our algorithm is able to
partition and cluster huge complex networks with billions of edges on cheap
commodity machines. Experiments demonstrate that the semi-external graph
partitioning algorithm is scalable and can compute high quality partitions in
time that is comparable to the running time of an efficient internal memory
implementation. A parallelization of the algorithm in the semi-external model
further reduces running time.
| [
{
"created": "Fri, 18 Apr 2014 20:58:21 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Sep 2014 21:38:12 GMT",
"version": "v2"
}
] | 2014-09-24 | [
[
"Akhremtsev",
"Yaroslav",
""
],
[
"Sanders",
"Peter",
""
],
[
"Schulz",
"Christian",
""
]
] | In this paper, we develop semi-external and external memory algorithms for graph partitioning and clustering problems. Graph partitioning and clustering are key tools for processing and analyzing large complex networks. We address both problems in the (semi-)external model by adapting the size-constrained label propagation technique. Our (semi-)external size-constrained label propagation algorithm can be used to compute graph clusterings and is a prerequisite for the (semi-)external graph partitioning algorithm. The algorithm is then used for both the coarsening and the refinement phase of a multilevel algorithm to compute graph partitions. Our algorithm is able to partition and cluster huge complex networks with billions of edges on cheap commodity machines. Experiments demonstrate that the semi-external graph partitioning algorithm is scalable and can compute high quality partitions in time that is comparable to the running time of an efficient internal memory implementation. A parallelization of the algorithm in the semi-external model further reduces running time. |
1103.3190 | Johnny Karout | Johnny Karout, Erik Agrell, Krzysztof Szczerba and Magnus Karlsson | Designing Power-Efficient Modulation Formats for Noncoherent Optical
Systems | Submitted to Globecom 2011 | Proc. Global Communications Conference (GlobeCom), Houston, TX,
Dec. 2011 (best paper award) | 10.1109/GLOCOM.2011.6133546 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We optimize modulation formats for the additive white Gaussian noise channel
with a nonnegative input constraint, also known as the intensity-modulated
direct detection channel, with and without confining them to a lattice
structure. Our optimization criteria are the average electrical and optical
power. The nonnegativity input signal constraint is translated into a conical
constraint in signal space, and modulation formats are designed by sphere
packing inside this cone. Some remarkably dense packings are found, which yield
more power-efficient modulation formats than previously known. For example, at
a spectral efficiency of 1 bit/s/Hz, the obtained modulation format offers a
0.86 dB average electrical power gain and 0.43 dB average optical power gain
over the previously best known modulation formats to achieve a symbol error
rate of 10^-6. This modulation turns out to have a lattice-based structure. At
a spectral efficiency of 3/2 bits/s/Hz and to achieve a symbol error rate of
10^-6, the modulation format obtained for optimizing the average electrical
power offers a 0.58 dB average electrical power gain over the best
lattice-based modulation and 2.55 dB gain over the best previously known
format. However, the modulation format optimized for average optical power
offers a 0.46 dB average optical power gain over the best lattice-based
modulation and 1.35 dB gain over the best previously known format.
| [
{
"created": "Sat, 12 Mar 2011 00:17:35 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Karout",
"Johnny",
""
],
[
"Agrell",
"Erik",
""
],
[
"Szczerba",
"Krzysztof",
""
],
[
"Karlsson",
"Magnus",
""
]
] | We optimize modulation formats for the additive white Gaussian noise channel with a nonnegative input constraint, also known as the intensity-modulated direct detection channel, with and without confining them to a lattice structure. Our optimization criteria are the average electrical and optical power. The nonnegativity input signal constraint is translated into a conical constraint in signal space, and modulation formats are designed by sphere packing inside this cone. Some remarkably dense packings are found, which yield more power-efficient modulation formats than previously known. For example, at a spectral efficiency of 1 bit/s/Hz, the obtained modulation format offers a 0.86 dB average electrical power gain and 0.43 dB average optical power gain over the previously best known modulation formats to achieve a symbol error rate of 10^-6. This modulation turns out to have a lattice-based structure. At a spectral efficiency of 3/2 bits/s/Hz and to achieve a symbol error rate of 10^-6, the modulation format obtained for optimizing the average electrical power offers a 0.58 dB average electrical power gain over the best lattice-based modulation and 2.55 dB gain over the best previously known format. However, the modulation format optimized for average optical power offers a 0.46 dB average optical power gain over the best lattice-based modulation and 1.35 dB gain over the best previously known format. |
2111.02997 | Shangtong Zhang | Shangtong Zhang, Remi Tachet, Romain Laroche | Global Optimality and Finite Sample Analysis of Softmax Off-Policy Actor
Critic under State Distribution Mismatch | Journal of Machine Learning Research 2022 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we establish the global optimality and convergence rate of an
off-policy actor critic algorithm in the tabular setting without using density
ratio to correct the discrepancy between the state distribution of the behavior
policy and that of the target policy. Our work goes beyond existing works on
the optimality of policy gradient methods in that existing works use the exact
policy gradient for updating the policy parameters while we use an approximate
and stochastic update step. Our update step is not a gradient update because we
do not use a density ratio to correct the state distribution, which aligns well
with what practitioners do. Our update is approximate because we use a learned
critic instead of the true value function. Our update is stochastic because at
each step the update is done for only the current state action pair. Moreover,
we remove several restrictive assumptions from existing works in our analysis.
Central to our work is the finite sample analysis of a generic stochastic
approximation algorithm with time-inhomogeneous update operators on
time-inhomogeneous Markov chains, based on its uniform contraction properties.
| [
{
"created": "Thu, 4 Nov 2021 16:48:45 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Oct 2022 13:05:59 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Oct 2022 17:52:33 GMT",
"version": "v3"
}
] | 2022-10-25 | [
[
"Zhang",
"Shangtong",
""
],
[
"Tachet",
"Remi",
""
],
[
"Laroche",
"Romain",
""
]
] | In this paper, we establish the global optimality and convergence rate of an off-policy actor critic algorithm in the tabular setting without using density ratio to correct the discrepancy between the state distribution of the behavior policy and that of the target policy. Our work goes beyond existing works on the optimality of policy gradient methods in that existing works use the exact policy gradient for updating the policy parameters while we use an approximate and stochastic update step. Our update step is not a gradient update because we do not use a density ratio to correct the state distribution, which aligns well with what practitioners do. Our update is approximate because we use a learned critic instead of the true value function. Our update is stochastic because at each step the update is done for only the current state action pair. Moreover, we remove several restrictive assumptions from existing works in our analysis. Central to our work is the finite sample analysis of a generic stochastic approximation algorithm with time-inhomogeneous update operators on time-inhomogeneous Markov chains, based on its uniform contraction properties. |
1802.02288 | Xiaoling Hu | Caijun Zhong, Xiaoling Hu, Xiaoming Chen, Derrick Wing Kwan Ng and
Zhaoyang Zhang | Spatial Modulation Assisted Multi-Antenna Non-Orthogonal Multiple Access | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-antenna non-orthogonal multiple access (NOMA) is a promising technique
to significantly improve the spectral efficiency and support massive access,
which has received considerable interest from academia and industry. This
article first briefly introduces the basic idea of the conventional multi-antenna
NOMA technique, and then discusses the key limitations, namely, the high
complexity of successive interference cancellation (SIC) and the lack of
fairness between the user with a strong channel gain and the user with a weak
channel gain. To address these problems, this article proposes a novel spatial
modulation (SM) assisted multi-antenna NOMA technique, which avoids the use of
SIC and is able to completely cancel intra-cluster interference. Furthermore,
simulation results are provided to validate the effectiveness of the proposed
novel technique compared to the conventional multi-antenna NOMA. Finally, this
article points out the key challenges and sheds light on the future research
directions of the SM assisted multi-antenna NOMA technique.
| [
{
"created": "Wed, 7 Feb 2018 02:21:40 GMT",
"version": "v1"
}
] | 2018-02-08 | [
[
"Zhong",
"Caijun",
""
],
[
"Hu",
"Xiaoling",
""
],
[
"Chen",
"Xiaoming",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Zhang",
"Zhaoyang",
""
]
] | Multi-antenna non-orthogonal multiple access (NOMA) is a promising technique to significantly improve the spectral efficiency and support massive access, which has received considerable interest from academia and industry. This article first briefly introduces the basic idea of the conventional multi-antenna NOMA technique, and then discusses the key limitations, namely, the high complexity of successive interference cancellation (SIC) and the lack of fairness between the user with a strong channel gain and the user with a weak channel gain. To address these problems, this article proposes a novel spatial modulation (SM) assisted multi-antenna NOMA technique, which avoids the use of SIC and is able to completely cancel intra-cluster interference. Furthermore, simulation results are provided to validate the effectiveness of the proposed novel technique compared to the conventional multi-antenna NOMA. Finally, this article points out the key challenges and sheds light on the future research directions of the SM assisted multi-antenna NOMA technique. |
2212.01618 | Xin Kang | Huilin Wang, Xin Kang, Tieyan Li, Zhongding Lei, Cheng-Kang Chu, and
Haiguang Wang | An Overview of Trust Standards for Communication Networks and Future
Digital World | 7 pages, 3 figures, Magazine paper under review | null | null | null | cs.IT cs.CR math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the development of Information and Communication Technologies, trust has
been applied more and more in various scenarios. At the same time, different
organizations have published a series of trust frameworks to support the
implementation of trust. There are also academic papers discussing these
trust standards; however, most of them focus only on a specific application.
Unlike existing works, this paper provides an overview of all currently available
trust standards related to communication networks and the future digital world from
several main organizations. To be specific, this paper summarizes and organizes
all these trust standards into three layers: trust foundation, trust elements,
and trust applications. We then analyze these trust standards and discuss
their contributions in a systematic way. We discuss the motivations behind each
currently in-force standard, analyze their frameworks and solutions, and
present their role and impact on communication networks and the future digital world.
Finally, we give our suggestions on the trust work that needs to be
standardized in the future.
| [
{
"created": "Sat, 3 Dec 2022 13:51:37 GMT",
"version": "v1"
}
] | 2022-12-06 | [
[
"Wang",
"Huilin",
""
],
[
"Kang",
"Xin",
""
],
[
"Li",
"Tieyan",
""
],
[
"Lei",
"Zhongding",
""
],
[
"Chu",
"Cheng-Kang",
""
],
[
"Wang",
"Haiguang",
""
]
] | With the development of Information and Communication Technologies, trust has been applied more and more in various scenarios. At the same time, different organizations have published a series of trust frameworks to support the implementation of trust. There are also academic papers discussing these trust standards; however, most of them focus only on a specific application. Unlike existing works, this paper provides an overview of all currently available trust standards related to communication networks and the future digital world from several main organizations. To be specific, this paper summarizes and organizes all these trust standards into three layers: trust foundation, trust elements, and trust applications. We then analyze these trust standards and discuss their contributions in a systematic way. We discuss the motivations behind each currently in-force standard, analyze their frameworks and solutions, and present their role and impact on communication networks and the future digital world. Finally, we give our suggestions on the trust work that needs to be standardized in the future. |