| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cs/0612059 | Herve Jegou | Simon Malinowski (IRISA / INRIA Rennes), Herv\'e J\'egou (IRISA /
INRIA Rennes, INRIA Rh\^one-Alpes / GRAVIR-IMAG), Christine Guillemot (IRISA
/ INRIA Rennes) | Synchronization recovery and state model reduction for soft decoding of
variable length codes | null | IEEE transactions on information theory (2006) | null | null | cs.NI cs.IT math.IT | null | Variable length codes exhibit de-synchronization problems when transmitted
over noisy channels. Trellis decoding techniques based on Maximum A Posteriori
(MAP) estimators are often used to minimize the error rate on the estimated
sequence. If the number of symbols and/or bits transmitted is known by the
decoder, termination constraints can be incorporated in the decoding process.
All the paths in the trellis which do not lead to a valid sequence length are
suppressed. This paper presents an analytic method to assess the expected error
resilience of a VLC when trellis decoding with a sequence length constraint is
used. The approach is based on the computation, for a given code, of the amount
of information brought by the constraint. It is then shown that this quantity
as well as the probability that the VLC decoder does not re-synchronize in a
strict sense, are not significantly altered by appropriate trellis state
aggregation. This proves that the performance obtained by running a
length-constrained Viterbi decoder on aggregated state models approaches the
one obtained with the bit/symbol trellis, with a significantly reduced
complexity. It is then shown that the complexity can be further decreased by
projecting the state model on two state models of reduced size.
| [
{
"created": "Mon, 11 Dec 2006 15:52:12 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Malinowski",
"Simon",
"",
"IRISA / INRIA Rennes"
],
[
"Jégou",
"Hervé",
"",
"IRISA /\n INRIA Rennes, INRIA Rhône-Alpes / GRAVIR-IMAG"
],
[
"Guillemot",
"Christine",
"",
"IRISA\n / INRIA Rennes"
]
] | Variable length codes exhibit de-synchronization problems when transmitted over noisy channels. Trellis decoding techniques based on Maximum A Posteriori (MAP) estimators are often used to minimize the error rate on the estimated sequence. If the number of symbols and/or bits transmitted is known by the decoder, termination constraints can be incorporated in the decoding process. All the paths in the trellis which do not lead to a valid sequence length are suppressed. This paper presents an analytic method to assess the expected error resilience of a VLC when trellis decoding with a sequence length constraint is used. The approach is based on the computation, for a given code, of the amount of information brought by the constraint. It is then shown that this quantity as well as the probability that the VLC decoder does not re-synchronize in a strict sense, are not significantly altered by appropriate trellis state aggregation. This proves that the performance obtained by running a length-constrained Viterbi decoder on aggregated state models approaches the one obtained with the bit/symbol trellis, with a significantly reduced complexity. It is then shown that the complexity can be further decreased by projecting the state model on two state models of reduced size. |
1704.03931 | Laurel Riek | Laurel D. Riek | Healthcare Robotics | 8 pages, Communications of the ACM, 2017 | Communications of the ACM, November 2017, Vol. 60 No. 11, Pages
68-78 | 10.1145/3127874 | null | cs.RO cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robots have the potential to be a game changer in healthcare: improving
health and well-being, filling care gaps, supporting care givers, and aiding
health care workers. However, before robots are able to be widely deployed, it
is crucial that both the research and industrial communities work together to
establish a strong evidence-base for healthcare robotics, and surmount likely
adoption barriers. This article presents a broad contextualization of robots in
healthcare by identifying key stakeholders, care settings, and tasks; reviewing
recent advances in healthcare robotics; and outlining major challenges and
opportunities to their adoption.
| [
{
"created": "Wed, 12 Apr 2017 21:02:25 GMT",
"version": "v1"
}
] | 2020-07-03 | [
[
"Riek",
"Laurel D.",
""
]
] | Robots have the potential to be a game changer in healthcare: improving health and well-being, filling care gaps, supporting care givers, and aiding health care workers. However, before robots are able to be widely deployed, it is crucial that both the research and industrial communities work together to establish a strong evidence-base for healthcare robotics, and surmount likely adoption barriers. This article presents a broad contextualization of robots in healthcare by identifying key stakeholders, care settings, and tasks; reviewing recent advances in healthcare robotics; and outlining major challenges and opportunities to their adoption. |
2408.05897 | Yaxuan Song | Liuqing Chen, Yaxuan Song, Shixian Ding, Lingyun Sun, Peter Childs,
and Haoyu Zuo | TRIZ-GPT: An LLM-augmented method for problem-solving | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | TRIZ, the Theory of Inventive Problem Solving, is derived from a
comprehensive analysis of patents across various domains, offering a framework
and practical tools for problem-solving. Despite its potential to foster
innovative solutions, the complexity and abstractness of TRIZ methodology often
make its acquisition and application challenging. This often requires users to
have a deep understanding of the theory, as well as substantial practical
experience and knowledge across various disciplines. The advent of Large
Language Models (LLMs) presents an opportunity to address these challenges by
leveraging their extensive knowledge bases and reasoning capabilities for
innovative solution generation within the TRIZ-based problem-solving process. This
study explores and evaluates the application of LLMs within the TRIZ-based
problem-solving process. The construction of TRIZ case collections establishes
a solid empirical foundation for our experiments and offers valuable resources
to the TRIZ community. A specifically designed workflow, utilizing step-by-step
reasoning and evaluation-validated prompt strategies, effectively transforms
concrete problems into TRIZ problems and finally generates inventive solutions.
Finally, we present a case study in the mechanical engineering field that
highlights the practical application of this LLM-augmented method. It showcases
GPT-4's ability to generate solutions that closely resonate with original
solutions and suggests more implementation mechanisms.
| [
{
"created": "Mon, 12 Aug 2024 02:32:45 GMT",
"version": "v1"
}
] | 2024-08-13 | [
[
"Chen",
"Liuqing",
""
],
[
"Song",
"Yaxuan",
""
],
[
"Ding",
"Shixian",
""
],
[
"Sun",
"Lingyun",
""
],
[
"Childs",
"Peter",
""
],
[
"Zuo",
"Haoyu",
""
]
] | TRIZ, the Theory of Inventive Problem Solving, is derived from a comprehensive analysis of patents across various domains, offering a framework and practical tools for problem-solving. Despite its potential to foster innovative solutions, the complexity and abstractness of TRIZ methodology often make its acquisition and application challenging. This often requires users to have a deep understanding of the theory, as well as substantial practical experience and knowledge across various disciplines. The advent of Large Language Models (LLMs) presents an opportunity to address these challenges by leveraging their extensive knowledge bases and reasoning capabilities for innovative solution generation within the TRIZ-based problem-solving process. This study explores and evaluates the application of LLMs within the TRIZ-based problem-solving process. The construction of TRIZ case collections establishes a solid empirical foundation for our experiments and offers valuable resources to the TRIZ community. A specifically designed workflow, utilizing step-by-step reasoning and evaluation-validated prompt strategies, effectively transforms concrete problems into TRIZ problems and finally generates inventive solutions. Finally, we present a case study in the mechanical engineering field that highlights the practical application of this LLM-augmented method. It showcases GPT-4's ability to generate solutions that closely resonate with original solutions and suggests more implementation mechanisms. |
2403.13293 | Keith Mills | Keith G. Mills, Fred X. Han, Mohammad Salameh, Shengyao Lu, Chunhua
Zhou, Jiao He, Fengyu Sun, Di Niu | Building Optimal Neural Architectures using Interpretable Knowledge | CVPR'24; 18 Pages, 18 Figures, 3 Tables | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Architecture Search is a costly practice. The fact that a search space
can span a vast number of design choices with each architecture evaluation
taking nontrivial overhead makes it hard for an algorithm to sufficiently
explore candidate networks. In this paper, we propose AutoBuild, a scheme which
learns to align the latent embeddings of operations and architecture modules
with the ground-truth performance of the architectures they appear in. By doing
so, AutoBuild is capable of assigning interpretable importance scores to
architecture modules, such as individual operation features and larger macro
operation sequences, such that high-performance neural networks can be
constructed without any need for search. Through experiments performed on
state-of-the-art image classification, segmentation, and Stable Diffusion
models, we show that by mining a relatively small set of evaluated
architectures, AutoBuild can learn to build high-quality architectures directly
or help to reduce search space to focus on relevant areas, finding better
architectures that outperform both the original labeled ones and ones found by
search baselines. Code available at
https://github.com/Ascend-Research/AutoBuild
| [
{
"created": "Wed, 20 Mar 2024 04:18:38 GMT",
"version": "v1"
}
] | 2024-03-21 | [
[
"Mills",
"Keith G.",
""
],
[
"Han",
"Fred X.",
""
],
[
"Salameh",
"Mohammad",
""
],
[
"Lu",
"Shengyao",
""
],
[
"Zhou",
"Chunhua",
""
],
[
"He",
"Jiao",
""
],
[
"Sun",
"Fengyu",
""
],
[
"Niu",
"Di",
""
]
] | Neural Architecture Search is a costly practice. The fact that a search space can span a vast number of design choices with each architecture evaluation taking nontrivial overhead makes it hard for an algorithm to sufficiently explore candidate networks. In this paper, we propose AutoBuild, a scheme which learns to align the latent embeddings of operations and architecture modules with the ground-truth performance of the architectures they appear in. By doing so, AutoBuild is capable of assigning interpretable importance scores to architecture modules, such as individual operation features and larger macro operation sequences, such that high-performance neural networks can be constructed without any need for search. Through experiments performed on state-of-the-art image classification, segmentation, and Stable Diffusion models, we show that by mining a relatively small set of evaluated architectures, AutoBuild can learn to build high-quality architectures directly or help to reduce search space to focus on relevant areas, finding better architectures that outperform both the original labeled ones and ones found by search baselines. Code available at https://github.com/Ascend-Research/AutoBuild |
2107.00204 | Yi Liu | Wenjun Zeng and Yi Liu | Markov Decision Process modeled with Bandits for Sequential Decision
Making in Linear-flow | Accepted by 2021 KDD Multi-Armed Bandits and Reinforcement Learning
Workshop: https://sites.google.com/view/marble-kdd | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | For marketing, we sometimes need to recommend content for multiple pages in
sequence. Unlike the general sequential decision-making process, these use
cases have a simpler flow: after seeing the recommended content on each page,
customers can only respond by moving forward in the process or dropping out of
it, until a termination state is reached. We refer to this type of problem as
sequential decision making in linear-flow. We propose to formulate the problem
as an MDP with Bandits where Bandits are employed to model the transition
probability matrix. At recommendation time, we use Thompson sampling (TS) to
sample the transition probabilities and allocate the best series of actions
with an analytical solution through exact dynamic programming. The way that we
formulate the problem allows us to leverage TS's efficiency in balancing
exploration and exploitation and Bandit's convenience in modeling actions'
incompatibility. In the simulation study, we observe the proposed MDP with
Bandits algorithm outperforms Q-learning with $\epsilon$-greedy and decreasing
$\epsilon$, independent Bandits, and interaction Bandits. We also find the
proposed algorithm's performance is the most robust to changes in the
across-page interdependence strength.
| [
{
"created": "Thu, 1 Jul 2021 03:54:36 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Mar 2022 23:25:08 GMT",
"version": "v2"
}
] | 2022-03-18 | [
[
"Zeng",
"Wenjun",
""
],
[
"Liu",
"Yi",
""
]
] | For marketing, we sometimes need to recommend content for multiple pages in sequence. Unlike the general sequential decision-making process, these use cases have a simpler flow: after seeing the recommended content on each page, customers can only respond by moving forward in the process or dropping out of it, until a termination state is reached. We refer to this type of problem as sequential decision making in linear-flow. We propose to formulate the problem as an MDP with Bandits where Bandits are employed to model the transition probability matrix. At recommendation time, we use Thompson sampling (TS) to sample the transition probabilities and allocate the best series of actions with an analytical solution through exact dynamic programming. The way that we formulate the problem allows us to leverage TS's efficiency in balancing exploration and exploitation and Bandit's convenience in modeling actions' incompatibility. In the simulation study, we observe the proposed MDP with Bandits algorithm outperforms Q-learning with $\epsilon$-greedy and decreasing $\epsilon$, independent Bandits, and interaction Bandits. We also find the proposed algorithm's performance is the most robust to changes in the across-page interdependence strength. |
1806.06004 | Peter Anderson | Peter Anderson, Stephen Gould, Mark Johnson | Partially-Supervised Image Captioning | NeurIPS 2018 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image captioning models are becoming increasingly successful at describing
the content of images in restricted domains. However, if these models are to
function in the wild - for example, as assistants for people with impaired
vision - a much larger number and variety of visual concepts must be
understood. To address this problem, we teach image captioning models new
visual concepts from labeled images and object detection datasets. Since image
labels and object classes can be interpreted as partial captions, we formulate
this problem as learning from partially-specified sequence data. We then
propose a novel algorithm for training sequence models, such as recurrent
neural networks, on partially-specified sequences which we represent using
finite state automata. In the context of image captioning, our method lifts the
restriction that previously required image captioning models to be trained on
paired image-sentence corpora only, or otherwise required specialized model
architectures to take advantage of alternative data modalities. Applying our
approach to an existing neural captioning model, we achieve state-of-the-art
results on the novel object captioning task using the COCO dataset. We further
show that we can train a captioning model to describe new visual concepts from
the Open Images dataset while maintaining competitive COCO evaluation scores.
| [
{
"created": "Fri, 15 Jun 2018 14:52:40 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Nov 2018 15:29:42 GMT",
"version": "v2"
}
] | 2018-11-29 | [
[
"Anderson",
"Peter",
""
],
[
"Gould",
"Stephen",
""
],
[
"Johnson",
"Mark",
""
]
] | Image captioning models are becoming increasingly successful at describing the content of images in restricted domains. However, if these models are to function in the wild - for example, as assistants for people with impaired vision - a much larger number and variety of visual concepts must be understood. To address this problem, we teach image captioning models new visual concepts from labeled images and object detection datasets. Since image labels and object classes can be interpreted as partial captions, we formulate this problem as learning from partially-specified sequence data. We then propose a novel algorithm for training sequence models, such as recurrent neural networks, on partially-specified sequences which we represent using finite state automata. In the context of image captioning, our method lifts the restriction that previously required image captioning models to be trained on paired image-sentence corpora only, or otherwise required specialized model architectures to take advantage of alternative data modalities. Applying our approach to an existing neural captioning model, we achieve state-of-the-art results on the novel object captioning task using the COCO dataset. We further show that we can train a captioning model to describe new visual concepts from the Open Images dataset while maintaining competitive COCO evaluation scores. |
1805.03496 | Tom van Dijk | Tom van Dijk and R\"udiger Ehlers and Armin Biere | Revisiting Decision Diagrams for SAT | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symbolic variants of clause distribution using decision diagrams to eliminate
variables in SAT were shown to perform well on hard combinatorial instances. In
this paper we revisit both existing ZDD and BDD variants of this approach. We
further investigate different heuristics for selecting the next variable to
eliminate. Our implementation makes further use of parallel features of the
open source BDD library Sylvan.
| [
{
"created": "Wed, 9 May 2018 13:16:42 GMT",
"version": "v1"
}
] | 2018-05-10 | [
[
"van Dijk",
"Tom",
""
],
[
"Ehlers",
"Rüdiger",
""
],
[
"Biere",
"Armin",
""
]
] | Symbolic variants of clause distribution using decision diagrams to eliminate variables in SAT were shown to perform well on hard combinatorial instances. In this paper we revisit both existing ZDD and BDD variants of this approach. We further investigate different heuristics for selecting the next variable to eliminate. Our implementation makes further use of parallel features of the open source BDD library Sylvan. |
0910.2276 | Tshilidzi Marwala | Evan Hurwitz and Tshilidzi Marwala | State of the Art Review for Applying Computational Intelligence and
Machine Learning Techniques to Portfolio Optimisation | 9 pages | null | null | null | cs.CE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational techniques have shown much promise in the field of Finance,
owing to their ability to extract sense out of dauntingly complex systems. This
paper reviews the most promising of these techniques, from traditional
computational intelligence methods to their machine learning siblings, with
a particular view to their application in optimising the management of a
portfolio of financial instruments. The current state of the art is assessed,
and prospective further work is recommended.
| [
{
"created": "Tue, 13 Oct 2009 15:53:45 GMT",
"version": "v1"
}
] | 2009-10-14 | [
[
"Hurwitz",
"Evan",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] | Computational techniques have shown much promise in the field of Finance, owing to their ability to extract sense out of dauntingly complex systems. This paper reviews the most promising of these techniques, from traditional computational intelligence methods to their machine learning siblings, with a particular view to their application in optimising the management of a portfolio of financial instruments. The current state of the art is assessed, and prospective further work is recommended. |
2007.14671 | Ali Alizadeh | Yunus Bicer, Ali Alizadeh, Nazim Kemal Ure, Ahmetcan Erdogan, and
Orkun Kizilirmak | Sample Efficient Interactive End-to-End Deep Learning for Self-Driving
Cars with Selective Multi-Class Safe Dataset Aggregation | 6 pages, 6 figures, IROS2019 conference | 2019 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Macau, China, 2019, pp. 2629-2634 | 10.1109/IROS40897.2019.8967948 | null | cs.RO cs.CV cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of this paper is to develop a sample efficient end-to-end deep
learning method for self-driving cars, where we attempt to increase the value
of the information extracted from samples, through careful analysis obtained
from each call to the expert driver's policy. End-to-end imitation learning is a
popular method for computing self-driving car policies. The standard approach
relies on collecting pairs of inputs (camera images) and outputs (steering
angle, etc.) from an expert policy and fitting a deep neural network to this
data to learn the driving policy. Although this approach had some successful
demonstrations in the past, learning a good policy might require a lot of
samples from the expert driver, which might be resource-consuming. In this
work, we develop a novel framework based on the Safe Dataset Aggregation (safe
DAgger) approach, where the current learned policy is automatically segmented
into different trajectory classes, and the algorithm identifies trajectory
segments or classes with weak performance at each step. Once the trajectory
segments with weak performance are identified, the sampling algorithm focuses on
calling the expert policy only on these segments, which improves the
convergence rate. The presented simulation results show that the proposed
approach can yield significantly better performance compared to the standard
Safe DAgger algorithm while using the same amount of samples from the expert.
| [
{
"created": "Wed, 29 Jul 2020 08:38:00 GMT",
"version": "v1"
}
] | 2020-07-30 | [
[
"Bicer",
"Yunus",
""
],
[
"Alizadeh",
"Ali",
""
],
[
"Ure",
"Nazim Kemal",
""
],
[
"Erdogan",
"Ahmetcan",
""
],
[
"Kizilirmak",
"Orkun",
""
]
] | The objective of this paper is to develop a sample efficient end-to-end deep learning method for self-driving cars, where we attempt to increase the value of the information extracted from samples, through careful analysis obtained from each call to the expert driver's policy. End-to-end imitation learning is a popular method for computing self-driving car policies. The standard approach relies on collecting pairs of inputs (camera images) and outputs (steering angle, etc.) from an expert policy and fitting a deep neural network to this data to learn the driving policy. Although this approach had some successful demonstrations in the past, learning a good policy might require a lot of samples from the expert driver, which might be resource-consuming. In this work, we develop a novel framework based on the Safe Dataset Aggregation (safe DAgger) approach, where the current learned policy is automatically segmented into different trajectory classes, and the algorithm identifies trajectory segments or classes with weak performance at each step. Once the trajectory segments with weak performance are identified, the sampling algorithm focuses on calling the expert policy only on these segments, which improves the convergence rate. The presented simulation results show that the proposed approach can yield significantly better performance compared to the standard Safe DAgger algorithm while using the same amount of samples from the expert. |
2407.16893 | Erik Johannes Husom | Erik Johannes Husom, Arda Goknil, Lwin Khin Shar, Sagar Sen | The Price of Prompting: Profiling Energy Use in Large Language Models
Inference | 11 pages, 5 figures. Submitted to NeurIPS 2024. The released code and
dataset are available at https://github.com/ejhusom/MELODI | null | null | null | cs.CY cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | In the rapidly evolving realm of artificial intelligence, deploying large
language models (LLMs) poses increasingly pressing computational and
environmental challenges. This paper introduces MELODI - Monitoring Energy
Levels and Optimization for Data-driven Inference - a multifaceted framework
crafted to monitor and analyze the energy consumed during LLM inference
processes. MELODI enables detailed observations of power consumption dynamics
and facilitates the creation of a comprehensive dataset reflective of energy
efficiency across varied deployment scenarios. The dataset, generated using
MELODI, encompasses a broad spectrum of LLM deployment frameworks, multiple
language models, and extensive prompt datasets, enabling a comparative analysis
of energy use. Using the dataset, we investigate how prompt attributes,
including length and complexity, correlate with energy expenditure. Our
findings indicate substantial disparities in energy efficiency, suggesting
ample scope for optimization and adoption of sustainable measures in LLM
deployment. Our contribution lies not only in the MELODI framework but also in
the novel dataset, a resource that can be expanded by other researchers. Thus,
MELODI is a foundational tool and dataset for advancing research into
energy-conscious LLM deployment, steering the field toward a more sustainable
future.
| [
{
"created": "Thu, 4 Jul 2024 12:16:28 GMT",
"version": "v1"
}
] | 2024-07-25 | [
[
"Husom",
"Erik Johannes",
""
],
[
"Goknil",
"Arda",
""
],
[
"Shar",
"Lwin Khin",
""
],
[
"Sen",
"Sagar",
""
]
] | In the rapidly evolving realm of artificial intelligence, deploying large language models (LLMs) poses increasingly pressing computational and environmental challenges. This paper introduces MELODI - Monitoring Energy Levels and Optimization for Data-driven Inference - a multifaceted framework crafted to monitor and analyze the energy consumed during LLM inference processes. MELODI enables detailed observations of power consumption dynamics and facilitates the creation of a comprehensive dataset reflective of energy efficiency across varied deployment scenarios. The dataset, generated using MELODI, encompasses a broad spectrum of LLM deployment frameworks, multiple language models, and extensive prompt datasets, enabling a comparative analysis of energy use. Using the dataset, we investigate how prompt attributes, including length and complexity, correlate with energy expenditure. Our findings indicate substantial disparities in energy efficiency, suggesting ample scope for optimization and adoption of sustainable measures in LLM deployment. Our contribution lies not only in the MELODI framework but also in the novel dataset, a resource that can be expanded by other researchers. Thus, MELODI is a foundational tool and dataset for advancing research into energy-conscious LLM deployment, steering the field toward a more sustainable future. |
1711.00571 | Arun Jambulapati | Arun Jambulapati and Aaron Sidford | Efficient $\widetilde{O}(n/\epsilon)$ Spectral Sketches for the
Laplacian and its Pseudoinverse | Accepted to SODA 2018; v2 fixes a small bug in the proof of lemma 3.
This does not affect correctness of any of our results | null | null | null | cs.DS math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of efficiently computing
$\epsilon$-sketches for the Laplacian and its pseudoinverse. Given a Laplacian
and an error tolerance $\epsilon$, we seek to construct a function $f$ such
that for any vector $x$ (chosen obliviously from $f$), with high probability
$(1-\epsilon) x^\top A x \leq f(x) \leq (1 + \epsilon) x^\top A x$ where $A$ is
either the Laplacian or its pseudoinverse. Our goal is to construct such a
sketch $f$ efficiently and to store it in the least space possible.
We provide nearly-linear time algorithms that, when given a Laplacian matrix
$\mathcal{L} \in \mathbb{R}^{n \times n}$ and an error tolerance $\epsilon$,
produce $\tilde{O}(n/\epsilon)$-size sketches of both $\mathcal{L}$ and its
pseudoinverse. Our algorithms improve upon the previous best sketch size of
$\widetilde{O}(n / \epsilon^{1.6})$ for sketching the Laplacian form by Andoni
et al (2015) and $O(n / \epsilon^2)$ for sketching the Laplacian pseudoinverse
by Batson, Spielman, and Srivastava (2008).
Furthermore, we show how to compute all-pairs effective resistances from an
$\widetilde{O}(n/\epsilon)$-size sketch in $\widetilde{O}(n^2/\epsilon)$ time.
This improves upon the previous best running time of
$\widetilde{O}(n^2/\epsilon^2)$ by Spielman and Srivastava (2008).
| [
{
"created": "Thu, 2 Nov 2017 00:06:55 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Jan 2018 06:36:44 GMT",
"version": "v2"
}
] | 2018-01-09 | [
[
"Jambulapati",
"Arun",
""
],
[
"Sidford",
"Aaron",
""
]
] | In this paper we consider the problem of efficiently computing $\epsilon$-sketches for the Laplacian and its pseudoinverse. Given a Laplacian and an error tolerance $\epsilon$, we seek to construct a function $f$ such that for any vector $x$ (chosen obliviously from $f$), with high probability $(1-\epsilon) x^\top A x \leq f(x) \leq (1 + \epsilon) x^\top A x$ where $A$ is either the Laplacian or its pseudoinverse. Our goal is to construct such a sketch $f$ efficiently and to store it in the least space possible. We provide nearly-linear time algorithms that, when given a Laplacian matrix $\mathcal{L} \in \mathbb{R}^{n \times n}$ and an error tolerance $\epsilon$, produce $\tilde{O}(n/\epsilon)$-size sketches of both $\mathcal{L}$ and its pseudoinverse. Our algorithms improve upon the previous best sketch size of $\widetilde{O}(n / \epsilon^{1.6})$ for sketching the Laplacian form by Andoni et al (2015) and $O(n / \epsilon^2)$ for sketching the Laplacian pseudoinverse by Batson, Spielman, and Srivastava (2008). Furthermore, we show how to compute all-pairs effective resistances from an $\widetilde{O}(n/\epsilon)$-size sketch in $\widetilde{O}(n^2/\epsilon)$ time. This improves upon the previous best running time of $\widetilde{O}(n^2/\epsilon^2)$ by Spielman and Srivastava (2008). |
2407.17112 | Arun Verma | Arun Verma, Zhongxiang Dai, Xiaoqiang Lin, Patrick Jaillet, Bryan Kian
Hsiang Low | Neural Dueling Bandits | Accepted at ICML 2024 Workshop on Foundations of Reinforcement
Learning and Control | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | The contextual dueling bandit framework is used to model bandit problems in which a
learner's goal is to find the best arm for a given context using noisy
preference feedback observed over the arms selected for past contexts. However,
existing algorithms assume the reward function is linear, whereas it can be
complex and non-linear in many real-life applications like online recommendations or
ranking web search results. To overcome this challenge, we use a neural network
to estimate the reward function using preference feedback for the previously
selected arms. We propose upper confidence bound- and Thompson sampling-based
algorithms with sub-linear regret guarantees that efficiently select arms in
each round. We then extend our theoretical results to contextual bandit
problems with binary feedback, which is in itself a non-trivial contribution.
Experimental results on the problem instances derived from synthetic datasets
corroborate our theoretical results.
| [
{
"created": "Wed, 24 Jul 2024 09:23:22 GMT",
"version": "v1"
}
] | 2024-07-25 | [
[
"Verma",
"Arun",
""
],
[
"Dai",
"Zhongxiang",
""
],
[
"Lin",
"Xiaoqiang",
""
],
[
"Jaillet",
"Patrick",
""
],
[
"Low",
"Bryan Kian Hsiang",
""
]
] | The contextual dueling bandit framework is used to model bandit problems in which a learner's goal is to find the best arm for a given context using noisy preference feedback observed over the arms selected for past contexts. However, existing algorithms assume the reward function is linear, whereas it can be complex and non-linear in many real-life applications like online recommendations or ranking web search results. To overcome this challenge, we use a neural network to estimate the reward function using preference feedback for the previously selected arms. We propose upper confidence bound- and Thompson sampling-based algorithms with sub-linear regret guarantees that efficiently select arms in each round. We then extend our theoretical results to contextual bandit problems with binary feedback, which is in itself a non-trivial contribution. Experimental results on the problem instances derived from synthetic datasets corroborate our theoretical results. |
2202.03532 | Vishwanath Saragadam Raja Venkata | Vishwanath Saragadam, Jasper Tan, Guha Balakrishnan, Richard G.
Baraniuk, Ashok Veeraraghavan | MINER: Multiscale Implicit Neural Representations | 14 pages, accepted to ECCV 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce a new neural signal model designed for efficient high-resolution
representation of large-scale signals. The key innovation in our multiscale
implicit neural representation (MINER) is an internal representation via a
Laplacian pyramid, which provides a sparse multiscale decomposition of the
signal that captures orthogonal parts of the signal across scales. We leverage
the advantages of the Laplacian pyramid by representing small disjoint patches
of the pyramid at each scale with a small MLP. This enables the capacity of the
network to adaptively increase from coarse to fine scales, and only represent
parts of the signal with strong signal energy. The parameters of each MLP are
optimized from coarse to fine scales, which results in faster approximations at
coarser scales and ultimately an extremely fast training process. We apply
MINER to a range of large-scale signal representation tasks, including
gigapixel images and very large point clouds, and demonstrate that it requires
fewer than 25% of the parameters, 33% of the memory footprint, and 10% of the
computation time of competing techniques such as ACORN to reach the same
representation accuracy.
| [
{
"created": "Mon, 7 Feb 2022 21:49:33 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jul 2022 00:28:05 GMT",
"version": "v2"
}
] | 2022-07-19 | [
[
"Saragadam",
"Vishwanath",
""
],
[
"Tan",
"Jasper",
""
],
[
"Balakrishnan",
"Guha",
""
],
[
"Baraniuk",
"Richard G.",
""
],
[
"Veeraraghavan",
"Ashok",
""
]
] | We introduce a new neural signal model designed for efficient high-resolution representation of large-scale signals. The key innovation in our multiscale implicit neural representation (MINER) is an internal representation via a Laplacian pyramid, which provides a sparse multiscale decomposition of the signal that captures orthogonal parts of the signal across scales. We leverage the advantages of the Laplacian pyramid by representing small disjoint patches of the pyramid at each scale with a small MLP. This enables the capacity of the network to adaptively increase from coarse to fine scales, and only represent parts of the signal with strong signal energy. The parameters of each MLP are optimized from coarse to fine scales, which results in faster approximations at coarser scales and ultimately an extremely fast training process. We apply MINER to a range of large-scale signal representation tasks, including gigapixel images and very large point clouds, and demonstrate that it requires fewer than 25% of the parameters, 33% of the memory footprint, and 10% of the computation time of competing techniques such as ACORN to reach the same representation accuracy. |
2307.05502 | Andrew Weinert | Ngaire Underhill and Evan Maki and Bilal Gill and Andrew Weinert | Estimating See and Be Seen Performance with an Airborne Visual
Acquisition Model | 8 pages, 3 tables, 7 figures | null | null | null | cs.CE cs.CV cs.RO eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Separation provision and collision avoidance to avoid other air traffic are
fundamental components of the layered conflict management system to ensure safe
and efficient operations. Pilots have visual-based separation responsibilities
to see and be seen to maintain separation between aircraft. To safely integrate
into the airspace, drones should be required to have a minimum level of
performance based on the safety achieved as baselined by crewed aircraft see
and be seen interactions. Drone interactions with crewed aircraft should not be
more hazardous than interactions between traditional aviation aircraft.
Accordingly, there is a need for a methodology to design and evaluate detect and
avoid systems, to be equipped by drones to mitigate the risk of a midair
collision, where the methodology explicitly addresses, both semantically and
mathematically, the appropriate operating rules associated with see and be
seen. In response, we simulated how onboard pilots safely operate through see
and be seen interactions using an updated visual acquisition model that was
originally developed by J.W. Andrews decades ago. Monte Carlo simulations were
representative of two aircraft flying under visual flight rules, and results were
analyzed with respect to drone detect and avoid performance standards.
| [
{
"created": "Thu, 29 Jun 2023 11:39:10 GMT",
"version": "v1"
}
] | 2023-07-13 | [
[
"Underhill",
"Ngaire",
""
],
[
"Maki",
"Evan",
""
],
[
"Gill",
"Bilal",
""
],
[
"Weinert",
"Andrew",
""
]
] | Separation provision and collision avoidance to avoid other air traffic are fundamental components of the layered conflict management system to ensure safe and efficient operations. Pilots have visual-based separation responsibilities to see and be seen to maintain separation between aircraft. To safely integrate into the airspace, drones should be required to have a minimum level of performance based on the safety achieved as baselined by crewed aircraft see and be seen interactions. Drone interactions with crewed aircraft should not be more hazardous than interactions between traditional aviation aircraft. Accordingly, there is a need for a methodology to design and evaluate detect and avoid systems, to be equipped by drones to mitigate the risk of a midair collision, where the methodology explicitly addresses, both semantically and mathematically, the appropriate operating rules associated with see and be seen. In response, we simulated how onboard pilots safely operate through see and be seen interactions using an updated visual acquisition model that was originally developed by J.W. Andrews decades ago. Monte Carlo simulations were representative of two aircraft flying under visual flight rules, and results were analyzed with respect to drone detect and avoid performance standards. |
2308.06076 | Haoyu Wang | Haoyu Wang, Haozhe Wu, Junliang Xing, Jia Jia | Versatile Face Animator: Driving Arbitrary 3D Facial Avatar in RGBD
Space | Accepted by ACM MM2023 | null | 10.1145/3581783.3612065 | null | cs.CV cs.MM | http://creativecommons.org/licenses/by-sa/4.0/ | Creating realistic 3D facial animation is crucial for various applications in
the movie production and gaming industry, especially with the burgeoning demand
in the metaverse. However, prevalent methods such as blendshape-based
approaches and facial rigging techniques are time-consuming, labor-intensive,
and lack standardized configurations, making facial animation production
challenging and costly. In this paper, we propose a novel self-supervised
framework, Versatile Face Animator, which combines facial motion capture with
motion retargeting in an end-to-end manner, eliminating the need for
blendshapes or rigs. Our method has the following two main characteristics: 1)
we propose an RGBD animation module to learn facial motion from raw RGBD videos
by hierarchical motion dictionaries and animate RGBD images rendered from the
3D facial mesh in a coarse-to-fine manner, enabling facial animation on arbitrary 3D
characters regardless of their topology, textures, blendshapes, and rigs; and
2) we introduce a mesh retarget module to utilize RGBD animation to create 3D
facial animation by manipulating facial mesh with controller transformations,
which are estimated from dense optical flow fields and blended together with
geodesic-distance-based weights. Comprehensive experiments demonstrate the
effectiveness of our proposed framework in generating impressive 3D facial
animation results, highlighting its potential as a promising solution for the
cost-effective and efficient production of facial animation in the metaverse.
| [
{
"created": "Fri, 11 Aug 2023 11:29:01 GMT",
"version": "v1"
}
] | 2023-08-14 | [
[
"Wang",
"Haoyu",
""
],
[
"Wu",
"Haozhe",
""
],
[
"Xing",
"Junliang",
""
],
[
"Jia",
"Jia",
""
]
] | Creating realistic 3D facial animation is crucial for various applications in the movie production and gaming industry, especially with the burgeoning demand in the metaverse. However, prevalent methods such as blendshape-based approaches and facial rigging techniques are time-consuming, labor-intensive, and lack standardized configurations, making facial animation production challenging and costly. In this paper, we propose a novel self-supervised framework, Versatile Face Animator, which combines facial motion capture with motion retargeting in an end-to-end manner, eliminating the need for blendshapes or rigs. Our method has the following two main characteristics: 1) we propose an RGBD animation module to learn facial motion from raw RGBD videos by hierarchical motion dictionaries and animate RGBD images rendered from the 3D facial mesh in a coarse-to-fine manner, enabling facial animation on arbitrary 3D characters regardless of their topology, textures, blendshapes, and rigs; and 2) we introduce a mesh retarget module to utilize RGBD animation to create 3D facial animation by manipulating facial mesh with controller transformations, which are estimated from dense optical flow fields and blended together with geodesic-distance-based weights. Comprehensive experiments demonstrate the effectiveness of our proposed framework in generating impressive 3D facial animation results, highlighting its potential as a promising solution for the cost-effective and efficient production of facial animation in the metaverse. |
2209.08786 | Yukai Liu | Yukai Liu and Wen Chen | Capacity Analysis and Sum Rate Maximization for the SCMA Cellular
Network Coexisting with D2D Communications | null | null | null | null | cs.IT eess.SP math.IT | http://creativecommons.org/licenses/by/4.0/ | Sparse code multiple access (SCMA) is the scheme that has attracted the most
attention among non-orthogonal multiple access (NOMA) technologies for the new
air interface of 5G wireless communication. Another efficient technique in 5G aimed at improving
spectral efficiency for local communications is device-to-device (D2D)
communications. Therefore, we utilize the SCMA cellular network coexisting with
D2D communications to meet the connection demand of the Internet of Things (IoT),
and improve the system sum rate performance of the hybrid network. We first
derive the information-theoretic expression of the capacity for all users and
find the capacity bound of cellular users based on the mutual interference
between cellular users and D2D users. Then we consider the power optimization
problem for the cellular users and D2D users jointly to maximize the system sum
rate. To tackle the non-convex optimization problem, we propose a geometric
programming (GP) based iterative power allocation algorithm. Simulation results
demonstrate that the proposed algorithm converges quickly and markedly improves the
sum rate performance.
| [
{
"created": "Mon, 19 Sep 2022 06:32:29 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Sep 2022 04:35:29 GMT",
"version": "v2"
}
] | 2022-09-22 | [
[
"Liu",
"Yukai",
""
],
[
"Chen",
"Wen",
""
]
] | Sparse code multiple access (SCMA) is the scheme that has attracted the most attention among non-orthogonal multiple access (NOMA) technologies for the new air interface of 5G wireless communication. Another efficient technique in 5G aimed at improving spectral efficiency for local communications is device-to-device (D2D) communications. Therefore, we utilize the SCMA cellular network coexisting with D2D communications to meet the connection demand of the Internet of Things (IoT), and improve the system sum rate performance of the hybrid network. We first derive the information-theoretic expression of the capacity for all users and find the capacity bound of cellular users based on the mutual interference between cellular users and D2D users. Then we consider the power optimization problem for the cellular users and D2D users jointly to maximize the system sum rate. To tackle the non-convex optimization problem, we propose a geometric programming (GP) based iterative power allocation algorithm. Simulation results demonstrate that the proposed algorithm converges quickly and markedly improves the sum rate performance. |
1212.1914 | Sugata Sanyal | Manoj Rameshchandra Thakur and Sugata Sanyal | A Heuristic Reputation Based System to Detect Spam activities in a
Social Networking Platform, HRSSSNP | 5 Pages, 1 Figure | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The introduction of the social networking platform has drastically affected
the way individuals interact. Even though most of the effects have been
positive, there exist some serious threats associated with the interactions on
a social networking website. A considerable proportion of the crimes that occur
are initiated through a social networking platform [5]. Almost 33% of the
crimes on the internet are initiated through a social networking website [5].
Moreover, activities like spam messages create unnecessary traffic and might
affect the user base of a social networking platform. As a result, preventing
interactions with malicious intent and spam activities becomes crucial. This
work attempts to detect such activities in a social networking platform by considering
a social network as a weighted graph wherein each node, which represents an
individual in the social network, stores activities of other nodes with respect
to itself in an optimized format which is referred to as localized data-set.
The weights associated with the edges in the graph represent the trust
relationship between profiles. The weights of the edges along with the
localized data-set are used to infer whether nodes in the social network are
compromised and are performing spam or malicious activities.
| [
{
"created": "Sun, 9 Dec 2012 20:01:32 GMT",
"version": "v1"
}
] | 2012-12-11 | [
[
"Thakur",
"Manoj Rameshchandra",
""
],
[
"Sanyal",
"Sugata",
""
]
] | The introduction of the social networking platform has drastically affected the way individuals interact. Even though most of the effects have been positive, there exist some serious threats associated with the interactions on a social networking website. A considerable proportion of the crimes that occur are initiated through a social networking platform [5]. Almost 33% of the crimes on the internet are initiated through a social networking website [5]. Moreover, activities like spam messages create unnecessary traffic and might affect the user base of a social networking platform. As a result, preventing interactions with malicious intent and spam activities becomes crucial. This work attempts to detect such activities in a social networking platform by considering a social network as a weighted graph wherein each node, which represents an individual in the social network, stores activities of other nodes with respect to itself in an optimized format which is referred to as localized data-set. The weights associated with the edges in the graph represent the trust relationship between profiles. The weights of the edges along with the localized data-set are used to infer whether nodes in the social network are compromised and are performing spam or malicious activities. |
2406.11713 | Luan Trinh T. | Luan Thanh Trinh and Tomoki Hamagami | Latent Denoising Diffusion GAN: Faster sampling, Higher image quality | Submited to IEEE Access | null | 10.1109/ACCESS.2024.3406535 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models are emerging as powerful solutions for generating
high-fidelity and diverse images, often surpassing GANs under many
circumstances. However, their slow inference speed hinders their potential for
real-time applications. To address this, DiffusionGAN leveraged a conditional
GAN to drastically reduce the denoising steps and speed up inference. Its
advancement, Wavelet Diffusion, further accelerated the process by converting
data into wavelet space, thus enhancing efficiency. Nonetheless, these models
still fall short of GANs in terms of speed and image quality. To bridge these
gaps, this paper introduces the Latent Denoising Diffusion GAN, which employs
pre-trained autoencoders to compress images into a compact latent space,
significantly improving inference speed and image quality. Furthermore, we
propose a Weighted Learning strategy to enhance diversity and image quality.
Experimental results on the CIFAR-10, CelebA-HQ, and LSUN-Church datasets prove
that our model achieves state-of-the-art running speed among diffusion models.
Compared to its predecessors, DiffusionGAN and Wavelet Diffusion, our model
shows remarkable improvements in all evaluation metrics. Code and pre-trained
checkpoints: \url{https://github.com/thanhluantrinh/LDDGAN.git}
| [
{
"created": "Mon, 17 Jun 2024 16:32:23 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Trinh",
"Luan Thanh",
""
],
[
"Hamagami",
"Tomoki",
""
]
] | Diffusion models are emerging as powerful solutions for generating high-fidelity and diverse images, often surpassing GANs under many circumstances. However, their slow inference speed hinders their potential for real-time applications. To address this, DiffusionGAN leveraged a conditional GAN to drastically reduce the denoising steps and speed up inference. Its advancement, Wavelet Diffusion, further accelerated the process by converting data into wavelet space, thus enhancing efficiency. Nonetheless, these models still fall short of GANs in terms of speed and image quality. To bridge these gaps, this paper introduces the Latent Denoising Diffusion GAN, which employs pre-trained autoencoders to compress images into a compact latent space, significantly improving inference speed and image quality. Furthermore, we propose a Weighted Learning strategy to enhance diversity and image quality. Experimental results on the CIFAR-10, CelebA-HQ, and LSUN-Church datasets prove that our model achieves state-of-the-art running speed among diffusion models. Compared to its predecessors, DiffusionGAN and Wavelet Diffusion, our model shows remarkable improvements in all evaluation metrics. Code and pre-trained checkpoints: \url{https://github.com/thanhluantrinh/LDDGAN.git} |
1906.05560 | Hung-Hsuan Chen | Yu-Wei Kao and Hung-Hsuan Chen | Associated Learning: Decomposing End-to-end Backpropagation based on
Auto-encoders and Target Propagation | 34 pages, 6 figures, 7 tables | MIT Neural Computation 33(1), 2021 | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backpropagation (BP) is the cornerstone of today's deep learning algorithms,
but it is inefficient partially because of backward locking, which means
updating the weights of one layer locks the weight updates in the other layers.
Consequently, it is challenging to apply parallel computing or a pipeline
structure to update the weights in different layers simultaneously. In this
paper, we introduce a novel learning structure called associated learning (AL),
which modularizes the network into smaller components, each of which has a
local objective. Because the objectives are mutually independent, AL can learn
the parameters in different layers independently and simultaneously, so it is
feasible to apply a pipeline structure to improve the training throughput.
Specifically, this pipeline structure improves the complexity of the training
time from O(nl), which is the time complexity when using BP and stochastic
gradient descent (SGD) for training, to O(n + l), where n is the number of
training instances and l is the number of hidden layers. Surprisingly, even
though most of the parameters in AL do not directly interact with the target
variable, training deep models by this method yields accuracies comparable to
those from models trained using typical BP methods, in which all parameters are
used to predict the target variable. Consequently, because of the scalability
and the predictive power demonstrated in the experiments, AL deserves further
study to determine the better hyperparameter settings, such as activation
function selection, learning rate scheduling, and weight initialization, to
accumulate experience, as we have done over the years with the typical BP
method. Additionally, perhaps our design can also inspire new network designs
for deep learning. Our implementation is available at
https://github.com/SamYWK/Associated_Learning.
| [
{
"created": "Thu, 13 Jun 2019 09:21:10 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jul 2019 12:18:47 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Jul 2020 15:37:02 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Feb 2021 07:40:50 GMT",
"version": "v4"
}
] | 2021-02-10 | [
[
"Kao",
"Yu-Wei",
""
],
[
"Chen",
"Hung-Hsuan",
""
]
] | Backpropagation (BP) is the cornerstone of today's deep learning algorithms, but it is inefficient partially because of backward locking, which means updating the weights of one layer locks the weight updates in the other layers. Consequently, it is challenging to apply parallel computing or a pipeline structure to update the weights in different layers simultaneously. In this paper, we introduce a novel learning structure called associated learning (AL), which modularizes the network into smaller components, each of which has a local objective. Because the objectives are mutually independent, AL can learn the parameters in different layers independently and simultaneously, so it is feasible to apply a pipeline structure to improve the training throughput. Specifically, this pipeline structure improves the complexity of the training time from O(nl), which is the time complexity when using BP and stochastic gradient descent (SGD) for training, to O(n + l), where n is the number of training instances and l is the number of hidden layers. Surprisingly, even though most of the parameters in AL do not directly interact with the target variable, training deep models by this method yields accuracies comparable to those from models trained using typical BP methods, in which all parameters are used to predict the target variable. Consequently, because of the scalability and the predictive power demonstrated in the experiments, AL deserves further study to determine the better hyperparameter settings, such as activation function selection, learning rate scheduling, and weight initialization, to accumulate experience, as we have done over the years with the typical BP method. Additionally, perhaps our design can also inspire new network designs for deep learning. Our implementation is available at https://github.com/SamYWK/Associated_Learning. |
2105.05796 | Tomasz Stanis{\l}awek | Tomasz Stanis{\l}awek and Filip Grali\'nski and Anna Wr\'oblewska and
Dawid Lipi\'nski and Agnieszka Kaliska and Paulina Rosalska and Bartosz
Topolski and Przemys{\l}aw Biecek | Kleister: Key Information Extraction Datasets Involving Long Documents
with Complex Layouts | accepted to ICDAR 2021 | International Conference on Document Analysis and Recognition
ICDAR 2021 | 10.1007/978-3-030-86549-8_36 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Key Information Extraction (KIE) task is increasingly
important in natural language processing. However, there are still only a
few well-defined problems that serve as benchmarks for solutions in this area.
To bridge this gap, we introduce two new datasets (Kleister NDA and Kleister
Charity). They involve a mix of scanned and born-digital long formal
English-language documents. In these datasets, an NLP system is expected to
find or infer various types of entities by employing both textual and
structural layout features. The Kleister Charity dataset consists of 2,788
annual financial reports of charity organizations, with 61,643 unique pages and
21,612 entities to extract. The Kleister NDA dataset has 540 Non-disclosure
Agreements, with 3,229 unique pages and 2,160 entities to extract. We provide
several state-of-the-art baseline systems from the KIE domain (Flair, BERT,
RoBERTa, LayoutLM, LAMBERT), which show that our datasets pose a strong
challenge to existing models. The best model achieved F1-scores of 81.77% and
83.57% on the Kleister NDA and Kleister Charity datasets, respectively. We
share the datasets to encourage progress on more in-depth and complex
information extraction tasks.
| [
{
"created": "Wed, 12 May 2021 17:08:01 GMT",
"version": "v1"
}
] | 2022-11-28 | [
[
"Stanisławek",
"Tomasz",
""
],
[
"Graliński",
"Filip",
""
],
[
"Wróblewska",
"Anna",
""
],
[
"Lipiński",
"Dawid",
""
],
[
"Kaliska",
"Agnieszka",
""
],
[
"Rosalska",
"Paulina",
""
],
[
"Topolski",
"Bartosz",
""
],
[
"Biecek",
"Przemysław",
""
]
] ] | The Key Information Extraction (KIE) task is increasingly important in natural language processing. However, there are still only a few well-defined problems that serve as benchmarks for solutions in this area. To bridge this gap, we introduce two new datasets (Kleister NDA and Kleister Charity). They involve a mix of scanned and born-digital long formal English-language documents. In these datasets, an NLP system is expected to find or infer various types of entities by employing both textual and structural layout features. The Kleister Charity dataset consists of 2,788 annual financial reports of charity organizations, with 61,643 unique pages and 21,612 entities to extract. The Kleister NDA dataset has 540 Non-disclosure Agreements, with 3,229 unique pages and 2,160 entities to extract. We provide several state-of-the-art baseline systems from the KIE domain (Flair, BERT, RoBERTa, LayoutLM, LAMBERT), which show that our datasets pose a strong challenge to existing models. The best model achieved F1-scores of 81.77% and 83.57% on the Kleister NDA and Kleister Charity datasets, respectively. We share the datasets to encourage progress on more in-depth and complex information extraction tasks.
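For context, a minimal sketch of how entity-level F1, the metric reported above, is commonly computed for KIE outputs; the field names and the exact-match criterion are illustrative assumptions, not the official Kleister evaluation code.

```python
# Entity-level precision/recall/F1 with exact match per (entity_type, value).
def entity_f1(gold: dict, pred: dict) -> float:
    gold_items = set(gold.items())
    pred_items = set(pred.items())
    tp = len(gold_items & pred_items)
    precision = tp / len(pred_items) if pred_items else 0.0
    recall = tp / len(gold_items) if gold_items else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical NDA entities: two of three match exactly -> F1 = 0.667.
gold = {"party": "Acme Corp", "effective_date": "2019-05-12", "term": "2 years"}
pred = {"party": "Acme Corp", "effective_date": "2019-05-21", "term": "2 years"}
print(entity_f1(gold, pred))
```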
1910.04857 | Suryabhan Singh Hada | Suryabhan Singh Hada and Miguel \'A. Carreira-Perpi\~n\'an | Sampling the "Inverse Set" of a Neuron: An Approach to Understanding
Neural Nets | 15 pages, 9 figures | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the recent success of deep neural networks in computer vision, it is
important to understand the internal working of these networks. What does a
given neuron represent? The concepts captured by a neuron may be hard to
understand or express in simple terms. The approach we propose in this paper is
to characterize the region of input space that excites a given neuron to a
certain level; we call this the inverse set. This inverse set is a complicated
high dimensional object that we explore by an optimization-based sampling
approach. Inspection of samples of this set by a human can reveal regularities
that help to understand the neuron. This goes beyond approaches that were
limited to finding a single image that maximally activates the neuron, or that
used Markov chain Monte Carlo to sample images, which is very slow, generates
samples with little diversity, and lacks control over the activation value of
the generated samples. Our approach also allows us to explore the intersection
of inverse sets of several neurons and other variations.
| [
{
"created": "Fri, 27 Sep 2019 02:22:43 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Dec 2020 00:49:03 GMT",
"version": "v2"
}
] | 2020-12-29 | [
[
"Hada",
"Suryabhan Singh",
""
],
[
"Carreira-Perpiñán",
"Miguel Á.",
""
]
] ] | With the recent success of deep neural networks in computer vision, it is important to understand the internal working of these networks. What does a given neuron represent? The concepts captured by a neuron may be hard to understand or express in simple terms. The approach we propose in this paper is to characterize the region of input space that excites a given neuron to a certain level; we call this the inverse set. This inverse set is a complicated high dimensional object that we explore by an optimization-based sampling approach. Inspection of samples of this set by a human can reveal regularities that help to understand the neuron. This goes beyond approaches that were limited to finding a single image that maximally activates the neuron, or that used Markov chain Monte Carlo to sample images, which is very slow, generates samples with little diversity, and lacks control over the activation value of the generated samples. Our approach also allows us to explore the intersection of inverse sets of several neurons and other variations.
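A toy sketch of the optimization-based sampling idea described above: start from many random inputs and drive a chosen unit's activation toward a target level by gradient descent. The tiny network, target value, and optimizer settings are illustrative assumptions, not the paper's exact procedure.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 4))
unit, target = 2, 0.5              # neuron of interest and desired activation level

# Many random starting points -> many (approximate) members of the inverse set.
x = torch.randn(64, 8, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    act = net(x)[:, unit]
    loss = ((act - target) ** 2).mean()   # pull activations to the target level
    opt.zero_grad(); loss.backward(); opt.step()

print(net(x)[:, unit].detach())    # diverse inputs whose activation is close to 0.5
```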
2008.09662 | Alhabib Abbas | Alhabib Abbas and Yiannis Andreopoulos | Biased Mixtures Of Experts: Enabling Computer Vision Inference Under
Data Transfer Limitations | null | null | 10.1109/TIP.2020.3005508 | null | cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel mixture-of-experts class to optimize computer vision
models in accordance with data transfer limitations at test time. Our approach
postulates that the minimum acceptable amount of data allowing for
highly-accurate results can vary for different input space partitions.
Therefore, we consider mixtures where experts require different amounts of
data, and train a sparse gating function to divide the input space for each
expert. By appropriate hyperparameter selection, our approach is able to bias
mixtures of experts towards selecting specific experts over others. In this
way, we show that the data transfer optimization between visual sensing and
processing can be solved as a convex optimization problem. To demonstrate the
relation between data availability and performance, we evaluate biased mixtures
on a range of mainstream computer vision problems, namely: (i) single shot
detection, (ii) image super-resolution, and (iii) real-time video action
classification. For all cases, and when experts constitute modified baselines
to meet different limits on allowed data utility, biased mixtures significantly
outperform previous work optimized to meet the same constraints on available
data.
| [
{
"created": "Fri, 21 Aug 2020 19:38:26 GMT",
"version": "v1"
}
] | 2020-09-02 | [
[
"Abbas",
"Alhabib",
""
],
[
"Andreopoulos",
"Yiannis",
""
]
] ] | We propose a novel mixture-of-experts class to optimize computer vision models in accordance with data transfer limitations at test time. Our approach postulates that the minimum acceptable amount of data allowing for highly-accurate results can vary for different input space partitions. Therefore, we consider mixtures where experts require different amounts of data, and train a sparse gating function to divide the input space for each expert. By appropriate hyperparameter selection, our approach is able to bias mixtures of experts towards selecting specific experts over others. In this way, we show that the data transfer optimization between visual sensing and processing can be solved as a convex optimization problem. To demonstrate the relation between data availability and performance, we evaluate biased mixtures on a range of mainstream computer vision problems, namely: (i) single shot detection, (ii) image super-resolution, and (iii) real-time video action classification. For all cases, and when experts constitute modified baselines to meet different limits on allowed data utility, biased mixtures significantly outperform previous work optimized to meet the same constraints on available data.
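A small numpy sketch of one way to bias a sparse gate toward cheaper experts, by penalizing each expert's gate score with its data cost; the costs, penalty weight, and top-1 routing are illustrative assumptions rather than the paper's convex formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))     # gate scores for 5 inputs and 3 experts
cost = np.array([1.0, 4.0, 16.0])    # data each expert needs (arbitrary units)
lam = 0.2                            # strength of the bias toward cheap experts

biased = logits - lam * cost         # penalize data-hungry experts
choice = biased.argmax(axis=1)       # sparse (top-1) expert selection
print(choice)                        # inputs mostly route to the cheaper experts
```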
1810.08317 | Huixu Dong | Huixu Dong, Chen Qiu, Dilip K. Prasad, Ye Pan, Jiansheng Dai, I-Ming
Chen | Enabling Grasp Action: Generalized Evaluation of Grasp Stability via
Contact Stiffness from Contact Mechanics Insight | 12 pages, 14 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performing a grasp is a pivotal capability for a robotic gripper. We propose
a new evaluation approach of grasping stability via constructing a model of
grasping stiffness based on the theory of contact mechanics. First, the
mathematical models are built to explore soft contact and the general grasp
stiffness between a finger and an object. Next, the grasping stiffness matrix
is constructed to reflect the normal, tangential and torsion stiffness
coefficients. Finally, we design two grasping cases to verify the proposed
measurement criterion of grasping stability by comparing different grasping
configurations. Specifically, a standard grasping index is compared with the
minimum eigenvalue index of the grasping stiffness matrix we constructed. The
comparison reveals a similar tendency between the two indices for measuring
grasping stability and thus validates the proposed approach.
| [
{
"created": "Fri, 19 Oct 2018 00:35:45 GMT",
"version": "v1"
}
] | 2018-10-22 | [
[
"Dong",
"Huixu",
""
],
[
"Qiu",
"Chen",
""
],
[
"Prasad",
"Dilip K.",
""
],
[
"Pan",
"Ye",
""
],
[
"Dai",
"Jiansheng",
""
],
[
"Chen",
"I-Ming",
""
]
] ] | Performing a grasp is a pivotal capability for a robotic gripper. We propose a new evaluation approach of grasping stability via constructing a model of grasping stiffness based on the theory of contact mechanics. First, the mathematical models are built to explore soft contact and the general grasp stiffness between a finger and an object. Next, the grasping stiffness matrix is constructed to reflect the normal, tangential and torsion stiffness coefficients. Finally, we design two grasping cases to verify the proposed measurement criterion of grasping stability by comparing different grasping configurations. Specifically, a standard grasping index is compared with the minimum eigenvalue index of the grasping stiffness matrix we constructed. The comparison reveals a similar tendency between the two indices for measuring grasping stability and thus validates the proposed approach.
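A minimal numpy sketch of the stability index described above: form a symmetric grasp stiffness matrix and take its minimum eigenvalue. The matrix entries are made-up values standing in for the normal, tangential, and torsional stiffness coefficients.

```python
import numpy as np

# Illustrative 3x3 grasp stiffness matrix (normal, tangential, torsional terms).
K = np.array([[250.0,  10.0,   5.0],
              [ 10.0, 180.0,   8.0],
              [  5.0,   8.0,  60.0]])

eigvals = np.linalg.eigvalsh(K)   # eigvalsh: eigenvalues of a symmetric matrix
print(eigvals.min())              # minimum eigenvalue = stability index; a larger
                                  # value means the grasp resists disturbances better
```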
2209.03336 | Connor Henley | Connor Henley, Siddharth Somasundaram, Joseph Hollmann and Ramesh
Raskar | Detection and Mapping of Specular Surfaces Using Multibounce Lidar
Returns | null | null | 10.1364/OE.479900 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | We propose methods that use specular, multibounce lidar returns to detect and
map specular surfaces that might be invisible to conventional lidar systems
that rely on direct, single-scatter returns. We derive expressions that relate
the time- and angle-of-arrival of these multibounce returns to scattering
points on the specular surface, and then use these expressions to formulate
techniques for retrieving specular surface geometry when the scene is scanned
by a single beam or illuminated with a multi-beam flash. We also consider the
special case of transparent specular surfaces, for which surface reflections
can be mixed together with light that scatters off of objects lying behind the
surface.
| [
{
"created": "Wed, 7 Sep 2022 17:49:59 GMT",
"version": "v1"
}
] | 2023-02-22 | [
[
"Henley",
"Connor",
""
],
[
"Somasundaram",
"Siddharth",
""
],
[
"Hollmann",
"Joseph",
""
],
[
"Raskar",
"Ramesh",
""
]
] | We propose methods that use specular, multibounce lidar returns to detect and map specular surfaces that might be invisible to conventional lidar systems that rely on direct, single-scatter returns. We derive expressions that relate the time- and angle-of-arrival of these multibounce returns to scattering points on the specular surface, and then use these expressions to formulate techniques for retrieving specular surface geometry when the scene is scanned by a single beam or illuminated with a multi-beam flash. We also consider the special case of transparent specular surfaces, for which surface reflections can be mixed together with light that scatters off of objects lying behind the surface. |
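To make the geometry concrete, a small forward-model sketch: assuming a known specular plane, a beam direction, and a measured two-bounce time of flight, the hidden object point can be recovered along the mirrored ray. All values are invented for illustration; the paper addresses the harder inverse problem of estimating the surface itself.

```python
import numpy as np

c = 3e8                                 # speed of light (m/s)
n = np.array([0.0, 0.0, 1.0])           # specular plane normal (assumed known here)
d = 2.0                                 # plane: n . p = d
o = np.zeros(3)                         # lidar at the origin
u = np.array([0.1, 0.0, 1.0]); u /= np.linalg.norm(u)   # outgoing beam direction

t = (d - n @ o) / (n @ u)               # beam/plane intersection parameter
p = o + t * u                           # specular bounce point on the surface
r = u - 2 * (u @ n) * n                 # mirrored direction after the bounce

tof = 20e-9                             # measured round trip of a two-bounce return
path = c * tof / 2                      # one-way path length: |o-p| + |p-x|
x = p + (path - np.linalg.norm(p - o)) * r   # hidden object point along mirrored ray
print(p, x)
```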
2012.02360 | Jing Qin | Jing Qin | Research Progress of News Recommendation Methods | null | null | null | null | cs.IR cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Because researchers aim to study personalized recommendations for different
business fields, a summary of the recommendation methods used in specific fields is of
practical significance. News recommendation systems were the earliest research
field regarding recommendation systems, and were also the earliest
recommendation field to apply the collaborative filtering method. In addition,
news is real-time and rich in content, which makes news recommendation methods
more challenging than in other fields. Thus, this paper summarizes the research
progress regarding news recommendation methods. From 2018 to 2020, developed
news recommendation methods were mainly deep learning-based, attention-based,
and knowledge graph-based. As of 2020, there are many news recommendation
methods that combine attention mechanisms and knowledge graphs. However, these
methods were all developed based on basic methods (the collaborative filtering
method, the content-based recommendation method, and a mixed recommendation
method combining the two). In order to allow researchers to have a detailed
understanding of the development process of news recommendation methods, the
news recommendation methods surveyed in this paper, which cover nearly 10
years, are divided into three categories according to the abovementioned basic
methods. Firstly, the paper introduces the basic ideas of each category of
methods and then summarizes the recommendation methods that are combined with
other methods based on each category of methods and according to the time
sequence of research results. Finally, this paper also summarizes the
challenges confronting news recommendation systems.
| [
{
"created": "Fri, 4 Dec 2020 01:47:24 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Mar 2021 01:53:42 GMT",
"version": "v2"
}
] | 2021-03-09 | [
[
"Qin",
"Jing",
""
]
] ] | Because researchers aim to study personalized recommendations for different business fields, a summary of the recommendation methods used in specific fields is of practical significance. News recommendation systems were the earliest research field regarding recommendation systems, and were also the earliest recommendation field to apply the collaborative filtering method. In addition, news is real-time and rich in content, which makes news recommendation methods more challenging than in other fields. Thus, this paper summarizes the research progress regarding news recommendation methods. From 2018 to 2020, developed news recommendation methods were mainly deep learning-based, attention-based, and knowledge graph-based. As of 2020, there are many news recommendation methods that combine attention mechanisms and knowledge graphs. However, these methods were all developed based on basic methods (the collaborative filtering method, the content-based recommendation method, and a mixed recommendation method combining the two). In order to allow researchers to have a detailed understanding of the development process of news recommendation methods, the news recommendation methods surveyed in this paper, which cover nearly 10 years, are divided into three categories according to the abovementioned basic methods. Firstly, the paper introduces the basic ideas of each category of methods and then summarizes the recommendation methods that are combined with other methods based on each category of methods and according to the time sequence of research results. Finally, this paper also summarizes the challenges confronting news recommendation systems.
2212.00946 | Diego Arroyuelo Darroyue | Diego Arroyuelo and Juan Pablo Castillo | Trie-Compressed Intersectable Sets | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce space- and time-efficient algorithms and data structures for the
offline set intersection problem. We show that a sorted integer set $S
\subseteq [0{..}u)$ of $n$ elements can be represented using compressed space
while supporting $k$-way intersections in adaptive
$O(k\delta\lg{\!(u/\delta)})$ time, $\delta$ being the alternation measure
introduced by Barbay and Kenyon. Our experimental results suggest that our
approaches are competitive in practice, outperforming the most efficient
alternatives (Partitioned Elias-Fano indexes, Roaring Bitmaps, and Recursive
Universe Partitioning (RUP)) in several scenarios, offering in general relevant
space-time trade-offs.
| [
{
"created": "Fri, 2 Dec 2022 03:19:44 GMT",
"version": "v1"
}
] | 2022-12-05 | [
[
"Arroyuelo",
"Diego",
""
],
[
"Castillo",
"Juan Pablo",
""
]
] | We introduce space- and time-efficient algorithms and data structures for the offline set intersection problem. We show that a sorted integer set $S \subseteq [0{..}u)$ of $n$ elements can be represented using compressed space while supporting $k$-way intersections in adaptive $O(k\delta\lg{\!(u/\delta)})$ time, $\delta$ being the alternation measure introduced by Barbay and Kenyon. Our experimental results suggest that our approaches are competitive in practice, outperforming the most efficient alternatives (Partitioned Elias-Fano indexes, Roaring Bitmaps, and Recursive Universe Partitioning (RUP)) in several scenarios, offering in general relevant space-time trade-offs. |
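As background for the adaptivity notion, a plain-Python sketch of adaptive k-way intersection of sorted lists using galloping (doubling) searches, in the spirit of Barbay and Kenyon; it omits all of the paper's compressed-trie machinery.

```python
from bisect import bisect_left

def gallop(a, x, lo):
    """Smallest index >= lo with a[idx] >= x, via doubling then binary search."""
    hi = lo + 1
    while hi < len(a) and a[hi] < x:
        lo, hi = hi, hi + (hi - lo) * 2
    return bisect_left(a, x, lo, min(hi + 1, len(a)))

def intersect(sets):
    """Adaptive k-way intersection of sorted lists."""
    out, pos = [], [0] * len(sets)
    x = sets[0][0] if sets[0] else None   # current candidate element
    while x is not None:
        agree = 0
        for i, s in enumerate(sets):
            pos[i] = gallop(s, x, pos[i])
            if pos[i] == len(s):          # one list exhausted: done
                return out
            if s[pos[i]] == x:
                agree += 1
            else:                         # jump the candidate forward and recheck
                x = s[pos[i]]
                agree = 1
        if agree == len(sets):            # all lists contain x: emit it
            out.append(x)
            pos[0] += 1
            if pos[0] == len(sets[0]):
                return out
            x = sets[0][pos[0]]
    return out

print(intersect([[1, 3, 7, 9, 12], [3, 4, 7, 12, 20], [0, 3, 7, 12, 13]]))  # [3, 7, 12]
```

On easy instances (few alternations between the lists) the galloping searches skip long runs, which is exactly what the $\delta$-adaptive bound captures.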
2203.12969 | Gyunam Park | Gyunam Park, Marco Comuzzi, Wil M. P. van der Aalst | Analyzing Process-Aware Information System Updates Using Digital Twins
of Organizations | null | LNBIP 446 (2022) 159-176 | 10.1007/978-3-031-05760-1_10 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital transformation often entails small-scale changes to information
systems supporting the execution of business processes. These changes may
increase the operational frictions in process execution, which decreases the
process performance. The contributions in the literature providing support to
the tracking and impact analysis of small-scale changes are limited in scope
and functionality. In this paper, we use the recently developed Digital Twins
of Organizations (DTOs) to assess the impact of (process-aware) information
system updates. In more detail, we model the updates using the configuration
of DTOs and quantitatively assess different types of impacts of information
system updates (structural, operational, and performance-related). We
implemented a prototype of the proposed approach. Moreover, we discuss a case
study involving a standard ERP procure-to-pay business process.
| [
{
"created": "Thu, 24 Mar 2022 10:19:59 GMT",
"version": "v1"
}
] | 2022-11-01 | [
[
"Park",
"Gyunam",
""
],
[
"Comuzzi",
"Marco",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
] ] | Digital transformation often entails small-scale changes to information systems supporting the execution of business processes. These changes may increase the operational frictions in process execution, which decreases the process performance. The contributions in the literature providing support to the tracking and impact analysis of small-scale changes are limited in scope and functionality. In this paper, we use the recently developed Digital Twins of Organizations (DTOs) to assess the impact of (process-aware) information system updates. In more detail, we model the updates using the configuration of DTOs and quantitatively assess different types of impacts of information system updates (structural, operational, and performance-related). We implemented a prototype of the proposed approach. Moreover, we discuss a case study involving a standard ERP procure-to-pay business process.
1609.07350 | Nils Kopal | Nils Kopal | Rational Unified Process | German paper, 6 pages, 4 figures | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this German seminar paper, which was written in the year 2011 at the
University of Duisburg for a Bachelor Colloquium in Applied computer science,
we show a brief overview of the Rational Unified Process (RUP). Thus,
interested students or generally interested people in software development gain
a first impression of RUP.
The paper includes a survey and overview of the underlying process structure,
the phases of the process, its workflows, and the "best practices" of software
development that the RUP developers have always postulated.
| [
{
"created": "Thu, 22 Sep 2016 12:56:35 GMT",
"version": "v1"
}
] | 2016-09-26 | [
[
"Kopal",
"Nils",
""
]
] ] | In this German seminar paper, which was written in the year 2011 at the University of Duisburg for a Bachelor Colloquium in Applied computer science, we give a brief overview of the Rational Unified Process (RUP). Thus, interested students or generally interested people in software development gain a first impression of RUP. The paper includes a survey and overview of the underlying process structure, the phases of the process, its workflows, and the "best practices" of software development that the RUP developers have always postulated.
2401.01867 | Devin Kwok | Devin Kwok, Nikhil Anand, Jonathan Frankle, Gintare Karolina
Dziugaite, David Rolnick | Dataset Difficulty and the Role of Inductive Bias | 10 pages, 6 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the goals of dataset pruning and defect identification, a
growing body of methods has been developed to score individual examples within
a dataset. These methods, which we call "example difficulty scores", are
typically used to rank or categorize examples, but the consistency of rankings
between different training runs, scoring methods, and model architectures is
generally unknown. To determine how example rankings vary due to these random
and controlled effects, we systematically compare different formulations of
scores over a range of runs and model architectures. We find that scores
largely share the following traits: they are noisy over individual runs of a
model, strongly correlated with a single notion of difficulty, and reveal
examples that range from being highly sensitive to insensitive to the inductive
biases of certain model architectures. Drawing from statistical genetics, we
develop a simple method for fingerprinting model architectures using a few
sensitive examples. These findings guide practitioners in maximizing the
consistency of their scores (e.g. by choosing appropriate scoring methods,
number of runs, and subsets of examples), and establish comprehensive
baselines for evaluating scores in the future.
| [
{
"created": "Wed, 3 Jan 2024 18:19:51 GMT",
"version": "v1"
}
] | 2024-01-04 | [
[
"Kwok",
"Devin",
""
],
[
"Anand",
"Nikhil",
""
],
[
"Frankle",
"Jonathan",
""
],
[
"Dziugaite",
"Gintare Karolina",
""
],
[
"Rolnick",
"David",
""
]
] ] | Motivated by the goals of dataset pruning and defect identification, a growing body of methods has been developed to score individual examples within a dataset. These methods, which we call "example difficulty scores", are typically used to rank or categorize examples, but the consistency of rankings between different training runs, scoring methods, and model architectures is generally unknown. To determine how example rankings vary due to these random and controlled effects, we systematically compare different formulations of scores over a range of runs and model architectures. We find that scores largely share the following traits: they are noisy over individual runs of a model, strongly correlated with a single notion of difficulty, and reveal examples that range from being highly sensitive to insensitive to the inductive biases of certain model architectures. Drawing from statistical genetics, we develop a simple method for fingerprinting model architectures using a few sensitive examples. These findings guide practitioners in maximizing the consistency of their scores (e.g. by choosing appropriate scoring methods, number of runs, and subsets of examples), and establish comprehensive baselines for evaluating scores in the future.
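A small sketch of the kind of run-to-run consistency check described above: score each example in several independent runs, then compare the rankings with Spearman correlation. The "score" here is a noisy stand-in for a real difficulty score.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_runs = 1000, 5
difficulty = rng.uniform(size=n_examples)                # latent per-example difficulty
scores = difficulty + 0.3 * rng.normal(size=(n_runs, n_examples))  # noisy per-run scores

def spearman(a, b):
    ra, rb = a.argsort().argsort(), b.argsort().argsort()  # convert values to ranks
    return np.corrcoef(ra, rb)[0, 1]

# Run-to-run rank consistency; averaging runs makes the ranking more stable.
print(spearman(scores[0], scores[1]))
print(spearman(scores[:3].mean(0), scores[3:].mean(0)))
```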
1708.08813 | Pelumi Oluwasanya | Pelumi Oluwasanya | Anomaly Detection: Review and preliminary Entropy method tests | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Anomalies are strange data points; they usually represent an unusual
occurrence. Anomaly detection is presented from the perspective of Wireless
sensor networks. Different approaches have been taken in the past, as we will
see, not only to identify outliers, but also to establish the statistical
properties of the different methods. The usual goal is to show that the
approach is asymptotically efficient and that the metric used is unbiased or
maybe biased.
This project is based on a work done by [1]. The approach is based on the
principle that the entropy of the data is increased when an anomalous data
point is measured. The entropy of the data set is thus to be estimated. In this
report, however, preliminary efforts at confirming the results of [1] are
presented. To estimate the entropy of the dataset, since no parametric form is
assumed, the probability density function of the data set is first estimated
using a data-split method. This estimated pdf value is then plugged into the
entropy estimation formula to estimate the entropy of the dataset. The data
(test signal) used in this report is Gaussian distributed with zero mean and
variance 4. Results of pdf estimation using the k-nearest neighbour method
using the entire dataset, and a data-split method are presented and compared
based on how well they approximate the probability density function of a
Gaussian with similar mean and variance. The number of nearest neighbours
chosen for the purpose of this report is 8. This is arbitrary, but is
reasonable since the number of anomalies introduced is expected to be less than
this upon data-split. The data-split method is preferred and rightly so.
| [
{
"created": "Tue, 29 Aug 2017 15:08:05 GMT",
"version": "v1"
}
] | 2017-08-30 | [
[
"Oluwasanya",
"Pelumi",
""
]
] ] | Anomalies are strange data points; they usually represent an unusual occurrence. Anomaly detection is presented from the perspective of Wireless sensor networks. Different approaches have been taken in the past, as we will see, not only to identify outliers, but also to establish the statistical properties of the different methods. The usual goal is to show that the approach is asymptotically efficient and that the metric used is unbiased or maybe biased. This project is based on a work done by [1]. The approach is based on the principle that the entropy of the data is increased when an anomalous data point is measured. The entropy of the data set is thus to be estimated. In this report, however, preliminary efforts at confirming the results of [1] are presented. To estimate the entropy of the dataset, since no parametric form is assumed, the probability density function of the data set is first estimated using a data-split method. This estimated pdf value is then plugged into the entropy estimation formula to estimate the entropy of the dataset. The data (test signal) used in this report is Gaussian distributed with zero mean and variance 4. Results of pdf estimation using the k-nearest neighbour method using the entire dataset, and a data-split method are presented and compared based on how well they approximate the probability density function of a Gaussian with similar mean and variance. The number of nearest neighbours chosen for the purpose of this report is 8. This is arbitrary, but is reasonable since the number of anomalies introduced is expected to be less than this upon data-split. The data-split method is preferred and rightly so.
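A sketch of the plug-in estimator described above under the stated setup (1-D Gaussian test signal with zero mean and variance 4, k = 8 nearest neighbours, data split in half): the pdf at points of one half is estimated from kNN distances in the other half and plugged into the entropy formula. The normalization follows the standard 1-D kNN density estimate and may differ in detail from [1].

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=1000)        # test signal: mean 0, variance 4
a, b = x[:500], x[500:]                    # data split into two halves
k = 8

# 1-D kNN density at each point of `a`, estimated from `b`:
# p_hat(x) = k / (n * 2 * r_k), with r_k the distance to the k-th neighbour.
d = np.abs(a[:, None] - b[None, :])        # pairwise distances between the halves
r_k = np.sort(d, axis=1)[:, k - 1]         # k-th nearest neighbour distance
p_hat = k / (len(b) * 2 * r_k)

h_hat = -np.mean(np.log(p_hat))            # plug-in entropy estimate
print(h_hat, 0.5 * np.log(2 * np.pi * np.e * 4))  # vs. true Gaussian entropy (~2.11)
```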
2009.05152 | Zhixuan Xu | Zhixuan Xu, Minghui Qian, Xiaowei Huang, and Jie Meng | CasGCN: Predicting future cascade growth based on information diffusion
graph | null | null | null | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sudden bursts of information cascades can lead to unexpected consequences
such as extreme opinions, changes in fashion trends, and uncontrollable spread
of rumors. How to effectively predict a cascade's future size has become an
important problem, especially for large-scale cascades on social media
platforms such as Twitter and Weibo. However, existing methods are
insufficient in dealing with this challenging prediction problem. Conventional
methods heavily rely on either hand-crafted features or unrealistic
assumptions. End-to-end deep learning models, such as recurrent neural
networks, are not suitable to work with graphical inputs directly and cannot
handle structural information that is embedded in the cascade graphs. In this
paper, we propose a novel deep learning architecture for cascade growth
prediction, called CasGCN, which employs the graph convolutional network to
extract structural features from a graphical input, followed by the application
of the attention mechanism on both the extracted features and the temporal
information before conducting cascade size prediction. We conduct experiments
on two real-world cascade growth prediction scenarios (i.e., retweet popularity
on Sina Weibo and academic paper citations on DBLP), with the experimental
results showing that CasGCN enjoys a superior performance over several baseline
methods, particularly when the cascades are of large scale.
| [
{
"created": "Thu, 10 Sep 2020 21:20:09 GMT",
"version": "v1"
}
] | 2020-09-14 | [
[
"Xu",
"Zhixuan",
""
],
[
"Qian",
"Minghui",
""
],
[
"Huang",
"Xiaowei",
""
],
[
"Meng",
"Jie",
""
]
] ] | Sudden bursts of information cascades can lead to unexpected consequences such as extreme opinions, changes in fashion trends, and uncontrollable spread of rumors. How to effectively predict a cascade's future size has become an important problem, especially for large-scale cascades on social media platforms such as Twitter and Weibo. However, existing methods are insufficient in dealing with this challenging prediction problem. Conventional methods heavily rely on either hand-crafted features or unrealistic assumptions. End-to-end deep learning models, such as recurrent neural networks, are not suitable to work with graphical inputs directly and cannot handle structural information that is embedded in the cascade graphs. In this paper, we propose a novel deep learning architecture for cascade growth prediction, called CasGCN, which employs the graph convolutional network to extract structural features from a graphical input, followed by the application of the attention mechanism on both the extracted features and the temporal information before conducting cascade size prediction. We conduct experiments on two real-world cascade growth prediction scenarios (i.e., retweet popularity on Sina Weibo and academic paper citations on DBLP), with the experimental results showing that CasGCN enjoys a superior performance over several baseline methods, particularly when the cascades are of large scale.
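For reference, a minimal numpy sketch of the graph-convolution step such an architecture builds on, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); CasGCN's attention and temporal components are not reproduced here.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],    # adjacency of a tiny 4-node cascade graph
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))   # node features
W = np.random.default_rng(1).normal(size=(8, 16))  # layer weights

A_hat = A + np.eye(4)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)  # one GCN layer (ReLU)
print(H.shape)                                     # (4, 16) structural node embeddings
```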
2011.07630 | Masoud Ebrahimi | Roderick Bloem and Hana Chockler and Masoud Ebrahimi and Dana Fisman
and Heinz Riener | Safety Synthesis Sans Specification | null | null | null | null | cs.FL cs.LG | http://creativecommons.org/licenses/by/4.0/ | We define the problem of learning a transducer ${S}$ from a target language
$U$ containing possibly conflicting transducers, using membership queries and
conjecture queries. The requirement is that the language of ${S}$ be a subset
of $U$. We argue that this is a natural question in many situations in hardware
and software verification. We devise a learning algorithm for this problem and
show that its time and query complexity is polynomial with respect to the rank
of the target language, its incompatibility measure, and the maximal length of
a given counterexample. We report on experiments conducted with a prototype
implementation.
| [
{
"created": "Sun, 15 Nov 2020 21:13:17 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Nov 2020 13:25:02 GMT",
"version": "v2"
}
] | 2020-11-30 | [
[
"Bloem",
"Roderick",
""
],
[
"Chockler",
"Hana",
""
],
[
"Ebrahimi",
"Masoud",
""
],
[
"Fisman",
"Dana",
""
],
[
"Riener",
"Heinz",
""
]
] | We define the problem of learning a transducer ${S}$ from a target language $U$ containing possibly conflicting transducers, using membership queries and conjecture queries. The requirement is that the language of ${S}$ be a subset of $U$. We argue that this is a natural question in many situations in hardware and software verification. We devise a learning algorithm for this problem and show that its time and query complexity is polynomial with respect to the rank of the target language, its incompatibility measure, and the maximal length of a given counterexample. We report on experiments conducted with a prototype implementation. |
2306.08781 | Mohammad Amin Saeidi | Mohammad Amin Saeidi, Hina Tabassum | Resource Allocation and Performance Analysis of Hybrid RSMA-NOMA in the
Downlink | This paper has been accepted in the 2023 IEEE 34th Annual
International Symposium on Personal, Indoor and Mobile Radio Communications
(PIMRC) | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rate splitting multiple access (RSMA) and non-orthogonal multiple access
(NOMA) are key multiple access techniques for enabling massive connectivity.
However, it is unclear whether RSMA would consistently outperform NOMA in terms
of system sum rate, user fairness, and the convergence and feasibility of the
resource allocation solutions. This paper
investigates the weighted sum-rate maximization problem to optimize power and
rate allocations in a hybrid RSMA-NOMA network. In the hybrid RSMA-NOMA, by
optimally allocating the maximum power budget to each scheme, the BS operates
on NOMA and RSMA in two orthogonal channels, allowing users to simultaneously
receive signals on both RSMA and NOMA. Based on the successive convex
approximation (SCA) approach, we jointly optimize the power allocation of users
in NOMA and RSMA, the rate allocation of users in RSMA, and the power budget
allocation for NOMA and RSMA considering successive interference cancellation
(SIC) constraints. Numerical results demonstrate the trade-offs that hybrid
RSMA-NOMA access offers in terms of system sum rate, fairness, convergence, and
feasibility of the solutions.
| [
{
"created": "Wed, 14 Jun 2023 23:24:03 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Saeidi",
"Mohammad Amin",
""
],
[
"Tabassum",
"Hina",
""
]
] ] | Rate splitting multiple access (RSMA) and non-orthogonal multiple access (NOMA) are key multiple access techniques for enabling massive connectivity. However, it is unclear whether RSMA would consistently outperform NOMA in terms of system sum rate, user fairness, and the convergence and feasibility of the resource allocation solutions. This paper investigates the weighted sum-rate maximization problem to optimize power and rate allocations in a hybrid RSMA-NOMA network. In the hybrid RSMA-NOMA, by optimally allocating the maximum power budget to each scheme, the BS operates on NOMA and RSMA in two orthogonal channels, allowing users to simultaneously receive signals on both RSMA and NOMA. Based on the successive convex approximation (SCA) approach, we jointly optimize the power allocation of users in NOMA and RSMA, the rate allocation of users in RSMA, and the power budget allocation for NOMA and RSMA considering successive interference cancellation (SIC) constraints. Numerical results demonstrate the trade-offs that hybrid RSMA-NOMA access offers in terms of system sum rate, fairness, convergence, and feasibility of the solutions.
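As background for the rate expressions involved, a small numpy sketch of two-user downlink NOMA rates under SIC (the strong user cancels the weak user's signal; the weak user treats the strong user's signal as interference); the gains and power split are illustrative.

```python
import numpy as np

g1, g2 = 1.0, 0.1          # channel gains: user 1 strong, user 2 weak
p1, p2 = 0.2, 0.8          # power split (more power to the weak user)
n0 = 0.01                  # noise power

# Weak user decodes its own signal, treating the strong user's as interference.
r2 = np.log2(1 + p2 * g2 / (p1 * g2 + n0))
# Strong user first decodes user 2's signal (SIC), then its own, interference-free.
r2_at_1 = np.log2(1 + p2 * g1 / (p1 * g1 + n0))   # must exceed r2 for SIC to work
r1 = np.log2(1 + p1 * g1 / n0)

print(r1, r2, r2_at_1 >= r2)   # per-user rates and the SIC feasibility check
```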
1806.09727 | Helio M. de Oliveira | A. J. A. Paschoal, R. M. Campello de Souza, H. M. de Oliveira | The Hamming and Golay Number-Theoretic Transforms | 5 pages, 2 figures | null | 10.14209/SBRT.2018.179 | XXXVI Simp\'osio Brasileiro de Telecomunica\c{c}\~oes SBrT 2018 | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New number-theoretic transforms are derived from known linear block codes
over finite fields. In particular, two new such transforms are built from
perfect codes, namely the \textit {Hamming number-theoretic transform} and the
\textit {Golay number-theoretic transform}. A few properties of these new
transforms are presented.
| [
{
"created": "Mon, 25 Jun 2018 23:28:20 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Sep 2018 12:21:32 GMT",
"version": "v2"
}
] | 2019-09-27 | [
[
"Paschoal",
"A. J. A.",
""
],
[
"de Souza",
"R. M. Campello",
""
],
[
"de Oliveira",
"H. M.",
""
]
] | New number-theoretic transforms are derived from known linear block codes over finite fields. In particular, two new such transforms are built from perfect codes, namely the \textit {Hamming number-theoretic transform} and the \textit {Golay number-theoretic transform}. A few properties of these new transforms are presented. |
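For contrast with the code-derived transforms, a minimal Python sketch of a classical number-theoretic transform over GF(17) with length 4 and root of unity omega = 4; the Hamming and Golay transforms in the paper are built from code structure rather than from a root of unity.

```python
p, n, w = 17, 4, 4                 # modulus, length, and 4 has order 4 mod 17

def ntt(x, root):
    """Naive length-n transform X_k = sum_j x_j * root^(j*k) mod p."""
    return [sum(xj * pow(root, j * k, p) for j, xj in enumerate(x)) % p
            for k in range(n)]

x = [1, 2, 3, 4]
X = ntt(x, w)                                   # forward transform
w_inv, n_inv = pow(w, -1, p), pow(n, -1, p)     # modular inverses
y = [(n_inv * v) % p for v in ntt(X, w_inv)]    # inverse transform
print(X, y)                                     # y recovers [1, 2, 3, 4]
```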
2206.06227 | Holden Lee | Holden Lee and Jianfeng Lu and Yixin Tan | Convergence for score-based generative modeling with polynomial
complexity | 43 pages | Advances in Neural Information Processing Systems 35 (2022),
22870--22882 | null | null | cs.LG math.PR math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Score-based generative modeling (SGM) is a highly successful approach for
learning a probability distribution from data and generating further samples.
We prove the first polynomial convergence guarantees for the core mechanic
behind SGM: drawing samples from a probability density $p$ given a score
estimate (an estimate of $\nabla \ln p$) that is accurate in $L^2(p)$. Compared
to previous works, we do not incur error that grows exponentially in time or
that suffers from a curse of dimensionality. Our guarantee works for any smooth
distribution and depends polynomially on its log-Sobolev constant. Using our
guarantee, we give a theoretical analysis of score-based generative modeling,
which transforms white-noise input into samples from a learned data
distribution given score estimates at different noise scales. Our analysis
gives theoretical grounding to the observation that an annealed procedure is
required in practice to generate good samples, as our proof depends essentially
on using annealing to obtain a warm start at each step. Moreover, we show that
a predictor-corrector algorithm gives better convergence than using either
portion alone.
| [
{
"created": "Mon, 13 Jun 2022 14:57:35 GMT",
"version": "v1"
},
{
"created": "Wed, 3 May 2023 17:51:05 GMT",
"version": "v2"
}
] | 2023-05-04 | [
[
"Lee",
"Holden",
""
],
[
"Lu",
"Jianfeng",
""
],
[
"Tan",
"Yixin",
""
]
] | Score-based generative modeling (SGM) is a highly successful approach for learning a probability distribution from data and generating further samples. We prove the first polynomial convergence guarantees for the core mechanic behind SGM: drawing samples from a probability density $p$ given a score estimate (an estimate of $\nabla \ln p$) that is accurate in $L^2(p)$. Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality. Our guarantee works for any smooth distribution and depends polynomially on its log-Sobolev constant. Using our guarantee, we give a theoretical analysis of score-based generative modeling, which transforms white-noise input into samples from a learned data distribution given score estimates at different noise scales. Our analysis gives theoretical grounding to the observation that an annealed procedure is required in practice to generate good samples, as our proof depends essentially on using annealing to obtain a warm start at each step. Moreover, we show that a predictor-corrector algorithm gives better convergence than using either portion alone. |
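A toy sketch of the annealed procedure the analysis covers, using the exact score of a 1-D Gaussian in place of a learned score network: each noise level warm-starts the next, finer one. The schedule and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 1.0                       # target distribution: N(3, 1)

def score(x, s):                           # exact score of target smoothed by N(0, s^2)
    return (mu - x) / (sigma**2 + s**2)

x = rng.normal(size=5000)                  # white-noise initialization
for s in [3.0, 1.0, 0.3, 0.0]:             # annealing: coarse -> fine noise levels
    eps = 0.1 * (sigma**2 + s**2)          # step size scaled to the current level
    for _ in range(100):                   # Langevin dynamics at this level
        x = x + 0.5 * eps * score(x, s) + np.sqrt(eps) * rng.normal(size=x.shape)

print(x.mean(), x.std())                   # approximately 3.0 and 1.0
```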
1202.6444 | Marcos Villagra | Marcos Villagra, Masaki Nakanishi, Shigeru Yamashita, Yasuhiko
Nakashima | Tensor Rank and Strong Quantum Nondeterminism in Multiparty
Communication | In v3 corrected some lesser typos. Extended abstract in Proc. of
TAMC'12, LNCS 7287, pp. 400-411, 2012 | IEICE Transactions on Information and Systems Vol. E96.D (2013)
No. 1 pp. 1-8 | 10.1587/transinf.E96.D.1 | null | cs.CC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study quantum nondeterminism in multiparty communication.
There are three (possibly) different types of nondeterminism in quantum
computation: i) strong, ii) weak with classical proofs, and iii) weak with
quantum proofs. Here we focus on the first one. A strong quantum
nondeterministic protocol accepts a correct input with positive probability,
and rejects an incorrect input with probability 1. In this work we relate
strong quantum nondeterministic multiparty communication complexity to the rank
of the communication tensor in the Number-On-Forehead and Number-In-Hand
models. In particular, by extending the definition proposed by de Wolf to {\it
nondeterministic tensor-rank} ($nrank$), we show that for any boolean function
$f$ when there is no prior shared entanglement between the players, 1) in the
Number-On-Forehead model, the cost is upper-bounded by the logarithm of
$nrank(f)$; 2) in the Number-In-Hand model, the cost is lower-bounded by the
logarithm of $nrank(f)$. Furthermore, we show that when the number of players
is $o(\log\log n)$ we have that $NQP\nsubseteq BQP$ for Number-On-Forehead
communication.
| [
{
"created": "Wed, 29 Feb 2012 05:18:13 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Jun 2012 08:11:31 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Oct 2012 01:34:34 GMT",
"version": "v3"
}
] | 2013-08-13 | [
[
"Villagra",
"Marcos",
""
],
[
"Nakanishi",
"Masaki",
""
],
[
"Yamashita",
"Shigeru",
""
],
[
"Nakashima",
"Yasuhiko",
""
]
] | In this paper we study quantum nondeterminism in multiparty communication. There are three (possibly) different types of nondeterminism in quantum computation: i) strong, ii) weak with classical proofs, and iii) weak with quantum proofs. Here we focus on the first one. A strong quantum nondeterministic protocol accepts a correct input with positive probability, and rejects an incorrect input with probability 1. In this work we relate strong quantum nondeterministic multiparty communication complexity to the rank of the communication tensor in the Number-On-Forehead and Number-In-Hand models. In particular, by extending the definition proposed by de Wolf to {\it nondeterministic tensor-rank} ($nrank$), we show that for any boolean function $f$ when there is no prior shared entanglement between the players, 1) in the Number-On-Forehead model, the cost is upper-bounded by the logarithm of $nrank(f)$; 2) in the Number-In-Hand model, the cost is lower-bounded by the logarithm of $nrank(f)$. Furthermore, we show that when the number of players is $o(\log\log n)$ we have that $NQP\nsubseteq BQP$ for Number-On-Forehead communication. |
2402.08812 | Zijian Ding | Zijian Ding, Joel Chan | Intelligent Canvas: Enabling Design-Like Exploratory Visual Data
Analysis with Generative AI through Rapid Prototyping, Iteration and Curation | null | null | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Complex data analysis inherently seeks unexpected insights through
exploratory visual analysis methods, transcending logical, step-by-step
processing. However, existing interfaces such as notebooks and dashboards have
limitations in exploration and comparison for visual data analysis. Addressing
these limitations, we introduce a "design-like" intelligent canvas environment
integrating generative AI into data analysis, offering rapid prototyping,
iteration, and comparative visualization management. Our dual contributions
include the integration of generative AI components into a canvas interface,
and empirical findings from a user study (N=10) evaluating the effectiveness of
the canvas interface.
| [
{
"created": "Tue, 13 Feb 2024 21:33:12 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 18:04:47 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Mar 2024 16:44:41 GMT",
"version": "v3"
}
] | 2024-03-22 | [
[
"Ding",
"Zijian",
""
],
[
"Chan",
"Joel",
""
]
] | Complex data analysis inherently seeks unexpected insights through exploratory visual analysis methods, transcending logical, step-by-step processing. However, existing interfaces such as notebooks and dashboards have limitations in exploration and comparison for visual data analysis. Addressing these limitations, we introduce a "design-like" intelligent canvas environment integrating generative AI into data analysis, offering rapid prototyping, iteration, and comparative visualization management. Our dual contributions include the integration of generative AI components into a canvas interface, and empirical findings from a user study (N=10) evaluating the effectiveness of the canvas interface. |
2407.08257 | Seonwhee Jin | Seonwhee Jin | Knowledge distillation to effectively attain both region-of-interest and
global semantics from an image where multiple objects appear | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models based on convolutional neural networks (CNN) and transformers have
steadily been improved. They also have been applied in various computer vision
downstream tasks. However, in object detection tasks, accurately localizing and
classifying almost infinite categories of foods in images remains challenging.
To address these problems, we first segmented the food as the
region-of-interest (ROI) by using the segment-anything model (SAM) and masked
the rest of the region except ROI as black pixels. This process simplified the
problems into a single classification for which annotation and training were
much simpler than object detection. The images in which only the ROI was
preserved were fed as inputs to fine-tune various off-the-shelf models that
encoded their own inductive biases. Among them, Data-efficient image
Transformers (DeiTs) had the best classification performance. Nonetheless, when
foods' shapes and textures were similar, the contextual features of the
ROI-only images were not enough for accurate classification. Therefore, we
introduced a novel type of combined architecture, RveRNet, which consisted of
ROI, extra-ROI, and integration modules that allowed it to account for both the
ROI's and global contexts. The RveRNet's F1 score was 10% better than other
individual models when classifying ambiguous food images. The RveRNet performed
best when its modules were DeiTs with knowledge distillation from the CNN. We
investigated how architectures can be made robust against input noise
caused by permutation and translocation. The results indicated that there was a
trade-off between how much the CNN teacher's knowledge could be distilled to
DeiT and DeiT's innate strength. Code is publicly available at:
https://github.com/Seonwhee-Genome/RveRNet.
| [
{
"created": "Thu, 11 Jul 2024 07:57:33 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Jin",
"Seonwhee",
""
]
] ] | Models based on convolutional neural networks (CNN) and transformers have steadily been improved. They also have been applied in various computer vision downstream tasks. However, in object detection tasks, accurately localizing and classifying almost infinite categories of foods in images remains challenging. To address these problems, we first segmented the food as the region-of-interest (ROI) by using the segment-anything model (SAM) and masked the rest of the region except ROI as black pixels. This process simplified the problems into a single classification for which annotation and training were much simpler than object detection. The images in which only the ROI was preserved were fed as inputs to fine-tune various off-the-shelf models that encoded their own inductive biases. Among them, Data-efficient image Transformers (DeiTs) had the best classification performance. Nonetheless, when foods' shapes and textures were similar, the contextual features of the ROI-only images were not enough for accurate classification. Therefore, we introduced a novel type of combined architecture, RveRNet, which consisted of ROI, extra-ROI, and integration modules that allowed it to account for both the ROI's and global contexts. The RveRNet's F1 score was 10% better than other individual models when classifying ambiguous food images. The RveRNet performed best when its modules were DeiTs with knowledge distillation from the CNN. We investigated how architectures can be made robust against input noise caused by permutation and translocation. The results indicated that there was a trade-off between how much the CNN teacher's knowledge could be distilled to DeiT and DeiT's innate strength. Code is publicly available at: https://github.com/Seonwhee-Genome/RveRNet.
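A minimal sketch of the knowledge-distillation loss referenced above, in the standard soft-target (Hinton-style) formulation with a CNN teacher and a student; the temperature, weighting, and shapes are illustrative assumptions, not the RveRNet training recipe.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
teacher_logits = torch.randn(8, 5)         # frozen CNN teacher outputs (stand-in)
student_logits = torch.randn(8, 5, requires_grad=True)
labels = torch.randint(0, 5, (8,))
T, alpha = 4.0, 0.7                        # temperature and distillation weight

# KL between temperature-softened distributions, plus the usual CE on labels.
kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
              F.softmax(teacher_logits / T, dim=1),
              reduction="batchmean") * T * T
ce = F.cross_entropy(student_logits, labels)
loss = alpha * kd + (1 - alpha) * ce
loss.backward()
print(loss.item())
```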
2304.06028 | Runze Li | Runze Li, Dahun Kim, Bir Bhanu, Weicheng Kuo | RECLIP: Resource-efficient CLIP by Training with Small Images | Published at Transactions on Machine Learning Research | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present RECLIP (Resource-efficient CLIP), a simple method that minimizes
computational resource footprint for CLIP (Contrastive Language Image
Pretraining). Inspired by the notion of coarse-to-fine in computer vision, we
leverage small images to learn from large-scale language supervision
efficiently, and finetune the model with high-resolution data in the end. Since
the complexity of the vision transformer heavily depends on input image size,
our approach significantly reduces the training resource requirements both in
theory and in practice. Using the same batch size and training epoch, RECLIP
achieves highly competitive zero-shot classification and image-text retrieval
accuracy with 6 to 8x less computational resources and 7 to 9x fewer FLOPs than
the baseline. Compared to the state-of-the-art contrastive learning methods,
RECLIP demonstrates 5 to 59x training resource savings while maintaining highly
competitive zero-shot classification and retrieval performance. Finally, RECLIP
matches the state of the art in transfer learning to open-vocabulary detection
tasks, achieving 32 APr on LVIS. We hope this work will pave the path for the
broader research community to explore language supervised pretraining in
resource-friendly settings.
| [
{
"created": "Wed, 12 Apr 2023 17:59:58 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Aug 2023 04:36:04 GMT",
"version": "v2"
}
] | 2023-09-01 | [
[
"Li",
"Runze",
""
],
[
"Kim",
"Dahun",
""
],
[
"Bhanu",
"Bir",
""
],
[
"Kuo",
"Weicheng",
""
]
] | We present RECLIP (Resource-efficient CLIP), a simple method that minimizes computational resource footprint for CLIP (Contrastive Language Image Pretraining). Inspired by the notion of coarse-to-fine in computer vision, we leverage small images to learn from large-scale language supervision efficiently, and finetune the model with high-resolution data in the end. Since the complexity of the vision transformer heavily depends on input image size, our approach significantly reduces the training resource requirements both in theory and in practice. Using the same batch size and training epoch, RECLIP achieves highly competitive zero-shot classification and image-text retrieval accuracy with 6 to 8x less computational resources and 7 to 9x fewer FLOPs than the baseline. Compared to the state-of-the-art contrastive learning methods, RECLIP demonstrates 5 to 59x training resource savings while maintaining highly competitive zero-shot classification and retrieval performance. Finally, RECLIP matches the state of the art in transfer learning to open-vocabulary detection tasks, achieving 32 APr on LVIS. We hope this work will pave the path for the broader research community to explore language supervised pretraining in resource-friendly settings. |
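For context, a compact sketch of the CLIP-style symmetric contrastive loss such pretraining minimizes; the point of the method above is simply that the image tower sees small images for most of training. Embedding sizes and the temperature are illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
img = F.normalize(torch.randn(16, 64), dim=1)   # image embeddings (from small images)
txt = F.normalize(torch.randn(16, 64), dim=1)   # matching text embeddings
tau = 0.07                                      # temperature

logits = img @ txt.t() / tau                    # pairwise cosine similarities
target = torch.arange(16)                       # the i-th image matches the i-th text
loss = 0.5 * (F.cross_entropy(logits, target) + F.cross_entropy(logits.t(), target))
print(loss.item())
```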
2305.19953 | Hye-Jin Shim | Hye-jin Shim, Jee-weon Jung, Tomi Kinnunen | Multi-Dataset Co-Training with Sharpness-Aware Optimization for Audio
Anti-spoofing | Interspeech 2023 | null | null | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Audio anti-spoofing for automatic speaker verification aims to safeguard
users' identities from spoofing attacks. Although state-of-the-art spoofing
countermeasure (CM) models perform well on specific datasets, they lack
generalization when evaluated with different datasets. To address this
limitation, previous studies have explored large pre-trained models, which
require significant resources and time. We aim to develop a compact but
well-generalizing CM model that can compete with large pre-trained models. Our
approach involves multi-dataset co-training and sharpness-aware minimization,
which has not been investigated in this domain. Extensive experiments reveal
that the proposed method yields competitive results across various datasets
while using 4,000 times fewer parameters than the large pre-trained models.
| [
{
"created": "Wed, 31 May 2023 15:37:48 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Jun 2023 06:50:06 GMT",
"version": "v2"
}
] | 2023-06-02 | [
[
"Shim",
"Hye-jin",
""
],
[
"Jung",
"Jee-weon",
""
],
[
"Kinnunen",
"Tomi",
""
]
] ] | Audio anti-spoofing for automatic speaker verification aims to safeguard users' identities from spoofing attacks. Although state-of-the-art spoofing countermeasure (CM) models perform well on specific datasets, they lack generalization when evaluated with different datasets. To address this limitation, previous studies have explored large pre-trained models, which require significant resources and time. We aim to develop a compact but well-generalizing CM model that can compete with large pre-trained models. Our approach involves multi-dataset co-training and sharpness-aware minimization, which has not been investigated in this domain. Extensive experiments reveal that the proposed method yields competitive results across various datasets while using 4,000 times fewer parameters than the large pre-trained models.
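A minimal hand-written sketch of one sharpness-aware minimization (SAM) step on a single parameter tensor: perturb the weights along the gradient to a worst-case nearby point, then update with the gradient taken there. The toy objective and rho are illustrative.

```python
import torch

torch.manual_seed(0)
w = torch.randn(10, requires_grad=True)
data = torch.randn(32, 10)
loss_fn = lambda p: ((data @ p) ** 2).mean()   # toy objective
rho, lr = 0.05, 0.1

loss = loss_fn(w); loss.backward()
with torch.no_grad():
    eps = rho * w.grad / (w.grad.norm() + 1e-12)   # ascend to the worst nearby point
    w += eps
w.grad = None
loss_adv = loss_fn(w); loss_adv.backward()          # gradient at the perturbed point
with torch.no_grad():
    w -= eps                                        # undo the perturbation
    w -= lr * w.grad                                # SGD step with the SAM gradient
print(loss_fn(w).item())
```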
1409.0315 | Roman Prutkin | Martin N\"ollenburg, Roman Prutkin, Ignaz Rutter | On Self-Approaching and Increasing-Chord Drawings of 3-Connected Planar
Graphs | 22 pages, 9 figures, full version of a paper appearing in Graph
Drawing 2014. Compared to the previous version, contains a new result on area
requirements of strongly monotone drawings | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An $st$-path in a drawing of a graph is self-approaching if during the
traversal of the corresponding curve from $s$ to any point $t'$ on the curve
the distance to $t'$ is non-increasing. A path has increasing chords if it is
self-approaching in both directions. A drawing is self-approaching
(increasing-chord) if any pair of vertices is connected by a self-approaching
(increasing-chord) path.
We study self-approaching and increasing-chord drawings of triangulations and
3-connected planar graphs. We show that in the Euclidean plane, triangulations
admit increasing-chord drawings, and for planar 3-trees we can ensure
planarity. We prove that strongly monotone (and thus increasing-chord) drawings
of trees and binary cactuses require exponential resolution in the worst case,
answering an open question by Kindermann et al. [GD'14]. Moreover, we provide a
binary cactus that does not admit a self-approaching drawing. Finally, we show
that 3-connected planar graphs admit increasing-chord drawings in the
hyperbolic plane and characterize the trees that admit such drawings.
| [
{
"created": "Mon, 1 Sep 2014 08:02:25 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Dec 2014 10:45:25 GMT",
"version": "v2"
}
] | 2014-12-05 | [
[
"Nöllenburg",
"Martin",
""
],
[
"Prutkin",
"Roman",
""
],
[
"Rutter",
"Ignaz",
""
]
] | An $st$-path in a drawing of a graph is self-approaching if during the traversal of the corresponding curve from $s$ to any point $t'$ on the curve the distance to $t'$ is non-increasing. A path has increasing chords if it is self-approaching in both directions. A drawing is self-approaching (increasing-chord) if any pair of vertices is connected by a self-approaching (increasing-chord) path. We study self-approaching and increasing-chord drawings of triangulations and 3-connected planar graphs. We show that in the Euclidean plane, triangulations admit increasing-chord drawings, and for planar 3-trees we can ensure planarity. We prove that strongly monotone (and thus increasing-chord) drawings of trees and binary cactuses require exponential resolution in the worst case, answering an open question by Kindermann et al. [GD'14]. Moreover, we provide a binary cactus that does not admit a self-approaching drawing. Finally, we show that 3-connected planar graphs admit increasing-chord drawings in the hyperbolic plane and characterize the trees that admit such drawings. |
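A small numerical sketch of the defining property: sample points along a polyline and check that, while traversing from the start toward any later point t', the distance to t' never increases. The example paths are made up.

```python
import numpy as np

def is_self_approaching(path, samples=200):
    """Numerically check that distance to every later point is non-increasing."""
    seg = np.diff(path, axis=0)
    L = np.concatenate([[0], np.cumsum(np.linalg.norm(seg, axis=1))])
    ts = np.linspace(0, 1, samples)
    # densely resample the polyline by arc length (piecewise-linear interpolation)
    pts = np.array([np.interp(ts * L[-1], L, path[:, i]) for i in (0, 1)]).T
    for j in range(1, len(pts)):                 # every later point t'
        d = np.linalg.norm(pts[:j] - pts[j], axis=1)
        if np.any(np.diff(d) > 1e-9):            # distance to t' must not increase
            return False
    return True

straight = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
detour = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.5, 0.1]])
print(is_self_approaching(straight), is_self_approaching(detour))  # True False
```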
2309.00140 | Alexandre Bittar | Alexandre Bittar, Paul Dixon, Mohammad Samragh, Kumari Nishu, Devang
Naik | Improving vision-inspired keyword spotting using dynamic module skipping
in streaming conformer encoder | null | ICASSP 2024 - 2024 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) | 10.1109/ICASSP48485.2024.10447485 | null | cs.SD cs.CV cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Using a vision-inspired keyword spotting framework, we propose an
architecture with input-dependent dynamic depth capable of processing streaming
audio. Specifically, we extend a conformer encoder with trainable binary gates
that allow us to dynamically skip network modules according to the input audio.
Our approach improves detection and localization accuracy on continuous speech
using Librispeech top-1000 most frequent words while maintaining a small memory
footprint. The inclusion of gates also reduces the average amount of processing
without affecting the overall performance. These benefits are shown to be even
more pronounced using the Google speech commands dataset placed over background
noise where up to 97% of the processing is skipped on non-speech inputs,
therefore making our method particularly interesting for an always-on keyword
spotter.
| [
{
"created": "Thu, 31 Aug 2023 21:25:57 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Bittar",
"Alexandre",
""
],
[
"Dixon",
"Paul",
""
],
[
"Samragh",
"Mohammad",
""
],
[
"Nishu",
"Kumari",
""
],
[
"Naik",
"Devang",
""
]
] | Using a vision-inspired keyword spotting framework, we propose an architecture with input-dependent dynamic depth capable of processing streaming audio. Specifically, we extend a conformer encoder with trainable binary gates that allow us to dynamically skip network modules according to the input audio. Our approach improves detection and localization accuracy on continuous speech using Librispeech top-1000 most frequent words while maintaining a small memory footprint. The inclusion of gates also reduces the average amount of processing without affecting the overall performance. These benefits are shown to be even more pronounced using the Google speech commands dataset placed over background noise where up to 97% of the processing is skipped on non-speech inputs, therefore making our method particularly interesting for an always-on keyword spotter. |
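The input-dependent module skipping described in the record above can be illustrated with a toy forward pass. The gating rule, modules, and dimensions below are invented stand-ins for the paper's trainable binary gates and conformer blocks.

```python
import numpy as np

# Toy sketch of input-dependent dynamic depth: every "module" is guarded by
# a binary gate computed from the input and is skipped when the gate closes.

rng = np.random.default_rng(0)
modules = [lambda x, W=rng.normal(size=(8, 8)) * 0.1: x + np.tanh(x @ W)
           for _ in range(4)]
gate_vecs = [rng.normal(size=8) for _ in range(4)]

def forward(x):
    skipped = 0
    for f, g in zip(modules, gate_vecs):
        if x @ g > 0:          # hard input-dependent gate: run the module
            x = f(x)
        else:                  # gate closed: pass x through unchanged
            skipped += 1
    return x, skipped

y, n_skipped = forward(rng.normal(size=8))
print(f"{n_skipped} of 4 modules skipped for this input")
```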
1904.05383 | Jim Basney | Andrew Adams (Pittsburgh Supercomputing Center), Kay Avila (NCSA), Jim
Basney (NCSA), Dana Brunson (Internet2), Robert Cowles (BrightLite
Information Security), Jeannette Dopheide (NCSA), Terry Fleury (NCSA), Elisa
Heymann (University of Wisconsin-Madison), Florence Hudson (Independent
Consultant), Craig Jackson (Indiana University), Ryan Kiser (Indiana
University), Mark Krenz (Indiana University), Jim Marsteller (Pittsburgh
Supercomputing Center), Barton P. Miller (University of Wisconsin-Madison),
Sean Peisert (Berkeley Lab), Scott Russell (Indiana University), Susan Sons
(Indiana University), Von Welch (Indiana University), John Zage (NCSA) | Trusted CI Experiences in Cybersecurity and Service to Open Science | 8 pages, PEARC '19: Practice and Experience in Advanced Research
Computing, July 28-August 1, 2019, Chicago, IL, USA | null | 10.1145/3332186.3340601 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article describes experiences and lessons learned from the Trusted CI
project, funded by the US National Science Foundation to serve the community as
the NSF Cybersecurity Center of Excellence. Trusted CI is an effort to address
cybersecurity for the open science community through a single organization that
provides leadership, training, consulting, and knowledge to that community. The
article describes the experiences and lessons learned of Trusted CI regarding
both cybersecurity for open science and managing the process of providing
centralized services to a broad and diverse community.
| [
{
"created": "Wed, 10 Apr 2019 18:38:27 GMT",
"version": "v1"
},
{
"created": "Wed, 15 May 2019 19:12:44 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Aug 2019 16:31:13 GMT",
"version": "v3"
},
{
"created": "Wed, 7 Aug 2019 19:28:19 GMT",
"version": "v4"
}
] | 2019-08-09 | [
[
"Adams",
"Andrew",
"",
"Pittsburgh Supercomputing Center"
],
[
"Avila",
"Kay",
"",
"NCSA"
],
[
"Basney",
"Jim",
"",
"NCSA"
],
[
"Brunson",
"Dana",
"",
"Internet2"
],
[
"Cowles",
"Robert",
"",
"BrightLite\n Information Security"
],
[
"Dopheide",
"Jeannette",
"",
"NCSA"
],
[
"Fleury",
"Terry",
"",
"NCSA"
],
[
"Heymann",
"Elisa",
"",
"University of Wisconsin-Madison"
],
[
"Hudson",
"Florence",
"",
"Independent\n Consultant"
],
[
"Jackson",
"Craig",
"",
"Indiana University"
],
[
"Kiser",
"Ryan",
"",
"Indiana\n University"
],
[
"Krenz",
"Mark",
"",
"Indiana University"
],
[
"Marsteller",
"Jim",
"",
"Pittsburgh\n Supercomputing Center"
],
[
"Miller",
"Barton P.",
"",
"University of Wisconsin-Madison"
],
[
"Peisert",
"Sean",
"",
"Berkeley Lab"
],
[
"Russell",
"Scott",
"",
"Indiana University"
],
[
"Sons",
"Susan",
"",
"Indiana University"
],
[
"Welch",
"Von",
"",
"Indiana University"
],
[
"Zage",
"John",
"",
"NCSA"
]
] | This article describes experiences and lessons learned from the Trusted CI project, funded by the US National Science Foundation to serve the community as the NSF Cybersecurity Center of Excellence. Trusted CI is an effort to address cybersecurity for the open science community through a single organization that provides leadership, training, consulting, and knowledge to that community. The article describes the experiences and lessons learned of Trusted CI regarding both cybersecurity for open science and managing the process of providing centralized services to a broad and diverse community. |
1903.01292 | Piotr Mirowski | Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, Denis
Teplyashin, Karl Moritz Hermann, Mateusz Malinowski, Matthew Koichi Grimes,
Karen Simonyan, Koray Kavukcuoglu, Andrew Zisserman, Raia Hadsell | The StreetLearn Environment and Dataset | 13 pages, 6 figures, 4 tables. arXiv admin note: text overlap with
arXiv:1804.00168 | null | null | null | cs.AI cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Navigation is a rich and well-grounded problem domain that drives progress in
many different areas of research: perception, planning, memory, exploration,
and optimisation in particular. Historically these challenges have been
separately considered and solutions built that rely on stationary datasets -
for example, recorded trajectories through an environment. These datasets
cannot be used for decision-making and reinforcement learning, however, and in
general the perspective of navigation as an interactive learning task, where
the actions and behaviours of a learning agent are learned simultaneously with
the perception and planning, is relatively unsupported. Thus, existing
navigation benchmarks generally rely on static datasets (Geiger et al., 2013;
Kendall et al., 2015) or simulators (Beattie et al., 2016; Shah et al., 2018).
To support and validate research in end-to-end navigation, we present
StreetLearn: an interactive, first-person, partially-observed visual
environment that uses Google Street View for its photographic content and broad
coverage, and give performance baselines for a challenging goal-driven
navigation task. The environment code, baseline agent code, and the dataset are
available at http://streetlearn.cc
| [
{
"created": "Mon, 4 Mar 2019 16:21:22 GMT",
"version": "v1"
}
] | 2019-03-06 | [
[
"Mirowski",
"Piotr",
""
],
[
"Banki-Horvath",
"Andras",
""
],
[
"Anderson",
"Keith",
""
],
[
"Teplyashin",
"Denis",
""
],
[
"Hermann",
"Karl Moritz",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Grimes",
"Matthew Koichi",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Kavukcuoglu",
"Koray",
""
],
[
"Zisserman",
"Andrew",
""
],
[
"Hadsell",
"Raia",
""
]
] | Navigation is a rich and well-grounded problem domain that drives progress in many different areas of research: perception, planning, memory, exploration, and optimisation in particular. Historically these challenges have been separately considered and solutions built that rely on stationary datasets - for example, recorded trajectories through an environment. These datasets cannot be used for decision-making and reinforcement learning, however, and in general the perspective of navigation as an interactive learning task, where the actions and behaviours of a learning agent are learned simultaneously with the perception and planning, is relatively unsupported. Thus, existing navigation benchmarks generally rely on static datasets (Geiger et al., 2013; Kendall et al., 2015) or simulators (Beattie et al., 2016; Shah et al., 2018). To support and validate research in end-to-end navigation, we present StreetLearn: an interactive, first-person, partially-observed visual environment that uses Google Street View for its photographic content and broad coverage, and give performance baselines for a challenging goal-driven navigation task. The environment code, baseline agent code, and the dataset are available at http://streetlearn.cc |
2305.11888 | Peter Zhang | Peter Zhang | Taking Advice from ChatGPT | 35 pages | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A growing literature studies how humans incorporate advice from algorithms.
This study examines an algorithm with millions of daily users: ChatGPT. In a
preregistered study, 118 student participants answer 2,828 multiple-choice
questions across 25 academic subjects. Participants receive advice from a GPT
model and can update their initial responses. The advisor's identity ("AI
chatbot" versus a human "expert"), presence of a written justification, and
advice correctness do not significantly affect weight on advice. Instead,
participants weigh advice more heavily if they (1) are unfamiliar with the
topic, (2) used ChatGPT in the past, or (3) received more accurate advice
previously. The last two effects -- algorithm familiarity and experience -- are
stronger with an AI chatbot as the advisor. Participants that receive written
justifications are able to discern correct advice and update accordingly.
Student participants are miscalibrated in their judgements of ChatGPT advice
accuracy; one reason is that they significantly misjudge the accuracy of
ChatGPT on 11/25 topics. Participants under-weigh advice by over 50% and can
score better by trusting ChatGPT more.
| [
{
"created": "Thu, 11 May 2023 15:03:15 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2023 05:51:12 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Jun 2023 01:34:31 GMT",
"version": "v3"
}
] | 2023-06-14 | [
[
"Zhang",
"Peter",
""
]
] | A growing literature studies how humans incorporate advice from algorithms. This study examines an algorithm with millions of daily users: ChatGPT. In a preregistered study, 118 student participants answer 2,828 multiple-choice questions across 25 academic subjects. Participants receive advice from a GPT model and can update their initial responses. The advisor's identity ("AI chatbot" versus a human "expert"), presence of a written justification, and advice correctness do not significantly affect weight on advice. Instead, participants weigh advice more heavily if they (1) are unfamiliar with the topic, (2) used ChatGPT in the past, or (3) received more accurate advice previously. The last two effects -- algorithm familiarity and experience -- are stronger with an AI chatbot as the advisor. Participants that receive written justifications are able to discern correct advice and update accordingly. Student participants are miscalibrated in their judgements of ChatGPT advice accuracy; one reason is that they significantly misjudge the accuracy of ChatGPT on 11/25 topics. Participants under-weigh advice by over 50% and can score better by trusting ChatGPT more. |
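A minimal way to quantify "weight on advice" for multiple-choice answers is as a switching rate after disagreement. The sketch below uses this common discrete analogue from the advice-taking literature; it is not necessarily the paper's preregistered estimator, and the trial data are hypothetical.

```python
# Illustrative computation of a discrete "weight on advice": the share of
# trials in which a participant switches to the advisor's answer after
# initially disagreeing with it. Trial data are invented for the example.

trials = [
    # (initial answer, advice, final answer)
    ("A", "B", "B"),
    ("C", "C", "C"),   # initial agreement: uninformative for switching
    ("A", "D", "A"),
    ("B", "C", "C"),
]

disagreements = [t for t in trials if t[0] != t[1]]
switches = sum(1 for init, advice, final in disagreements if final == advice)
print(f"weight on advice: {switches}/{len(disagreements)} = "
      f"{switches / len(disagreements):.2f}")
```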
cs/0009007 | Tom Fawcett | Foster Provost and Tom Fawcett | Robust Classification for Imprecise Environments | 24 pages, 12 figures. To be published in Machine Learning Journal.
For related papers, see http://www.hpl.hp.com/personal/Tom_Fawcett/ROCCH/ | null | null | null | cs.LG | null | In real-world environments it usually is difficult to specify target
operating conditions precisely, for example, target misclassification costs.
This uncertainty makes building robust classification systems problematic. We
show that it is possible to build a hybrid classifier that will perform at
least as well as the best available classifier for any target conditions. In
some cases, the performance of the hybrid actually can surpass that of the best
known classifier. This robust performance extends across a wide variety of
comparison frameworks, including the optimization of metrics such as accuracy,
expected cost, lift, precision, recall, and workforce utilization. The hybrid
also is efficient to build, to store, and to update. The hybrid is based on a
method for the comparison of classifier performance that is robust to imprecise
class distributions and misclassification costs. The ROC convex hull (ROCCH)
method combines techniques from ROC analysis, decision analysis and
computational geometry, and adapts them to the particulars of analyzing learned
classifiers. The method is efficient and incremental, minimizes the management
of classifier performance data, and allows for clear visual comparisons and
sensitivity analyses. Finally, we point to empirical evidence that a robust
hybrid classifier indeed is needed for many real-world problems.
| [
{
"created": "Wed, 13 Sep 2000 21:09:47 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Provost",
"Foster",
""
],
[
"Fawcett",
"Tom",
""
]
] | In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems. |
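The ROC convex hull construction at the heart of the record above is small enough to sketch: compute the upper hull of classifier ROC points, then pick the hull vertex optimal for given class priors and misclassification costs via the iso-performance slope. The ROC points below are toy values, not results from the paper.

```python
# Sketch of the ROC convex hull (ROCCH) idea on toy (fpr, tpr) points.

def rocch(points):
    # Andrew monotone-chain upper hull over (fpr, tpr), endpoints included.
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the chord hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def best_for_conditions(hull, p_pos, cost_fp, cost_fn):
    # Iso-performance lines have slope m; maximize tpr - m * fpr.
    m = ((1 - p_pos) * cost_fp) / (p_pos * cost_fn)
    return max(hull, key=lambda pt: pt[1] - m * pt[0])

classifiers = [(0.1, 0.5), (0.3, 0.8), (0.5, 0.85), (0.7, 0.98)]
hull = rocch(classifiers)
print(hull)                       # (0.5, 0.85) falls below the hull
print(best_for_conditions(hull, p_pos=0.5, cost_fp=1.0, cost_fn=5.0))
```

With false negatives five times costlier than false positives, the selection shifts to the more liberal classifier (0.7, 0.98), which is exactly the kind of condition-dependent choice the hybrid classifier automates.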
1609.00475 | Pushpam Aji John | Pushpam Aji John, Rudolf Agren, Yu-Jung Chen, Christian Rohner, and
Edith Ngai (Uppsala University) | 868 MHz Wireless Sensor Network - A Study | 11th Swedish National Computer Networking Workshop SNCNW 2015 | null | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | Today, 2.4 GHz-based wireless sensor networks are growing at a tremendous pace and are seen in widespread applications. Product innovation and support by many vendors make 2.4 GHz a preferred choice, but these networks are prone to interference and range issues. On the other hand, the less popular 868 MHz ISM band has not seen significant usage. In this paper we explore the use of the 868 MHz channel to implement a wireless sensor network and study the efficacy of this channel.
| [
{
"created": "Fri, 2 Sep 2016 06:23:24 GMT",
"version": "v1"
}
] | 2016-09-05 | [
[
"John",
"Pushpam Aji",
"",
"Uppsala University"
],
[
"Agren",
"Rudolf",
"",
"Uppsala University"
],
[
"Chen",
"Yu-Jung",
"",
"Uppsala University"
],
[
"Rohner",
"Christian",
"",
"Uppsala University"
],
[
"Ngai",
"Edith",
"",
"Uppsala University"
]
] | Today, 2.4 GHz-based wireless sensor networks are growing at a tremendous pace and are seen in widespread applications. Product innovation and support by many vendors make 2.4 GHz a preferred choice, but these networks are prone to interference and range issues. On the other hand, the less popular 868 MHz ISM band has not seen significant usage. In this paper we explore the use of the 868 MHz channel to implement a wireless sensor network and study the efficacy of this channel. |
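Background for the band comparison above: the textbook free-space path loss formula quantifies the intrinsic range advantage of 868 MHz over 2.4 GHz. This back-of-the-envelope calculation is illustrative and is not taken from the paper.

```python
from math import log10

# Free-space path loss (textbook formula, d in km, f in MHz):
#   FSPL(dB) = 20*log10(d) + 20*log10(f) + 32.44

def fspl_db(d_km, f_mhz):
    return 20 * log10(d_km) + 20 * log10(f_mhz) + 32.44

for f in (868, 2400):
    print(f"{f} MHz at 100 m: {fspl_db(0.1, f):.1f} dB")
# 868 MHz enjoys roughly 20*log10(2400/868) ~ 8.8 dB less path loss,
# i.e. longer range at equal transmit power and receiver sensitivity.
```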
2403.04021 | Yewei Huang | Yewei Huang, Xi Lin, Brendan Englot | Multi-Robot Autonomous Exploration and Mapping Under Localization
Uncertainty with Expectation-Maximization | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We propose an autonomous exploration algorithm designed for decentralized
multi-robot teams, which takes into account map and localization uncertainties
of range-sensing mobile robots. Virtual landmarks are used to quantify the
combined impact of process noise and sensor noise on map uncertainty.
Additionally, we employ an iterative expectation-maximization inspired
algorithm to assess the potential outcomes of both a local robot's and its
neighbors' next-step actions. To evaluate the effectiveness of our framework,
we conduct a comparative analysis with state-of-the-art algorithms. The results
of our experiments show the proposed algorithm's capacity to strike a balance
between curbing map uncertainty and achieving efficient task allocation among
robots.
| [
{
"created": "Wed, 6 Mar 2024 20:03:27 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Huang",
"Yewei",
""
],
[
"Lin",
"Xi",
""
],
[
"Englot",
"Brendan",
""
]
] | We propose an autonomous exploration algorithm designed for decentralized multi-robot teams, which takes into account map and localization uncertainties of range-sensing mobile robots. Virtual landmarks are used to quantify the combined impact of process noise and sensor noise on map uncertainty. Additionally, we employ an iterative expectation-maximization inspired algorithm to assess the potential outcomes of both a local robot's and its neighbors' next-step actions. To evaluate the effectiveness of our framework, we conduct a comparative analysis with state-of-the-art algorithms. The results of our experiments show the proposed algorithm's capacity to strike a balance between curbing map uncertainty and achieving efficient task allocation among robots. |
1905.02857 | Peng Gao | Peng Gao, Yipeng Ma, Ruyue Yuan, Liyi Xiao, Fei Wang | Learning Cascaded Siamese Networks for High Performance Visual Tracking | Accepted for IEEE 26th International Conference on Image Processing
(ICIP 2019) | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual tracking is one of the most challenging computer vision problems. In
order to achieve high performance visual tracking in various negative
scenarios, a novel cascaded Siamese network is proposed and developed based on
two different deep learning networks: a matching subnetwork and a
classification subnetwork. The matching subnetwork is a fully convolutional
Siamese network. According to the similarity score between the exemplar image
and the candidate image, it aims to search possible object positions and crop
scaled candidate patches. The classification subnetwork is designed to further
evaluate the cropped candidate patches and determine the optimal tracking
results based on the classification score. The matching subnetwork is trained
offline and fixed online, while the classification subnetwork performs
stochastic gradient descent online to learn more target-specific information.
To improve the tracking performance further, an effective classification
subnetwork update method based on both similarity and classification scores is
utilized for updating the classification subnetwork. Extensive experimental
results demonstrate that our proposed approach achieves state-of-the-art
performance in recent benchmarks.
| [
{
"created": "Wed, 8 May 2019 01:06:23 GMT",
"version": "v1"
}
] | 2019-05-09 | [
[
"Gao",
"Peng",
""
],
[
"Ma",
"Yipeng",
""
],
[
"Yuan",
"Ruyue",
""
],
[
"Xiao",
"Liyi",
""
],
[
"Wang",
"Fei",
""
]
] | Visual tracking is one of the most challenging computer vision problems. In order to achieve high performance visual tracking in various negative scenarios, a novel cascaded Siamese network is proposed and developed based on two different deep learning networks: a matching subnetwork and a classification subnetwork. The matching subnetwork is a fully convolutional Siamese network. According to the similarity score between the exemplar image and the candidate image, it aims to search possible object positions and crop scaled candidate patches. The classification subnetwork is designed to further evaluate the cropped candidate patches and determine the optimal tracking results based on the classification score. The matching subnetwork is trained offline and fixed online, while the classification subnetwork performs stochastic gradient descent online to learn more target-specific information. To improve the tracking performance further, an effective classification subnetwork update method based on both similarity and classification scores is utilized for updating the classification subnetwork. Extensive experimental results demonstrate that our proposed approach achieves state-of-the-art performance in recent benchmarks. |
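The matching subnetwork's core operation, scoring an exemplar against a search region by sliding similarity, can be sketched with plain 2-D cross-correlation. The arrays below are toy single-channel stand-ins for learned deep features.

```python
import numpy as np

# Core operation of a fully convolutional Siamese matcher: slide the
# exemplar feature map over the search-region feature map and record a
# similarity score at each offset.

def xcorr2d(search, exemplar):
    H, W = search.shape
    h, w = exemplar.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[i:i + h, j:j + w] * exemplar)
    return out

rng = np.random.default_rng(1)
exemplar = rng.normal(size=(3, 3))
search = rng.normal(size=(8, 8)) * 0.1
search[2:5, 4:7] += exemplar            # plant the target at offset (2, 4)

score = xcorr2d(search, exemplar)
print(np.unravel_index(score.argmax(), score.shape))  # -> (2, 4), typically
```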
2304.14590 | Sean Deyo | Sean Deyo, Veit Elser | A logical word embedding for learning grammar | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce the logical grammar emdebbing (LGE), a model inspired by
pregroup grammars and categorial grammars to enable unsupervised inference of
lexical categories and syntactic rules from a corpus of text. LGE produces
comprehensible output summarizing its inferences, has a completely transparent
process for producing novel sentences, and can learn from as few as a hundred
sentences.
| [
{
"created": "Fri, 28 Apr 2023 01:53:54 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jun 2023 00:46:49 GMT",
"version": "v2"
}
] | 2023-06-07 | [
[
"Deyo",
"Sean",
""
],
[
"Elser",
"Veit",
""
]
] | We introduce the logical grammar emdebbing (LGE), a model inspired by pregroup grammars and categorial grammars to enable unsupervised inference of lexical categories and syntactic rules from a corpus of text. LGE produces comprehensible output summarizing its inferences, has a completely transparent process for producing novel sentences, and can learn from as few as a hundred sentences. |
1210.0693 | Cedomir Stefanovic | \v{C}edomir Stefanovi\'c, Kasper F. Trilingsgaard, Nuno K. Pratas and
Petar Popovski | Joint Estimation and Contention-Resolution Protocol for Wireless Random
Access | Submitted to ICC 2013 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a contention-based random-access protocol, designed for wireless
networks where the number of users is not a priori known. The protocol operates
in rounds divided into equal-duration slots, performing at the same time
estimation of the number of users and resolution of their transmissions. The
users independently access the wireless link on a slot basis with a predefined
probability, resulting in a distribution of user transmissions over slots,
based on which the estimation and contention resolution are performed.
Specifically, the contention resolution is performed using successive
interference cancellation which, coupled with the use of the optimized access
probabilities, enables throughputs that are substantially higher than the
traditional slotted ALOHA-like protocols. The key feature of the proposed
protocol is that the round durations are not a priori set and they are
terminated when the estimation/contention-resolution performance reaches satisfactory levels.
| [
{
"created": "Tue, 2 Oct 2012 07:55:18 GMT",
"version": "v1"
}
] | 2012-10-03 | [
[
"Stefanović",
"Čedomir",
""
],
[
"Trilingsgaard",
"Kasper F.",
""
],
[
"Pratas",
"Nuno K.",
""
],
[
"Popovski",
"Petar",
""
]
] | We propose a contention-based random-access protocol, designed for wireless networks where the number of users is not a priori known. The protocol operates in rounds divided into equal-duration slots, performing at the same time estimation of the number of users and resolution of their transmissions. The users independently access the wireless link on a slot basis with a predefined probability, resulting in a distribution of user transmissions over slots, based on which the estimation and contention resolution are performed. Specifically, the contention resolution is performed using successive interference cancellation which, coupled with the use of the optimized access probabilities, enables throughputs that are substantially higher than the traditional slotted ALOHA-like protocols. The key feature of the proposed protocol is that the round durations are not a priori set and they are terminated when the estimation/contention-resolution performance reaches satisfactory levels. |
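The successive interference cancellation step described in the record above can be sketched on synthetic slot occupancies: repeatedly decode "singleton" slots and cancel the decoded user's replicas everywhere else. The parameters are illustrative, and the sketch omits the paper's estimation and termination rules.

```python
import random

# SIC decoding for frameless-ALOHA-style random access: users transmit in
# each slot with probability p; the receiver decodes singleton slots and
# subtracts the decoded users' transmissions, iterating until no progress.

random.seed(3)
n_users, n_slots, p = 10, 25, 0.25

# slots[s] = set of users that transmitted in slot s
slots = [{u for u in range(n_users) if random.random() < p}
         for _ in range(n_slots)]

resolved = set()
progress = True
while progress:
    progress = False
    for s in slots:
        if len(s) == 1:                 # singleton slot: decode this user
            (u,) = s
            resolved.add(u)
            for t in slots:             # cancel u's replicas everywhere
                t.discard(u)
            progress = True

print(f"resolved {len(resolved)}/{n_users} users in {n_slots} slots")
```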
2102.07154 | Oren Weimann | Aviv Bar-Natan, Panagiotis Charalampopoulos, Pawe{\l} Gawrychowski,
Shay Mozes, Oren Weimann | Fault-Tolerant Distance Labeling for Planar Graphs | null | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | In fault-tolerant distance labeling we wish to assign short labels to the
vertices of a graph $G$ such that from the labels of any three vertices $u,v,f$
we can infer the $u$-to-$v$ distance in the graph $G\setminus \{f\}$. We show
that any directed weighted planar graph (and in fact any graph in a graph
family with $O(\sqrt{n})$-size separators, such as minor-free graphs) admits
fault-tolerant distance labels of size $O(n^{2/3})$. We extend these labels in
a way that allows us to also count the number of shortest paths, and provide
additional upper and lower bounds for labels and oracles for counting shortest
paths.
| [
{
"created": "Sun, 14 Feb 2021 13:39:27 GMT",
"version": "v1"
}
] | 2021-02-16 | [
[
"Bar-Natan",
"Aviv",
""
],
[
"Charalampopoulos",
"Panagiotis",
""
],
[
"Gawrychowski",
"Paweł",
""
],
[
"Mozes",
"Shay",
""
],
[
"Weimann",
"Oren",
""
]
] | In fault-tolerant distance labeling we wish to assign short labels to the vertices of a graph $G$ such that from the labels of any three vertices $u,v,f$ we can infer the $u$-to-$v$ distance in the graph $G\setminus \{f\}$. We show that any directed weighted planar graph (and in fact any graph in a graph family with $O(\sqrt{n})$-size separators, such as minor-free graphs) admits fault-tolerant distance labels of size $O(n^{2/3})$. We extend these labels in a way that allows us to also count the number of shortest paths, and provide additional upper and lower bounds for labels and oracles for counting shortest paths. |
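For concreteness, the quantity a fault-tolerant distance label must let us recover can be computed naively per query with Dijkstra on the graph minus the fault. The toy graph is invented; the point of the paper is to avoid exactly this recomputation with O(n^{2/3})-size labels.

```python
import heapq

# Naive baseline: recover dist(u, v) in G \ {f} by running Dijkstra while
# forbidding the faulty vertex. Per-query recomputation, no labels.

def dijkstra_avoiding(graph, src, dst, fault):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue                     # stale queue entry
        for w, length in graph.get(v, []):
            if w == fault:
                continue                 # the faulty vertex is forbidden
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return float("inf")

graph = {  # directed, weighted: v -> [(neighbor, length), ...]
    "u": [("a", 1.0), ("b", 4.0)],
    "a": [("v", 1.0)],
    "b": [("v", 1.0)],
}
print(dijkstra_avoiding(graph, "u", "v", fault="a"))  # 5.0, via b
```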
1006.5166 | Amin Gohari | Amin Aminzadeh Gohari, Abbas El Gamal and Venkat Anantharam | On Marton's Inner Bound for the General Broadcast Channel | 14 pages, Submitted to IEEE Transactions in Information Theory | null | 10.1109/TIT.2011.2169537 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We establish several new results on Marton's coding scheme and its
corresponding inner bound on the capacity region of the general broadcast
channel. We show that unlike the Gaussian case, Marton's coding scheme without
superposition coding is not optimal in general even for a degraded broadcast
channel with no common message. We then establish properties of Marton's inner
bound that help restrict the search space for computing the sum-rate. Next, we
show that the inner bound is optimal along certain directions. Finally, we
propose a coding scheme that may lead to a larger inner bound.
| [
{
"created": "Sat, 26 Jun 2010 21:19:41 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Jun 2011 18:15:47 GMT",
"version": "v2"
}
] | 2016-11-18 | [
[
"Gohari",
"Amin Aminzadeh",
""
],
[
"Gamal",
"Abbas El",
""
],
[
"Anantharam",
"Venkat",
""
]
] | We establish several new results on Marton's coding scheme and its corresponding inner bound on the capacity region of the general broadcast channel. We show that unlike the Gaussian case, Marton's coding scheme without superposition coding is not optimal in general even for a degraded broadcast channel with no common message. We then establish properties of Marton's inner bound that help restrict the search space for computing the sum-rate. Next, we show that the inner bound is optimal along certain directions. Finally, we propose a coding scheme that may lead to a larger inner bound. |
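For reference, the standard form of Marton's inner bound without a common message, as given in textbook treatments; the paper also considers the version with superposition coding.

```latex
% Standard form of Marton's inner bound without a common message: the rate
% pair (R_1, R_2) is achievable if, for some pmf p(u,v) and encoding
% function x(u,v),
\begin{align*}
  R_1       &\le I(U; Y_1), \\
  R_2       &\le I(V; Y_2), \\
  R_1 + R_2 &\le I(U; Y_1) + I(V; Y_2) - I(U; V).
\end{align*}
```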
1705.08971 | Scott Cheng-Hsin Yang | Scott Cheng-Hsin Yang, Yue Yu, Arash Givchi, Pei Wang, Wai Keen Vong,
and Patrick Shafto | Optimal Cooperative Inference | 16 pages (5 pages of Supplementary Material), 1 figure | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative transmission of data fosters rapid accumulation of knowledge by
efficiently combining experiences across learners. Although well studied in
human learning and increasingly in machine learning, we lack formal frameworks
through which we may reason about the benefits and limitations of cooperative
inference. We present such a framework. We introduce novel indices for
measuring the effectiveness of probabilistic and cooperative information
transmission. We relate our indices to the well-known Teaching Dimension in
deterministic settings. We prove conditions under which optimal cooperative
inference can be achieved, including a representation theorem that constrains
the form of inductive biases for learners optimized for cooperative inference.
We conclude by demonstrating how these principles may inform the design of
machine learning algorithms and discuss implications for human and machine
learning.
| [
{
"created": "Wed, 24 May 2017 21:42:00 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jan 2018 19:51:57 GMT",
"version": "v2"
}
] | 2018-01-29 | [
[
"Yang",
"Scott Cheng-Hsin",
""
],
[
"Yu",
"Yue",
""
],
[
"Givchi",
"Arash",
""
],
[
"Wang",
"Pei",
""
],
[
"Vong",
"Wai Keen",
""
],
[
"Shafto",
"Patrick",
""
]
] | Cooperative transmission of data fosters rapid accumulation of knowledge by efficiently combining experiences across learners. Although well studied in human learning and increasingly in machine learning, we lack formal frameworks through which we may reason about the benefits and limitations of cooperative inference. We present such a framework. We introduce novel indices for measuring the effectiveness of probabilistic and cooperative information transmission. We relate our indices to the well-known Teaching Dimension in deterministic settings. We prove conditions under which optimal cooperative inference can be achieved, including a representation theorem that constrains the form of inductive biases for learners optimized for cooperative inference. We conclude by demonstrating how these principles may inform the design of machine learning algorithms and discuss implications for human and machine learning. |
1803.00091 | Murat Cubuktepe | Murat Cubuktepe and Ufuk Topcu | Verification of Markov Decision Processes with Risk-Sensitive Measures | 7 pages, to appear in ACC 2018 | null | null | null | cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a method for computing policies in Markov decision processes with
risk-sensitive measures subject to temporal logic constraints. Specifically, we
use a particular risk-sensitive measure from cumulative prospect theory, which
has been previously adopted in psychology and economics. The nonlinear
transformation of the probabilities and utility functions yields a nonlinear
programming problem, which makes computation of optimal policies typically
challenging. We show that this nonlinear weighting function can be accurately
approximated by the difference of two convex functions. This observation
enables efficient policy computation using convex-concave programming. We
demonstrate the effectiveness of the approach on several scenarios.
| [
{
"created": "Wed, 28 Feb 2018 21:14:37 GMT",
"version": "v1"
},
{
"created": "Sun, 19 Apr 2020 22:11:10 GMT",
"version": "v2"
}
] | 2020-04-21 | [
[
"Cubuktepe",
"Murat",
""
],
[
"Topcu",
"Ufuk",
""
]
] | We develop a method for computing policies in Markov decision processes with risk-sensitive measures subject to temporal logic constraints. Specifically, we use a particular risk-sensitive measure from cumulative prospect theory, which has been previously adopted in psychology and economics. The nonlinear transformation of the probabilities and utility functions yields a nonlinear programming problem, which makes computation of optimal policies typically challenging. We show that this nonlinear weighting function can be accurately approximated by the difference of two convex functions. This observation enables efficient policy computation using convex-concave programming. We demonstrate the effectiveness of the approach on several scenarios. |
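The nonlinear probability weighting that drives the difficulty described in the record above is typically the Tversky-Kahneman form, evaluated in the sketch below. gamma = 0.61 is the classic fitted value, used here purely for illustration; the paper's contribution is approximating such weighting functions as a difference of two convex functions.

```python
# Tversky-Kahneman probability weighting from cumulative prospect theory:
#   w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1 / gamma)

def w(p, gamma=0.61):
    num = p ** gamma
    den = (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return num / den

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"w({p}) = {w(p):.3f}")  # overweights small p, underweights large p
```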
1610.02003 | Paul Baltescu | Paul Baltescu | Scalable Machine Translation in Memory Constrained Environments | Master Thesis | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine translation is the discipline concerned with developing automated
tools for translating from one human language to another. Statistical machine
translation (SMT) is the dominant paradigm in this field. In SMT, translations
are generated by means of statistical models whose parameters are learned from
bilingual data. Scalability is a key concern in SMT, as one would like to make
use of as much data as possible to train better translation systems.
In recent years, mobile devices with adequate computing power have become
widely available. Despite being very successful, mobile applications relying on
NLP systems continue to follow a client-server architecture, which is of
limited use because access to the internet is often limited and expensive. The goal
of this dissertation is to show how to construct a scalable machine translation
system that can operate with the limited resources available on a mobile
device.
The main challenge for porting translation systems on mobile devices is
memory usage. The amount of memory available on a mobile device is far less
than what is typically available on the server side of a client-server
application. In this thesis, we investigate alternatives for the two components
which prevent standard translation systems from working on mobile devices due
to high memory usage. We show that once these standard components are replaced
with our proposed alternatives, we obtain a scalable translation system that
can work on a device with limited memory.
| [
{
"created": "Thu, 6 Oct 2016 19:22:49 GMT",
"version": "v1"
}
] | 2016-10-07 | [
[
"Baltescu",
"Paul",
""
]
] | Machine translation is the discipline concerned with developing automated tools for translating from one human language to another. Statistical machine translation (SMT) is the dominant paradigm in this field. In SMT, translations are generated by means of statistical models whose parameters are learned from bilingual data. Scalability is a key concern in SMT, as one would like to make use of as much data as possible to train better translation systems. In recent years, mobile devices with adequate computing power have become widely available. Despite being very successful, mobile applications relying on NLP systems continue to follow a client-server architecture, which is of limited use because access to the internet is often limited and expensive. The goal of this dissertation is to show how to construct a scalable machine translation system that can operate with the limited resources available on a mobile device. The main challenge for porting translation systems on mobile devices is memory usage. The amount of memory available on a mobile device is far less than what is typically available on the server side of a client-server application. In this thesis, we investigate alternatives for the two components which prevent standard translation systems from working on mobile devices due to high memory usage. We show that once these standard components are replaced with our proposed alternatives, we obtain a scalable translation system that can work on a device with limited memory. |
1408.0272 | Fr\'ed\'eric Meunier | Axel Parmentier and Fr\'ed\'eric Meunier | Stochastic Shortest Paths and Risk Measures | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider three shortest path problems in directed graphs with random arc
lengths. For the first and the second problems, a risk measure is involved.
While the first problem consists in finding a path minimizing this risk
measure, the second one consists in finding a path minimizing a deterministic
cost, while satisfying a constraint on the risk measure. We propose algorithms
solving these problems for a wide range of risk measures, which includes among
several others the $CVaR$ and the probability of being late. Their performances
are evaluated through experiments. One of the key elements in these algorithms
is the use of stochastic lower bounds that allow to discard partial solutions.
Good stochastic lower bounds are provided by the so-called Stochastic Ontime
Arrival Problem. This latter problem is the third one studied in this paper and
we propose a new and very efficient algorithm solving it. Complementary
discussions on the complexity of the problems are also provided.
| [
{
"created": "Fri, 1 Aug 2014 19:20:58 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Sep 2014 16:38:46 GMT",
"version": "v2"
}
] | 2014-09-29 | [
[
"Parmentier",
"Axel",
""
],
[
"Meunier",
"Frédéric",
""
]
] | We consider three shortest path problems in directed graphs with random arc lengths. For the first and the second problems, a risk measure is involved. While the first problem consists in finding a path minimizing this risk measure, the second one consists in finding a path minimizing a deterministic cost, while satisfying a constraint on the risk measure. We propose algorithms solving these problems for a wide range of risk measures, which includes among several others the $CVaR$ and the probability of being late. Their performances are evaluated through experiments. One of the key elements in these algorithms is the use of stochastic lower bounds that allow to discard partial solutions. Good stochastic lower bounds are provided by the so-called Stochastic Ontime Arrival Problem. This latter problem is the third one studied in this paper and we propose a new and very efficient algorithm solving it. Complementary discussions on the complexity of the problems are also provided. |
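A sample-based sketch of the CVaR risk measure named in the record above: the expected cost over the worst (1 - alpha) fraction of outcomes. The travel-time distribution below is illustrative, not one of the paper's instances.

```python
import numpy as np

# Empirical conditional value-at-risk of a cost distribution: the mean of
# the worst (1 - alpha) share of sorted samples.

def cvar(samples, alpha=0.9):
    x = np.sort(np.asarray(samples))
    tail = x[int(np.ceil(alpha * len(x))):]     # worst (1 - alpha) share
    return tail.mean()

rng = np.random.default_rng(7)
travel_times = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)
print(f"mean     = {travel_times.mean():6.2f}")
print(f"CVaR_0.9 = {cvar(travel_times, 0.9):6.2f}")  # penalizes being late
```

A path minimizing CVaR can therefore differ from the path minimizing expected travel time whenever the tails of the two paths' distributions differ, which is what makes the risk-constrained variants in the paper nontrivial.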
2406.14429 | Lukas Struppek | Simeon Allmendinger, Domenique Zipperling, Lukas Struppek, Niklas
K\"uhl | CollaFuse: Collaborative Diffusion Models | 13 pages, 7 figures | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the landscape of generative artificial intelligence, diffusion-based
models have emerged as a promising method for generating synthetic images.
However, the application of diffusion models poses numerous challenges,
particularly concerning data availability, computational requirements, and
privacy. Traditional approaches to address these shortcomings, like federated
learning, often impose significant computational burdens on individual clients,
especially those with constrained resources. In response to these challenges,
we introduce a novel approach for distributed collaborative diffusion models
inspired by split learning. Our approach facilitates collaborative training of
diffusion models while alleviating client computational burdens during image
synthesis. This reduced computational burden is achieved by retaining data and
computationally inexpensive processes locally at each client while outsourcing
the computationally expensive processes to shared, more efficient server
resources. Through experiments on the common CelebA dataset, our approach
demonstrates enhanced privacy by reducing the necessity for sharing raw data.
These capabilities hold significant potential across various application areas,
including the design of edge computing solutions. Thus, our work advances
distributed machine learning by contributing to the evolution of collaborative
diffusion models.
| [
{
"created": "Thu, 20 Jun 2024 15:54:21 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Allmendinger",
"Simeon",
""
],
[
"Zipperling",
"Domenique",
""
],
[
"Struppek",
"Lukas",
""
],
[
"Kühl",
"Niklas",
""
]
] | In the landscape of generative artificial intelligence, diffusion-based models have emerged as a promising method for generating synthetic images. However, the application of diffusion models poses numerous challenges, particularly concerning data availability, computational requirements, and privacy. Traditional approaches to address these shortcomings, like federated learning, often impose significant computational burdens on individual clients, especially those with constrained resources. In response to these challenges, we introduce a novel approach for distributed collaborative diffusion models inspired by split learning. Our approach facilitates collaborative training of diffusion models while alleviating client computational burdens during image synthesis. This reduced computational burden is achieved by retaining data and computationally inexpensive processes locally at each client while outsourcing the computationally expensive processes to shared, more efficient server resources. Through experiments on the common CelebA dataset, our approach demonstrates enhanced privacy by reducing the necessity for sharing raw data. These capabilities hold significant potential across various application areas, including the design of edge computing solutions. Thus, our work advances distributed machine learning by contributing to the evolution of collaborative diffusion models. |
2403.11932 | Touraj Soleymani | Touraj Soleymani, John S. Baras, Siyi Wang, Sandra Hirche, Karl H.
Johansson | Consistency of Value of Information: Effects of Packet Loss and Time
Delay in Networked Control Systems Tasks | null | null | null | null | cs.IT math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this chapter, we study the consistency of the value of
information$\unicode{x2014}$a semantic metric that claims to determine the
right piece of information in networked control systems
tasks$\unicode{x2014}$in a lossy and delayed communication regime. Our analysis
begins with a focus on state estimation, and subsequently extends to feedback
control. To that end, we make a causal tradeoff between the packet rate and the
mean square error. Associated with this tradeoff, we demonstrate the existence
of an optimal policy profile, comprising a symmetric threshold scheduling
policy based on the value of information for the encoder and a non-Gaussian
linear estimation policy for the decoder. Our structural results assert that
the scheduling policy is expressible in terms of $3d-1$ variables related to
the source and the channel, where $d$ is the time delay, and that the
estimation policy incorporates no residual related to signaling. We then
construct an optimal control policy by exploiting the separation principle.
| [
{
"created": "Mon, 18 Mar 2024 16:31:21 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Soleymani",
"Touraj",
""
],
[
"Baras",
"John S.",
""
],
[
"Wang",
"Siyi",
""
],
[
"Hirche",
"Sandra",
""
],
[
"Johansson",
"Karl H.",
""
]
] | In this chapter, we study the consistency of the value of information$\unicode{x2014}$a semantic metric that claims to determine the right piece of information in networked control systems tasks$\unicode{x2014}$in a lossy and delayed communication regime. Our analysis begins with a focus on state estimation, and subsequently extends to feedback control. To that end, we make a causal tradeoff between the packet rate and the mean square error. Associated with this tradeoff, we demonstrate the existence of an optimal policy profile, comprising a symmetric threshold scheduling policy based on the value of information for the encoder and a non-Gaussian linear estimation policy for the decoder. Our structural results assert that the scheduling policy is expressible in terms of $3d-1$ variables related to the source and the channel, where $d$ is the time delay, and that the estimation policy incorporates no residual related to signaling. We then construct an optimal control policy by exploiting the separation principle. |
2002.02887 | Boris Oreshkin N | Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, Yoshua Bengio | Meta-learning framework with applications to zero-shot time-series
forecasting | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can meta-learning discover generic ways of processing time series (TS) from a
diverse dataset so as to greatly improve generalization on new TS coming from
different datasets? This work provides positive evidence for this using a broad
meta-learning framework which we show subsumes many existing meta-learning
algorithms. Our theoretical analysis suggests that residual connections act as
a meta-learning adaptation mechanism, generating a subset of task-specific
parameters based on a given TS input, thus gradually expanding the expressive
power of the architecture on-the-fly. The same mechanism is shown via
linearization analysis to have the interpretation of a sequential update of the
final linear layer. Our empirical results on a wide range of data emphasize the
importance of the identified meta-learning mechanisms for successful zero-shot
univariate forecasting, suggesting that it is viable to train a neural network
on a source TS dataset and deploy it on a different target TS dataset without
retraining, resulting in performance that is at least as good as that of
state-of-practice univariate forecasting models.
| [
{
"created": "Fri, 7 Feb 2020 16:39:43 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Nov 2020 02:42:54 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Dec 2020 19:33:05 GMT",
"version": "v3"
}
] | 2020-12-16 | [
[
"Oreshkin",
"Boris N.",
""
],
[
"Carpov",
"Dmitri",
""
],
[
"Chapados",
"Nicolas",
""
],
[
"Bengio",
"Yoshua",
""
]
] | Can meta-learning discover generic ways of processing time series (TS) from a diverse dataset so as to greatly improve generalization on new TS coming from different datasets? This work provides positive evidence for this using a broad meta-learning framework which we show subsumes many existing meta-learning algorithms. Our theoretical analysis suggests that residual connections act as a meta-learning adaptation mechanism, generating a subset of task-specific parameters based on a given TS input, thus gradually expanding the expressive power of the architecture on-the-fly. The same mechanism is shown via linearization analysis to have the interpretation of a sequential update of the final linear layer. Our empirical results on a wide range of data emphasize the importance of the identified meta-learning mechanisms for successful zero-shot univariate forecasting, suggesting that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining, resulting in performance that is at least as good as that of state-of-practice univariate forecasting models. |
1410.2463 | Lutz Schr\"oder | Alexander Kurz, Stefan Milius, Dirk Pattinson, Lutz Schr\"oder | Simplified Coalgebraic Trace Equivalence | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of concurrent and reactive systems is based to a large degree on
various notions of process equivalence, ranging, on the so-called
linear-time/branching-time spectrum, from fine-grained equivalences such as
strong bisimilarity to coarse-grained ones such as trace equivalence. The
theory of concurrent systems at large has benefited from developments in
coalgebra, which has enabled uniform definitions and results that provide a
common umbrella for seemingly disparate system types including
non-deterministic, weighted, probabilistic, and game-based systems. In
particular, there has been some success in identifying a generic coalgebraic
theory of bisimulation that matches known definitions in many concrete cases.
The situation is currently somewhat less settled regarding trace equivalence. A
number of coalgebraic approaches to trace equivalence have been proposed, none
of which however cover all cases of interest; notably, all these approaches
depend on explicit termination, which is not always imposed in standard
systems, e.g. LTS. Here, we discuss a joint generalization of these approaches
based on embedding functors modelling various aspects of the system, such as
transition and branching, into a global monad; this approach appears to cover
all cases considered previously and some additional ones, notably standard LTS
and probabilistic labelled transition systems.
| [
{
"created": "Thu, 9 Oct 2014 13:54:41 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Oct 2014 10:46:36 GMT",
"version": "v2"
}
] | 2014-10-17 | [
[
"Kurz",
"Alexander",
""
],
[
"Milius",
"Stefan",
""
],
[
"Pattinson",
"Dirk",
""
],
[
"Schröder",
"Lutz",
""
]
] | The analysis of concurrent and reactive systems is based to a large degree on various notions of process equivalence, ranging, on the so-called linear-time/branching-time spectrum, from fine-grained equivalences such as strong bisimilarity to coarse-grained ones such as trace equivalence. The theory of concurrent systems at large has benefited from developments in coalgebra, which has enabled uniform definitions and results that provide a common umbrella for seemingly disparate system types including non-deterministic, weighted, probabilistic, and game-based systems. In particular, there has been some success in identifying a generic coalgebraic theory of bisimulation that matches known definitions in many concrete cases. The situation is currently somewhat less settled regarding trace equivalence. A number of coalgebraic approaches to trace equivalence have been proposed, none of which however cover all cases of interest; notably, all these approaches depend on explicit termination, which is not always imposed in standard systems, e.g. LTS. Here, we discuss a joint generalization of these approaches based on embedding functors modelling various aspects of the system, such as transition and branching, into a global monad; this approach appears to cover all cases considered previously and some additional ones, notably standard LTS and probabilistic labelled transition systems. |
1803.06973 | Lorenzo Posani | Lorenzo Posani, Alessio Paccoia, Marco Moschettini | The carbon footprint of distributed cloud storage | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ICT (Information Communication Technologies) ecosystem is estimated to be
responsible, as of today, for 10% of the total worldwide energy demand -
equivalent to the combined energy production of Germany and Japan. Cloud
storage, mainly operated through large and densely-packed data centers,
constitutes a non-negligible part of it. However, since the cloud is a
fast-inflating market and the energy efficiency of data centers is largely a matter of indifference to the public, its carbon footprint shows no signs of slowing down. In this paper, we analyze a novel paradigm for cloud storage
(implemented by Cubbit, http://cubbit.io), in which data are stored and
distributed over a network of p2p-interacting ARM-based single-board devices.
We compare Cubbit's distributed cloud to the traditional centralized solution
in terms of environmental footprint and energy efficiency. We demonstrate that,
compared to the centralized cloud, the distributed architecture of Cubbit has a
carbon footprint reduced of a 77% factor for data storage and of a 50% factor
for data transfers. These results provide an example of how a radical paradigm
shift in a large-reach technology can benefit both the final consumer as well
as our society as a whole.
| [
{
"created": "Mon, 19 Mar 2018 15:02:21 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Mar 2019 17:21:41 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Jun 2019 13:44:39 GMT",
"version": "v3"
}
] | 2019-06-27 | [
[
"Posani",
"Lorenzo",
""
],
[
"Paccoia",
"Alessio",
""
],
[
"Moschettini",
"Marco",
""
]
] | The ICT (Information Communication Technologies) ecosystem is estimated to be responsible, as of today, for 10% of the total worldwide energy demand - equivalent to the combined energy production of Germany and Japan. Cloud storage, mainly operated through large and densely-packed data centers, constitutes a non-negligible part of it. However, since the cloud is a fast-inflating market and the energy efficiency of data centers is largely a matter of indifference to the public, its carbon footprint shows no signs of slowing down. In this paper, we analyze a novel paradigm for cloud storage (implemented by Cubbit, http://cubbit.io), in which data are stored and distributed over a network of p2p-interacting ARM-based single-board devices. We compare Cubbit's distributed cloud to the traditional centralized solution in terms of environmental footprint and energy efficiency. We demonstrate that, compared to the centralized cloud, the distributed architecture of Cubbit has a carbon footprint reduced by 77% for data storage and by 50% for data transfers. These results provide an example of how a radical paradigm shift in a large-reach technology can benefit both the final consumer as well as our society as a whole. |
2207.09858 | Kyunghoon Hur | Kyunghoon Hur, Jungwoo Oh, Junu Kim, Jiyoun Kim, Min Jae Lee, Eunbyeol
Cho, Seong-Eun Moon, Young-Hak Kim, Louis Atallah, Edward Choi | GenHPF: General Healthcare Predictive Framework with Multi-task
Multi-source Learning | Accepted by IEEE Journal of Biomedical and Health Informatics | IEEE Journal of Biomedical and Health Informatics 2024 | 10.1109/JBHI.2023.3327951 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the remarkable progress in the development of predictive models for
healthcare, applying these algorithms on a large scale has been challenging.
Algorithms trained on a particular task, based on specific data formats
available in a set of medical records, tend to not generalize well to other
tasks or databases in which the data fields may differ. To address this
challenge, we propose General Healthcare Predictive Framework (GenHPF), which
is applicable to any EHR with minimal preprocessing for multiple prediction
tasks. GenHPF resolves heterogeneity in medical codes and schemas by converting
EHRs into a hierarchical textual representation while incorporating as many
features as possible. To evaluate the efficacy of GenHPF, we conduct multi-task
learning experiments with single-source and multi-source settings, on three
publicly available EHR datasets with different schemas for 12 clinically
meaningful prediction tasks. Our framework significantly outperforms baseline
models that utilize domain knowledge in multi-source learning, improving
average AUROC by 1.2%P in pooled learning and 2.6%P in transfer learning while
also showing comparable results when trained on a single EHR dataset.
Furthermore, we demonstrate that self-supervised pretraining using multi-source
datasets is effective when combined with GenHPF, resulting in a 0.6%P AUROC
improvement compared to models without pretraining. By eliminating the need for
preprocessing and feature engineering, we believe that this work offers a solid
framework for multi-task and multi-source learning that can be leveraged to
speed up the scaling and usage of predictive algorithms in healthcare.
| [
{
"created": "Wed, 20 Jul 2022 12:46:26 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jul 2022 10:27:02 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Nov 2023 11:47:19 GMT",
"version": "v3"
}
] | 2023-11-16 | [
[
"Hur",
"Kyunghoon",
""
],
[
"Oh",
"Jungwoo",
""
],
[
"Kim",
"Junu",
""
],
[
"Kim",
"Jiyoun",
""
],
[
"Lee",
"Min Jae",
""
],
[
"Cho",
"Eunbyeol",
""
],
[
"Moon",
"Seong-Eun",
""
],
[
"Kim",
"Young-Hak",
""
],
[
"Atallah",
"Louis",
""
],
[
"Choi",
"Edward",
""
]
] | Despite the remarkable progress in the development of predictive models for healthcare, applying these algorithms on a large scale has been challenging. Algorithms trained on a particular task, based on specific data formats available in a set of medical records, tend to not generalize well to other tasks or databases in which the data fields may differ. To address this challenge, we propose General Healthcare Predictive Framework (GenHPF), which is applicable to any EHR with minimal preprocessing for multiple prediction tasks. GenHPF resolves heterogeneity in medical codes and schemas by converting EHRs into a hierarchical textual representation while incorporating as many features as possible. To evaluate the efficacy of GenHPF, we conduct multi-task learning experiments with single-source and multi-source settings, on three publicly available EHR datasets with different schemas for 12 clinically meaningful prediction tasks. Our framework significantly outperforms baseline models that utilize domain knowledge in multi-source learning, improving average AUROC by 1.2%P in pooled learning and 2.6%P in transfer learning while also showing comparable results when trained on a single EHR dataset. Furthermore, we demonstrate that self-supervised pretraining using multi-source datasets is effective when combined with GenHPF, resulting in a 0.6%P AUROC improvement compared to models without pretraining. By eliminating the need for preprocessing and feature engineering, we believe that this work offers a solid framework for multi-task and multi-source learning that can be leveraged to speed up the scaling and usage of predictive algorithms in healthcare. |
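The hierarchical textual representation described in the record above can be illustrated by flattening EHR event rows into token sequences. The event fields, separator token, and wording below are invented for the example and do not reproduce GenHPF's exact serialization.

```python
# Illustrative flattening of heterogeneous EHR events into one textual
# sequence, in the spirit of a schema-agnostic representation.

events = [
    {"table": "labevents", "item": "creatinine", "value": 1.8, "unit": "mg/dL"},
    {"table": "prescriptions", "item": "heparin", "dose": 5000, "unit": "units"},
]

def event_to_text(e):
    # table name first, then field name/value pairs, all as plain tokens
    parts = [e["table"]] + [f"{k} {v}" for k, v in e.items() if k != "table"]
    return " ".join(parts)

patient_text = " [EVENT] ".join(event_to_text(e) for e in events)
print(patient_text)
```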
1010.4760 | Petr Jancar | Petr Jancar | A Short Decidability Proof for DPDA Language Equivalence via First-Order
Grammars | 28 pages, version 4 reworks the main proof and omits the
nondeterministic case where a problem was found by G. Senizergues | null | null | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main aim of the paper is to give a short self-contained proof of the
decidability of language equivalence for deterministic pushdown automata, which
is the famous problem solved by G. Senizergues, for which C. Stirling has
derived a primitive recursive complexity upper bound. The proof here is given
in the framework of first-order grammars, which seems to be particularly apt
for the aim. An appendix presents a modification of Stirling's approach,
yielding a complexity bound of the form tetr(2,g(n)) where tetr is the
(nonelementary) operator of iterated exponentiation (tetration) and g is an
elementary function of the input size.
| [
{
"created": "Fri, 22 Oct 2010 17:20:28 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Nov 2010 18:24:35 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Dec 2010 17:35:47 GMT",
"version": "v3"
},
{
"created": "Wed, 9 Mar 2011 11:08:23 GMT",
"version": "v4"
}
] | 2011-03-10 | [
[
"Jancar",
"Petr",
""
]
] | The main aim of the paper is to give a short self-contained proof of the decidability of language equivalence for deterministic pushdown automata, which is the famous problem solved by G. Senizergues, for which C. Stirling has derived a primitive recursive complexity upper bound. The proof here is given in the framework of first-order grammars, which seems to be particularly apt for the aim. An appendix presents a modification of Stirling's approach, yielding a complexity bound of the form tetr(2,g(n)) where tetr is the (nonelementary) operator of iterated exponentiation (tetration) and g is an elementary function of the input size. |
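For readers unfamiliar with the notation used above, tetr(2, g(n)) can be spelled out. The following is the standard recursive definition of tetration (a common convention, not a formula quoted from the paper):

```latex
% Tetration: an exponential tower of height k.
\operatorname{tetr}(a, 0) = 1, \qquad
\operatorname{tetr}(a, k+1) = a^{\operatorname{tetr}(a, k)},
\qquad\text{so}\qquad
\operatorname{tetr}(2, g(n)) = \underbrace{2^{2^{\cdot^{\cdot^{2}}}}}_{g(n)\ \text{twos}}.
```

Because the tower height grows with the input, such a bound is nonelementary, though still primitive recursive.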
2304.14334 | Suleyman Olcay Polat | Solomon Ubani, Suleyman Olcay Polat, Rodney Nielsen | ZeroShotDataAug: Generating and Augmenting Training Data with ChatGPT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we investigate the use of data obtained from prompting a large
generative language model, ChatGPT, to generate synthetic training data with
the aim of augmenting data in low resource scenarios. We show that with
appropriate task-specific ChatGPT prompts, we outperform the most popular
existing approaches for such data augmentation. Furthermore, we investigate
methodologies for evaluating the similarity of the augmented data generated
from ChatGPT with the aim of validating and assessing the quality of the data
generated.
| [
{
"created": "Thu, 27 Apr 2023 17:07:29 GMT",
"version": "v1"
}
] | 2023-04-28 | [
[
"Ubani",
"Solomon",
""
],
[
"Polat",
"Suleyman Olcay",
""
],
[
"Nielsen",
"Rodney",
""
]
] | In this paper, we investigate the use of data obtained from prompting a large generative language model, ChatGPT, to generate synthetic training data with the aim of augmenting data in low resource scenarios. We show that with appropriate task-specific ChatGPT prompts, we outperform the most popular existing approaches for such data augmentation. Furthermore, we investigate methodologies for evaluating the similarity of the augmented data generated from ChatGPT with the aim of validating and assessing the quality of the data generated. |
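The pipeline in the preceding abstract can be sketched with the openai Python client (version 1.x); the model name, prompt wording, task, and label set below are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch: generate synthetic labeled examples for a low-resource classifier
# by prompting a chat model, one label at a time.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_examples(task: str, label: str, n: int) -> list[str]:
    prompt = (f"Generate {n} short, diverse examples of {task} "
              f"with the label '{label}'. One example per line.")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

# Augment a sentiment dataset with ten synthetic examples per class.
augmented = {label: generate_examples("movie review sentences", label, 10)
             for label in ("positive", "negative")}
```

The generated lines would then be mixed with the scarce real examples before fine-tuning the downstream classifier.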
2005.03482 | Ao Liu | Ao Liu, Beibei Li, Tao Li, Pan Zhou, Rui wang | AN-GCN: An Anonymous Graph Convolutional Network Defense Against
Edge-Perturbing Attack | 15 pages, 11 figures | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have revealed the vulnerability of graph convolutional
networks (GCNs) to edge-perturbing attacks, such as maliciously inserting or
deleting graph edges. However, a theoretical proof of such vulnerability
remains a big challenge, and effective defense schemes are still open issues.
In this paper, we first generalize the formulation of edge-perturbing attacks
and strictly prove the vulnerability of GCNs to such attacks in node
classification tasks. Following this, an anonymous graph convolutional network,
named AN-GCN, is proposed to counter edge-perturbing attacks.
Specifically, we present a node localization theorem to demonstrate how the GCN
locates nodes during its training phase. In addition, we design a staggered
Gaussian noise based node position generator, and devise a spectral graph
convolution based discriminator in detecting the generated node positions.
Further, we give the optimization of the above generator and discriminator.
AN-GCN can classify nodes without taking their position as input. It is
demonstrated that the AN-GCN is secure against edge-perturbing attacks in node
classification tasks, as AN-GCN classifies nodes without the edge information
and thus makes it impossible for attackers to perturb edges anymore. Extensive
evaluations demonstrated the effectiveness of the general edge-perturbing
attack model in manipulating the classification results of the target nodes.
More importantly, the proposed AN-GCN can achieve 82.7% in node classification
accuracy without the edge-reading permission, which outperforms the
state-of-the-art GCN.
| [
{
"created": "Wed, 6 May 2020 08:15:24 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2020 11:14:44 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Apr 2021 13:44:19 GMT",
"version": "v3"
},
{
"created": "Fri, 7 May 2021 08:57:07 GMT",
"version": "v4"
},
{
"created": "Tue, 1 Jun 2021 03:17:58 GMT",
"version": "v5"
},
{
"created": "Thu, 17 Jun 2021 01:41:29 GMT",
"version": "v6"
}
] | 2021-06-18 | [
[
"Liu",
"Ao",
""
],
[
"Li",
"Beibei",
""
],
[
"Li",
"Tao",
""
],
[
"Zhou",
"Pan",
""
],
[
"wang",
"Rui",
""
]
] | Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks, such as maliciously inserting or deleting graph edges. However, a theoretical proof of such vulnerability remains a big challenge, and effective defense schemes are still open issues. In this paper, we first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks. Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks. Specifically, we present a node localization theorem to demonstrate how the GCN locates nodes during its training phase. In addition, we design a staggered Gaussian noise based node position generator, and devise a spectral graph convolution based discriminator in detecting the generated node positions. Further, we give the optimization of the above generator and discriminator. AN-GCN can classify nodes without taking their position as input. It is demonstrated that the AN-GCN is secure against edge-perturbing attacks in node classification tasks, as AN-GCN classifies nodes without the edge information and thus makes it impossible for attackers to perturb edges anymore. Extensive evaluations demonstrated the effectiveness of the general edge-perturbing attack model in manipulating the classification results of the target nodes. More importantly, the proposed AN-GCN can achieve 82.7% in node classification accuracy without the edge-reading permission, which outperforms the state-of-the-art GCN. |
2010.12055 | Mehdi Rezaee | Mehdi Rezaee and Francis Ferraro | A Discrete Variational Recurrent Topic Model without the
Reparametrization Trick | To appear in Neural Information Processing Systems (NeurIPS 2020) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show how to learn a neural topic model with discrete random
variables---one that explicitly models each word's assigned topic---using
neural variational inference that does not rely on stochastic backpropagation
to handle the discrete variables. The model we utilize combines the expressive
power of neural methods for representing sequences of text with the topic
model's ability to capture global, thematic coherence. Using neural variational
inference, we show improved perplexity and document understanding across
multiple corpora. We examine the effect of prior parameters both on the model
and variational parameters and demonstrate how our approach can compete and
surpass a popular topic model implementation on an automatic measure of topic
quality.
| [
{
"created": "Thu, 22 Oct 2020 20:53:44 GMT",
"version": "v1"
}
] | 2020-10-26 | [
[
"Rezaee",
"Mehdi",
""
],
[
"Ferraro",
"Francis",
""
]
] | We show how to learn a neural topic model with discrete random variables---one that explicitly models each word's assigned topic---using neural variational inference that does not rely on stochastic backpropagation to handle the discrete variables. The model we utilize combines the expressive power of neural methods for representing sequences of text with the topic model's ability to capture global, thematic coherence. Using neural variational inference, we show improved perplexity and document understanding across multiple corpora. We examine the effect of prior parameters both on the model and variational parameters and demonstrate how our approach can compete and surpass a popular topic model implementation on an automatic measure of topic quality. |
2004.02421 | Deng Cai | Zibo Lin, Deng Cai, Yan Wang, Xiaojiang Liu, Hai-Tao Zheng, Shuming
Shi | The World is Not Binary: Learning to Rank with Grayscale Data for
Dialogue Response Selection | EMNLP2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Response selection plays a vital role in building retrieval-based
conversation systems. Although response selection is naturally a
learning-to-rank problem, most prior works take a point-wise view and train
binary classifiers for this task: each response candidate is labeled either
relevant (one) or irrelevant (zero). On the one hand, this formalization can be
sub-optimal due to its ignorance of the diversity of response quality. On the
other hand, annotating grayscale data for learning-to-rank can be prohibitively
expensive and challenging. In this work, we show that grayscale data can be
automatically constructed without human effort. Our method employs
off-the-shelf response retrieval models and response generation models as
automatic grayscale data generators. With the constructed grayscale data, we
propose multi-level ranking objectives for training, which can (1) teach a
matching model to capture more fine-grained context-response relevance
difference and (2) reduce the train-test discrepancy in terms of distractor
strength. Our method is simple, effective, and universal. Experiments on three
benchmark datasets and four state-of-the-art matching models show that the
proposed approach brings significant and consistent performance improvements.
| [
{
"created": "Mon, 6 Apr 2020 06:34:54 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Apr 2020 02:39:39 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Sep 2020 14:08:23 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Oct 2020 07:08:07 GMT",
"version": "v4"
}
] | 2020-10-14 | [
[
"Lin",
"Zibo",
""
],
[
"Cai",
"Deng",
""
],
[
"Wang",
"Yan",
""
],
[
"Liu",
"Xiaojiang",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Shi",
"Shuming",
""
]
] | Response selection plays a vital role in building retrieval-based conversation systems. Although response selection is naturally a learning-to-rank problem, most prior works take a point-wise view and train binary classifiers for this task: each response candidate is labeled either relevant (one) or irrelevant (zero). On the one hand, this formalization can be sub-optimal due to its ignorance of the diversity of response quality. On the other hand, annotating grayscale data for learning-to-rank can be prohibitively expensive and challenging. In this work, we show that grayscale data can be automatically constructed without human effort. Our method employs off-the-shelf response retrieval models and response generation models as automatic grayscale data generators. With the constructed grayscale data, we propose multi-level ranking objectives for training, which can (1) teach a matching model to capture more fine-grained context-response relevance difference and (2) reduce the train-test discrepancy in terms of distractor strength. Our method is simple, effective, and universal. Experiments on three benchmark datasets and four state-of-the-art matching models show that the proposed approach brings significant and consistent performance improvements. |
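One way to picture the multi-level ranking objectives mentioned in the abstract above is a chain of pairwise margin losses over candidates ordered by grayscale level, e.g. ground truth > retrieved > generated > random. The following PyTorch sketch illustrates that reading; the margin value and the four-level ordering are assumptions, not the paper's exact objective.

```python
# Sketch: multi-level margin ranking loss over grayscale response candidates.
import torch
import torch.nn.functional as F

def multilevel_ranking_loss(scores: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """scores: (batch, levels), columns ordered from most to least relevant,
    e.g. [ground truth, retrieved, generated, random distractor]."""
    loss = scores.new_zeros(())
    for hi in range(scores.size(1) - 1):
        for lo in range(hi + 1, scores.size(1)):
            # Every higher-grade candidate should outscore every lower one.
            loss = loss + F.relu(margin - (scores[:, hi] - scores[:, lo])).mean()
    return loss

scores = torch.randn(8, 4, requires_grad=True)  # scores from a matching model
multilevel_ranking_loss(scores).backward()
```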
2208.10280 | Taahir Patel | Taahir Aiyoob Patel, Clement N. Nyirenda | A Twitter-Driven Deep Learning Mechanism for the Determination of
Vehicle Hijacking Spots in Cities | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Vehicle hijacking is one of the leading crimes in many cities. For instance,
in South Africa, drivers must constantly remain vigilant on the road in order
to ensure that they do not become hijacking victims. This work is aimed at
developing a map depicting hijacking spots in a city by using Twitter data.
Tweets, which include the keyword "hijacking", are obtained for the designated
city of Cape Town in this work. In order to extract relevant tweets, these
tweets are analyzed by using the following machine learning techniques: 1) a
Multi-layer Feed-forward Neural Network (MLFNN); 2) a Convolutional Neural
Network (CNN); and 3) Bidirectional Encoder Representations from Transformers
(BERT). Through training and testing, the CNN achieved an accuracy of 99.66%,
while MLFNN and BERT achieved accuracies of 98.99% and 73.99% respectively. In
terms of
Recall, Precision and F1-score, CNN also achieved the best results. Therefore,
CNN was used for the identification of relevant tweets. The relevant reports
that it generates are visually presented on a points map of the City of Cape
Town. This work used a small dataset of 426 tweets. In future work, the use of
evolutionary computation will be explored to optimize the deep
learning models. A mobile application is under development to make this
information usable by the general public.
| [
{
"created": "Thu, 11 Aug 2022 21:56:34 GMT",
"version": "v1"
}
] | 2022-08-23 | [
[
"Patel",
"Taahir Aiyoob",
""
],
[
"Nyirenda",
"Clement N.",
""
]
] | Vehicle hijacking is one of the leading crimes in many cities. For instance, in South Africa, drivers must constantly remain vigilant on the road in order to ensure that they do not become hijacking victims. This work is aimed at developing a map depicting hijacking spots in a city by using Twitter data. Tweets, which include the keyword "hijacking", are obtained for the designated city of Cape Town in this work. In order to extract relevant tweets, these tweets are analyzed by using the following machine learning techniques: 1) a Multi-layer Feed-forward Neural Network (MLFNN); 2) a Convolutional Neural Network (CNN); and 3) Bidirectional Encoder Representations from Transformers (BERT). Through training and testing, the CNN achieved an accuracy of 99.66%, while MLFNN and BERT achieved accuracies of 98.99% and 73.99% respectively. In terms of Recall, Precision and F1-score, CNN also achieved the best results. Therefore, CNN was used for the identification of relevant tweets. The relevant reports that it generates are visually presented on a points map of the City of Cape Town. This work used a small dataset of 426 tweets. In future work, the use of evolutionary computation will be explored to optimize the deep learning models. A mobile application is under development to make this information usable by the general public. |
2207.07212 | Ahmad Bdeir | Ahmad Bdeir, Jonas K. Falkner, Lars Schmidt-Thieme | Attention, Filling in The Gaps for Generalization in Routing Problems | Accepted at ECML-PKDD 2022 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine Learning (ML) methods have become a useful tool for tackling vehicle
routing problems, either in combination with popular heuristics or as
standalone models. However, current methods suffer from poor generalization
when tackling problems of different sizes or different distributions. As a
result, ML in vehicle routing has witnessed an expansion phase with new
methodologies being created for particular problem instances that become
infeasible at larger problem sizes.
This paper aims at encouraging the consolidation of the field through
understanding and improving existing models, namely the attention model
by Kool et al. We identify two discrepancy categories for VRP generalization.
The first is based on the differences that are inherent to the problems
themselves, and the second relates to architectural weaknesses that limit the
model's ability to generalize. Our contribution is threefold: We first
target model discrepancies by adapting the Kool et al. method and its loss
function for Sparse Dynamic Attention based on the alpha-entmax activation. We
then target inherent differences through the use of a mixed instance training
method that has been shown to outperform single instance training in certain
scenarios. Finally, we introduce a framework for inference level data
augmentation that improves performance by leveraging the model's lack of
invariance to rotation and dilation changes.
| [
{
"created": "Thu, 14 Jul 2022 21:36:51 GMT",
"version": "v1"
}
] | 2022-07-18 | [
[
"Bdeir",
"Ahmad",
""
],
[
"Falkner",
"Jonas K.",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | Machine Learning (ML) methods have become a useful tool for tackling vehicle routing problems, either in combination with popular heuristics or as standalone models. However, current methods suffer from poor generalization when tackling problems of different sizes or different distributions. As a result, ML in vehicle routing has witnessed an expansion phase with new methodologies being created for particular problem instances that become infeasible at larger problem sizes. This paper aims at encouraging the consolidation of the field through understanding and improving existing models, namely the attention model by Kool et al. We identify two discrepancy categories for VRP generalization. The first is based on the differences that are inherent to the problems themselves, and the second relates to architectural weaknesses that limit the model's ability to generalize. Our contribution is threefold: We first target model discrepancies by adapting the Kool et al. method and its loss function for Sparse Dynamic Attention based on the alpha-entmax activation. We then target inherent differences through the use of a mixed instance training method that has been shown to outperform single instance training in certain scenarios. Finally, we introduce a framework for inference level data augmentation that improves performance by leveraging the model's lack of invariance to rotation and dilation changes. |
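The inference-level data augmentation described above exploits the model's lack of invariance to rotation and dilation: solve several transformed copies of an instance and keep the best tour, scored on the original coordinates. A minimal numpy sketch follows; the greedy nearest-neighbor stand-in for the learned model and the transform grid are demo assumptions (a distance-based heuristic is itself invariant to these transforms, whereas a learned attention model is not, which is exactly what the augmentation exploits).

```python
# Sketch: inference-time augmentation for a coordinate-based routing model.
import numpy as np

def nn_tour(coords):
    # Stand-in "model": greedy nearest-neighbor tour (demo assumption only).
    left, tour = list(range(1, len(coords))), [0]
    while left:
        nxt = min(left, key=lambda j: np.linalg.norm(coords[j] - coords[tour[-1]]))
        tour.append(nxt); left.remove(nxt)
    return tour

def tour_length(coords, tour):
    c = coords[tour + [tour[0]]]
    return np.linalg.norm(np.diff(c, axis=0), axis=1).sum()

def augmented_solve(coords, solve, n_rot=8, dilations=(0.8, 1.0, 1.2)):
    best_tour, best_len = None, np.inf
    for k in range(n_rot):
        t = 2 * np.pi * k / n_rot
        rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        for s in dilations:
            tour = solve(coords @ rot.T * s)    # model sees a transformed copy
            length = tour_length(coords, tour)  # but is scored on the original
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len

coords = np.random.default_rng(0).uniform(size=(20, 2))
print(augmented_solve(coords, nn_tour)[1])
```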
0809.4108 | Karama Kanoun | Ana E. Rugina (LAAS), Karama Kanoun (LAAS), Mohamed Kaaniche (LAAS) | The ADAPT Tool: From AADL Architectural Models to Stochastic Petri Nets
through Model Transformation | 6 pages | 7th European Dependable Computing Conference (EDCC), Kaunas :
Lituanie (2008) | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ADAPT is a tool that aims at easing the task of evaluating dependability
measures in the context of modern model driven engineering processes based on
AADL (Architecture Analysis and Design Language). Hence, its input is an AADL
architectural model annotated with dependability-related information. Its
output is a dependability evaluation model in the form of a Generalized
Stochastic Petri Net (GSPN). The latter can be processed by existing
dependability evaluation tools, to compute quantitative measures such as
reliability, availability, etc. ADAPT interfaces OSATE (the Open Source AADL
Tool Environment) on the AADL side and SURF-2 on the dependability evaluation
side. In addition, ADAPT provides the GSPN in XML/XMI format, which represents
a gateway to other dependability evaluation tools, as the processing techniques
for XML files allow it to be easily converted to a tool-specific GSPN.
| [
{
"created": "Wed, 24 Sep 2008 07:26:30 GMT",
"version": "v1"
}
] | 2008-09-25 | [
[
"Rugina",
"Ana E.",
"",
"LAAS"
],
[
"Kanoun",
"Karama",
"",
"LAAS"
],
[
"Kaaniche",
"Mohamed",
"",
"LAAS"
]
] | ADAPT is a tool that aims at easing the task of evaluating dependability measures in the context of modern model driven engineering processes based on AADL (Architecture Analysis and Design Language). Hence, its input is an AADL architectural model annotated with dependability-related information. Its output is a dependability evaluation model in the form of a Generalized Stochastic Petri Net (GSPN). The latter can be processed by existing dependability evaluation tools, to compute quantitative measures such as reliability, availability, etc. ADAPT interfaces OSATE (the Open Source AADL Tool Environment) on the AADL side and SURF-2 on the dependability evaluation side. In addition, ADAPT provides the GSPN in XML/XMI format, which represents a gateway to other dependability evaluation tools, as the processing techniques for XML files allow it to be easily converted to a tool-specific GSPN. |
2110.08605 | Y. X. Rachel Wang | Lijia Wang, Xin Tong and Y.X. Rachel Wang | Statistics in everyone's backyard: an impact study via citation network
analysis | null | null | null | null | cs.DL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing availability of curated citation data provides a wealth of
resources for analyzing and understanding the intellectual influence of
scientific publications. In the field of statistics, current studies of
citation data have mostly focused on the interactions between statistical
journals and papers, limiting the measure of influence to mainly within
statistics itself. In this paper, we take the first step towards understanding
the impact statistics has made on other scientific fields in the era of Big
Data. By collecting comprehensive bibliometric data from the Web of Science
database for selected statistical journals, we investigate the citation trends
and compositions of citing fields over time to show that their diversity has
been increasing. Furthermore, we use the local clustering technique involving
personalized PageRank with conductance for size selection to find the most
relevant statistical research area for a given external topic of interest. We
provide theoretical guarantees for the procedure and, through a number of case
studies, show the results from our citation data align well with our knowledge
and intuition about these external topics. Overall, we have found that the
statistical theory and methods recently invented by the statistics community
have made an increasing impact on other scientific fields.
| [
{
"created": "Sat, 16 Oct 2021 16:24:05 GMT",
"version": "v1"
}
] | 2021-10-19 | [
[
"Wang",
"Lijia",
""
],
[
"Tong",
"Xin",
""
],
[
"Wang",
"Y. X. Rachel",
""
]
] | The increasing availability of curated citation data provides a wealth of resources for analyzing and understanding the intellectual influence of scientific publications. In the field of statistics, current studies of citation data have mostly focused on the interactions between statistical journals and papers, limiting the measure of influence to mainly within statistics itself. In this paper, we take the first step towards understanding the impact statistics has made on other scientific fields in the era of Big Data. By collecting comprehensive bibliometric data from the Web of Science database for selected statistical journals, we investigate the citation trends and compositions of citing fields over time to show that their diversity has been increasing. Furthermore, we use the local clustering technique involving personalized PageRank with conductance for size selection to find the most relevant statistical research area for a given external topic of interest. We provide theoretical guarantees for the procedure and, through a number of case studies, show the results from our citation data align well with our knowledge and intuition about these external topics. Overall, we have found that the statistical theory and methods recently invented by the statistics community have made an increasing impact on other scientific fields. |
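The local clustering step named in the abstract above, personalized PageRank with conductance for size selection, can be sketched directly: power-iterate the PageRank vector from a seed node, order nodes by degree-normalized score, and keep the prefix with the lowest conductance. The toy graph, teleport probability, and iteration count are assumptions.

```python
# Sketch: personalized PageRank plus a conductance sweep for size selection.
import numpy as np

def ppr_conductance_cluster(A, seed, alpha=0.15, iters=200):
    """A: symmetric 0/1 adjacency matrix with no isolated nodes; seed: node id."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = A / deg[:, None]                      # row-stochastic random walk
    s = np.zeros(n); s[seed] = 1.0
    pi = s.copy()
    for _ in range(iters):                    # personalized PageRank iteration
        pi = alpha * s + (1 - alpha) * pi @ P
    order = np.argsort(-pi / deg)             # degree-normalized sweep order
    best_set, best_cond, vol_total = None, np.inf, deg.sum()
    for k in range(1, n):
        S = order[:k]
        vol = deg[S].sum()
        cut = vol - A[np.ix_(S, S)].sum()     # edge endpoints leaving S
        cond = cut / min(vol, vol_total - vol)
        if cond < best_cond:
            best_set, best_cond = S, cond
    return best_set, best_cond

A = np.array([[0, 1, 1, 0, 0], [1, 0, 1, 0, 0], [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1], [0, 0, 0, 1, 0]], dtype=float)
print(ppr_conductance_cluster(A, seed=0))     # recovers the triangle {0, 1, 2}
```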
1302.1294 | Firas Ajil Jassim | Firas Ajil Jassim and Fawzi Hasan Altaany | Image Interpolation Using Kriging Technique for Spatial Data | 6 pages, 8 figures, 3 tables | Canadian Journal on Image Processing and Computer Vision, Vol. 4
No. 2, February 2013 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Image interpolation has been performed extensively using customary
interpolation techniques. Recently, the Kriging technique has been widely
implemented in the simulation area and in geostatistics for prediction. In this
article, the Kriging technique was used instead of the classical interpolation
methods to predict the unknown points in the digital image array. The
efficiency of the proposed technique was proven using the PSNR and compared
with the traditional interpolation techniques. The results showed that the
Kriging technique is almost as accurate as cubic interpolation, and on some
images Kriging has higher accuracy. A miscellaneous set of test images has been
used to consolidate the proposed technique.
| [
{
"created": "Wed, 6 Feb 2013 09:22:58 GMT",
"version": "v1"
}
] | 2013-02-07 | [
[
"Jassim",
"Firas Ajil",
""
],
[
"Altaany",
"Fawzi Hasan",
""
]
] | Image interpolation has been performed extensively using customary interpolation techniques. Recently, the Kriging technique has been widely implemented in the simulation area and in geostatistics for prediction. In this article, the Kriging technique was used instead of the classical interpolation methods to predict the unknown points in the digital image array. The efficiency of the proposed technique was proven using the PSNR and compared with the traditional interpolation techniques. The results showed that the Kriging technique is almost as accurate as cubic interpolation, and on some images Kriging has higher accuracy. A miscellaneous set of test images has been used to consolidate the proposed technique. |
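To make the Kriging predictor concrete, here is a minimal ordinary kriging sketch for a single unknown pixel: solve the standard kriging system built from a variogram. The exponential variogram, its parameters, and the toy pixel values are illustrative assumptions, not settings from the article.

```python
# Sketch: ordinary kriging prediction of an unknown pixel from known samples.
import numpy as np

def variogram(h, sill=1.0, rng=10.0):
    # Exponential variogram model (an assumption for illustration).
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(pts, vals, x0):
    """pts: (n, 2) known pixel coordinates, vals: (n,) intensities, x0: target."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1)); K[:n, :n] = variogram(d); K[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = variogram(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(K, rhs)               # kriging weights + Lagrange term
    return w[:n] @ vals                       # unbiased weighted prediction

pts = np.array([[0, 0], [0, 2], [2, 0], [2, 2]], dtype=float)
vals = np.array([10.0, 12.0, 11.0, 14.0])
print(ordinary_kriging(pts, vals, np.array([1.0, 1.0])))  # center estimate
```

Unlike fixed-kernel bilinear or cubic interpolation, the weights here adapt to the spatial correlation structure encoded in the variogram.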
2103.07765 | David Noever | David A. Noever, Samantha E. Miller Noever | Image Classifiers for Network Intrusions | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | This research recasts the network attack dataset from UNSW-NB15 as an
intrusion detection problem in image space. Using one-hot-encodings, the
resulting grayscale thumbnails provide a quarter-million examples for deep
learning algorithms. Applying MobileNetV2's convolutional neural network
architecture, the work demonstrates a 97% accuracy in distinguishing normal and
attack traffic. Further class refinements to 9 individual attack families
(exploits, worms, shellcodes) show an overall 56% accuracy. Using feature
importance rank, a random forest solution on subsets shows the most important
source-destination factors and the least important ones as mainly obscure
protocols. The dataset is available on Kaggle.
| [
{
"created": "Sat, 13 Mar 2021 18:09:08 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Noever",
"David A.",
""
],
[
"Noever",
"Samantha E. Miller",
""
]
] | This research recasts the network attack dataset from UNSW-NB15 as an intrusion detection problem in image space. Using one-hot-encodings, the resulting grayscale thumbnails provide a quarter-million examples for deep learning algorithms. Applying MobileNetV2's convolutional neural network architecture, the work demonstrates a 97% accuracy in distinguishing normal and attack traffic. Further class refinements to 9 individual attack families (exploits, worms, shellcodes) show an overall 56% accuracy. Using feature importance rank, a random forest solution on subsets shows the most important source-destination factors and the least important ones as mainly obscure protocols. The dataset is available on Kaggle. |
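The recasting step described above can be sketched in a few lines: one-hot-encode each tabular flow record and reshape the resulting binary vector into a square grayscale thumbnail for a CNN. The toy columns and zero-padding to a square are assumptions; the paper's exact encoding may differ.

```python
# Sketch: turn tabular network-flow records into grayscale thumbnails.
import numpy as np
import pandas as pd

def rows_to_thumbnails(df: pd.DataFrame) -> np.ndarray:
    onehot = pd.get_dummies(df.astype(str)).to_numpy(dtype=np.uint8)
    side = int(np.ceil(np.sqrt(onehot.shape[1])))     # square image side
    padded = np.zeros((len(onehot), side * side), dtype=np.uint8)
    padded[:, :onehot.shape[1]] = onehot
    return padded.reshape(-1, side, side) * 255       # 0/255 grayscale pixels

df = pd.DataFrame({"proto": ["tcp", "udp", "tcp"],
                   "state": ["FIN", "CON", "INT"],
                   "service": ["http", "dns", "-"]})
print(rows_to_thumbnails(df).shape)  # (3, side, side), ready for a CNN
```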
1707.06209 | Johannes Welbl | Johannes Welbl, Nelson F. Liu, Matt Gardner | Crowdsourcing Multiple Choice Science Questions | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | null | null | cs.HC cs.AI cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams.
| [
{
"created": "Wed, 19 Jul 2017 17:28:46 GMT",
"version": "v1"
}
] | 2017-07-20 | [
[
"Welbl",
"Johannes",
""
],
[
"Liu",
"Nelson F.",
""
],
[
"Gardner",
"Matt",
""
]
] | We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams. |
1708.05688 | Kevin Jasberg | Kevin Jasberg and Sergej Sizov | Human Uncertainty and Ranking Error -- The Secret of Successful
Evaluation in Predictive Data Mining | null | null | null | null | cs.HC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most crucial issues in data mining is to model human behaviour in
order to provide personalisation, adaptation and recommendation. This usually
involves implicit or explicit knowledge, either by observing user interactions,
or by asking users directly. But these sources of information are always
subject to the volatility of human decisions, making utilised data uncertain to
a particular extent. In this contribution, we elaborate on the impact of this
human uncertainty when it comes to comparative assessments of different data
mining approaches. In particular, we reveal two problems: (1) biasing effects
on various metrics of model-based prediction and (2) the propagation of
uncertainty and the error probabilities it thus induces for algorithm rankings.
For this purpose, we introduce a probabilistic view and prove the existence of
those problems mathematically, as well as provide possible solution strategies.
We exemplify our theory mainly in the context of recommender systems along with
the metric RMSE as a prominent example of precision quality measures.
| [
{
"created": "Thu, 17 Aug 2017 12:44:08 GMT",
"version": "v1"
}
] | 2017-08-21 | [
[
"Jasberg",
"Kevin",
""
],
[
"Sizov",
"Sergej",
""
]
] | One of the most crucial issues in data mining is to model human behaviour in order to provide personalisation, adaptation and recommendation. This usually involves implicit or explicit knowledge, either by observing user interactions, or by asking users directly. But these sources of information are always subject to the volatility of human decisions, making utilised data uncertain to a particular extent. In this contribution, we elaborate on the impact of this human uncertainty when it comes to comparative assessments of different data mining approaches. In particular, we reveal two problems: (1) biasing effects on various metrics of model-based prediction and (2) the propagation of uncertainty and the error probabilities it thus induces for algorithm rankings. For this purpose, we introduce a probabilistic view and prove the existence of those problems mathematically, as well as provide possible solution strategies. We exemplify our theory mainly in the context of recommender systems along with the metric RMSE as a prominent example of precision quality measures. |
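Problem (2) above, the ranking error induced by human uncertainty, can be reproduced with a short Monte Carlo experiment: when observed ratings are volatile re-draws around latent opinions, the RMSE ranking of two near-equal algorithms flips with non-trivial probability. All distributions and quality gaps below are illustrative assumptions.

```python
# Sketch: Monte Carlo estimate of the probability that human rating noise
# flips the RMSE ranking of two prediction algorithms of similar quality.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_trials, sigma = 2000, 5000, 1.0    # rating volatility (assumed)
true = rng.uniform(1, 5, n_items)             # latent "true" user opinions
pred_a = true + rng.normal(0, 0.48, n_items)  # algorithm A: slightly better
pred_b = true + rng.normal(0, 0.50, n_items)  # algorithm B: slightly worse

flips = 0
for _ in range(n_trials):
    observed = true + rng.normal(0, sigma, n_items)   # volatile human ratings
    rmse_a = np.sqrt(np.mean((observed - pred_a) ** 2))
    rmse_b = np.sqrt(np.mean((observed - pred_b) ** 2))
    flips += rmse_a > rmse_b                  # B looks better by chance
print(f"estimated ranking error probability: {flips / n_trials:.2%}")
```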
2203.03971 | Pascal Mettes | Pascal Mettes | Universal Prototype Transport for Zero-Shot Action Recognition and
Localization | null | International Journal of Computer Vision (2023) | 10.1007/s11263-023-01846-2 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work addresses the problem of recognizing action categories in videos
when no training examples are available. The current state-of-the-art enables
such a zero-shot recognition by learning universal mappings from videos to a
semantic space, either trained on large-scale seen actions or on objects. While
effective, we find that universal action and object mappings are biased to
specific regions in the semantic space. These biases lead to a fundamental
problem: many unseen action categories are simply never inferred during
testing. For example on UCF-101, a quarter of the unseen actions are out of
reach with a state-of-the-art universal action model. To that end, this paper
introduces universal prototype transport for zero-shot action recognition. The
main idea is to re-position the semantic prototypes of unseen actions by
matching them to the distribution of all test videos. For universal action
models, we propose to match distributions through a hyperspherical optimal
transport from unseen action prototypes to the set of all projected test
videos. The resulting transport couplings in turn determine the target
prototype for each unseen action. Rather than directly using the target
prototype as final result, we re-position unseen action prototypes along the
geodesic spanned by the original and target prototypes as a form of semantic
regularization. For universal object models, we outline a variant that defines
target prototypes based on an optimal transport between unseen action
prototypes and object prototypes. Empirically, we show that universal prototype
transport diminishes the biased selection of unseen action prototypes and
boosts both universal action and object models for zero-shot classification and
spatio-temporal localization.
| [
{
"created": "Tue, 8 Mar 2022 09:58:40 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Aug 2023 09:21:58 GMT",
"version": "v2"
}
] | 2023-08-02 | [
[
"Mettes",
"Pascal",
""
]
] | This work addresses the problem of recognizing action categories in videos when no training examples are available. The current state-of-the-art enables such a zero-shot recognition by learning universal mappings from videos to a semantic space, either trained on large-scale seen actions or on objects. While effective, we find that universal action and object mappings are biased to specific regions in the semantic space. These biases lead to a fundamental problem: many unseen action categories are simply never inferred during testing. For example on UCF-101, a quarter of the unseen actions are out of reach with a state-of-the-art universal action model. To that end, this paper introduces universal prototype transport for zero-shot action recognition. The main idea is to re-position the semantic prototypes of unseen actions by matching them to the distribution of all test videos. For universal action models, we propose to match distributions through a hyperspherical optimal transport from unseen action prototypes to the set of all projected test videos. The resulting transport couplings in turn determine the target prototype for each unseen action. Rather than directly using the target prototype as final result, we re-position unseen action prototypes along the geodesic spanned by the original and target prototypes as a form of semantic regularization. For universal object models, we outline a variant that defines target prototypes based on an optimal transport between unseen action prototypes and object prototypes. Empirically, we show that universal prototype transport diminishes the biased selection of unseen action prototypes and boosts both universal action and object models for zero-shot classification and spatio-temporal localization. |
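A hedged sketch of the two mechanics named in the abstract above: entropy-regularized optimal transport (Sinkhorn iterations) from unit-norm action prototypes to projected test videos, then re-positioning each prototype along the spherical geodesic (slerp) toward its barycentric transport target. The cosine cost, epsilon, and interpolation factor are assumptions, not the paper's settings.

```python
# Sketch: Sinkhorn transport of prototypes onto test-video embeddings, then
# geodesic (slerp) re-positioning on the unit hypersphere.
import numpy as np

def sinkhorn(C, eps=0.1, iters=200):
    K = np.exp(-C / eps)
    a = np.full(C.shape[0], 1 / C.shape[0]); b = np.full(C.shape[1], 1 / C.shape[1])
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v); v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]            # transport plan

def slerp(p, q, t):
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if theta < 1e-8:
        return p
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 32)); protos /= np.linalg.norm(protos, axis=1, keepdims=True)
videos = rng.normal(size=(100, 32)); videos /= np.linalg.norm(videos, axis=1, keepdims=True)

P = sinkhorn(1.0 - protos @ videos.T)             # cosine cost on the sphere
targets = P @ videos / P.sum(axis=1, keepdims=True)
targets /= np.linalg.norm(targets, axis=1, keepdims=True)
new_protos = np.stack([slerp(p, q, 0.5) for p, q in zip(protos, targets)])
```

Interpolating only part-way along the geodesic plays the role of the semantic regularization mentioned above.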
2204.02553 | Umar Khalid | Umar Khalid, Ashkan Esmaeili, Nazmul Karim, Nazanin Rahnavard | RODD: A Self-Supervised Approach for Robust Out-of-Distribution
Detection | Accepted in CVPR Art of Robustness Workshop Proceedings | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have addressed the concern of detecting and rejecting the
out-of-distribution (OOD) samples as a major challenge in the safe deployment
of deep learning (DL) models. It is desired that the DL model should only be
confident about the in-distribution (ID) data which reinforces the driving
principle of the OOD detection. In this paper, we propose a simple yet
effective generalized OOD detection method independent of out-of-distribution
datasets. Our approach relies on self-supervised feature learning of the
training samples, where the embeddings lie on a compact low-dimensional space.
Motivated by the recent studies that show self-supervised adversarial
contrastive learning helps robustify the model, we empirically show that a
pre-trained model with self-supervised contrastive learning yields a better
model for uni-dimensional feature learning in the latent space. The method
proposed in this work, referred to as RODD, outperforms SOTA detection
methods on an extensive suite of benchmark datasets for OOD detection tasks.
On the CIFAR-100 benchmarks, RODD achieves a 26.97 $\%$ lower false-positive
rate (FPR@95) compared to SOTA methods.
| [
{
"created": "Wed, 6 Apr 2022 03:05:58 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Apr 2022 15:19:17 GMT",
"version": "v2"
},
{
"created": "Sat, 15 Oct 2022 00:41:28 GMT",
"version": "v3"
}
] | 2022-10-18 | [
[
"Khalid",
"Umar",
""
],
[
"Esmaeili",
"Ashkan",
""
],
[
"Karim",
"Nazmul",
""
],
[
"Rahnavard",
"Nazanin",
""
]
] | Recent studies have addressed the concern of detecting and rejecting the out-of-distribution (OOD) samples as a major challenge in the safe deployment of deep learning (DL) models. It is desired that the DL model should only be confident about the in-distribution (ID) data which reinforces the driving principle of the OOD detection. In this paper, we propose a simple yet effective generalized OOD detection method independent of out-of-distribution datasets. Our approach relies on self-supervised feature learning of the training samples, where the embeddings lie on a compact low-dimensional space. Motivated by the recent studies that show self-supervised adversarial contrastive learning helps robustify the model, we empirically show that a pre-trained model with self-supervised contrastive learning yields a better model for uni-dimensional feature learning in the latent space. The method proposed in this work, referred to as RODD, outperforms SOTA detection methods on an extensive suite of benchmark datasets for OOD detection tasks. On the CIFAR-100 benchmarks, RODD achieves a 26.97 $\%$ lower false-positive rate (FPR@95) compared to SOTA methods. |
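One illustrative reading of the uni-dimensional feature idea above: if each in-distribution class concentrates along a single direction in embedding space, a test sample can be scored by its maximum cosine similarity to per-class first singular vectors, and low scores flagged as OOD. This sketch is offered for intuition only and is not the authors' actual procedure.

```python
# Sketch: OOD score = max |cosine similarity| to per-class principal directions.
import numpy as np

def class_directions(feats_by_class):
    dirs = []
    for F in feats_by_class:                  # F: (n_c, d) ID embeddings
        _, _, Vt = np.linalg.svd(F, full_matrices=False)
        dirs.append(Vt[0])                    # first right singular vector
    return np.stack(dirs)                     # (n_classes, d), unit norm

def ood_score(x, dirs):
    x = x / np.linalg.norm(x)
    return float(np.abs(dirs @ x).max())      # low value suggests OOD

rng = np.random.default_rng(0)
feats = [rng.normal(size=(200, 16)) + 5 * rng.normal(size=16) for _ in range(3)]
dirs = class_directions(feats)
print(ood_score(rng.normal(size=16), dirs))   # an unrelated sample scores low
```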
2403.18367 | Florian Freye | Florian Freye and Jie Lou and Christian Lanius and Tobias Gemmeke | Merits of Time-Domain Computing for VMM -- A Quantitative Comparison | 8 pages, 12 figures. This paper was accepted at the 25th
International Symposium on Quality Electronic Design (ISQED) 2024. DOI:
10.1109/ISQED60706.2024.10528682 | null | 10.1109/ISQED60706.2024.10528682 | null | cs.AR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Vector-matrix-multiplication (VMM) accelerators have gained a lot of
traction, especially due to the rise of convolutional neural networks (CNNs)
and the desire to compute them on the edge. Besides the classical digital
approach, analog computing has gone through a renaissance to push energy
efficiency further. A more recent approach is called time-domain (TD)
computing. In contrast to analog computing, TD computing permits easy
technology as well as voltage scaling. As it has received limited research
attention, it is not yet clear which scenarios are most suitable to be computed
in the TD. In this work, we investigate these scenarios, focussing on energy
efficiency considering approximative computations that preserve accuracy. Both
goals are addressed by a novel efficiency metric, which is used to find a
baseline design. We use SPICE simulation data which is fed into a python
framework to evaluate how performance scales for VMM computation. We see that
TD computing offers best energy efficiency for small to medium sized arrays.
With throughput and silicon footprint we investigate two additional metrics,
giving a holistic comparison.
| [
{
"created": "Wed, 27 Mar 2024 08:58:32 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 13:23:02 GMT",
"version": "v2"
}
] | 2024-05-22 | [
[
"Freye",
"Florian",
""
],
[
"Lou",
"Jie",
""
],
[
"Lanius",
"Christian",
""
],
[
"Gemmeke",
"Tobias",
""
]
] | Vector-matrix-multiplication (VMM) accelerators have gained a lot of traction, especially due to the rise of convolutional neural networks (CNNs) and the desire to compute them on the edge. Besides the classical digital approach, analog computing has gone through a renaissance to push energy efficiency further. A more recent approach is called time-domain (TD) computing. In contrast to analog computing, TD computing permits easy technology as well as voltage scaling. As it has received limited research attention, it is not yet clear which scenarios are most suitable to be computed in the TD. In this work, we investigate these scenarios, focussing on energy efficiency considering approximative computations that preserve accuracy. Both goals are addressed by a novel efficiency metric, which is used to find a baseline design. We use SPICE simulation data which is fed into a python framework to evaluate how performance scales for VMM computation. We see that TD computing offers best energy efficiency for small to medium sized arrays. With throughput and silicon footprint we investigate two additional metrics, giving a holistic comparison. |
1207.4104 | Weiyu Xu | Weiyu Xu, Erwei Bai and Myung Cho | Outliers and Random Noises in System Identification: a Compressed
Sensing Approach | 10 pages, 5 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider robust system identification under sparse outliers
and random noises. In this problem, system parameters are observed through a
Toeplitz matrix. All observations are subject to random noises and a few are
corrupted with outliers. We reduce this problem of system identification to a
sparse error correcting problem using a Toeplitz structured real-numbered
coding matrix. We prove the performance guarantee of Toeplitz structured matrices
in sparse error correction. Thresholds on the percentage of correctable errors
for Toeplitz structured matrices are established. When both outliers and
observation noise are present, we have shown that the estimation error goes to
0 asymptotically as long as the probability density function for observation
noise is not "vanishing" around 0. No probabilistic assumptions are imposed on
the outliers.
| [
{
"created": "Tue, 17 Jul 2012 19:53:36 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2013 04:46:05 GMT",
"version": "v2"
}
] | 2013-05-28 | [
[
"Xu",
"Weiyu",
""
],
[
"Bai",
"Erwei",
""
],
[
"Cho",
"Myung",
""
]
] | In this paper, we consider robust system identification under sparse outliers and random noises. In this problem, system parameters are observed through a Toeplitz matrix. All observations are subject to random noises and a few are corrupted with outliers. We reduce this problem of system identification to a sparse error correcting problem using a Toeplitz structured real-numbered coding matrix. We prove the performance guarantee of Toeplitz structured matrices in sparse error correction. Thresholds on the percentage of correctable errors for Toeplitz structured matrices are established. When both outliers and observation noise are present, we have shown that the estimation error goes to 0 asymptotically as long as the probability density function for observation noise is not "vanishing" around 0. No probabilistic assumptions are imposed on the outliers. |
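The reduction above can be exercised numerically: observe the parameters through a random Toeplitz matrix, corrupt a few measurements with gross outliers, and decode by l1 minimization, the standard convex route to sparse error correction. The problem sizes and the use of cvxpy are assumptions for illustration.

```python
# Sketch: sparse error correction with a Toeplitz measurement matrix.
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
m, n = 120, 20
T = toeplitz(rng.normal(size=m), rng.normal(size=n))  # (m, n) Toeplitz matrix

x_true = rng.normal(size=n)                   # unknown system parameters
e = np.zeros(m)
e[rng.choice(m, 6, replace=False)] = rng.normal(0, 10, 6)
y = T @ x_true + e                            # a few gross outliers

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(y - T @ x))).solve()   # l1 decoding (an LP)
print(np.linalg.norm(x.value - x_true))       # small: outliers are rejected
```

Adding dense random noise on top of the outliers degrades this gracefully, consistent with the asymptotic consistency claim in the abstract.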
2302.00247 | Ziji Shi | Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan
Wu, Jialin Li, Wei Lin | TAP: Accelerating Large-Scale DNN Training Through Tensor Automatic
Parallelisation | null | null | null | null | cs.LG cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | Model parallelism has become necessary to train large neural networks.
However, finding a suitable model parallel schedule for an arbitrary neural
network is a non-trivial task due to the exploding search space. In this work,
we present a model parallelism framework TAP that automatically searches for
the best data and tensor parallel schedules. Leveraging the key insight that a
neural network can be represented as a directed acyclic graph, within which may
only exist a limited set of frequent subgraphs, we design a graph pruning
algorithm to fold the search space efficiently. TAP runs at sub-linear
complexity concerning the neural network size. Experiments show that TAP is
$20\times- 160\times$ faster than the state-of-the-art automatic parallelism
framework, and the performance of its discovered schedules is competitive with
the expert-engineered ones.
| [
{
"created": "Wed, 1 Feb 2023 05:22:28 GMT",
"version": "v1"
}
] | 2023-02-02 | [
[
"Shi",
"Ziji",
""
],
[
"Jiang",
"Le",
""
],
[
"Wang",
"Ang",
""
],
[
"Zhang",
"Jie",
""
],
[
"Jia",
"Xianyan",
""
],
[
"Li",
"Yong",
""
],
[
"Wu",
"Chencan",
""
],
[
"Li",
"Jialin",
""
],
[
"Lin",
"Wei",
""
]
] | Model parallelism has become necessary to train large neural networks. However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space. In this work, we present a model parallelism framework TAP that automatically searches for the best data and tensor parallel schedules. Leveraging the key insight that a neural network can be represented as a directed acyclic graph, within which may only exist a limited set of frequent subgraphs, we design a graph pruning algorithm to fold the search space efficiently. TAP runs at sub-linear complexity concerning the neural network size. Experiments show that TAP is $20\times- 160\times$ faster than the state-of-the-art automatic parallelism framework, and the performance of its discovered schedules is competitive with the expert-engineered ones. |
2401.00448 | Nikhil Sardana | Nikhil Sardana and Jacob Portes and Sasha Doubov and Jonathan Frankle | Beyond Chinchilla-Optimal: Accounting for Inference in Language Model
Scaling Laws | 16 pages, 7 figures, To appear in the 41st International Conference
on Machine Learning, 2024 | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language model (LLM) scaling laws are empirical formulas that estimate
changes in model quality as a result of increasing parameter count and training
data. However, these formulas, including the popular Deepmind Chinchilla
scaling laws, neglect to include the cost of inference. We modify the
Chinchilla scaling laws to calculate the optimal LLM parameter count and
pre-training data size to train and deploy a model of a given quality and
inference demand. We conduct our analysis both in terms of a compute budget and
real-world costs and find that LLM researchers expecting reasonably large
inference demand (~1B requests) should train models smaller and longer than
Chinchilla-optimal. Furthermore, we train 47 models of varying sizes and
parameter counts to validate our formula and find that model quality continues
to improve as we scale tokens per parameter to extreme ranges (up to 10,000).
Finally, we ablate the procedure used to fit the Chinchilla scaling law
coefficients and find that developing scaling laws only from data collected at
typical token/parameter ratios overestimates the impact of additional tokens at
these extreme ranges.
| [
{
"created": "Sun, 31 Dec 2023 10:53:58 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2024 14:23:29 GMT",
"version": "v2"
}
] | 2024-07-19 | [
[
"Sardana",
"Nikhil",
""
],
[
"Portes",
"Jacob",
""
],
[
"Doubov",
"Sasha",
""
],
[
"Frankle",
"Jonathan",
""
]
] | Large language model (LLM) scaling laws are empirical formulas that estimate changes in model quality as a result of increasing parameter count and training data. However, these formulas, including the popular Deepmind Chinchilla scaling laws, neglect to include the cost of inference. We modify the Chinchilla scaling laws to calculate the optimal LLM parameter count and pre-training data size to train and deploy a model of a given quality and inference demand. We conduct our analysis both in terms of a compute budget and real-world costs and find that LLM researchers expecting reasonably large inference demand (~1B requests) should train models smaller and longer than Chinchilla-optimal. Furthermore, we train 47 models of varying sizes and parameter counts to validate our formula and find that model quality continues to improve as we scale tokens per parameter to extreme ranges (up to 10,000). Finally, we ablate the procedure used to fit the Chinchilla scaling law coefficients and find that developing scaling laws only from data collected at typical token/parameter ratios overestimates the impact of additional tokens at these extreme ranges. |
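The adjustment described above can be worked through numerically: fix a target loss via the Chinchilla parametric form L(N, D) = E + A/N^alpha + B/D^beta, then minimize lifetime compute, approximated by the standard 6*N*D_train training FLOPs plus 2*N*D_inf inference FLOPs, over the model size N. The coefficients below are the published Hoffmann et al. fits, while the target loss and inference demand are assumed examples; none of the numbers are quoted from this paper.

```python
# Sketch: choose (N, D_train) minimizing training-plus-inference compute.
import numpy as np

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # Hoffmann et al. fits
L_target = 2.0            # desired pre-training loss (assumed)
D_inf = 2e12              # lifetime inference tokens (assumed demand)

def tokens_for_loss(N):
    # Solve E + A/N^alpha + B/D^beta = L_target for the training tokens D.
    gap = L_target - E - A / N**alpha
    return (B / gap) ** (1 / beta) if gap > 0 else np.inf

N_grid = np.logspace(8, 12, 2000)             # candidate parameter counts
D_grid = np.array([tokens_for_loss(N) for N in N_grid])
flops = 6 * N_grid * D_grid + 2 * N_grid * D_inf
i = int(np.argmin(flops))
print(f"N ~ {N_grid[i]:.2e} params, D_train ~ {D_grid[i]:.2e} tokens")
# Raising D_inf shifts the optimum toward a smaller model trained on more
# tokens than the inference-free Chinchilla-optimal point, as the paper argues.
```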
2305.11509 | Tianyu Wang | Chuying Han, Yasong Feng, Tianyu Wang | From Random Search to Bandit Learning in Metric Measure Spaces | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Search is one of the most widely-used methods for Hyperparameter
Optimization, and is critical to the success of deep learning models. Despite
its astonishing performance, little non-heuristic theory has been developed to
describe the underlying working mechanism. This paper gives a theoretical
accounting of Random Search. We introduce the concept of \emph{scattering
dimension} that describes the landscape of the underlying function, and
quantifies the performance of random search. We show that, when the environment
is noise-free, the output of random search converges to the optimal value in
probability at rate $ \widetilde{\mathcal{O}} \left( \left( \frac{1}{T}
\right)^{ \frac{1}{d_s} } \right) $, where $ d_s \ge 0 $ is the scattering
dimension of the underlying function. When the observed function values are
corrupted by bounded $iid$ noise, the output of random search converges to the
optimal value in probability at rate $ \widetilde{\mathcal{O}} \left( \left(
\frac{1}{T} \right)^{ \frac{1}{d_s + 1} } \right) $. In addition, based on the
principles of random search, we introduce an algorithm, called BLiN-MOS, for
Lipschitz bandits in doubling metric spaces that are also endowed with a
probability measure, and show that under mild conditions, BLiN-MOS achieves a
regret rate of order $ \widetilde{\mathcal{O}} \left( T^{ \frac{d_z}{d_z + 1} }
\right) $, where $d_z$ is the zooming dimension of the problem instance.
| [
{
"created": "Fri, 19 May 2023 08:18:49 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2023 13:02:15 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Jun 2023 13:28:39 GMT",
"version": "v3"
},
{
"created": "Thu, 10 Aug 2023 15:01:16 GMT",
"version": "v4"
},
{
"created": "Tue, 5 Sep 2023 12:31:03 GMT",
"version": "v5"
},
{
"created": "Mon, 12 Feb 2024 15:32:13 GMT",
"version": "v6"
}
] | 2024-02-13 | [
[
"Han",
"Chuying",
""
],
[
"Feng",
"Yasong",
""
],
[
"Wang",
"Tianyu",
""
]
] | Random Search is one of the most widely-used methods for Hyperparameter Optimization, and is critical to the success of deep learning models. Despite its astonishing performance, little non-heuristic theory has been developed to describe the underlying working mechanism. This paper gives a theoretical accounting of Random Search. We introduce the concept of \emph{scattering dimension} that describes the landscape of the underlying function, and quantifies the performance of random search. We show that, when the environment is noise-free, the output of random search converges to the optimal value in probability at rate $ \widetilde{\mathcal{O}} \left( \left( \frac{1}{T} \right)^{ \frac{1}{d_s} } \right) $, where $ d_s \ge 0 $ is the scattering dimension of the underlying function. When the observed function values are corrupted by bounded $iid$ noise, the output of random search converges to the optimal value in probability at rate $ \widetilde{\mathcal{O}} \left( \left( \frac{1}{T} \right)^{ \frac{1}{d_s + 1} } \right) $. In addition, based on the principles of random search, we introduce an algorithm, called BLiN-MOS, for Lipschitz bandits in doubling metric spaces that are also endowed with a probability measure, and show that under mild conditions, BLiN-MOS achieves a regret rate of order $ \widetilde{\mathcal{O}} \left( T^{ \frac{d_z}{d_z + 1} } \right) $, where $d_z$ is the zooming dimension of the problem instance. |
2112.00579 | Avinandan Bose | Avinandan Bose, Pradeep Varakantham | Conditional Expectation based Value Decomposition for Scalable On-Demand
Ride Pooling | Preprint. Under Review. arXiv admin note: text overlap with
arXiv:1911.08842 | null | null | null | cs.LG cs.AI cs.CY cs.MA | http://creativecommons.org/licenses/by/4.0/ | Owing to the benefits for customers (lower prices), drivers (higher
revenues), aggregation companies (higher revenues) and the environment (fewer
vehicles), on-demand ride pooling (e.g., Uber pool, Grab Share) has become
quite popular. The significant computational complexity of matching vehicles to
combinations of requests has meant that traditional ride pooling approaches are
myopic in that they do not consider the impact of current matches on future
value for vehicles/drivers. Recently, Neural Approximate Dynamic Programming
(NeurADP) has employed value decomposition with Approximate Dynamic Programming
(ADP) to outperform leading approaches by considering the impact of an
individual agent's (vehicle) chosen actions on the future value of that agent.
However, in order to ensure scalability and facilitate city-scale ride pooling,
NeurADP completely ignores the impact of other agents' actions on individual
agent/vehicle value. As demonstrated in our experimental results, ignoring the
impact of other agents' actions on individual value can have a significant
impact on the overall performance when there is increased competition among
vehicles for demand. Our key contribution is a novel mechanism based on
computing conditional expectations through joint conditional probabilities for
capturing dependencies on other agents' actions without increasing the
complexity of training or decision making. We show that our new approach,
Conditional Expectation based Value Decomposition (CEVD) outperforms NeurADP by
up to 9.76% in terms of overall requests served, which is a significant
improvement on a city wide benchmark taxi dataset.
| [
{
"created": "Wed, 1 Dec 2021 15:53:16 GMT",
"version": "v1"
}
] | 2021-12-02 | [
[
"Bose",
"Avinandan",
""
],
[
"Varakantham",
"Pradeep",
""
]
] | Owing to the benefits for customers (lower prices), drivers (higher revenues), aggregation companies (higher revenues) and the environment (fewer vehicles), on-demand ride pooling (e.g., Uber pool, Grab Share) has become quite popular. The significant computational complexity of matching vehicles to combinations of requests has meant that traditional ride pooling approaches are myopic in that they do not consider the impact of current matches on future value for vehicles/drivers. Recently, Neural Approximate Dynamic Programming (NeurADP) has employed value decomposition with Approximate Dynamic Programming (ADP) to outperform leading approaches by considering the impact of an individual agent's (vehicle) chosen actions on the future value of that agent. However, in order to ensure scalability and facilitate city-scale ride pooling, NeurADP completely ignores the impact of other agents' actions on individual agent/vehicle value. As demonstrated in our experimental results, ignoring the impact of other agents' actions on individual value can have a significant impact on the overall performance when there is increased competition among vehicles for demand. Our key contribution is a novel mechanism based on computing conditional expectations through joint conditional probabilities for capturing dependencies on other agents' actions without increasing the complexity of training or decision making. We show that our new approach, Conditional Expectation based Value Decomposition (CEVD) outperforms NeurADP by up to 9.76% in terms of overall requests served, which is a significant improvement on a city wide benchmark taxi dataset. |
1707.07402 | Khanh Nguyen | Khanh Nguyen, Hal Daum\'e III and Jordan Boyd-Graber | Reinforcement Learning for Bandit Neural Machine Translation with
Simulated Human Feedback | 11 pages, 5 figures, In Proceedings of Empirical Methods in Natural
Language Processing (EMNLP) 2017 | null | null | null | cs.CL cs.AI cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine translation is a natural candidate problem for reinforcement learning
from human feedback: users provide quick, dirty ratings on candidate
translations to guide a system to improve. Yet, current neural machine
translation training focuses on expensive human-generated reference
translations. We describe a reinforcement learning algorithm that improves
neural machine translation systems from simulated human feedback. Our algorithm
combines the advantage actor-critic algorithm (Mnih et al., 2016) with the
attention-based neural encoder-decoder architecture (Luong et al., 2015). This
algorithm (a) is well-designed for problems with a large action space and
delayed rewards, (b) effectively optimizes traditional corpus-level machine
translation metrics, and (c) is robust to skewed, high-variance, granular
feedback modeled after actual human behaviors.
| [
{
"created": "Mon, 24 Jul 2017 04:35:19 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Aug 2017 17:19:01 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Oct 2017 06:10:55 GMT",
"version": "v3"
},
{
"created": "Sat, 11 Nov 2017 05:01:23 GMT",
"version": "v4"
}
] | 2017-11-15 | [
[
"Nguyen",
"Khanh",
""
],
[
"Daumé",
"Hal",
"III"
],
[
"Boyd-Graber",
"Jordan",
""
]
] | Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm (Mnih et al., 2016) with the attention-based neural encoder-decoder architecture (Luong et al., 2015). This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors. |
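As a rough illustration of what "simulated human feedback" can look like for the record above, the sketch below perturbs a true quality score to be skewed, high-variance, and granular. The specific noise model is an assumption for illustration, not the paper's.

```python
# Hypothetical sketch of simulated human ratings: noisy, skewed, and granular.
import numpy as np

rng = np.random.default_rng(0)

def simulated_rating(true_quality: float) -> float:
    noisy = true_quality + rng.normal(0.0, 0.2)           # high-variance noise (assumed scale)
    skewed = noisy ** 2                                   # squaring skews scores in [0, 1] downward
    clipped = min(max(skewed, 0.0), 1.0)
    return round(clipped * 5) / 5                         # granular: six discrete rating levels

print(simulated_rating(0.8))
```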
1305.3978 | Puneet Kumar Mr. | Puneet Kumar, Dharminder Kumar | A Conceptual E-Governance Framework for Improving Child Immunization
Process in India | Published with International Journal of Computer Applications (IJCA) | Puneet Kumar, Dharminder Kumar. Article: A Conceptual E-Governance
Framework, International Journal of Computer Applications 69(1):39-43, May
2013. Published by Foundation of Computer Science, New York, USA | 10.5120/11808-7464 | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | India is a country with a high population and great variations in
educational levels, economic conditions, population densities, cultures and
awareness levels. Due to these variations, the immunization process has not
been as successful as the state and central governments expect. In some zones
a significant amount of vaccine is wasted, whereas others run out of vaccines.
One of the reasons for such an imbalance is improper estimation of the
quantity of vaccines needed in a particular zone. As a result, a large amount
of money is wasted in the form of vaccines. If we incorporate ICT (Information
and Communication Technology) into the immunization process, the problem can
be rectified to some extent; hence we propose a conceptual model that uses ICT
to improve the vaccination process.
| [
{
"created": "Fri, 17 May 2013 04:46:02 GMT",
"version": "v1"
}
] | 2013-05-20 | [
[
"Kumar",
"Puneet",
""
],
[
"Kumar",
"Dharminder",
""
]
] ] | India is a country with a high population and great variations in educational levels, economic conditions, population densities, cultures and awareness levels. Due to these variations, the immunization process has not been as successful as the state and central governments expect. In some zones a significant amount of vaccine is wasted, whereas others run out of vaccines. One of the reasons for such an imbalance is improper estimation of the quantity of vaccines needed in a particular zone. As a result, a large amount of money is wasted in the form of vaccines. If we incorporate ICT (Information and Communication Technology) into the immunization process, the problem can be rectified to some extent; hence we propose a conceptual model that uses ICT to improve the vaccination process. |
1105.6163 | Vinod M. Prabhakaran | Vinod M. Prabhakaran and Manoj M. Prabhakaran | Assisted Common Information: Further Results | 8 pages, 3 figures, 1 appendix; to be presented at the IEEE
International Symposium on Information Theory, 2011 | null | 10.1109/ISIT.2011.6034098 | null | cs.IT cs.CR math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We presented assisted common information as a generalization of
G\'acs-K\"orner (GK) common information at ISIT 2010. The motivation for our
formulation was to improve upper bounds on the efficiency of protocols for
secure two-party sampling (which is a form of secure multi-party computation).
Our upper bound was based on a monotonicity property of a rate region (called
the assisted residual information region) associated with the assisted common
information formulation. In this note we present further results. We explore
the connection of assisted common information with the Gray-Wyner system. We
show that the assisted residual information region and the Gray-Wyner region
are connected by a simple relationship: the assisted residual information
region is the increasing hull of the Gray-Wyner region under an affine map.
Several known relationships between GK common information and the Gray-Wyner system
fall out as consequences of this. Quantities which arise in other source coding
contexts acquire new interpretations. In previous work we showed that assisted
common information can be used to derive upper bounds on the rate at which a
pair of parties can {\em securely sample} correlated random variables, given
correlated random variables from another distribution. Here we present an
example where the bound derived using assisted common information is much
better than previously known bounds, and in fact is tight. This example
considers correlated random variables defined in terms of standard variants of
oblivious transfer, and is interesting on its own as it answers a natural
question about these cryptographic primitives.
| [
{
"created": "Tue, 31 May 2011 05:26:17 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Prabhakaran",
"Vinod M.",
""
],
[
"Prabhakaran",
"Manoj M.",
""
]
] ] | We presented assisted common information as a generalization of G\'acs-K\"orner (GK) common information at ISIT 2010. The motivation for our formulation was to improve upper bounds on the efficiency of protocols for secure two-party sampling (which is a form of secure multi-party computation). Our upper bound was based on a monotonicity property of a rate region (called the assisted residual information region) associated with the assisted common information formulation. In this note we present further results. We explore the connection of assisted common information with the Gray-Wyner system. We show that the assisted residual information region and the Gray-Wyner region are connected by a simple relationship: the assisted residual information region is the increasing hull of the Gray-Wyner region under an affine map. Several known relationships between GK common information and the Gray-Wyner system fall out as consequences of this. Quantities which arise in other source coding contexts acquire new interpretations. In previous work we showed that assisted common information can be used to derive upper bounds on the rate at which a pair of parties can {\em securely sample} correlated random variables, given correlated random variables from another distribution. Here we present an example where the bound derived using assisted common information is much better than previously known bounds, and in fact is tight. This example considers correlated random variables defined in terms of standard variants of oblivious transfer, and is interesting on its own as it answers a natural question about these cryptographic primitives. |
2302.12028 | Vincent Lemaire | Colin Troisemaine and Vincent Lemaire and St\'ephane Gosselin and
Alexandre Reiffers-Masson and Joachim Flocon-Cholet and Sandrine Vaton | Novel Class Discovery: an Introduction and Key Concepts | 30 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Novel Class Discovery (NCD) is a growing field where we are given during
training a labeled set of known classes and an unlabeled set of different
classes that must be discovered. In recent years, many methods have been
proposed to address this problem, and the field has begun to mature. In this
paper, we provide a comprehensive survey of the state-of-the-art NCD methods.
We start by formally defining the NCD problem and introducing important
notions. We then give an overview of the different families of approaches,
organized by the way they transfer knowledge from the labeled set to the
unlabeled set. We find that they either learn in two stages, by first
extracting knowledge from the labeled data only and then applying it to the
unlabeled data, or in one stage by conjointly learning on both sets. For each
family, we describe their general principle and detail a few representative
methods. Then, we briefly introduce some new related tasks inspired by the
increasing number of NCD works. We also present some common tools and
techniques used in NCD, such as pseudo labeling, self-supervised learning and
contrastive learning. Finally, to help readers unfamiliar with the NCD problem
differentiate it from other closely related domains, we summarize some of the
closest areas of research and discuss their main differences.
| [
{
"created": "Wed, 22 Feb 2023 10:07:01 GMT",
"version": "v1"
}
] | 2023-02-24 | [
[
"Troisemaine",
"Colin",
""
],
[
"Lemaire",
"Vincent",
""
],
[
"Gosselin",
"Stéphane",
""
],
[
"Reiffers-Masson",
"Alexandre",
""
],
[
"Flocon-Cholet",
"Joachim",
""
],
[
"Vaton",
"Sandrine",
""
]
] | Novel Class Discovery (NCD) is a growing field where we are given during training a labeled set of known classes and an unlabeled set of different classes that must be discovered. In recent years, many methods have been proposed to address this problem, and the field has begun to mature. In this paper, we provide a comprehensive survey of the state-of-the-art NCD methods. We start by formally defining the NCD problem and introducing important notions. We then give an overview of the different families of approaches, organized by the way they transfer knowledge from the labeled set to the unlabeled set. We find that they either learn in two stages, by first extracting knowledge from the labeled data only and then applying it to the unlabeled data, or in one stage by conjointly learning on both sets. For each family, we describe their general principle and detail a few representative methods. Then, we briefly introduce some new related tasks inspired by the increasing number of NCD works. We also present some common tools and techniques used in NCD, such as pseudo labeling, self-supervised learning and contrastive learning. Finally, to help readers unfamiliar with the NCD problem differentiate it from other closely related domains, we summarize some of the closest areas of research and discuss their main differences. |
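One of the common NCD tools the survey above mentions is pseudo labeling. Below is a minimal sketch, assuming features from a shared encoder and a known number of novel classes; both assumptions are illustrative, and the survey covers many variants.

```python
# Hypothetical sketch: pseudo-label the unlabeled set by clustering its features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
unlabeled_features = rng.normal(size=(200, 32))   # features from a shared encoder (stand-in)

n_novel_classes = 5                               # assumed known, as in many NCD setups
pseudo_labels = KMeans(n_clusters=n_novel_classes, n_init=10).fit_predict(unlabeled_features)
print(np.bincount(pseudo_labels))                 # cluster sizes, usable as pseudo-label counts
```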
1403.4158 | Reza Rahimi | A. A. Milani, Reza Rahimi | A Methodology for Implementation of MMS Client on Embedded Platforms | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MMS (Multimedia Messaging Service) is the next generation of messaging
services in multimedia mobile communications. MMS enables messaging with full
multimedia content, including images, audio, video, text and data, from client
to client or to e-mail. MMS is based on WAP technology, so it is technology
independent. This means that messages from a GSM/GPRS network can be sent to a
TDMA or WCDMA network. In this paper, a methodology for implementing an MMS
client on embedded platforms, especially on the WinCE OS, is described.
| [
{
"created": "Mon, 17 Mar 2014 16:42:43 GMT",
"version": "v1"
}
] | 2014-03-18 | [
[
"Milani",
"A. A.",
""
],
[
"Rahimi",
"Reza",
""
]
] ] | MMS (Multimedia Messaging Service) is the next generation of messaging services in multimedia mobile communications. MMS enables messaging with full multimedia content, including images, audio, video, text and data, from client to client or to e-mail. MMS is based on WAP technology, so it is technology independent. This means that messages from a GSM/GPRS network can be sent to a TDMA or WCDMA network. In this paper, a methodology for implementing an MMS client on embedded platforms, especially on the WinCE OS, is described. |
2205.12402 | Christopher Denniston | Christopher E. Denniston, Yun Chang, Andrzej Reinke, Kamak Ebadi,
Gaurav S. Sukhatme, Luca Carlone, Benjamin Morrell, Ali-akbar Agha-mohammadi | Loop Closure Prioritization for Efficient and Scalable Multi-Robot SLAM | 8 pages, Accepted to RA-L/IROS 2022 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-robot SLAM systems in GPS-denied environments require loop closures to
maintain a drift-free centralized map. With an increasing number of robots and
size of the environment, checking and computing the transformation for all the
loop closure candidates becomes computationally infeasible. In this work, we
describe a loop closure module that is able to prioritize which loop closures
to compute based on the underlying pose graph, the proximity to known beacons,
and the characteristics of the point clouds. We validate this system in the
context of the DARPA Subterranean Challenge and on numerous challenging
underground datasets and demonstrate the ability of this system to generate and
maintain a map with low error. We find that our proposed techniques are able to
select effective loop closures, which results in a 51% mean reduction in median
error when compared to an odometric solution and a 75% mean reduction in median
error when compared to a baseline version of this system with no
prioritization. We also find that, within a one-hour mission time, our proposed
system achieves lower error than a system that processes every possible loop
closure in four and a half hours. The code and dataset for this work can be
found at https://github.com/NeBula-Autonomy/LAMP
| [
{
"created": "Tue, 24 May 2022 23:23:15 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Jul 2022 00:36:19 GMT",
"version": "v2"
}
] | 2022-07-12 | [
[
"Denniston",
"Christopher E.",
""
],
[
"Chang",
"Yun",
""
],
[
"Reinke",
"Andrzej",
""
],
[
"Ebadi",
"Kamak",
""
],
[
"Sukhatme",
"Gaurav S.",
""
],
[
"Carlone",
"Luca",
""
],
[
"Morrell",
"Benjamin",
""
],
[
"Agha-mohammadi",
"Ali-akbar",
""
]
] ] | Multi-robot SLAM systems in GPS-denied environments require loop closures to maintain a drift-free centralized map. With an increasing number of robots and size of the environment, checking and computing the transformation for all the loop closure candidates becomes computationally infeasible. In this work, we describe a loop closure module that is able to prioritize which loop closures to compute based on the underlying pose graph, the proximity to known beacons, and the characteristics of the point clouds. We validate this system in the context of the DARPA Subterranean Challenge and on numerous challenging underground datasets and demonstrate the ability of this system to generate and maintain a map with low error. We find that our proposed techniques are able to select effective loop closures, which results in a 51% mean reduction in median error when compared to an odometric solution and a 75% mean reduction in median error when compared to a baseline version of this system with no prioritization. We also find that, within a one-hour mission time, our proposed system achieves lower error than a system that processes every possible loop closure in four and a half hours. The code and dataset for this work can be found at https://github.com/NeBula-Autonomy/LAMP |
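A hypothetical sketch of the prioritization idea in the record above: score each loop-closure candidate by a weighted combination of pose-graph, beacon-proximity, and point-cloud cues, then process the best first. The feature names and weights are illustrative assumptions, not the paper's exact module.

```python
# Hypothetical sketch: a priority queue over loop-closure candidates.
import heapq

def score(cand, w_graph=1.0, w_beacon=1.0, w_cloud=1.0):
    return (w_graph * cand["graph_utility"]        # e.g. expected drift reduction in the pose graph
            + w_beacon * cand["beacon_proximity"]  # e.g. closeness to a known beacon
            + w_cloud * cand["cloud_quality"])     # e.g. point-cloud overlap or geometry quality

candidates = [
    {"id": 0, "graph_utility": 0.9, "beacon_proximity": 0.1, "cloud_quality": 0.7},
    {"id": 1, "graph_utility": 0.2, "beacon_proximity": 0.8, "cloud_quality": 0.5},
]
heap = [(-score(c), c["id"]) for c in candidates]  # negate score: heapq is a min-heap
heapq.heapify(heap)
print(heapq.heappop(heap))                         # most promising candidate first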
1612.06961 | Biao He | Biao He, An Liu, Nan Yang, Vincent K. N. Lau | On the Design of Secure Non-Orthogonal Multiple Access Systems | to appear in IEEE Journal on Selected Areas in Communications | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new design of non-orthogonal multiple access (NOMA)
under secrecy considerations. We focus on a NOMA system where a transmitter
sends confidential messages to multiple users in the presence of an external
eavesdropper. The optimal designs of decoding order, transmission rates, and
power allocated to each user are investigated. Considering the practical
passive eavesdropping scenario where the instantaneous channel state of the
eavesdropper is unknown, we adopt the secrecy outage probability as the secrecy
metric. We first consider the problem of minimizing the transmit power subject
to the secrecy outage and quality of service constraints, and derive the
closed-form solution to this problem. We then explore the problem of maximizing
the minimum confidential information rate among users subject to the secrecy
outage and transmit power constraints, and provide an iterative algorithm to
solve this problem. We find that the secrecy outage constraint in the studied
problems does not change the optimal decoding order for NOMA, and one should
increase the power allocated to the user whose channel is relatively bad when
the secrecy constraint becomes more stringent. Finally, we show the advantage
of NOMA over orthogonal multiple access in the studied problems both
analytically and numerically.
| [
{
"created": "Wed, 21 Dec 2016 03:22:20 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2017 18:45:57 GMT",
"version": "v2"
}
] | 2017-05-25 | [
[
"He",
"Biao",
""
],
[
"Liu",
"An",
""
],
[
"Yang",
"Nan",
""
],
[
"Lau",
"Vincent K. N.",
""
]
] | This paper proposes a new design of non-orthogonal multiple access (NOMA) under secrecy considerations. We focus on a NOMA system where a transmitter sends confidential messages to multiple users in the presence of an external eavesdropper. The optimal designs of decoding order, transmission rates, and power allocated to each user are investigated. Considering the practical passive eavesdropping scenario where the instantaneous channel state of the eavesdropper is unknown, we adopt the secrecy outage probability as the secrecy metric. We first consider the problem of minimizing the transmit power subject to the secrecy outage and quality of service constraints, and derive the closed-form solution to this problem. We then explore the problem of maximizing the minimum confidential information rate among users subject to the secrecy outage and transmit power constraints, and provide an iterative algorithm to solve this problem. We find that the secrecy outage constraint in the studied problems does not change the optimal decoding order for NOMA, and one should increase the power allocated to the user whose channel is relatively bad when the secrecy constraint becomes more stringent. Finally, we show the advantage of NOMA over orthogonal multiple access in the studied problems both analytically and numerically. |
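A small Monte Carlo sketch of the secrecy outage metric adopted in the record above, simplified to a single-user wiretap link under Rayleigh fading with unknown eavesdropper CSI. The rates and SNR are illustrative, and the NOMA-specific SIC decoding structure is omitted.

```python
# Hypothetical sketch: Monte Carlo estimate of secrecy outage probability.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                     # Monte Carlo samples
snr = 1.0                       # transmit SNR toward the eavesdropper (assumed)
rb, rs = 2.0, 0.5               # codeword rate and confidential rate (illustrative)

g_e = rng.exponential(1.0, n)   # Rayleigh fading: |h_e|^2 is exponentially distributed
c_e = np.log2(1.0 + snr * g_e)  # eavesdropper's instantaneous capacity
# Secrecy outage: the eavesdropper's capacity exceeds the rate redundancy rb - rs.
print(float(np.mean(c_e > rb - rs)))
```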
2203.11275 | Mehmet Aktas | Mehmet Emin Aktas, Esra Akbas, Ashley Hahn | Liars are more influential: Effect of Deception in Influence
Maximization on Social Networks | null | null | null | null | cs.SI math.AT | http://creativecommons.org/licenses/by/4.0/ | Detecting influential users, called the influence maximization problem on
social networks, is an important graph mining problem with many diverse
applications such as information propagation, market advertising, and rumor
controlling. There are many studies in the literature for influential users
detection problem in social networks. Although the current methods are
successfully used in many different applications, they assume that users are
honest with each other and ignore the role of deception on social networks. On
the other hand, deception appears to be surprisingly common among humans within
social networks. In this paper, we study the effect of deception in influence
maximization on social networks. We first model deception in social networks.
Then, we model the opinion dynamics on these networks, taking deception into
consideration using a recent opinion dynamics model based on the sheaf
Laplacian. We
then extend two influential node detection methods, namely Laplacian centrality
and DFF centrality, for the sheaf Laplacian to measure the effect of deception
in influence maximization. Our experimental results on synthetic and real-world
networks suggest that liars are more influential than honest users in social
networks.
| [
{
"created": "Mon, 21 Mar 2022 18:53:16 GMT",
"version": "v1"
}
] | 2022-03-23 | [
[
"Aktas",
"Mehmet Emin",
""
],
[
"Akbas",
"Esra",
""
],
[
"Hahn",
"Ashley",
""
]
] ] | Detecting influential users, called the influence maximization problem on social networks, is an important graph mining problem with many diverse applications such as information propagation, market advertising, and rumor controlling. There are many studies in the literature on the influential user detection problem in social networks. Although the current methods are successfully used in many different applications, they assume that users are honest with each other and ignore the role of deception on social networks. On the other hand, deception appears to be surprisingly common among humans within social networks. In this paper, we study the effect of deception in influence maximization on social networks. We first model deception in social networks. Then, we model the opinion dynamics on these networks, taking deception into consideration using a recent opinion dynamics model based on the sheaf Laplacian. We then extend two influential node detection methods, namely Laplacian centrality and DFF centrality, for the sheaf Laplacian to measure the effect of deception in influence maximization. Our experimental results on synthetic and real-world networks suggest that liars are more influential than honest users in social networks. |
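For intuition, here is a sketch of standard Laplacian centrality (the drop in Laplacian energy when a node is removed) on a plain graph. The record above extends such centralities to the sheaf Laplacian, which this simplified version does not model.

```python
# Hypothetical sketch: Laplacian centrality on an ordinary graph.
import numpy as np
import networkx as nx

def laplacian_energy(g):
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float))
    return float(np.sum(lam ** 2))          # energy = sum of squared Laplacian eigenvalues

def energy_without(g, v):
    h = g.copy()
    h.remove_node(v)
    return laplacian_energy(h)

g = nx.karate_club_graph()
base = laplacian_energy(g)
centrality = {v: base - energy_without(g, v) for v in g.nodes}
print(max(centrality, key=centrality.get))  # node whose removal drops the energy most
```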
2203.11325 | Tomer Wullach | Tomer Wullach, Shlomo E. Chazan | Enhancing Speech Recognition Decoding via Layer Aggregation | Submitted to Interspeech 2022 | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently proposed speech recognition systems are designed to predict using
representations generated by their top layers, employing greedy decoding which
isolates each timestep from the rest of the sequence. Aiming for improved
performance, a beam search algorithm is frequently utilized and a language
model is incorporated to assist with ranking the top candidates. In this work,
we experiment with several speech recognition models and find that logits
predicted using the top layers may hamper beam search from achieving optimal
results. Specifically, we show that fine-tuned Wav2Vec 2.0 and HuBERT yield
highly confident predictions, and hypothesize that the predictions are based on
local information and may not take full advantage of the information encoded in
intermediate layers. To this end, we perform a layer analysis to reveal and
visualize how predictions evolve throughout the inference flow. We then propose
a prediction method that aggregates the top M layers, potentially leveraging
useful information encoded in intermediate layers and relaxing model
confidence. We showcase the effectiveness of our approach via beam search
decoding, conducting our experiments on Librispeech test and dev sets and
achieving WER and CER reductions of up to 10% and 22%, respectively.
| [
{
"created": "Mon, 21 Mar 2022 20:28:06 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Apr 2022 08:38:04 GMT",
"version": "v2"
}
] | 2022-04-06 | [
[
"Wullach",
"Tomer",
""
],
[
"Chazan",
"Shlomo E.",
""
]
] ] | Recently proposed speech recognition systems are designed to predict using representations generated by their top layers, employing greedy decoding which isolates each timestep from the rest of the sequence. Aiming for improved performance, a beam search algorithm is frequently utilized and a language model is incorporated to assist with ranking the top candidates. In this work, we experiment with several speech recognition models and find that logits predicted using the top layers may hamper beam search from achieving optimal results. Specifically, we show that fine-tuned Wav2Vec 2.0 and HuBERT yield highly confident predictions, and hypothesize that the predictions are based on local information and may not take full advantage of the information encoded in intermediate layers. To this end, we perform a layer analysis to reveal and visualize how predictions evolve throughout the inference flow. We then propose a prediction method that aggregates the top M layers, potentially leveraging useful information encoded in intermediate layers and relaxing model confidence. We showcase the effectiveness of our approach via beam search decoding, conducting our experiments on Librispeech test and dev sets and achieving WER and CER reductions of up to 10% and 22%, respectively. |
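A hedged sketch of the aggregation idea from the record above, using a Hugging Face Wav2Vec2 CTC model: average the top M encoder layers, then project with the model's CTC head. Reusing lm_head on intermediate hidden states is an assumption for illustration and may differ from the paper's exact aggregation.

```python
# Hypothetical sketch: aggregate the top M encoder layers before CTC decoding.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

name = "facebook/wav2vec2-base-960h"
model = Wav2Vec2ForCTC.from_pretrained(name).eval()
processor = Wav2Vec2Processor.from_pretrained(name)

audio = torch.zeros(16000).numpy()            # stand-in for one second of 16 kHz audio
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    out = model(inputs.input_values, output_hidden_states=True)
    top_m = torch.stack(out.hidden_states[-4:])        # last M=4 transformer layers
    agg_logits = model.lm_head(top_m.mean(dim=0))      # aggregate, then project (assumption)
print(agg_logits.shape)                                # (1, frames, vocab)
```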
2108.07846 | Xianyuan Liu | Xianyuan Liu, Shuo Zhou, Tao Lei, Haiping Lu | Channel-Temporal Attention for First-Person Video Domain Adaptation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised Domain Adaptation (UDA) can transfer knowledge from labeled
source data to unlabeled target data of the same categories. However, UDA for
first-person action recognition is an under-explored problem, with a lack of
datasets and limited consideration of first-person video characteristics. This
paper focuses on addressing this problem. Firstly, we propose two small-scale
first-person video domain adaptation datasets: ADL$_{small}$ and GTEA-KITCHEN.
Secondly, we introduce channel-temporal attention blocks to capture the
channel-wise and temporal-wise relationships and model their inter-dependencies
important to first-person vision. Finally, we propose a Channel-Temporal
Attention Network (CTAN) to integrate these blocks into existing architectures.
CTAN outperforms baselines on the two proposed datasets and one existing
dataset EPIC$_{cvpr20}$.
| [
{
"created": "Tue, 17 Aug 2021 19:30:42 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Aug 2021 09:08:33 GMT",
"version": "v2"
}
] | 2021-08-20 | [
[
"Liu",
"Xianyuan",
""
],
[
"Zhou",
"Shuo",
""
],
[
"Lei",
"Tao",
""
],
[
"Lu",
"Haiping",
""
]
] ] | Unsupervised Domain Adaptation (UDA) can transfer knowledge from labeled source data to unlabeled target data of the same categories. However, UDA for first-person action recognition is an under-explored problem, with a lack of datasets and limited consideration of first-person video characteristics. This paper focuses on addressing this problem. Firstly, we propose two small-scale first-person video domain adaptation datasets: ADL$_{small}$ and GTEA-KITCHEN. Secondly, we introduce channel-temporal attention blocks to capture the channel-wise and temporal-wise relationships and model their inter-dependencies important to first-person vision. Finally, we propose a Channel-Temporal Attention Network (CTAN) to integrate these blocks into existing architectures. CTAN outperforms baselines on the two proposed datasets and one existing dataset EPIC$_{cvpr20}$. |
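A possible shape for a channel-temporal attention block over clip features of shape (B, C, T, H, W), combining channel re-weighting with a temporal gate. This is a generic sketch under assumed dimensions, not the CTAN block design from the record above.

```python
# Hypothetical sketch: channel attention followed by temporal attention.
import torch
import torch.nn as nn

class ChannelTemporalAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.temporal_conv = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=3, padding=1), nn.Sigmoid())

    def forward(self, x):                                  # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        ch = self.channel_fc(x.mean(dim=(2, 3, 4)))        # channel weights (B, C)
        x = x * ch.view(b, c, 1, 1, 1)
        tw = self.temporal_conv(x.mean(dim=(3, 4)))        # temporal weights (B, 1, T)
        return x * tw.view(b, 1, t, 1, 1)

print(ChannelTemporalAttention(16)(torch.randn(2, 16, 8, 7, 7)).shape)
```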
1912.05845 | Anthony Ortiz | Anthony Ortiz, Caleb Robinson, Dan Morris, Olac Fuentes, Christopher
Kiekintveld, Md Mahmudulla Hassan and Nebojsa Jojic | Local Context Normalization: Revisiting Local Normalization | Accepted as a CVPR 2020 oral paper. arXiv admin note: text overlap
with arXiv:1803.08494 by other authors | CVPR 2020 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Normalization layers have been shown to improve convergence in deep neural
networks, and even add useful inductive biases. In many vision applications the
local spatial context of the features is important, but most common
normalization schemes including Group Normalization (GN), Instance
Normalization (IN), and Layer Normalization (LN) normalize over the entire
spatial dimension of a feature. This can wash out important signals and degrade
performance. For example, in applications that use satellite imagery, input
images can be arbitrarily large; consequently, it is nonsensical to normalize
over the entire area. Positional Normalization (PN), on the other hand, only
normalizes over a single spatial position at a time. A natural compromise is to
normalize features by local context, while also taking into account group level
information. In this paper, we propose Local Context Normalization (LCN): a
normalization layer where every feature is normalized based on a window around
it and the filters in its group. We propose an algorithmic solution to make LCN
efficient for arbitrary window sizes, even if every point in the image has a
unique window. LCN outperforms its Batch Normalization (BN), GN, IN, and LN
counterparts for object detection, semantic segmentation, and instance
segmentation applications in several benchmark datasets, while keeping
performance independent of the batch size and facilitating transfer learning.
| [
{
"created": "Thu, 12 Dec 2019 09:28:24 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Dec 2019 06:22:50 GMT",
"version": "v2"
},
{
"created": "Sat, 9 May 2020 09:27:12 GMT",
"version": "v3"
}
] | 2020-05-12 | [
[
"Ortiz",
"Anthony",
""
],
[
"Robinson",
"Caleb",
""
],
[
"Morris",
"Dan",
""
],
[
"Fuentes",
"Olac",
""
],
[
"Kiekintveld",
"Christopher",
""
],
[
"Hassan",
"Md Mahmudulla",
""
],
[
"Jojic",
"Nebojsa",
""
]
] | Normalization layers have been shown to improve convergence in deep neural networks, and even add useful inductive biases. In many vision applications the local spatial context of the features is important, but most common normalization schemes including Group Normalization (GN), Instance Normalization (IN), and Layer Normalization (LN) normalize over the entire spatial dimension of a feature. This can wash out important signals and degrade performance. For example, in applications that use satellite imagery, input images can be arbitrarily large; consequently, it is nonsensical to normalize over the entire area. Positional Normalization (PN), on the other hand, only normalizes over a single spatial position at a time. A natural compromise is to normalize features by local context, while also taking into account group level information. In this paper, we propose Local Context Normalization (LCN): a normalization layer where every feature is normalized based on a window around it and the filters in its group. We propose an algorithmic solution to make LCN efficient for arbitrary window sizes, even if every point in the image has a unique window. LCN outperforms its Batch Normalization (BN), GN, IN, and LN counterparts for object detection, semantic segmentation, and instance segmentation applications in several benchmark datasets, while keeping performance independent of the batch size and facilitating transfer learning. |
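A naive sketch of the LCN idea from the record above: normalize each feature by the mean and variance over a spatial window and the channels of its group. This plain pooling version does not reproduce the paper's algorithmic contribution of making arbitrary windows efficient; window size and group count are illustrative.

```python
# Hypothetical sketch: Local Context Normalization via 3D average pooling.
import torch
import torch.nn.functional as F

def local_context_norm(x, groups=2, window=7, eps=1e-5):
    b, c, h, w = x.shape
    d = c // groups                                    # channels per group
    xg = x.view(b * groups, 1, d, h, w)
    pad = (window // 2, window // 2, window // 2, window // 2, 0, 0)  # pad W and H only
    xp = F.pad(xg, pad, mode="replicate")
    # local mean and mean of squares over (group channels, window, window)
    mean = F.avg_pool3d(xp, (d, window, window), stride=1)            # (b*groups, 1, 1, h, w)
    meansq = F.avg_pool3d(xp ** 2, (d, window, window), stride=1)
    var = (meansq - mean ** 2).clamp_min(0.0)
    out = (xg - mean) / torch.sqrt(var + eps)          # broadcast stats over group channels
    return out.view(b, c, h, w)

print(local_context_norm(torch.randn(2, 8, 16, 16)).shape)
```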
2312.15763 | Harshithanjani Athi | Rasagna Chigullapally, Harshithanjani Athi, Nikhil Karamchandani and
V. Lalitha | On Distributed Multi-User Secret Sharing with Multiple Secrets per User | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a distributed multi-user secret sharing (DMUSS) setting in which
there is a dealer, $n$ storage nodes, and $m$ secrets. Each user demands a
$t$-subset of $m$ secrets. Earlier work in this setting dealt with the case of
$t=1$; in this work, we consider general $t$. The user downloads shares from
the storage nodes based on the designed access structure and reconstructs its
secrets. We identify a necessary condition on the access structures to ensure
weak secrecy. We also make a connection between access structures for this
problem and $t$-disjunct matrices. We apply various $t$-disjunct matrix
constructions in this setting and compare their performance in terms of the
number of storage nodes and communication complexity. We also derive bounds on
the optimal communication complexity of a distributed secret sharing protocol.
Finally, we characterize the capacity region of the DMUSS problem when the
access structure is specified.
| [
{
"created": "Mon, 25 Dec 2023 16:22:25 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Jan 2024 16:40:36 GMT",
"version": "v2"
}
] | 2024-01-09 | [
[
"Chigullapally",
"Rasagna",
""
],
[
"Athi",
"Harshithanjani",
""
],
[
"Karamchandani",
"Nikhil",
""
],
[
"Lalitha",
"V.",
""
]
] | We consider a distributed multi-user secret sharing (DMUSS) setting in which there is a dealer, $n$ storage nodes, and $m$ secrets. Each user demands a $t$-subset of $m$ secrets. Earlier work in this setting dealt with the case of $t=1$; in this work, we consider general $t$. The user downloads shares from the storage nodes based on the designed access structure and reconstructs its secrets. We identify a necessary condition on the access structures to ensure weak secrecy. We also make a connection between access structures for this problem and $t$-disjunct matrices. We apply various $t$-disjunct matrix constructions in this setting and compare their performance in terms of the number of storage nodes and communication complexity. We also derive bounds on the optimal communication complexity of a distributed secret sharing protocol. Finally, we characterize the capacity region of the DMUSS problem when the access structure is specified. |
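A brute-force sketch of the t-disjunct property mentioned in the record above: a binary matrix is t-disjunct if no column is covered by the boolean OR of any t other columns. The check below is exponential in t and meant for illustration only.

```python
# Hypothetical sketch: verify t-disjunctness of a small binary matrix.
from itertools import combinations
import numpy as np

def is_t_disjunct(m: np.ndarray, t: int) -> bool:
    n = m.shape[1]
    for j in range(n):
        for others in combinations([k for k in range(n) if k != j], t):
            union = np.any(m[:, list(others)], axis=1)   # boolean OR of t other columns
            if np.all(union >= m[:, j]):                 # column j is covered by the union
                return False
    return True

m = np.eye(4, dtype=int)        # toy matrix; a real design would be larger
print(is_t_disjunct(m, 1))
```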
1909.01885 | Ghazal Tashakor | Ghazal Tashakor and Remo Suppi | Agent-based model for tumour-analysis using Python+Mesa | 7 pages, 3figures, The European Modeling And Simulation Symposium
(EMSS), Proceedings of a meeting held 17-19 September 2018, Budapest,
Hungary. Held at the International Multidisciplinary Modeling and Simulation
Multiconference (I3M 2018), ISBN: 9781510872240 | null | null | null | cs.MA q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The potential power and possibilities presented by computation
graphs have steered most of the available modeling techniques toward
re-implementing and utilizing models that include the complex nature of
Systems Biology (SB). To model the dynamics of cellular populations, we need
to study a plethora of scenarios ranging from cell differentiation to tumor
growth. Testing and verifying a model in research means running the model
multiple times with different, or in some cases identical, parameters to see
how the model behaves and whether some of the outputs change under different
parameters. In this paper, we describe the development and implementation of a
new agent-based model using Python. The model can be executed using a
development environment (based on Mesa, and extremely simplified for
convenience) with different parameters. The result is the collection of large
sets of data, which allows an in-depth analysis of the tumor microenvironment
by means of network analysis.
| [
{
"created": "Wed, 4 Sep 2019 15:33:09 GMT",
"version": "v1"
}
] | 2019-09-05 | [
[
"Tashakor",
"Ghazal",
""
],
[
"Suppi",
"Remo",
""
]
] ] | The potential power and possibilities presented by computation graphs have steered most of the available modeling techniques toward re-implementing and utilizing models that include the complex nature of Systems Biology (SB). To model the dynamics of cellular populations, we need to study a plethora of scenarios ranging from cell differentiation to tumor growth. Testing and verifying a model in research means running the model multiple times with different, or in some cases identical, parameters to see how the model behaves and whether some of the outputs change under different parameters. In this paper, we describe the development and implementation of a new agent-based model using Python. The model can be executed using a development environment (based on Mesa, and extremely simplified for convenience) with different parameters. The result is the collection of large sets of data, which allows an in-depth analysis of the tumor microenvironment by means of network analysis. |
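A minimal Mesa sketch in the spirit of the model described above: tumour cells on a grid divide into neighbouring positions with some probability. The class names and the division rule are assumptions, and the classic (pre-3.0) Mesa scheduler API is used.

```python
# Hypothetical sketch: a toy agent-based tumour growth model in Mesa.
from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation

class TumourCell(Agent):
    def step(self):
        if self.random.random() < 0.1:   # division probability (assumed)
            x, y = self.random.choice(self.model.grid.get_neighborhood(self.pos, moore=True))
            child = TumourCell(self.model.next_id(), self.model)
            self.model.grid.place_agent(child, (x, y))
            self.model.schedule.add(child)

class TumourModel(Model):
    def __init__(self, width=20, height=20):
        super().__init__()
        self.grid = MultiGrid(width, height, torus=False)
        self.schedule = RandomActivation(self)
        seed_cell = TumourCell(self.next_id(), self)
        self.grid.place_agent(seed_cell, (width // 2, height // 2))
        self.schedule.add(seed_cell)

    def step(self):
        self.schedule.step()

model = TumourModel()
for _ in range(10):
    model.step()
print(model.schedule.get_agent_count())  # population after 10 steps
```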
2210.15147 | Zilin Yuan | Zilin Yuan, Yinghui Li, Yangning Li, Rui Xie, Wei Wu, Hai-Tao Zheng | A Curriculum Learning Approach for Multi-domain Text Classification
Using Keyword weight Ranking | Submitted to ICASSP2023 (currently under review) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text classification is a very classic NLP task, but it has two prominent
shortcomings: On the one hand, text classification is deeply domain-dependent.
That is, a classifier trained on the corpus of one domain may not perform so
well in another domain. On the other hand, text classification models require a
lot of annotated data for training. However, for some domains, there may not
exist enough annotated data. Therefore, it is valuable to investigate how to
efficiently utilize text data from different domains to improve the performance
of models in various domains. Some multi-domain text classification models are
trained by adversarial training to extract shared features among all domains
and the specific features of each domain. We noted that the distinctness of the
domain-specific features is different, so in this paper, we propose to use a
curriculum learning strategy based on keyword weight ranking to improve the
performance of multi-domain text classification models. The experimental
results on the Amazon review and FDU-MTL datasets show that our curriculum
learning strategy effectively improves the performance of multi-domain text
classification models based on adversarial learning and outperforms
state-of-the-art methods.
| [
{
"created": "Thu, 27 Oct 2022 03:15:26 GMT",
"version": "v1"
}
] | 2022-10-28 | [
[
"Yuan",
"Zilin",
""
],
[
"Li",
"Yinghui",
""
],
[
"Li",
"Yangning",
""
],
[
"Xie",
"Rui",
""
],
[
"Wu",
"Wei",
""
],
[
"Zheng",
"Hai-Tao",
""
]
] ] | Text classification is a very classic NLP task, but it has two prominent shortcomings: On the one hand, text classification is deeply domain-dependent. That is, a classifier trained on the corpus of one domain may not perform so well in another domain. On the other hand, text classification models require a lot of annotated data for training. However, for some domains, there may not exist enough annotated data. Therefore, it is valuable to investigate how to efficiently utilize text data from different domains to improve the performance of models in various domains. Some multi-domain text classification models are trained by adversarial training to extract shared features among all domains and the specific features of each domain. We note that domain-specific features differ in their distinctness, so in this paper, we propose to use a curriculum learning strategy based on keyword weight ranking to improve the performance of multi-domain text classification models. The experimental results on the Amazon review and FDU-MTL datasets show that our curriculum learning strategy effectively improves the performance of multi-domain text classification models based on adversarial learning and outperforms state-of-the-art methods. |
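One plausible reading of "keyword weight ranking" as a curriculum signal, sketched below: rank domains by the mean weight of their top TF-IDF keywords and order training accordingly. Both the distinctness measure and the ordering direction are assumptions, not the exact strategy of the record above.

```python
# Hypothetical sketch: order domains for a curriculum by keyword-weight distinctness.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

domains = {
    "books": ["a gripping plot and vivid characters", "dull story, flat writing"],
    "electronics": ["battery life is great", "the charger overheats quickly"],
}

def distinctness(texts, top_k=5):
    tfidf = TfidfVectorizer().fit_transform(texts).toarray()
    top = np.sort(tfidf, axis=1)[:, -top_k:]   # per-document top-k keyword weights
    return float(top.mean())

curriculum = sorted(domains, key=lambda d: distinctness(domains[d]))
print(curriculum)   # train on less distinct domains first (assumed ordering)
```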
2408.03314 | Charlie Snell | Charlie Snell, Jaehoon Lee, Kelvin Xu, Aviral Kumar | Scaling LLM Test-Time Compute Optimally can be More Effective than
Scaling Model Parameters | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Enabling LLMs to improve their outputs by using more test-time computation is
a critical step towards building generally self-improving agents that can
operate on open-ended natural language. In this paper, we study the scaling of
inference-time computation in LLMs, with a focus on answering the question: if
an LLM is allowed to use a fixed but non-trivial amount of inference-time
compute, how much can it improve its performance on a challenging prompt?
Answering this question has implications not only for the achievable
performance of LLMs, but also for the future of LLM pretraining and how one
should trade off inference-time and pre-training compute. Despite its
importance, little research has attempted to understand the scaling behaviors
of various test-time
inference methods. Moreover, current work largely provides negative results for
a number of these strategies. In this work, we analyze two primary mechanisms
to scale test-time computation: (1) searching against dense, process-based
verifier reward models; and (2) updating the model's distribution over a
response adaptively, given the prompt at test time. We find that in both cases,
the effectiveness of different approaches to scaling test-time compute
critically varies depending on the difficulty of the prompt. This observation
motivates applying a "compute-optimal" scaling strategy, which acts to most
effectively allocate test-time compute adaptively per prompt. Using this
compute-optimal strategy, we can improve the efficiency of test-time compute
scaling by more than 4x compared to a best-of-N baseline. Additionally, in a
FLOPs-matched evaluation, we find that on problems where a smaller base model
attains somewhat non-trivial success rates, test-time compute can be used to
outperform a 14x larger model.
| [
{
"created": "Tue, 6 Aug 2024 17:35:05 GMT",
"version": "v1"
}
] | 2024-08-07 | [
[
"Snell",
"Charlie",
""
],
[
"Lee",
"Jaehoon",
""
],
[
"Xu",
"Kelvin",
""
],
[
"Kumar",
"Aviral",
""
]
] ] | Enabling LLMs to improve their outputs by using more test-time computation is a critical step towards building generally self-improving agents that can operate on open-ended natural language. In this paper, we study the scaling of inference-time computation in LLMs, with a focus on answering the question: if an LLM is allowed to use a fixed but non-trivial amount of inference-time compute, how much can it improve its performance on a challenging prompt? Answering this question has implications not only for the achievable performance of LLMs, but also for the future of LLM pretraining and how one should trade off inference-time and pre-training compute. Despite its importance, little research has attempted to understand the scaling behaviors of various test-time inference methods. Moreover, current work largely provides negative results for a number of these strategies. In this work, we analyze two primary mechanisms to scale test-time computation: (1) searching against dense, process-based verifier reward models; and (2) updating the model's distribution over a response adaptively, given the prompt at test time. We find that in both cases, the effectiveness of different approaches to scaling test-time compute critically varies depending on the difficulty of the prompt. This observation motivates applying a "compute-optimal" scaling strategy, which acts to most effectively allocate test-time compute adaptively per prompt. Using this compute-optimal strategy, we can improve the efficiency of test-time compute scaling by more than 4x compared to a best-of-N baseline. Additionally, in a FLOPs-matched evaluation, we find that on problems where a smaller base model attains somewhat non-trivial success rates, test-time compute can be used to outperform a 14x larger model. |
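A tiny sketch of the best-of-N baseline referenced in the record above: draw N candidate answers and keep the one a verifier/reward model scores highest. `generate` and `verifier_score` are stand-ins for real model calls, not any particular API.

```python
# Hypothetical sketch: best-of-N sampling with a verifier.
import random

def generate(prompt: str) -> str:                        # stand-in for LLM sampling
    return f"answer-{random.randint(0, 9)}"

def verifier_score(prompt: str, answer: str) -> float:   # stand-in for a reward model
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: verifier_score(prompt, a))

print(best_of_n("What is 13 * 7?"))
```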
2012.09602 | Amit Sahu | Amit Sahu and Noelia V\'allez and Rosana Rodr\'iguez-Bobada and
Mohamad Alhaddad and Omar Moured and Georg Neugschwandtner | Application of the Neural Network Dependability Kit in Real-World
Environments | 10 pages, 7 Figures including 2 appendices Main Content: 5 pages, 1
Figure | null | null | null | cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | In this paper, we provide a guideline for using the Neural Network
Dependability Kit (NNDK) during the development process of NN models, and show
how the algorithm is applied in two image classification use cases. The case
studies demonstrate the usage of the dependability kit to obtain insights about
the NN model and how these insights informed the development process of the neural
network model. After interpreting neural networks via the different metrics
available in the NNDK, the developers were able to increase the NNs' accuracy,
trust the developed networks, and make them more robust. In addition, we
obtained a novel application-oriented technique to provide supporting evidence
for an NN's classification result to the user. In the medical image
classification use case, it was used to retrieve case images from the training
dataset that were similar to the current patient's image and could therefore
act as a support for the NN model's decision and aid doctors in interpreting
the results.
| [
{
"created": "Mon, 14 Dec 2020 06:53:13 GMT",
"version": "v1"
}
] | 2020-12-18 | [
[
"Sahu",
"Amit",
""
],
[
"Vállez",
"Noelia",
""
],
[
"Rodríguez-Bobada",
"Rosana",
""
],
[
"Alhaddad",
"Mohamad",
""
],
[
"Moured",
"Omar",
""
],
[
"Neugschwandtner",
"Georg",
""
]
] ] | In this paper, we provide a guideline for using the Neural Network Dependability Kit (NNDK) during the development process of NN models, and show how the algorithm is applied in two image classification use cases. The case studies demonstrate the usage of the dependability kit to obtain insights about the NN model and how these insights informed the development process of the neural network model. After interpreting neural networks via the different metrics available in the NNDK, the developers were able to increase the NNs' accuracy, trust the developed networks, and make them more robust. In addition, we obtained a novel application-oriented technique to provide supporting evidence for an NN's classification result to the user. In the medical image classification use case, it was used to retrieve case images from the training dataset that were similar to the current patient's image and could therefore act as a support for the NN model's decision and aid doctors in interpreting the results. |
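A sketch of the supporting-evidence technique described in the record above: retrieve the training images whose embeddings lie nearest to the current input's embedding. The embedding space and data are stand-ins, not the NNDK's actual interface.

```python
# Hypothetical sketch: nearest-neighbour retrieval of supporting training cases.
import numpy as np

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(1000, 128))   # training-set embeddings (assumed precomputed)
query = rng.normal(size=128)                      # embedding of the current patient's image

dists = np.linalg.norm(train_embeddings - query, axis=1)
support_cases = np.argsort(dists)[:5]             # indices of the 5 most similar training cases
print(support_cases)
```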