| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.01055 | Tianyu Huang | Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W.H.
Lau, Wanli Ouyang, Wangmeng Zuo | CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth
Pre-training | Accepted by ICCV2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Pre-training across 3D vision and language remains under development because
of limited training data. Recent works attempt to transfer vision-language
pre-training models to 3D vision. PointCLIP converts point cloud data to
multi-view depth maps, adopting CLIP for shape classification. However, its
performance is restricted by the domain gap between rendered depth maps and
images, as well as the diversity of depth distributions. To address this issue,
we propose CLIP2Point, an image-depth pre-training method by contrastive
learning to transfer CLIP to the 3D domain, and adapt it to point cloud
classification. We introduce a new depth rendering setting that forms a better
visual effect, and then render 52,460 pairs of images and depth maps from
ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines
cross-modality learning to enforce the depth features for capturing expressive
visual and textual features and intra-modality learning to enhance the
invariance of depth aggregation. Additionally, we propose a novel Dual-Path
Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for
few-shot learning. The dual-path structure allows the joint use of CLIP and
CLIP2Point, and the simplified adapter can well fit few-shot tasks without
post-search. Experimental results show that CLIP2Point is effective in
transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP
and other self-supervised 3D networks, achieving state-of-the-art results on
zero-shot and few-shot classification.
| [
{
"created": "Mon, 3 Oct 2022 16:13:14 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Nov 2022 12:08:19 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Aug 2023 03:24:13 GMT",
"version": "v3"
}
] | 2023-08-24 | [
[
"Huang",
"Tianyu",
""
],
[
"Dong",
"Bowen",
""
],
[
"Yang",
"Yunhan",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Lau",
"Rynson W. H.",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Zuo",
"Wangmeng",
""
]
] | Pre-training across 3D vision and language remains under development because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps, adopting CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and images, as well as the diversity of depth distributions. To address this issue, we propose CLIP2Point, an image-depth pre-training method by contrastive learning to transfer CLIP to the 3D domain, and adapt it to point cloud classification. We introduce a new depth rendering setting that forms a better visual effect, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines cross-modality learning to enforce the depth features for capturing expressive visual and textual features and intra-modality learning to enhance the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapter can well fit few-shot tasks without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification. |
2304.07162 | Thomas Neele | Thomas Neele, Jaco van de Pol | Operations on Fixpoint Equation Systems | null | Logical Methods in Computer Science, Volume 20, Issue 3 (July 10,
2024) lmcs:11199 | 10.46298/lmcs-20(3:5)2024 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | We study operations on fixpoint equation systems (FES) over arbitrary
complete lattices. We investigate under which conditions these operations, such
as substituting variables by their definition, and swapping the ordering of
equations, preserve the solution of a FES. We provide rigorous,
computer-checked proofs. Along the way, we list a number of known and new
identities and inequalities on extremal fixpoints in complete lattices.
| [
{
"created": "Fri, 14 Apr 2023 14:35:18 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Feb 2024 14:23:48 GMT",
"version": "v2"
},
{
"created": "Tue, 14 May 2024 08:49:22 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Jul 2024 11:24:33 GMT",
"version": "v4"
}
] | 2024-08-07 | [
[
"Neele",
"Thomas",
""
],
[
"van de Pol",
"Jaco",
""
]
] | We study operations on fixpoint equation systems (FES) over arbitrary complete lattices. We investigate under which conditions these operations, such as substituting variables by their definition, and swapping the ordering of equations, preserve the solution of a FES. We provide rigorous, computer-checked proofs. Along the way, we list a number of known and new identities and inequalities on extremal fixpoints in complete lattices. |
2201.04678 | Weidong Luo | Weidong Luo | Polynomial Turing Compressions for Some Graph Problems Parameterized by
Modular-Width | 18 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A polynomial Turing compression (PTC) for a parameterized problem $L$ is a
polynomial time Turing machine that has access to an oracle for a problem $L'$
such that a polynomial in the input parameter bounds each query. Meanwhile, a
polynomial (many-one) compression (PC) can be regarded as a restricted variant
of PTC where the machine can query the oracle exactly once and must output the
same answer as the oracle. Bodlaender et al. (ICALP 2008) and Fortnow and
Santhanam (STOC 2008) initiated an impressive hardness theory for PC under the
assumption coNP $\not\subseteq$ NP/poly. Since PTC is a generalization of PC,
we define $\mathcal{C}$ as the set of all problems that have PTCs but have no
PCs under the assumption coNP $\not\subseteq$ NP/poly. Based on the hardness
theory for PC, Fernau et al. (STACS 2009) found the first problem Leaf
Out-tree($k$) in $\mathcal{C}$. However, very little is known about
$\mathcal{C}$, as only a dozen problems were shown to belong to the complexity
class in the last ten years. Several problems are open, for example, whether
CNF-SAT($n$) and $k$-path are in $\mathcal{C}$, and novel ideas are required to
better understand the fundamental differences between PTCs and PCs.
In this paper, we enrich our knowledge about $\mathcal{C}$ by showing that
several problems parameterized by modular-width ($mw$) belong to $\mathcal{C}$.
More specifically, exploiting the properties of the well-studied structural
graph parameter $mw$, we demonstrate 17 problems parameterized by $mw$ are in
$\mathcal{C}$, such as Chromatic Number($mw$) and Hamiltonian Cycle($mw$). In
addition, we develop a general recipe to prove the existence of PTCs for a
large class of problems, including our 17 problems.
| [
{
"created": "Wed, 12 Jan 2022 20:12:41 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Apr 2022 16:59:42 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Dec 2023 04:17:26 GMT",
"version": "v3"
}
] | 2023-12-15 | [
[
"Luo",
"Weidong",
""
]
] | A polynomial Turing compression (PTC) for a parameterized problem $L$ is a polynomial time Turing machine that has access to an oracle for a problem $L'$ such that a polynomial in the input parameter bounds each query. Meanwhile, a polynomial (many-one) compression (PC) can be regarded as a restricted variant of PTC where the machine can query the oracle exactly once and must output the same answer as the oracle. Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam (STOC 2008) initiated an impressive hardness theory for PC under the assumption coNP $\not\subseteq$ NP/poly. Since PTC is a generalization of PC, we define $\mathcal{C}$ as the set of all problems that have PTCs but have no PCs under the assumption coNP $\not\subseteq$ NP/poly. Based on the hardness theory for PC, Fernau et al. (STACS 2009) found the first problem Leaf Out-tree($k$) in $\mathcal{C}$. However, very little is known about $\mathcal{C}$, as only a dozen problems were shown to belong to the complexity class in the last ten years. Several problems are open, for example, whether CNF-SAT($n$) and $k$-path are in $\mathcal{C}$, and novel ideas are required to better understand the fundamental differences between PTCs and PCs. In this paper, we enrich our knowledge about $\mathcal{C}$ by showing that several problems parameterized by modular-width ($mw$) belong to $\mathcal{C}$. More specifically, exploiting the properties of the well-studied structural graph parameter $mw$, we demonstrate 17 problems parameterized by $mw$ are in $\mathcal{C}$, such as Chromatic Number($mw$) and Hamiltonian Cycle($mw$). In addition, we develop a general recipe to prove the existence of PTCs for a large class of problems, including our 17 problems. |
0908.2083 | Janusz Brzozowski | J. Brzozowski, G. Jir\'askov\'a, B. Li | Quotient complexity of ideal languages | 24 pages, 9 .eepic figures, 2 tables, use llncs.cls | null | null | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the state complexity of regular operations in the class of ideal
languages. A language L over an alphabet Sigma is a right (left) ideal if it
satisfies L = L Sigma* (L = Sigma* L). It is a two-sided ideal if L = Sigma* L
Sigma *, and an all-sided ideal if it is the shuffle of Sigma* with L. We
prefer the term "quotient complexity" instead of "state complexity", and we use
derivatives to calculate upper bounds on quotient complexity, whenever it is
convenient. We find tight upper bounds on the quotient complexity of each type
of ideal language in terms of the complexity of an arbitrary generator and of
its minimal generator, the complexity of the minimal generator, and also on the
operations union, intersection, set difference, symmetric difference,
concatenation, star and reversal of ideal languages.
| [
{
"created": "Fri, 14 Aug 2009 15:24:15 GMT",
"version": "v1"
}
] | 2009-08-17 | [
[
"Brzozowski",
"J.",
""
],
[
"Jirásková",
"G.",
""
],
[
"Li",
"B.",
""
]
] | We study the state complexity of regular operations in the class of ideal languages. A language L over an alphabet Sigma is a right (left) ideal if it satisfies L = L Sigma* (L = Sigma* L). It is a two-sided ideal if L = Sigma* L Sigma *, and an all-sided ideal if it is the shuffle of Sigma* with L. We prefer the term "quotient complexity" instead of "state complexity", and we use derivatives to calculate upper bounds on quotient complexity, whenever it is convenient. We find tight upper bounds on the quotient complexity of each type of ideal language in terms of the complexity of an arbitrary generator and of its minimal generator, the complexity of the minimal generator, and also on the operations union, intersection, set difference, symmetric difference, concatenation, star and reversal of ideal languages. |
2108.11299 | Tobias Lorenz | Tobias Lorenz, Marta Kwiatkowska, Mario Fritz | Certifiers Make Neural Networks Vulnerable to Availability Attacks | Published at 16th ACM Workshop on Artificial Intelligence and
Security (AISec '23) | null | 10.1145/3605764.3623917 | null | cs.LG cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To achieve reliable, robust, and safe AI systems, it is vital to implement
fallback strategies when AI predictions cannot be trusted. Certifiers for
neural networks are a reliable way to check the robustness of these
predictions. They guarantee for some predictions that a certain class of
manipulations or attacks could not have changed the outcome. For the remaining
predictions without guarantees, the method abstains from making a prediction,
and a fallback strategy needs to be invoked, which typically incurs additional
costs, can require a human operator, or even fail to provide any prediction.
While this is a key concept towards safe and secure AI, we show for the first
time that this approach comes with its own security risks, as such fallback
strategies can be deliberately triggered by an adversary. In addition to
naturally occurring abstains for some inputs and perturbations, the adversary
can use training-time attacks to deliberately trigger the fallback with high
probability. This transfers the main system load onto the fallback, reducing
the overall system's integrity and/or availability. We design two novel
availability attacks, which show the practical relevance of these threats. For
example, adding 1% poisoned data during training is sufficient to trigger the
fallback and hence make the model unavailable for up to 100% of all inputs by
inserting the trigger. Our extensive experiments across multiple datasets,
model architectures, and certifiers demonstrate the broad applicability of
these attacks. An initial investigation into potential defenses shows that
current approaches are insufficient to mitigate the issue, highlighting the
need for new, specific solutions.
| [
{
"created": "Wed, 25 Aug 2021 15:49:10 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Mar 2022 09:42:15 GMT",
"version": "v2"
},
{
"created": "Fri, 13 May 2022 12:11:56 GMT",
"version": "v3"
},
{
"created": "Sun, 2 Oct 2022 16:58:47 GMT",
"version": "v4"
},
{
"created": "Tue, 3 Oct 2023 13:08:50 GMT",
"version": "v5"
}
] | 2023-10-04 | [
[
"Lorenz",
"Tobias",
""
],
[
"Kwiatkowska",
"Marta",
""
],
[
"Fritz",
"Mario",
""
]
] | To achieve reliable, robust, and safe AI systems, it is vital to implement fallback strategies when AI predictions cannot be trusted. Certifiers for neural networks are a reliable way to check the robustness of these predictions. They guarantee for some predictions that a certain class of manipulations or attacks could not have changed the outcome. For the remaining predictions without guarantees, the method abstains from making a prediction, and a fallback strategy needs to be invoked, which typically incurs additional costs, can require a human operator, or even fail to provide any prediction. While this is a key concept towards safe and secure AI, we show for the first time that this approach comes with its own security risks, as such fallback strategies can be deliberately triggered by an adversary. In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback with high probability. This transfers the main system load onto the fallback, reducing the overall system's integrity and/or availability. We design two novel availability attacks, which show the practical relevance of these threats. For example, adding 1% poisoned data during training is sufficient to trigger the fallback and hence make the model unavailable for up to 100% of all inputs by inserting the trigger. Our extensive experiments across multiple datasets, model architectures, and certifiers demonstrate the broad applicability of these attacks. An initial investigation into potential defenses shows that current approaches are insufficient to mitigate the issue, highlighting the need for new, specific solutions. |
2004.12091 | Onur G\"unl\"u Dr.-Ing. | Onur G\"unl\"u, Peter Trifonov, Muah Kim, Rafael F. Schaefer, and
Vladimir Sidorenko | Randomized Nested Polar Subcode Constructions for Privacy, Secrecy, and
Storage | Shorter version to appear in 2020 IEEE International Symposium on
Information Theory and Applications. Decoding complexity results are added | null | null | null | cs.IT cs.CR cs.MM eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider polar subcodes (PSCs), which are polar codes (PCs) with
dynamically-frozen symbols, to increase the minimum distance as compared to
corresponding PCs. A randomized nested PSC construction with a low-rate PSC and
a high-rate PC, is proposed for list and sequential successive cancellation
decoders. This code construction aims to perform lossy compression with side
information. Nested PSCs are used in the key agreement problem with physical
identifiers. Gains in terms of the secret-key vs. storage rate ratio as
compared to nested PCs with the same list size are illustrated to show that
nested PSCs significantly improve on nested PCs. The performance of the nested
PSCs is shown to improve with larger list sizes, which is not the case for
nested PCs considered.
| [
{
"created": "Sat, 25 Apr 2020 08:57:17 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jun 2020 12:57:02 GMT",
"version": "v2"
},
{
"created": "Wed, 29 Jul 2020 10:26:19 GMT",
"version": "v3"
}
] | 2020-07-30 | [
[
"Günlü",
"Onur",
""
],
[
"Trifonov",
"Peter",
""
],
[
"Kim",
"Muah",
""
],
[
"Schaefer",
"Rafael F.",
""
],
[
"Sidorenko",
"Vladimir",
""
]
] | We consider polar subcodes (PSCs), which are polar codes (PCs) with dynamically-frozen symbols, to increase the minimum distance as compared to corresponding PCs. A randomized nested PSC construction with a low-rate PSC and a high-rate PC, is proposed for list and sequential successive cancellation decoders. This code construction aims to perform lossy compression with side information. Nested PSCs are used in the key agreement problem with physical identifiers. Gains in terms of the secret-key vs. storage rate ratio as compared to nested PCs with the same list size are illustrated to show that nested PSCs significantly improve on nested PCs. The performance of the nested PSCs is shown to improve with larger list sizes, which is not the case for nested PCs considered. |
2012.07959 | Li-Yi Wei | Peihan Tu, Li-Yi Wei, Koji Yatani, Takeo Igarashi, Matthias Zwicker | Continuous Curve Textures | null | null | 10.1145/3414685.3417780 | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | Repetitive patterns are ubiquitous in natural and human-made objects, and can
be created with a variety of tools and methods. Manual authoring provides
unmatched degree of freedom and control, but can require significant artistic
expertise and manual labor. Computational methods can automate parts of the
manual creation process, but are mainly tailored for discrete pixels or
elements instead of more general continuous structures. We propose an
example-based method to synthesize continuous curve patterns from exemplars.
Our main idea is to extend prior sample-based discrete element synthesis
methods to consider not only sample positions (geometry) but also their
connections (topology). Since continuous structures can exhibit higher
complexity than discrete elements, we also propose robust, hierarchical
synthesis to enhance output quality. Our algorithm can generate a variety of
continuous curve patterns fully automatically. For further quality improvement
and customization, we also present an autocomplete user interface to facilitate
interactive creation and iterative editing. We evaluate our methods and
interface via different patterns, ablation studies, and comparisons with
alternative methods.
| [
{
"created": "Mon, 14 Dec 2020 21:51:17 GMT",
"version": "v1"
}
] | 2020-12-16 | [
[
"Tu",
"Peihan",
""
],
[
"Wei",
"Li-Yi",
""
],
[
"Yatani",
"Koji",
""
],
[
"Igarashi",
"Takeo",
""
],
[
"Zwicker",
"Matthias",
""
]
] | Repetitive patterns are ubiquitous in natural and human-made objects, and can be created with a variety of tools and methods. Manual authoring provides unmatched degree of freedom and control, but can require significant artistic expertise and manual labor. Computational methods can automate parts of the manual creation process, but are mainly tailored for discrete pixels or elements instead of more general continuous structures. We propose an example-based method to synthesize continuous curve patterns from exemplars. Our main idea is to extend prior sample-based discrete element synthesis methods to consider not only sample positions (geometry) but also their connections (topology). Since continuous structures can exhibit higher complexity than discrete elements, we also propose robust, hierarchical synthesis to enhance output quality. Our algorithm can generate a variety of continuous curve patterns fully automatically. For further quality improvement and customization, we also present an autocomplete user interface to facilitate interactive creation and iterative editing. We evaluate our methods and interface via different patterns, ablation studies, and comparisons with alternative methods. |
2212.14527 | Sikun Yang | Sikun Yang, Hongyuan Zha | Estimating Latent Population Flows from Aggregated Data via Inversing
Multi-Marginal Optimal Transport | null | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | We study the problem of estimating latent population flows from aggregated
count data. This problem arises when individual trajectories are not available
due to privacy issues or measurement fidelity. Instead, the aggregated
observations are measured over discrete-time points, for estimating the
population flows among states. Most related studies tackle the problems by
learning the transition parameters of a time-homogeneous Markov process.
Nonetheless, most real-world population flows can be influenced by various
uncertainties such as traffic jam and weather conditions. Thus, in many cases,
a time-homogeneous Markov model is a poor approximation of the much more
complex population flows. To circumvent this difficulty, we resort to a
multi-marginal optimal transport (MOT) formulation that can naturally represent
aggregated observations with constrained marginals, and encode time-dependent
transition matrices by the cost functions. In particular, we propose to
estimate the transition flows from aggregated data by learning the cost
functions of the MOT framework, which enables us to capture time-varying
dynamic patterns. The experiments demonstrate the improved accuracy of the
proposed algorithms over related methods in estimating several real-world
transition flows.
| [
{
"created": "Fri, 30 Dec 2022 03:03:23 GMT",
"version": "v1"
}
] | 2023-01-02 | [
[
"Yang",
"Sikun",
""
],
[
"Zha",
"Hongyuan",
""
]
] | We study the problem of estimating latent population flows from aggregated count data. This problem arises when individual trajectories are not available due to privacy issues or measurement fidelity. Instead, the aggregated observations are measured over discrete-time points, for estimating the population flows among states. Most related studies tackle the problems by learning the transition parameters of a time-homogeneous Markov process. Nonetheless, most real-world population flows can be influenced by various uncertainties such as traffic jam and weather conditions. Thus, in many cases, a time-homogeneous Markov model is a poor approximation of the much more complex population flows. To circumvent this difficulty, we resort to a multi-marginal optimal transport (MOT) formulation that can naturally represent aggregated observations with constrained marginals, and encode time-dependent transition matrices by the cost functions. In particular, we propose to estimate the transition flows from aggregated data by learning the cost functions of the MOT framework, which enables us to capture time-varying dynamic patterns. The experiments demonstrate the improved accuracy of the proposed algorithms over related methods in estimating several real-world transition flows. |
1612.04164 | Stefan Wagner | Sebastian V\"ost and Stefan Wagner | Keeping Continuous Deliveries Safe | 4 pages, 3 figures | ICSE-C '17 Proceedings of the 39th International Conference on
Software Engineering Companion, pages 259-261. IEEE, 2017 | 10.1109/ICSE-C.2017.135 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Allowing swift release cycles, Continuous Delivery has become popular in
application software development and is starting to be applied in
safety-critical domains such as the automotive industry. These domains require
thorough analysis regarding safety constraints, which can be achieved by formal
verification and the execution of safety tests resulting from a safety analysis
on the product. With continuous delivery in place, such tests need to be
executed with every build to ensure the latest software still fulfills all
safety requirements. Even more though, the safety analysis has to be updated
with every change to ensure the safety test suite is still up-to-date. We thus
propose that a safety analysis should be treated no differently from other
deliverables such as source-code and dependencies, formulate guidelines on how
to achieve this and point out areas where future research is needed.
| [
{
"created": "Tue, 13 Dec 2016 13:38:24 GMT",
"version": "v1"
}
] | 2017-11-15 | [
[
"Vöst",
"Sebastian",
""
],
[
"Wagner",
"Stefan",
""
]
] | Allowing swift release cycles, Continuous Delivery has become popular in application software development and is starting to be applied in safety-critical domains such as the automotive industry. These domains require thorough analysis regarding safety constraints, which can be achieved by formal verification and the execution of safety tests resulting from a safety analysis on the product. With continuous delivery in place, such tests need to be executed with every build to ensure the latest software still fulfills all safety requirements. Even more though, the safety analysis has to be updated with every change to ensure the safety test suite is still up-to-date. We thus propose that a safety analysis should be treated no differently from other deliverables such as source-code and dependencies, formulate guidelines on how to achieve this and point out areas where future research is needed. |
2404.03099 | Leonardo Ferreira Guilhoto | Leonardo Ferreira Guilhoto, Paris Perdikaris | Composite Bayesian Optimization In Function Spaces Using NEON -- Neural
Epistemic Operator Networks | null | null | null | null | cs.LG cs.AI cs.CE cs.IT math.IT stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Operator learning is a rising field of scientific computing where inputs or
outputs of a machine learning model are functions defined in
infinite-dimensional spaces. In this paper, we introduce NEON (Neural Epistemic
Operator Networks), an architecture for generating predictions with uncertainty
using a single operator network backbone, which presents orders of magnitude
less trainable parameters than deep ensembles of comparable performance. We
showcase the utility of this method for sequential decision-making by examining
the problem of composite Bayesian Optimization (BO), where we aim to optimize a
function $f=g\circ h$, where $h:X\to C(\mathcal{Y},\mathbb{R}^{d_s})$ is an
unknown map which outputs elements of a function space, and $g:
C(\mathcal{Y},\mathbb{R}^{d_s})\to \mathbb{R}$ is a known and cheap-to-compute
functional. By comparing our approach to other state-of-the-art methods on toy
and real world scenarios, we demonstrate that NEON achieves state-of-the-art
performance while requiring orders of magnitude less trainable parameters.
| [
{
"created": "Wed, 3 Apr 2024 22:42:37 GMT",
"version": "v1"
}
] | 2024-04-05 | [
[
"Guilhoto",
"Leonardo Ferreira",
""
],
[
"Perdikaris",
"Paris",
""
]
] | Operator learning is a rising field of scientific computing where inputs or outputs of a machine learning model are functions defined in infinite-dimensional spaces. In this paper, we introduce NEON (Neural Epistemic Operator Networks), an architecture for generating predictions with uncertainty using a single operator network backbone, which presents orders of magnitude less trainable parameters than deep ensembles of comparable performance. We showcase the utility of this method for sequential decision-making by examining the problem of composite Bayesian Optimization (BO), where we aim to optimize a function $f=g\circ h$, where $h:X\to C(\mathcal{Y},\mathbb{R}^{d_s})$ is an unknown map which outputs elements of a function space, and $g: C(\mathcal{Y},\mathbb{R}^{d_s})\to \mathbb{R}$ is a known and cheap-to-compute functional. By comparing our approach to other state-of-the-art methods on toy and real world scenarios, we demonstrate that NEON achieves state-of-the-art performance while requiring orders of magnitude less trainable parameters. |
1802.10233 | Daniel Lemire | Edmon Begoli, Jes\'us Camacho Rodr\'iguez, Julian Hyde, Michael J.
Mior, Daniel Lemire | Apache Calcite: A Foundational Framework for Optimized Query Processing
Over Heterogeneous Data Sources | SIGMOD'18 | null | 10.1145/3183713.3190662 | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Apache Calcite is a foundational software framework that provides query
processing, optimization, and query language support to many popular
open-source data processing systems such as Apache Hive, Apache Storm, Apache
Flink, Druid, and MapD. Calcite's architecture consists of a modular and
extensible query optimizer with hundreds of built-in optimization rules, a
query processor capable of processing a variety of query languages, an adapter
architecture designed for extensibility, and support for heterogeneous data
models and stores (relational, semi-structured, streaming, and geospatial).
This flexible, embeddable, and extensible architecture is what makes Calcite an
attractive choice for adoption in big-data frameworks. It is an active project
that continues to introduce support for the new types of data sources, query
languages, and approaches to query processing and optimization.
| [
{
"created": "Wed, 28 Feb 2018 02:10:36 GMT",
"version": "v1"
}
] | 2020-10-09 | [
[
"Begoli",
"Edmon",
""
],
[
"Rodríguez",
"Jesús Camacho",
""
],
[
"Hyde",
"Julian",
""
],
[
"Mior",
"Michael J.",
""
],
[
"Lemire",
"Daniel",
""
]
] | Apache Calcite is a foundational software framework that provides query processing, optimization, and query language support to many popular open-source data processing systems such as Apache Hive, Apache Storm, Apache Flink, Druid, and MapD. Calcite's architecture consists of a modular and extensible query optimizer with hundreds of built-in optimization rules, a query processor capable of processing a variety of query languages, an adapter architecture designed for extensibility, and support for heterogeneous data models and stores (relational, semi-structured, streaming, and geospatial). This flexible, embeddable, and extensible architecture is what makes Calcite an attractive choice for adoption in big-data frameworks. It is an active project that continues to introduce support for the new types of data sources, query languages, and approaches to query processing and optimization. |
2006.05814 | Paul Mireault | Paul Mireault | Implementation Strategies for Multidimensional Spreadsheets | 12 Pages, 18 Colour Figures. arXiv admin note: text overlap with
arXiv:1801.09777 | Proceedings of the EuSpRIG 2019 Conference "Spreadsheet Risk
Management", Browns, Covent Garden, London, pp103-114, ISBN:
978-1-905404-56-8 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Seasoned Excel developers were invited to participate in a challenge to
implement a spreadsheet with multi-dimensional variables. We analyzed their
spreadsheets to see the different implementation strategies employed. We identified
two strategies: most participants used a projection of three or
four-dimensional variables on the two-dimensional plane used by Excel. A few
participants used a database approach where the multi-dimensional variables are
presented in the form of a dataset table with the appropriate primary key. This
approach leads to simpler formulas.
| [
{
"created": "Thu, 4 Jun 2020 21:12:06 GMT",
"version": "v1"
}
] | 2020-06-11 | [
[
"Mireault",
"Paul",
""
]
] | Seasoned Excel developers were invited to participate in a challenge to implement a spreadsheet with multi-dimensional variables. We analyzed their spreadsheets to see the different implementation strategies employed. We identified two strategies: most participants used a projection of three or four-dimensional variables on the two-dimensional plane used by Excel. A few participants used a database approach where the multi-dimensional variables are presented in the form of a dataset table with the appropriate primary key. This approach leads to simpler formulas. |
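The two strategies can be contrasted in a small sketch (hypothetical data; the challenge used Excel, this is only the analogous structure in Python):

```python
# Projection approach: one 2-D grid per month, as on a worksheet.
projected = {
    "Jan": [[10, 20], [30, 40]],   # rows = products p1, p2; cols = regions r1, r2
    "Feb": [[11, 21], [31, 41]],
}

# Database approach: one row per combination, keyed by the primary key
# (product, region, month).
table = {
    ("p1", "r1", "Jan"): 10, ("p1", "r2", "Jan"): 20,
    ("p2", "r1", "Jan"): 30, ("p2", "r2", "Jan"): 40,
    ("p1", "r1", "Feb"): 11, ("p1", "r2", "Feb"): 21,
    ("p2", "r1", "Feb"): 31, ("p2", "r2", "Feb"): 41,
}

# Over the database layout, a "formula" is one uniform expression ...
total_p1 = sum(v for (p, r, m), v in table.items() if p == "p1")
# ... while the projected layout needs layout-aware indexing per sheet.
total_p1_projected = sum(grid[0][c] for grid in projected.values() for c in range(2))
```

Both totals agree, but only the database version stays the same formula when a dimension is added.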
2004.00307 | Filipe Assun\c{c}\~ao | Filipe Assun\c{c}\~ao, Nuno Louren\c{c}o, Bernardete Ribeiro, and
Penousal Machado | Evolution of Scikit-Learn Pipelines with Dynamic Structured Grammatical
Evolution | EvoApps 2020 | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deployment of Machine Learning (ML) models is a difficult and
time-consuming job that comprises a series of sequential and correlated tasks
that go from the data pre-processing, and the design and extraction of
features, to the choice of the ML algorithm and its parameterisation. The task
is even more challenging considering that the design of features is in many
cases problem specific, and thus requires domain expertise. To overcome these
limitations, Automated Machine Learning (AutoML) methods seek to automate, with
little or no human intervention, the design of pipelines, i.e., automate the
selection of the sequence of methods that have to be applied to the raw data.
These methods have the potential to enable non-expert users to use ML, and
provide expert users with solutions that they would unlikely consider. In
particular, this paper describes AutoML-DSGE - a novel grammar-based framework
that adapts Dynamic Structured Grammatical Evolution (DSGE) to the evolution of
Scikit-Learn classification pipelines. The experimental results include
comparing AutoML-DSGE to another grammar-based AutoML framework, Resilient
Classification Pipeline Evolution (RECIPE), and show that the average
performance of the classification pipelines generated by AutoML-DSGE is always
superior to the average performance of RECIPE; the differences are
statistically significant in 3 out of the 10 used datasets.
| [
{
"created": "Wed, 1 Apr 2020 09:31:34 GMT",
"version": "v1"
}
] | 2020-04-02 | [
[
"Assunção",
"Filipe",
""
],
[
"Lourenço",
"Nuno",
""
],
[
"Ribeiro",
"Bernardete",
""
],
[
"Machado",
"Penousal",
""
]
] | The deployment of Machine Learning (ML) models is a difficult and time-consuming job that comprises a series of sequential and correlated tasks that go from the data pre-processing, and the design and extraction of features, to the choice of the ML algorithm and its parameterisation. The task is even more challenging considering that the design of features is in many cases problem specific, and thus requires domain expertise. To overcome these limitations, Automated Machine Learning (AutoML) methods seek to automate, with little or no human intervention, the design of pipelines, i.e., automate the selection of the sequence of methods that have to be applied to the raw data. These methods have the potential to enable non-expert users to use ML, and provide expert users with solutions that they would unlikely consider. In particular, this paper describes AutoML-DSGE - a novel grammar-based framework that adapts Dynamic Structured Grammatical Evolution (DSGE) to the evolution of Scikit-Learn classification pipelines. The experimental results include comparing AutoML-DSGE to another grammar-based AutoML framework, Resilient Classification Pipeline Evolution (RECIPE), and show that the average performance of the classification pipelines generated by AutoML-DSGE is always superior to the average performance of RECIPE; the differences are statistically significant in 3 out of the 10 used datasets. |
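A toy illustration of grammar-based pipeline derivation (a simplified sketch, not the AutoML-DSGE grammar or genotype encoding; DSGE proper keeps one gene list per non-terminal, flattened here for brevity):

```python
# A toy context-free grammar over scikit-learn-style component names.
GRAMMAR = {
    "<pipeline>": [["<preproc>", "<classifier>"], ["<classifier>"]],
    "<preproc>": [["StandardScaler"], ["PCA"]],
    "<classifier>": [["LogisticRegression"], ["RandomForest"], ["KNN"]],
}

def derive(symbol, genotype, pos=0):
    """Map a list of integers to a derivation: one gene per expansion
    decision, consumed left to right."""
    if symbol not in GRAMMAR:
        return [symbol], pos               # terminal: emit as-is
    options = GRAMMAR[symbol]
    chosen = options[genotype[pos] % len(options)]
    pos += 1
    out = []
    for s in chosen:
        expanded, pos = derive(s, genotype, pos)
        out.extend(expanded)
    return out, pos

# Genotype [0, 1, 2]: <pipeline> -> <preproc><classifier>,
# <preproc> -> PCA, <classifier> -> KNN.
pipeline, _ = derive("<pipeline>", [0, 1, 2])
```

Evolution then mutates and recombines the integer genotypes while fitness is the cross-validated score of the decoded pipeline.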
2407.03059 | Mariia Vladimirova | Mariia Vladimirova, Federico Pavone, Eustache Diemert | FairJob: A Real-World Dataset for Fairness in Online Systems | 24 pages, 15 figures | null | null | null | cs.LG cs.AI cs.CY stat.ML | http://creativecommons.org/licenses/by/4.0/ | We introduce a fairness-aware dataset for job recommendation in advertising,
designed to foster research in algorithmic fairness within real-world
scenarios. It was collected and prepared to comply with privacy standards and
business confidentiality. An additional challenge is the lack of access to
protected user attributes such as gender, for which we propose a solution to
obtain a proxy estimate. Despite being anonymized and including a proxy for a
sensitive attribute, our dataset preserves predictive power and maintains a
realistic and challenging benchmark. This dataset addresses a significant gap
in the availability of fairness-focused resources for high-impact domains like
advertising -- where the impact is gaining or losing access to valuable
employment opportunities, and where balancing fairness and utility is a common
industrial challenge. We also explore various stages in the advertising process
where unfairness can occur and introduce a method to compute a fair utility
metric for job recommendations in online systems from a biased
dataset. Experimental evaluations of bias mitigation techniques on the released
dataset demonstrate potential improvements in fairness and the associated
trade-offs with utility.
| [
{
"created": "Wed, 3 Jul 2024 12:30:39 GMT",
"version": "v1"
}
] | 2024-07-04 | [
[
"Vladimirova",
"Mariia",
""
],
[
"Pavone",
"Federico",
""
],
[
"Diemert",
"Eustache",
""
]
] | We introduce a fairness-aware dataset for job recommendation in advertising, designed to foster research in algorithmic fairness within real-world scenarios. It was collected and prepared to comply with privacy standards and business confidentiality. An additional challenge is the lack of access to protected user attributes such as gender, for which we propose a solution to obtain a proxy estimate. Despite being anonymized and including a proxy for a sensitive attribute, our dataset preserves predictive power and maintains a realistic and challenging benchmark. This dataset addresses a significant gap in the availability of fairness-focused resources for high-impact domains like advertising -- where the impact is gaining or losing access to valuable employment opportunities, and where balancing fairness and utility is a common industrial challenge. We also explore various stages in the advertising process where unfairness can occur and introduce a method to compute a fair utility metric for job recommendations in online systems from a biased dataset. Experimental evaluations of bias mitigation techniques on the released dataset demonstrate potential improvements in fairness and the associated trade-offs with utility. |
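As a sketch of how a proxy sensitive attribute can be used to audit predictions, here is a standard demographic-parity gap (not the paper's fair utility metric; all data is synthetic):

```python
import numpy as np

def parity_gap(scores, proxy_group, threshold=0.5):
    """Absolute gap in positive-prediction rate between the two values
    of a (proxy) sensitive attribute."""
    pred = scores >= threshold
    return abs(pred[proxy_group == 0].mean() - pred[proxy_group == 1].mean())

# Synthetic scores and an independent proxy attribute: the gap should be small.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
group = (rng.uniform(size=1000) < 0.5).astype(int)
gap = parity_gap(scores, group)
```

A bias-mitigation method would aim to shrink this gap while losing as little predictive utility as possible.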
2001.10721 | Shunchuan Yang | Yu Cheng, Guangzhi Chen, Xiang-Hua Wang, and Shunchuan Yang | Investigation of Numerical Dispersion with Time Step of The FDTD
Methods: Avoiding Erroneous Conclusions | null | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is widely thought that small time steps lead to small numerical errors in
the finite-difference time-domain (FDTD) simulations. In this paper, we
investigate how the time step affects the numerical dispersion of two FDTD
methods, the FDTD(2,2) method and the FDTD(2,4) method. Through rigorous
analytical and numerical analysis, it is found that small time steps in the
FDTD methods do not always yield small numerical errors. Our findings reveal
that these two FDTD methods present different behaviors with respect to time
steps: (1) for the FDTD(2,2) method, smaller time steps limited by the
Courant-Friedrichs-Lewy (CFL) condition increase numerical dispersion and lead
to larger simulation errors; (2) for the FDTD(2,4) method, as time step
increases, numerical dispersion errors first decrease and then increase. Our
findings are also comprehensively validated from one- to three-dimensional
cases through several numerical examples including wave propagation, resonant
frequencies of cavities and a practical electromagnetic compatibility (EMC)
problem.
| [
{
"created": "Wed, 29 Jan 2020 08:28:46 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Feb 2020 16:46:30 GMT",
"version": "v2"
}
] | 2020-02-21 | [
[
"Cheng",
"Yu",
""
],
[
"Chen",
"Guangzhi",
""
],
[
"Wang",
"Xiang-Hua",
""
],
[
"Yang",
"Shunchuan",
""
]
] | It is widely thought that small time steps lead to small numerical errors in the finite-difference time-domain (FDTD) simulations. In this paper, we investigate how the time step affects the numerical dispersion of two FDTD methods, the FDTD(2,2) method and the FDTD(2,4) method. Through rigorous analytical and numerical analysis, it is found that small time steps in the FDTD methods do not always yield small numerical errors. Our findings reveal that these two FDTD methods present different behaviors with respect to time steps: (1) for the FDTD(2,2) method, smaller time steps limited by the Courant-Friedrichs-Lewy (CFL) condition increase numerical dispersion and lead to larger simulation errors; (2) for the FDTD(2,4) method, as time step increases, numerical dispersion errors first decrease and then increase. Our findings are also comprehensively validated from one- to three-dimensional cases through several numerical examples including wave propagation, resonant frequencies of cavities and a practical electromagnetic compatibility (EMC) problem. |
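Claim (1) for the FDTD(2,2) method can be illustrated with its well-known 1-D dispersion relation (a standard result, not code from the paper; the grid values below are made up):

```python
import numpy as np

def numerical_wavenumber(omega, dx, dt, c=1.0):
    """Solve the 1-D FDTD(2,2) dispersion relation
       sin(omega*dt/2)/(c*dt) = sin(k*dx/2)/dx  for the numerical k."""
    return (2.0 / dx) * np.arcsin((dx / (c * dt)) * np.sin(omega * dt / 2.0))

# Relative phase error vs. time step at a fixed dx.
dx, c = 0.1, 1.0
omega = 2.0 * np.pi
k_exact = omega / c
errors = {}
for cfl in (1.0, 0.5, 0.1):            # dt = cfl * dx / c
    dt = cfl * dx / c
    errors[cfl] = abs(numerical_wavenumber(omega, dx, dt) - k_exact) / k_exact
# In 1-D the "magic" step cfl = 1 is dispersion-free; shrinking the time
# step below it makes the dispersion error grow, as claim (1) states.
```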
2205.09453 | Roxana Danger | Roxana Danger | Differential Privacy: What is all the noise about? | 27 pages, 7 figures | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Differential Privacy (DP) is a formal definition of privacy that provides
rigorous guarantees against risks of privacy breaches during data processing.
It makes no assumptions about the knowledge or computational power of
adversaries, and provides an interpretable, quantifiable and composable
formalism. DP has been actively researched during the last 15 years, but it is
still hard to master for many Machine Learning (ML) practitioners. This paper
aims to provide an overview of the most important ideas, concepts and uses of
DP in ML, with special focus on its intersection with Federated Learning (FL).
| [
{
"created": "Thu, 19 May 2022 10:12:29 GMT",
"version": "v1"
}
] | 2022-05-20 | [
[
"Danger",
"Roxana",
""
]
] | Differential Privacy (DP) is a formal definition of privacy that provides rigorous guarantees against risks of privacy breaches during data processing. It makes no assumptions about the knowledge or computational power of adversaries, and provides an interpretable, quantifiable and composable formalism. DP has been actively researched during the last 15 years, but it is still hard to master for many Machine Learning (ML) practitioners. This paper aims to provide an overview of the most important ideas, concepts and uses of DP in ML, with special focus on its intersection with Federated Learning (FL). |
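The "noise" in the title refers to mechanisms such as the textbook Laplace mechanism for epsilon-DP, sketched below (standard material, not code from this paper; the data, sensitivity and epsilon values are made up):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for this query."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
data = [1, 0, 1, 1, 0, 1]
noisy_count = laplace_mechanism(sum(data), sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but a noisier answer; composition of several releases adds their epsilons.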
1807.04118 | Takumi Ichimura | Takumi Ichimura, Takuya Uemoto, Akira Hara | Emergence of Altruism Behavior for Multi Feeding Areas in Army Ant
Social Evolutionary System | 6 pages, 11 figures | Proc. of 2014 IEEE International Conference on Systems, Man, and
Cybernetics (IEEE SMC 2014) | 10.1109/SMC.2014.6973902 | null | cs.MA cs.ET cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Army ants exhibit altruism: an ant sacrifices its own well-being for
the benefit of other ants. Army ants build bridges using their own bodies
along the path from a food source to the nest. We developed an army ant inspired
social evolutionary system which can exhibit this altruism. The system has 2
kinds of ant agents, `Major ant' and `Minor ant', and the ants communicate with
each other via pheromones. An ant can recognize pheromones as signals from
other ants. The pheromones evaporate at a certain rate and diffuse into
neighboring spaces stochastically. If the optimal bridge is found, the path
through the bridge is the shortest route from the food to the nest. We define the
probability for an ant to leave a bridge under a low-occupancy condition
and propose a method for constructing the optimal route. In this paper, the
behaviors of ants in environments with two or more feeding areas are
observed. Some experimental results show behaviors of great interest with
respect to the altruism of ants. The results of computer simulations are
reported in this paper.
| [
{
"created": "Tue, 10 Jul 2018 04:40:38 GMT",
"version": "v1"
}
] | 2018-07-12 | [
[
"Ichimura",
"Takumi",
""
],
[
"Uemoto",
"Takuya",
""
],
[
"Hara",
"Akira",
""
]
] | Army ants exhibit altruism: an ant sacrifices its own well-being for the benefit of other ants. Army ants build bridges using their own bodies along the path from a food source to the nest. We developed an army ant inspired social evolutionary system which can exhibit this altruism. The system has 2 kinds of ant agents, `Major ant' and `Minor ant', and the ants communicate with each other via pheromones. An ant can recognize pheromones as signals from other ants. The pheromones evaporate at a certain rate and diffuse into neighboring spaces stochastically. If the optimal bridge is found, the path through the bridge is the shortest route from the food to the nest. We define the probability for an ant to leave a bridge under a low-occupancy condition and propose a method for constructing the optimal route. In this paper, the behaviors of ants in environments with two or more feeding areas are observed. Some experimental results show behaviors of great interest with respect to the altruism of ants. The results of computer simulations are reported in this paper. |
2106.11097 | Pengfei Xiong | Han Fang, Pengfei Xiong, Luhui Xu, Yu Chen | CLIP2Video: Mastering Video-Text Retrieval via Image CLIP | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the CLIP2Video network to transfer the image-language pre-training
model to video-text retrieval in an end-to-end manner. Leading approaches in
the domain of video-and-language learning try to distill the spatio-temporal
video features and multi-modal interaction between videos and languages from a
large-scale video-text dataset. In contrast, we leverage a pretrained
image-language model and simplify it into a two-stage framework that co-learns
image-text correspondence and enhances temporal relations between video frames
and video text, enabling training on comparatively small datasets.
Specifically, based on the spatial semantics captured by the Contrastive
Language-Image Pretraining (CLIP) model, our model involves a Temporal
Difference Block to capture motions at fine temporal video frames, and a
Temporal Alignment Block to re-align the tokens of video clips and phrases and
enhance the multi-modal correlation. We conduct thorough ablation studies, and
achieve state-of-the-art performance on major text-to-video and video-to-text
retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT,
MSVD and VATEX.
| [
{
"created": "Mon, 21 Jun 2021 13:30:33 GMT",
"version": "v1"
}
] | 2021-06-22 | [
[
"Fang",
"Han",
""
],
[
"Xiong",
"Pengfei",
""
],
[
"Xu",
"Luhui",
""
],
[
"Chen",
"Yu",
""
]
] | We present the CLIP2Video network to transfer the image-language pre-training model to video-text retrieval in an end-to-end manner. Leading approaches in the domain of video-and-language learning try to distill the spatio-temporal video features and multi-modal interaction between videos and languages from a large-scale video-text dataset. In contrast, we leverage a pretrained image-language model and simplify it into a two-stage framework that co-learns image-text correspondence and enhances temporal relations between video frames and video text, enabling training on comparatively small datasets. Specifically, based on the spatial semantics captured by the Contrastive Language-Image Pretraining (CLIP) model, our model involves a Temporal Difference Block to capture motions at fine temporal video frames, and a Temporal Alignment Block to re-align the tokens of video clips and phrases and enhance the multi-modal correlation. We conduct thorough ablation studies, and achieve state-of-the-art performance on major text-to-video and video-to-text retrieval benchmarks, including new records of retrieval accuracy on MSR-VTT, MSVD and VATEX. |
0906.2135 | Michael Nelson | Herbert Van de Sompel, Carl Lagoze, Michael L. Nelson, Simeon Warner,
Robert Sanderson, Pete Johnston | Adding eScience Assets to the Data Web | 10 pages, 7 figures. Proceedings of Linked Data on the Web (LDOW2009)
Workshop | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aggregations of Web resources are increasingly important in scholarship as it
adopts new methods that are data-centric, collaborative, and network-based.
The same notion of aggregations of resources is common to the mashed-up,
socially networked information environment of Web 2.0. We present a mechanism
to identify and describe aggregations of Web resources that has resulted from
the Open Archives Initiative - Object Reuse and Exchange (OAI-ORE) project. The
OAI-ORE specifications are based on the principles of the Architecture of the
World Wide Web, the Semantic Web, and the Linked Data effort. Therefore, their
incorporation into the cyberinfrastructure that supports eScholarship will
ensure the integration of the products of scholarly research into the Data Web.
| [
{
"created": "Thu, 11 Jun 2009 15:33:37 GMT",
"version": "v1"
}
] | 2009-06-12 | [
[
"Van de Sompel",
"Herbert",
""
],
[
"Lagoze",
"Carl",
""
],
[
"Nelson",
"Michael L.",
""
],
[
"Warner",
"Simeon",
""
],
[
"Sanderson",
"Robert",
""
],
[
"Johnston",
"Pete",
""
]
] | Aggregations of Web resources are increasingly important in scholarship as it adopts new methods that are data-centric, collaborative, and network-based. The same notion of aggregations of resources is common to the mashed-up, socially networked information environment of Web 2.0. We present a mechanism to identify and describe aggregations of Web resources that has resulted from the Open Archives Initiative - Object Reuse and Exchange (OAI-ORE) project. The OAI-ORE specifications are based on the principles of the Architecture of the World Wide Web, the Semantic Web, and the Linked Data effort. Therefore, their incorporation into the cyberinfrastructure that supports eScholarship will ensure the integration of the products of scholarly research into the Data Web. |
1312.6461 | Sho Sonoda | Sho Sonoda, Noboru Murata | Nonparametric Weight Initialization of Neural Networks via Integral
Representation | For ICLR2014, revised into 9 pages; revised into 12 pages (with
supplements) | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new initialization method for hidden parameters in a neural network is
proposed. Derived from the integral representation of the neural network, a
nonparametric probability distribution of hidden parameters is introduced. In
this proposal, hidden parameters are initialized by samples drawn from this
distribution, and output parameters are fitted by ordinary linear regression.
Numerical experiments show that backpropagation with the proposed initialization
converges faster than uniformly random initialization. It is also shown that
the proposed method achieves sufficient accuracy by itself without backpropagation
in some cases.
| [
{
"created": "Mon, 23 Dec 2013 03:23:04 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Dec 2013 02:54:29 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Feb 2014 20:02:05 GMT",
"version": "v3"
}
] | 2014-02-20 | [
[
"Sonoda",
"Sho",
""
],
[
"Murata",
"Noboru",
""
]
] | A new initialization method for hidden parameters in a neural network is proposed. Derived from the integral representation of the neural network, a nonparametric probability distribution of hidden parameters is introduced. In this proposal, hidden parameters are initialized by samples drawn from this distribution, and output parameters are fitted by ordinary linear regression. Numerical experiments show that backpropagation with the proposed initialization converges faster than uniformly random initialization. It is also shown that the proposed method achieves sufficient accuracy by itself without backpropagation in some cases. |
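A rough sketch of the idea — hidden parameters sampled from a data-dependent distribution, output weights fitted by least squares — under assumptions of ours: the pair-based sampler below is a stand-in for illustration, not the distribution the paper derives from the integral representation.

```python
import numpy as np

def init_and_fit(X, y, n_hidden, rng):
    """Sample hidden parameters from the data, then fit output weights
    by ordinary least squares (no backpropagation)."""
    idx = rng.integers(0, X.shape[0], size=(n_hidden, 2))
    diffs = X[idx[:, 0]] - X[idx[:, 1]]
    norms2 = (diffs ** 2).sum(axis=1, keepdims=True) + 1e-3
    W = diffs / norms2                       # slope ~ 1 / pair distance
    b = -(W * X[idx[:, 1]]).sum(axis=1)      # transition anchored at a data point
    H = np.tanh(X @ W.T + b)                 # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights: linear regression
    return W, b, beta

# Fit a 1-D regression problem without any gradient steps.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])
W, b, beta = init_and_fit(X, y, n_hidden=50, rng=rng)
pred = np.tanh(X @ W.T + b) @ beta
mse = float(np.mean((pred - y) ** 2))
```

The same weights can also serve as a warm start for backpropagation instead of a uniform random initialization.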
1811.06295 | Chen Du | Chen Du, Chunheng Wang, Yanna Wang, Cunzhao Shi, Baihua Xiao | Selective Feature Connection Mechanism: Concatenating Multi-layer CNN
Features with a Feature Selector | The paper is under consideration at Pattern Recognition Letters | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Different layers of deep convolutional neural networks (CNNs) encode
different levels of information. High-layer features always contain more semantic
information, and low-layer features contain more detailed information. However,
low-layer features suffer from background clutter and semantic ambiguity.
During visual recognition, the combination of low-layer and
high-layer features plays an important role in context modulation. Directly
combining high-layer and low-layer features may introduce background clutter and
semantic ambiguity along with the detailed
information. In this paper, we propose a general network architecture to
concatenate CNN features of different layers in a simple and effective way,
called Selective Feature Connection Mechanism (SFCM). Low-level features are
selectively linked to high-level features with a feature selector which is
generated by high-level features. The proposed connection mechanism can
effectively overcome the above-mentioned drawbacks. We demonstrate the
effectiveness, superiority, and universal applicability of this method on
multiple challenging computer vision tasks, including image classification,
scene text detection, and image-to-image translation.
| [
{
"created": "Thu, 15 Nov 2018 10:58:21 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Apr 2019 05:53:51 GMT",
"version": "v2"
},
{
"created": "Sun, 21 Apr 2019 07:44:13 GMT",
"version": "v3"
}
] | 2019-04-23 | [
[
"Du",
"Chen",
""
],
[
"Wang",
"Chunheng",
""
],
[
"Wang",
"Yanna",
""
],
[
"Shi",
"Cunzhao",
""
],
[
"Xiao",
"Baihua",
""
]
] | Different layers of deep convolutional neural networks (CNNs) encode different levels of information. High-layer features always contain more semantic information, and low-layer features contain more detailed information. However, low-layer features suffer from background clutter and semantic ambiguity. During visual recognition, the combination of low-layer and high-layer features plays an important role in context modulation. Directly combining high-layer and low-layer features may introduce background clutter and semantic ambiguity along with the detailed information. In this paper, we propose a general network architecture to concatenate CNN features of different layers in a simple and effective way, called Selective Feature Connection Mechanism (SFCM). Low-level features are selectively linked to high-level features with a feature selector which is generated by high-level features. The proposed connection mechanism can effectively overcome the above-mentioned drawbacks. We demonstrate the effectiveness, superiority, and universal applicability of this method on multiple challenging computer vision tasks, including image classification, scene text detection, and image-to-image translation. |
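A minimal sketch of the gating idea: a selector generated from high-level features gates the low-level features before concatenation. This is our simplification on dense vectors with made-up selector weights; the paper operates on convolutional feature maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sfcm_fuse(low, high, w_sel):
    """Gate low-level features with a selector generated from high-level
    features, then concatenate the gated low and the high features."""
    selector = sigmoid(high @ w_sel)   # per-channel gates in (0, 1)
    gated_low = low * selector         # suppress cluttered low-level channels
    return np.concatenate([gated_low, high], axis=-1)

rng = np.random.default_rng(0)
low = rng.normal(size=(4, 8))      # low-layer features (batch, channels)
high = rng.normal(size=(4, 8))     # high-layer features (batch, channels)
w_sel = rng.normal(size=(8, 8))    # hypothetical selector weights
fused = sfcm_fuse(low, high, w_sel)
```

Because the gates lie in (0, 1), each low-level channel can only be attenuated, never amplified, before concatenation.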
1902.09759 | Shuai Wang | Shuai Wang, Minghua Xia, and Yik-Chung Wu | Joint Communication and Motion Energy Minimization in UGV Backscatter
Communication | Proc. IEEE ICC'19, Shanghai, China, May 2019, 6 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While backscatter communication emerges as a promising solution to reduce
power consumption at IoT devices, the transmission range of backscatter
communication is short. To this end, this work integrates unmanned ground
vehicles (UGVs) into the backscatter system. With such a scheme, the UGV could
facilitate the communication by approaching various IoT devices. However,
moving also consumes energy, and a fundamental question is: what is the
right balance between spending energy on moving versus on communication? To
answer this question, this paper proposes a joint graph mobility and
backscatter communication model. With the proposed model, the total energy
minimization at UGV is formulated as a mixed integer nonlinear programming
(MINLP) problem. Furthermore, an efficient algorithm that achieves a local
optimal solution is derived, and it leads to automatic trade-off between
spending energy on moving versus on communication. Numerical results are
provided to validate the performance of the proposed algorithm.
| [
{
"created": "Tue, 26 Feb 2019 06:55:37 GMT",
"version": "v1"
}
] | 2019-02-27 | [
[
"Wang",
"Shuai",
""
],
[
"Xia",
"Minghua",
""
],
[
"Wu",
"Yik-Chung",
""
]
] | While backscatter communication emerges as a promising solution to reduce power consumption at IoT devices, the transmission range of backscatter communication is short. To this end, this work integrates unmanned ground vehicles (UGVs) into the backscatter system. With such a scheme, the UGV could facilitate the communication by approaching various IoT devices. However, moving also consumes energy, and a fundamental question is: what is the right balance between spending energy on moving versus on communication? To answer this question, this paper proposes a joint graph mobility and backscatter communication model. With the proposed model, the total energy minimization at UGV is formulated as a mixed integer nonlinear programming (MINLP) problem. Furthermore, an efficient algorithm that achieves a local optimal solution is derived, and it leads to automatic trade-off between spending energy on moving versus on communication. Numerical results are provided to validate the performance of the proposed algorithm. |
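The moving-versus-communication trade-off can be sketched with a toy one-dimensional model (hypothetical constants and path-loss proxy, not the paper's MINLP formulation):

```python
import numpy as np

def total_energy(d_stop, d_start=50.0, k_move=2.0, bits=1e4, bandwidth=1e4):
    """Toy total energy: motion cost to drive from d_start to d_stop, plus
    communication energy for `bits` at a rate that decays with distance."""
    move = k_move * (d_start - d_stop)          # J spent driving closer
    snr = 1.0 / (d_stop ** 2 + 1.0)             # crude path-loss proxy
    rate = bandwidth * np.log2(1.0 + snr)       # bits per second
    comm = 0.1 * bits / rate                    # J = power * airtime
    return move + comm

stops = np.linspace(0.0, 50.0, 501)
best = stops[int(np.argmin([total_energy(d) for d in stops]))]
# The optimum sits strictly between "stay put" and "drive all the way":
# neither extreme minimizes the total energy.
```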
2210.09347 | Huy Ha | Alper Canberk, Cheng Chi, Huy Ha, Benjamin Burchfiel, Eric Cousineau,
Siyuan Feng, Shuran Song | Cloth Funnels: Canonicalized-Alignment for Multi-Purpose Garment
Manipulation | 8 pages, 8 figures, website at https://clothfunnels.cs.columbia.edu/ | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Automating garment manipulation is challenging due to extremely high
variability in object configurations. To reduce this intrinsic variation, we
introduce the task of "canonicalized-alignment" that simplifies downstream
applications by reducing the possible garment configurations. This task can be
considered as a "cloth state funnel" that manipulates arbitrarily configured
clothing items into a predefined deformable configuration (i.e.
canonicalization) at an appropriate rigid pose (i.e. alignment). In the end,
the cloth items will result in a compact set of structured and highly visible
configurations - which are desirable for downstream manipulation skills. To
enable this task, we propose a novel canonicalized-alignment objective that
effectively guides learning to avoid adverse local minima during learning.
Using this objective, we learn a multi-arm, multi-primitive policy that
strategically chooses between dynamic flings and quasi-static pick and place
actions to achieve efficient canonicalized-alignment. We evaluate this approach
on a real-world ironing and folding system that relies on this learned policy
as the common first step. Empirically, we demonstrate that our task-agnostic
canonicalized-alignment can enable even simple manually-designed policies to
work well where they were previously inadequate, thus bridging the gap between
automated non-deformable manufacturing and deformable manipulation. Code and
qualitative visualizations are available at
https://clothfunnels.cs.columbia.edu/. Video can be found at
https://www.youtube.com/watch?v=TkUn0b7mbj0.
| [
{
"created": "Mon, 17 Oct 2022 18:36:59 GMT",
"version": "v1"
}
] | 2022-10-19 | [
[
"Canberk",
"Alper",
""
],
[
"Chi",
"Cheng",
""
],
[
"Ha",
"Huy",
""
],
[
"Burchfiel",
"Benjamin",
""
],
[
"Cousineau",
"Eric",
""
],
[
"Feng",
"Siyuan",
""
],
[
"Song",
"Shuran",
""
]
] | Automating garment manipulation is challenging due to extremely high variability in object configurations. To reduce this intrinsic variation, we introduce the task of "canonicalized-alignment" that simplifies downstream applications by reducing the possible garment configurations. This task can be considered as a "cloth state funnel" that manipulates arbitrarily configured clothing items into a predefined deformable configuration (i.e. canonicalization) at an appropriate rigid pose (i.e. alignment). In the end, the cloth items will result in a compact set of structured and highly visible configurations - which are desirable for downstream manipulation skills. To enable this task, we propose a novel canonicalized-alignment objective that effectively guides learning to avoid adverse local minima during learning. Using this objective, we learn a multi-arm, multi-primitive policy that strategically chooses between dynamic flings and quasi-static pick and place actions to achieve efficient canonicalized-alignment. We evaluate this approach on a real-world ironing and folding system that relies on this learned policy as the common first step. Empirically, we demonstrate that our task-agnostic canonicalized-alignment can enable even simple manually-designed policies to work well where they were previously inadequate, thus bridging the gap between automated non-deformable manufacturing and deformable manipulation. Code and qualitative visualizations are available at https://clothfunnels.cs.columbia.edu/. Video can be found at https://www.youtube.com/watch?v=TkUn0b7mbj0. |
2201.01391 | Ademola Okerinde | Ademola Okerinde and Sam Hoggatt and Divya Vani Lakkireddy and Nolan
Brubaker and William Hsu and Lior Shamir and Brian Spiesman | Self-Supervised Approach to Addressing Zero-Shot Learning Problem | null | The 4th International Conference on Computing and Data Science
(CONF-CDS 2022) | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years, self-supervised learning has had significant success in
applications involving computer vision and natural language processing. The
type of pretext task is important to this boost in performance. One common
pretext task is the measure of similarity and dissimilarity between pairs of
images. In this scenario, the two images that make up the negative pair are
visibly different to humans. However, in entomology, species are nearly
indistinguishable and thus hard to differentiate. In this study, we explored
the performance of a Siamese neural network using contrastive loss by learning
to push apart embeddings of bumblebee species pairs that are dissimilar, and
pull together similar embeddings. Our experimental results show a 61% F1-score
on zero-shot instances, a performance showing 11% improvement on samples of
classes that share intersections with the training set.
| [
{
"created": "Wed, 5 Jan 2022 00:08:36 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jan 2022 14:09:29 GMT",
"version": "v2"
}
] | 2022-01-24 | [
[
"Okerinde",
"Ademola",
""
],
[
"Hoggatt",
"Sam",
""
],
[
"Lakkireddy",
"Divya Vani",
""
],
[
"Brubaker",
"Nolan",
""
],
[
"Hsu",
"William",
""
],
[
"Shamir",
"Lior",
""
],
[
"Spiesman",
"Brian",
""
]
] | In recent years, self-supervised learning has had significant success in applications involving computer vision and natural language processing. The type of pretext task is important to this boost in performance. One common pretext task is the measure of similarity and dissimilarity between pairs of images. In this scenario, the two images that make up the negative pair are visibly different to humans. However, in entomology, species are nearly indistinguishable and thus hard to differentiate. In this study, we explored the performance of a Siamese neural network using contrastive loss by learning to push apart embeddings of bumblebee species pairs that are dissimilar, and pull together similar embeddings. Our experimental results show a 61% F1-score on zero-shot instances, a performance showing 11% improvement on samples of classes that share intersections with the training set. |
2205.03651 | Manjanna Basappa | Vishwanath R. Singireddy and Manjanna Basappa | Dispersing Facilities on Planar Segment and Circle Amidst Repulsion | 16 figures | null | null | null | cs.CG | http://creativecommons.org/licenses/by/4.0/ | In this paper we consider the problem of locating $k$ obnoxious facilities
(congruent disks of maximum radius) amidst $n$ demand points (existing
repulsive facility sites) ordered from left to right in the plane so that none
of the existing facility sites are affected (no demand point falls in the
interior of the disks). We study this problem in two restricted settings: (i)
the obnoxious facilities are constrained to be centered along a
predetermined horizontal line segment $\bar{pq}$, and (ii) the obnoxious
facilities are constrained to lie on the boundary arc of a predetermined disk
$\cal C$. A $(1-\epsilon)$-approximation algorithm was given recently to solve
the constrained problem in (i) in time
$O((n+k)\log{\frac{||pq||}{2(k-1)\epsilon}})$, where $\epsilon>0$
\cite{Sing2021}. Here, for the problem in (i), we first propose an exact
polynomial-time algorithm based on a binary search on all candidate radii
computed explicitly. This algorithm runs in
$O((nk)^2\log{(nk)}+(n+k)\log{(nk)})$ time. We then show that, using the
parametric search technique of Megiddo \cite{MG1983}, we can solve the problem
exactly in $O((n+k)^2)$ time, which is faster than the former. Continuing
further, using the improved parametric technique we give an $O(n\log^2 n)$-time
algorithm for $k=2$. We finally show that the above
$(1-\epsilon)$-approximation algorithm of \cite{Sing2021} can be easily adapted
to solve the circular constrained problem of (ii) with an extra multiplicative
factor of $n$ in the running time.
| [
{
"created": "Sat, 7 May 2022 13:16:04 GMT",
"version": "v1"
},
{
"created": "Thu, 12 May 2022 11:17:01 GMT",
"version": "v2"
}
] | 2022-05-13 | [
[
"Singireddy",
"Vishwanath R.",
""
],
[
"Basappa",
"Manjanna",
""
]
] | In this paper we consider the problem of locating $k$ obnoxious facilities (congruent disks of maximum radius) amidst $n$ demand points (existing repulsive facility sites) ordered from left to right in the plane so that none of the existing facility sites are affected (no demand point falls in the interior of the disks). We study this problem in two restricted settings: (i) the obnoxious facilities are constrained to be centered along a predetermined horizontal line segment $\bar{pq}$, and (ii) the obnoxious facilities are constrained to lie on the boundary arc of a predetermined disk $\cal C$. A $(1-\epsilon)$-approximation algorithm was given recently to solve the constrained problem in (i) in time $O((n+k)\log{\frac{||pq||}{2(k-1)\epsilon}})$, where $\epsilon>0$ \cite{Sing2021}. Here, for the problem in (i), we first propose an exact polynomial-time algorithm based on a binary search on all candidate radii computed explicitly. This algorithm runs in $O((nk)^2\log{(nk)}+(n+k)\log{(nk)})$ time. We then show that, using the parametric search technique of Megiddo \cite{MG1983}, we can solve the problem exactly in $O((n+k)^2)$ time, which is faster than the former. Continuing further, using the improved parametric technique we give an $O(n\log^2 n)$-time algorithm for $k=2$. We finally show that the above $(1-\epsilon)$-approximation algorithm of \cite{Sing2021} can be easily adapted to solve the circular constrained problem of (ii) with an extra multiplicative factor of $n$ in the running time. |
2401.06826 | Jialiang Tang | Jialiang Tang, Shuo Chen, Gang Niu, Hongyuan Zhu, Joey Tianyi Zhou,
Chen Gong, Masashi Sugiyama | Direct Distillation between Different Domains | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Distillation (KD) aims to learn a compact student network using
knowledge from a large pre-trained teacher network, where both networks are
trained on data from the same distribution. However, in practical applications,
the student network may be required to perform in a new scenario (i.e., the
target domain), which usually exhibits significant differences from the known
scenario of the teacher network (i.e., the source domain). The traditional
domain adaptation techniques can be integrated with KD in a two-stage process
to bridge the domain gap, but the ultimate reliability of two-stage approaches
tends to be limited due to the high computational consumption and the
additional errors accumulated from both stages. To solve this problem, we
propose a new one-stage method dubbed ``Direct Distillation between Different
Domains" (4Ds). We first design a learnable adapter based on the Fourier
transform to separate the domain-invariant knowledge from the domain-specific
knowledge. Then, we build a fusion-activation mechanism to transfer the
valuable domain-invariant knowledge to the student network, while
simultaneously encouraging the adapter within the teacher network to learn the
domain-specific knowledge of the target data. As a result, the teacher network
can effectively transfer categorical knowledge that aligns with the target
domain of the student network. Intensive experiments on various benchmark
datasets demonstrate that our proposed 4Ds method successfully produces
reliable student networks and outperforms state-of-the-art approaches.
| [
{
"created": "Fri, 12 Jan 2024 02:48:51 GMT",
"version": "v1"
}
] | 2024-01-17 | [
[
"Tang",
"Jialiang",
""
],
[
"Chen",
"Shuo",
""
],
[
"Niu",
"Gang",
""
],
[
"Zhu",
"Hongyuan",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Gong",
"Chen",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | Knowledge Distillation (KD) aims to learn a compact student network using knowledge from a large pre-trained teacher network, where both networks are trained on data from the same distribution. However, in practical applications, the student network may be required to perform in a new scenario (i.e., the target domain), which usually exhibits significant differences from the known scenario of the teacher network (i.e., the source domain). The traditional domain adaptation techniques can be integrated with KD in a two-stage process to bridge the domain gap, but the ultimate reliability of two-stage approaches tends to be limited due to the high computational consumption and the additional errors accumulated from both stages. To solve this problem, we propose a new one-stage method dubbed ``Direct Distillation between Different Domains" (4Ds). We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge. Then, we build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network, while simultaneously encouraging the adapter within the teacher network to learn the domain-specific knowledge of the target data. As a result, the teacher network can effectively transfer categorical knowledge that aligns with the target domain of the student network. Intensive experiments on various benchmark datasets demonstrate that our proposed 4Ds method successfully produces reliable student networks and outperforms state-of-the-art approaches. |
2312.16174 | Yujiao Hu | Yujiao Hu, Qingmin Jia, Yuao Yao, Yong Lee, Mengjie Lee, Chenyi Wang,
Xiaomao Zhou, Renchao Xie, F. Richard Yu | Industrial Internet of Things Intelligence Empowering Smart
Manufacturing: A Literature Review | Accepted by IoTJ | IEEE Internet of Things Journal,2024 | 10.1109/JIOT.2024.3367692 | null | cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fiercely competitive business environment and increasingly personalized
customization needs are driving the digital transformation and upgrading of the
manufacturing industry. IIoT intelligence, which can provide innovative and
efficient solutions for various aspects of the manufacturing value chain,
illuminates the path of transformation for the manufacturing industry. It's
time to provide a systematic vision of IIoT intelligence. However, existing
surveys often focus on specific areas of IIoT intelligence, leading researchers
and readers to have biases in their understanding of IIoT intelligence, that
is, believing that research in one direction is the most important for the
development of IIoT intelligence, while ignoring contributions from other
directions. Therefore, this paper provides a comprehensive overview of IIoT
intelligence. We first conduct an in-depth analysis of the inevitability of
manufacturing transformation and study the successful experiences from the
practices of Chinese enterprises. Then we give our definition of IIoT
intelligence and demonstrate the value of IIoT intelligence for industries in
functions, operations, deployments, and applications. Afterwards, we propose a
hierarchical development architecture for IIoT intelligence, which consists of
five layers. The practical values of technical upgrades at each layer are
illustrated by a close look at lighthouse factories. Following that, we
identify seven kinds of technologies that accelerate the transformation of
manufacturing, and clarify their contributions. The ethical implications and
environmental impacts of adopting IIoT intelligence in manufacturing are
analyzed as well. Finally, we explore the open challenges and development
trends from four aspects to inspire future research.
| [
{
"created": "Sat, 2 Dec 2023 06:08:39 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2024 02:28:57 GMT",
"version": "v2"
}
] | 2024-02-23 | [
[
"Hu",
"Yujiao",
""
],
[
"Jia",
"Qingmin",
""
],
[
"Yao",
"Yuao",
""
],
[
"Lee",
"Yong",
""
],
[
"Lee",
"Mengjie",
""
],
[
"Wang",
"Chenyi",
""
],
[
"Zhou",
"Xiaomao",
""
],
[
"Xie",
"Renchao",
""
],
[
"Yu",
"F. Richard",
""
]
] | The fiercely competitive business environment and increasingly personalized customization needs are driving the digital transformation and upgrading of the manufacturing industry. IIoT intelligence, which can provide innovative and efficient solutions for various aspects of the manufacturing value chain, illuminates the path of transformation for the manufacturing industry. It's time to provide a systematic vision of IIoT intelligence. However, existing surveys often focus on specific areas of IIoT intelligence, leading researchers and readers to have biases in their understanding of IIoT intelligence, that is, believing that research in one direction is the most important for the development of IIoT intelligence, while ignoring contributions from other directions. Therefore, this paper provides a comprehensive overview of IIoT intelligence. We first conduct an in-depth analysis of the inevitability of manufacturing transformation and study the successful experiences from the practices of Chinese enterprises. Then we give our definition of IIoT intelligence and demonstrate the value of IIoT intelligence for industries in functions, operations, deployments, and applications. Afterwards, we propose a hierarchical development architecture for IIoT intelligence, which consists of five layers. The practical values of technical upgrades at each layer are illustrated by a close look at lighthouse factories. Following that, we identify seven kinds of technologies that accelerate the transformation of manufacturing, and clarify their contributions. The ethical implications and environmental impacts of adopting IIoT intelligence in manufacturing are analyzed as well. Finally, we explore the open challenges and development trends from four aspects to inspire future research. |
1907.05205 | Vidyasagar Sadhu | Vidyasagar Sadhu, Sanjana Devaraj, Dario Pompili | Towards Ultra-low-power Realization of Analog Joint Source-Channel
Coding using MOSFETs | 5 pages, IEEE ISCAS 2019. arXiv admin note: text overlap with
arXiv:1907.00968 | 2019 IEEE International Symposium on Circuits and Systems (ISCAS),
Sapporo, Japan, 2019, pp. 1-5 | 10.1109/ISCAS.2019.8702302 | null | cs.ET cs.NI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Certain sensing applications such as Internet of Things (IoTs), where the
sensing phenomenon may change rapidly in both time and space, require sensors
that consume ultra-low power (so that they do not need to be put to sleep
leading to loss of temporal and spatial resolution) and have low costs (for
high density deployment). A novel encoding based on Metal Oxide Semiconductor
Field Effect Transistors (MOSFETs) is proposed to realize Analog Joint Source
Channel Coding (AJSCC), a low-complexity technique to compress two (or more)
signals into one with controlled distortion. In AJSCC, the y-axis is quantized
while the x-axis is continuously captured. A power-efficient design to support
multiple quantization levels is presented so that the digital receiver can
decide the optimum quantization and the analog transmitter circuit is able to
realize that. The approach is verified via Spice and MATLAB simulations.
| [
{
"created": "Sun, 30 Jun 2019 06:46:58 GMT",
"version": "v1"
}
] | 2019-07-12 | [
[
"Sadhu",
"Vidyasagar",
""
],
[
"Devaraj",
"Sanjana",
""
],
[
"Pompili",
"Dario",
""
]
] | Certain sensing applications such as Internet of Things (IoTs), where the sensing phenomenon may change rapidly in both time and space, require sensors that consume ultra-low power (so that they do not need to be put to sleep leading to loss of temporal and spatial resolution) and have low costs (for high density deployment). A novel encoding based on Metal Oxide Semiconductor Field Effect Transistors (MOSFETs) is proposed to realize Analog Joint Source Channel Coding (AJSCC), a low-complexity technique to compress two (or more) signals into one with controlled distortion. In AJSCC, the y-axis is quantized while the x-axis is continuously captured. A power-efficient design to support multiple quantization levels is presented so that the digital receiver can decide the optimum quantization and the analog transmitter circuit is able to realize that. The approach is verified via Spice and MATLAB simulations. |
1902.04742 | Vaishnavh Nagarajan | Vaishnavh Nagarajan, J. Zico Kolter | Uniform convergence may be unable to explain generalization in deep
learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aimed at explaining the surprisingly good generalization behavior of
overparameterized deep networks, recent works have developed a variety of
generalization bounds for deep learning, all based on the fundamental
learning-theoretic technique of uniform convergence. While it is well-known
that many of these existing bounds are numerically large, through numerous
experiments, we bring to light a more concerning aspect of these bounds: in
practice, these bounds can {\em increase} with the training dataset size.
Guided by our observations, we then present examples of overparameterized
linear classifiers and neural networks trained by gradient descent (GD) where
uniform convergence provably cannot "explain generalization" -- even if we take
into account the implicit bias of GD {\em to the fullest extent possible}. More
precisely, even if we consider only the set of classifiers output by GD, which
have test errors less than some small $\epsilon$ in our settings, we show that
applying (two-sided) uniform convergence on this set of classifiers will yield
only a vacuous generalization guarantee larger than $1-\epsilon$. Through these
findings, we cast doubt on the power of uniform convergence-based
generalization bounds to provide a complete picture of why overparameterized
deep networks generalize well.
| [
{
"created": "Wed, 13 Feb 2019 05:09:38 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2019 17:20:02 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Dec 2019 20:09:41 GMT",
"version": "v3"
},
{
"created": "Sun, 17 Oct 2021 20:04:09 GMT",
"version": "v4"
}
] | 2021-10-19 | [
[
"Nagarajan",
"Vaishnavh",
""
],
[
"Kolter",
"J. Zico",
""
]
] | Aimed at explaining the surprisingly good generalization behavior of overparameterized deep networks, recent works have developed a variety of generalization bounds for deep learning, all based on the fundamental learning-theoretic technique of uniform convergence. While it is well-known that many of these existing bounds are numerically large, through numerous experiments, we bring to light a more concerning aspect of these bounds: in practice, these bounds can {\em increase} with the training dataset size. Guided by our observations, we then present examples of overparameterized linear classifiers and neural networks trained by gradient descent (GD) where uniform convergence provably cannot "explain generalization" -- even if we take into account the implicit bias of GD {\em to the fullest extent possible}. More precisely, even if we consider only the set of classifiers output by GD, which have test errors less than some small $\epsilon$ in our settings, we show that applying (two-sided) uniform convergence on this set of classifiers will yield only a vacuous generalization guarantee larger than $1-\epsilon$. Through these findings, we cast doubt on the power of uniform convergence-based generalization bounds to provide a complete picture of why overparameterized deep networks generalize well. |
2311.10372 | Kaiwen Ning | Zibin Zheng and Kaiwen Ning and Yanlin Wang and Jingwen Zhang and Dewu
Zheng and Mingxi Ye and Jiachi Chen | A Survey of Large Language Models for Code: Evolution, Benchmarking, and
Future Trends | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General large language models (LLMs), represented by ChatGPT, have
demonstrated significant potential in tasks such as code generation in software
engineering. This has led to the development of specialized LLMs for software
engineering, known as Code LLMs. A considerable portion of Code LLMs is derived
from general LLMs through model fine-tuning. As a result, Code LLMs are often
updated frequently and their performance can be influenced by the base LLMs.
However, there is currently a lack of systematic investigation into Code LLMs
and their performance. In this study, we conduct a comprehensive survey and
analysis of the types of Code LLMs and their differences in performance
compared to general LLMs. We aim to address three questions: (1) What LLMs are
specifically designed for software engineering tasks, and what is the
relationship between these Code LLMs? (2) Do Code LLMs really outperform
general LLMs in software engineering tasks? (3) Which LLMs are more proficient
in different software engineering tasks? To answer these questions, we first
collect relevant literature and work from five major databases and open-source
communities, resulting in 134 works for analysis. Next, we categorize the Code
LLMs based on their publishers and examine their relationships with general
LLMs and among themselves. Furthermore, we investigate the performance
differences between general LLMs and Code LLMs in various software engineering
tasks to demonstrate the impact of base models and Code LLMs. Finally, we
comprehensively compile the performance of LLMs across multiple mainstream
benchmarks to identify the best-performing LLMs for each software engineering
task. Our research not only assists developers of Code LLMs in choosing base
models for the development of more advanced LLMs but also provides insights for
practitioners to better understand key improvement directions for Code LLMs.
| [
{
"created": "Fri, 17 Nov 2023 07:55:16 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jan 2024 05:41:51 GMT",
"version": "v2"
}
] | 2024-01-09 | [
[
"Zheng",
"Zibin",
""
],
[
"Ning",
"Kaiwen",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Zhang",
"Jingwen",
""
],
[
"Zheng",
"Dewu",
""
],
[
"Ye",
"Mingxi",
""
],
[
"Chen",
"Jiachi",
""
]
] | General large language models (LLMs), represented by ChatGPT, have demonstrated significant potential in tasks such as code generation in software engineering. This has led to the development of specialized LLMs for software engineering, known as Code LLMs. A considerable portion of Code LLMs is derived from general LLMs through model fine-tuning. As a result, Code LLMs are often updated frequently and their performance can be influenced by the base LLMs. However, there is currently a lack of systematic investigation into Code LLMs and their performance. In this study, we conduct a comprehensive survey and analysis of the types of Code LLMs and their differences in performance compared to general LLMs. We aim to address three questions: (1) What LLMs are specifically designed for software engineering tasks, and what is the relationship between these Code LLMs? (2) Do Code LLMs really outperform general LLMs in software engineering tasks? (3) Which LLMs are more proficient in different software engineering tasks? To answer these questions, we first collect relevant literature and work from five major databases and open-source communities, resulting in 134 works for analysis. Next, we categorize the Code LLMs based on their publishers and examine their relationships with general LLMs and among themselves. Furthermore, we investigate the performance differences between general LLMs and Code LLMs in various software engineering tasks to demonstrate the impact of base models and Code LLMs. Finally, we comprehensively compile the performance of LLMs across multiple mainstream benchmarks to identify the best-performing LLMs for each software engineering task. Our research not only assists developers of Code LLMs in choosing base models for the development of more advanced LLMs but also provides insights for practitioners to better understand key improvement directions for Code LLMs. |
1806.00801 | Gui-Song Xia | Pu Jin, Gui-Song Xia, Fan Hu, Qikai Lu, Liangpei Zhang | AID++: An Updated Version of AID on Scene Classification | IGARSS'18 conference paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aerial image scene classification is a fundamental problem for understanding
high-resolution remote sensing images and has become an active research task in
the field of remote sensing due to its important role in a wide range of
applications. However, the limitations of existing datasets for scene
classification, such as the small scale and low-diversity, severely hamper the
potential usage of the new generation deep convolutional neural networks
(CNNs). Although huge efforts have been made in building large-scale datasets
very recently, e.g., the Aerial Image Dataset (AID) which contains 10,000 image
samples, they are still far from sufficient to fully train a high-capacity deep
CNN model. To this end, we present a larger-scale dataset in this paper, named
as AID++, for aerial scene classification based on the AID dataset. The
proposed AID++ consists of more than 400,000 image samples that are
semi-automatically annotated by using the existing geographical data. We
evaluate several prevalent CNN models on the proposed dataset, and the results
show that our dataset can be used as a promising benchmark for scene
classification.
| [
{
"created": "Sun, 3 Jun 2018 14:40:20 GMT",
"version": "v1"
}
] | 2018-06-05 | [
[
"Jin",
"Pu",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Hu",
"Fan",
""
],
[
"Lu",
"Qikai",
""
],
[
"Zhang",
"Liangpei",
""
]
] | Aerial image scene classification is a fundamental problem for understanding high-resolution remote sensing images and has become an active research task in the field of remote sensing due to its important role in a wide range of applications. However, the limitations of existing datasets for scene classification, such as the small scale and low-diversity, severely hamper the potential usage of the new generation deep convolutional neural networks (CNNs). Although huge efforts have been made in building large-scale datasets very recently, e.g., the Aerial Image Dataset (AID) which contains 10,000 image samples, they are still far from sufficient to fully train a high-capacity deep CNN model. To this end, we present a larger-scale dataset in this paper, named as AID++, for aerial scene classification based on the AID dataset. The proposed AID++ consists of more than 400,000 image samples that are semi-automatically annotated by using the existing geographical data. We evaluate several prevalent CNN models on the proposed dataset, and the results show that our dataset can be used as a promising benchmark for scene classification. |
1809.02768 | Yifan Gao | Yifan Gao, Lidong Bing, Piji Li, Irwin King, Michael R. Lyu | Generating Distractors for Reading Comprehension Questions from Real
Examinations | AAAI2019 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the task of distractor generation for multiple choice reading
comprehension questions from examinations. In contrast to all previous works,
we do not aim at preparing word or short-phrase distractors; instead, we
endeavor to generate longer and semantic-rich distractors which are closer to
distractors in real reading comprehension from examinations. Taking a reading
comprehension article, a pair of question and its correct option as input, our
goal is to generate several distractors which are somehow related to the
answer, consistent with the semantic context of the question and have some
trace in the article. We propose a hierarchical encoder-decoder framework with
static and dynamic attention mechanisms to tackle this task. Specifically, the
dynamic attention can combine sentence-level and word-level attention varying
at each recurrent time step to generate a more readable sequence. The static
attention is to modulate the dynamic attention not to focus on question
irrelevant sentences or sentences which contribute to the correct option. Our
proposed framework outperforms several strong baselines on the first prepared
distractor generation dataset of real reading comprehension questions. For
human evaluation, compared with those distractors generated by baselines, our
generated distractors are more effective at confusing the annotators.
| [
{
"created": "Sat, 8 Sep 2018 07:11:15 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2018 07:04:50 GMT",
"version": "v2"
}
] | 2018-12-19 | [
[
"Gao",
"Yifan",
""
],
[
"Bing",
"Lidong",
""
],
[
"Li",
"Piji",
""
],
[
"King",
"Irwin",
""
],
[
"Lyu",
"Michael R.",
""
]
] | We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations. In contrast to all previous works, we do not aim at preparing word or short-phrase distractors; instead, we endeavor to generate longer and semantic-rich distractors which are closer to distractors in real reading comprehension from examinations. Taking a reading comprehension article, a pair of question and its correct option as input, our goal is to generate several distractors which are somehow related to the answer, consistent with the semantic context of the question and have some trace in the article. We propose a hierarchical encoder-decoder framework with static and dynamic attention mechanisms to tackle this task. Specifically, the dynamic attention can combine sentence-level and word-level attention varying at each recurrent time step to generate a more readable sequence. The static attention is to modulate the dynamic attention not to focus on question irrelevant sentences or sentences which contribute to the correct option. Our proposed framework outperforms several strong baselines on the first prepared distractor generation dataset of real reading comprehension questions. For human evaluation, compared with those distractors generated by baselines, our generated distractors are more effective at confusing the annotators. |
2308.16822 | Arthur Leroy | Chunchao Ma, Arthur Leroy, Mauricio Alvarez | Latent Variable Multi-output Gaussian Processes for Hierarchical
Datasets | 29 pages | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-output Gaussian processes (MOGPs) have been introduced to deal with
multiple tasks by exploiting the correlations between different outputs.
Generally, MOGP models assume a flat correlation structure between the
outputs. However, such a formulation does not account for more elaborate
relationships, for instance, if several replicates were observed for each
output (which is a typical setting in biological experiments). This paper
proposes an extension of MOGPs for hierarchical datasets (i.e. datasets for
which the relationships between observations can be represented within a tree
structure). Our model defines a tailored kernel function accounting for
hierarchical structures in the data to capture different levels of correlations
while leveraging the introduction of latent variables to express the underlying
dependencies between outputs through a dedicated kernel. This latter feature is
expected to significantly improve scalability as the number of tasks increases.
An extensive experimental study involving both synthetic and real-world data
from genomics and motion capture is proposed to support our claims.
| [
{
"created": "Thu, 31 Aug 2023 15:52:35 GMT",
"version": "v1"
}
] | 2023-09-01 | [
[
"Ma",
"Chunchao",
""
],
[
"Leroy",
"Arthur",
""
],
[
"Alvarez",
"Mauricio",
""
]
] | Multi-output Gaussian processes (MOGPs) have been introduced to deal with multiple tasks by exploiting the correlations between different outputs. Generally, MOGP models assume a flat correlation structure between the outputs. However, such a formulation does not account for more elaborate relationships, for instance, if several replicates were observed for each output (which is a typical setting in biological experiments). This paper proposes an extension of MOGPs for hierarchical datasets (i.e. datasets for which the relationships between observations can be represented within a tree structure). Our model defines a tailored kernel function accounting for hierarchical structures in the data to capture different levels of correlations while leveraging the introduction of latent variables to express the underlying dependencies between outputs through a dedicated kernel. This latter feature is expected to significantly improve scalability as the number of tasks increases. An extensive experimental study involving both synthetic and real-world data from genomics and motion capture is proposed to support our claims. |
1909.03723 | Marco Virgolin | Marco Virgolin, Ziyuan Wang, Tanja Alderliesten, Peter A. N. Bosman | Machine learning for automatic construction of pseudo-realistic
pediatric abdominal phantoms | Currently submitted to SPIE Medical Imaging journal | null | null | null | cs.LG physics.med-ph stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Learning (ML) is proving extremely beneficial in many healthcare
applications. In pediatric oncology, retrospective studies that investigate the
relationship between treatment and late adverse effects still rely on simple
heuristics. To assess the effects of radiation therapy, treatment plans are
typically simulated on phantoms, i.e., virtual surrogates of patient anatomy.
Currently, phantoms are built according to reasonable, yet simple,
human-designed criteria. This often results in a lack of individualization. We
present a novel approach that combines imaging and ML to build individualized
phantoms automatically. Given the features of a patient treated historically
(only 2D radiographs available), and a database of 3D Computed Tomography (CT)
imaging with organ segmentations and relative patient features, our approach
uses ML to predict how to assemble a patient-specific phantom automatically.
Experiments on 60 abdominal CTs of pediatric patients show that our approach
constructs significantly more representative phantoms than using current
phantom building criteria, in terms of location and shape of the abdomen and of
two considered organs, the liver and the spleen. Among several ML algorithms
considered, the Gene-pool Optimal Mixing Evolutionary Algorithm for Genetic
Programming (GP-GOMEA) is found to deliver the best performing models, which
are, moreover, transparent and interpretable mathematical expressions.
| [
{
"created": "Mon, 9 Sep 2019 09:38:22 GMT",
"version": "v1"
}
] | 2019-09-10 | [
[
"Virgolin",
"Marco",
""
],
[
"Wang",
"Ziyuan",
""
],
[
"Alderliesten",
"Tanja",
""
],
[
"Bosman",
"Peter A. N.",
""
]
] | Machine Learning (ML) is proving extremely beneficial in many healthcare applications. In pediatric oncology, retrospective studies that investigate the relationship between treatment and late adverse effects still rely on simple heuristics. To assess the effects of radiation therapy, treatment plans are typically simulated on phantoms, i.e., virtual surrogates of patient anatomy. Currently, phantoms are built according to reasonable, yet simple, human-designed criteria. This often results in a lack of individualization. We present a novel approach that combines imaging and ML to build individualized phantoms automatically. Given the features of a patient treated historically (only 2D radiographs available), and a database of 3D Computed Tomography (CT) imaging with organ segmentations and relative patient features, our approach uses ML to predict how to assemble a patient-specific phantom automatically. Experiments on 60 abdominal CTs of pediatric patients show that our approach constructs significantly more representative phantoms than using current phantom building criteria, in terms of location and shape of the abdomen and of two considered organs, the liver and the spleen. Among several ML algorithms considered, the Gene-pool Optimal Mixing Evolutionary Algorithm for Genetic Programming (GP-GOMEA) is found to deliver the best performing models, which are, moreover, transparent and interpretable mathematical expressions. |
2004.07798 | Neil Lutz | Jack H. Lutz, Neil Lutz, Elvira Mayordomo | Extending the Reach of the Point-to-Set Principle | null | null | null | null | cs.CC math.CA math.MG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The point-to-set principle of J. Lutz and N. Lutz (2018) has recently enabled
the theory of computing to be used to answer open questions about fractal
geometry in Euclidean spaces $\mathbb{R}^n$. These are classical questions,
meaning that their statements do not involve computation or related aspects of
logic.
In this paper we extend the reach of the point-to-set principle from
Euclidean spaces to arbitrary separable metric spaces $X$. We first extend two
fractal dimensions--computability-theoretic versions of classical Hausdorff and
packing dimensions that assign dimensions $\dim(x)$ and $\textrm{Dim}(x)$ to
individual points $x\in X$--to arbitrary separable metric spaces and to
arbitrary gauge families. Our first two main results then extend the
point-to-set principle to arbitrary separable metric spaces and to a large
class of gauge families.
We demonstrate the power of our extended point-to-set principle by using it
to prove new theorems about classical fractal dimensions in hyperspaces. (For a
concrete computational example, the stages $E_0, E_1, E_2, \ldots$ used to
construct a self-similar fractal $E$ in the plane are elements of the
hyperspace of the plane, and they converge to $E$ in the hyperspace.) Our third
main result, proven via our extended point-to-set principle, states that, under
a wide variety of gauge families, the classical packing dimension agrees with
the classical upper Minkowski dimension on all hyperspaces of compact sets. We
use this theorem to give, for all sets $E$ that are analytic, i.e.,
$\mathbf{\Sigma}^1_1$, a tight bound on the packing dimension of the hyperspace
of $E$ in terms of the packing dimension of $E$ itself.
| [
{
"created": "Thu, 16 Apr 2020 17:43:37 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Oct 2020 19:28:01 GMT",
"version": "v2"
},
{
"created": "Sat, 13 Feb 2021 22:53:18 GMT",
"version": "v3"
}
] | 2021-02-16 | [
[
"Lutz",
"Jack H.",
""
],
[
"Lutz",
"Neil",
""
],
[
"Mayordomo",
"Elvira",
""
]
] | The point-to-set principle of J. Lutz and N. Lutz (2018) has recently enabled the theory of computing to be used to answer open questions about fractal geometry in Euclidean spaces $\mathbb{R}^n$. These are classical questions, meaning that their statements do not involve computation or related aspects of logic. In this paper we extend the reach of the point-to-set principle from Euclidean spaces to arbitrary separable metric spaces $X$. We first extend two fractal dimensions--computability-theoretic versions of classical Hausdorff and packing dimensions that assign dimensions $\dim(x)$ and $\textrm{Dim}(x)$ to individual points $x\in X$--to arbitrary separable metric spaces and to arbitrary gauge families. Our first two main results then extend the point-to-set principle to arbitrary separable metric spaces and to a large class of gauge families. We demonstrate the power of our extended point-to-set principle by using it to prove new theorems about classical fractal dimensions in hyperspaces. (For a concrete computational example, the stages $E_0, E_1, E_2, \ldots$ used to construct a self-similar fractal $E$ in the plane are elements of the hyperspace of the plane, and they converge to $E$ in the hyperspace.) Our third main result, proven via our extended point-to-set principle, states that, under a wide variety of gauge families, the classical packing dimension agrees with the classical upper Minkowski dimension on all hyperspaces of compact sets. We use this theorem to give, for all sets $E$ that are analytic, i.e., $\mathbf{\Sigma}^1_1$, a tight bound on the packing dimension of the hyperspace of $E$ in terms of the packing dimension of $E$ itself. |
1104.0919 | Benjamin Burton | Benjamin A. Burton and Mathias Hiron | Locating regions in a sequence under density constraints | 17 pages, 8 figures; v2: minor revisions, additional explanations; to
appear in SIAM Journal on Computing | SIAM Journal on Computing 42 (2013), no. 3, 1201-1215 | 10.1137/110830605 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several biological problems require the identification of regions in a
sequence where some feature occurs within a target density range: examples
include the location of GC-rich regions, identification of CpG islands, and
sequence matching. Mathematically, this corresponds to searching a string of 0s
and 1s for a substring whose relative proportion of 1s lies between given lower
and upper bounds. We consider the algorithmic problem of locating the longest
such substring, as well as other related problems (such as finding the shortest
substring or a maximal set of disjoint substrings). For locating the longest
such substring, we develop an algorithm that runs in O(n) time, improving upon
the previous best-known O(n log n) result. For the related problems we develop
O(n log log n) algorithms, again improving upon the best-known O(n log n)
results. Practical testing verifies that our new algorithms enjoy significantly
smaller time and memory footprints, and can process sequences that are orders
of magnitude longer as a result.
| [
{
"created": "Tue, 5 Apr 2011 19:42:00 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2013 21:14:45 GMT",
"version": "v2"
}
] | 2013-08-15 | [
[
"Burton",
"Benjamin A.",
""
],
[
"Hiron",
"Mathias",
""
]
] | Several biological problems require the identification of regions in a sequence where some feature occurs within a target density range: examples include the location of GC-rich regions, identification of CpG islands, and sequence matching. Mathematically, this corresponds to searching a string of 0s and 1s for a substring whose relative proportion of 1s lies between given lower and upper bounds. We consider the algorithmic problem of locating the longest such substring, as well as other related problems (such as finding the shortest substring or a maximal set of disjoint substrings). For locating the longest such substring, we develop an algorithm that runs in O(n) time, improving upon the previous best-known O(n log n) result. For the related problems we develop O(n log log n) algorithms, again improving upon the best-known O(n log n) results. Practical testing verifies that our new algorithms enjoy significantly smaller time and memory footprints, and can process sequences that are orders of magnitude longer as a result. |
1902.06531 | Yansong Gao Dr | Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe,
Surya Nepal | STRIP: A Defence Against Trojan Attacks on Deep Neural Networks | 13 pages | In 2019 Annual Computer Security Applications Conference (ACSAC
19), December 9-13, 2019, San Juan, PR, USA. ACM, New York, NY, USA | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent trojan attack on deep neural network (DNN) models is one insidious
variant of data poisoning attacks. Trojan attacks exploit an effective backdoor
created in a DNN model by leveraging the difficulty in interpretability of the
learned model to misclassify any inputs signed with the attacker's chosen
trojan trigger. Since the trojan trigger is a secret guarded and exploited by
the attacker, detecting such trojan inputs is a challenge, especially at
run-time when models are in active operation. This work builds a STRong
Intentional Perturbation (STRIP) based run-time trojan attack detection system,
focusing on vision systems. We intentionally perturb the incoming input, for
instance by superimposing various image patterns, and observe the randomness of
predicted classes for perturbed inputs from a given deployed model---malicious
or benign. A low entropy in predicted classes violates the input-dependence
property of a benign model and implies the presence of a malicious input---a
characteristic of a trojaned input. The high efficacy of our method is
validated through case studies on three popular and contrasting datasets:
MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of
less than 1%, given a preset false rejection rate (FRR) of 1%, for different
types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved a
result of 0% for both FRR and FAR. We have also evaluated STRIP robustness
against a
number of trojan attack variants and adaptive attacks.
| [
{
"created": "Mon, 18 Feb 2019 11:49:33 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jan 2020 03:27:26 GMT",
"version": "v2"
}
] | 2020-01-20 | [
[
"Gao",
"Yansong",
""
],
[
"Xu",
"Chang",
""
],
[
"Wang",
"Derui",
""
],
[
"Chen",
"Shiping",
""
],
[
"Ranasinghe",
"Damith C.",
""
],
[
"Nepal",
"Surya",
""
]
] | A recent trojan attack on deep neural network (DNN) models is one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty in interpretability of the learned model to misclassify any inputs signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds a STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system, focusing on vision systems. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for perturbed inputs from a given deployed model---malicious or benign. A low entropy in predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input---a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved a result of 0% for both FRR and FAR. We have also evaluated STRIP robustness against a number of trojan attack variants and adaptive attacks. |
1905.04849 | Cai Shaofeng | Shaofeng Cai, Yao Shu, Wei Wang, Beng Chin Ooi | Dynamic Routing Networks | 10 pages, 3 figures, 3 tables | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deployment of deep neural networks in real-world applications is mostly
restricted by their high inference costs. Extensive efforts have been made to
improve the accuracy with expert-designed or algorithm-searched architectures.
However, the incremental improvement is typically achieved with increasingly
more expensive models that only a small portion of input instances really need.
Inference with a static architecture that processes all input instances via the
same transformation would thus incur unnecessary computational costs.
Therefore, customizing the model capacity in an instance-aware manner is much
needed for higher inference efficiency. In this paper, we propose Dynamic
Routing Networks (DRNets), which support efficient instance-aware inference by
routing the input instance to only necessary transformation branches selected
from a candidate set of branches for each connection between transformation
nodes. The branch selection is dynamically determined via the corresponding
branch importance weights, which are first generated from lightweight
hypernetworks (RouterNets) and then recalibrated with Gumbel-Softmax before the
selection. Extensive experiments show that DRNets can reduce a substantial
amount of parameter size and FLOPs during inference with prediction performance
comparable to state-of-the-art architectures.
| [
{
"created": "Mon, 13 May 2019 03:45:42 GMT",
"version": "v1"
},
{
"created": "Thu, 23 May 2019 16:45:18 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Oct 2019 04:47:36 GMT",
"version": "v3"
},
{
"created": "Tue, 28 Jul 2020 16:26:29 GMT",
"version": "v4"
},
{
"created": "Sun, 8 Nov 2020 13:11:45 GMT",
"version": "v5"
}
] | 2020-11-10 | [
[
"Cai",
"Shaofeng",
""
],
[
"Shu",
"Yao",
""
],
[
"Wang",
"Wei",
""
],
[
"Ooi",
"Beng Chin",
""
]
] | The deployment of deep neural networks in real-world applications is mostly restricted by their high inference costs. Extensive efforts have been made to improve the accuracy with expert-designed or algorithm-searched architectures. However, the incremental improvement is typically achieved with increasingly more expensive models that only a small portion of input instances really need. Inference with a static architecture that processes all input instances via the same transformation would thus incur unnecessary computational costs. Therefore, customizing the model capacity in an instance-aware manner is much needed for higher inference efficiency. In this paper, we propose Dynamic Routing Networks (DRNets), which support efficient instance-aware inference by routing the input instance to only necessary transformation branches selected from a candidate set of branches for each connection between transformation nodes. The branch selection is dynamically determined via the corresponding branch importance weights, which are first generated from lightweight hypernetworks (RouterNets) and then recalibrated with Gumbel-Softmax before the selection. Extensive experiments show that DRNets can reduce a substantial amount of parameter size and FLOPs during inference with prediction performance comparable to state-of-the-art architectures. |
1405.4100 | Cristian Prisacariu | Cristian Prisacariu | Higher Dimensional Modal Logic | null | null | null | null | cs.LO | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Higher dimensional automata (HDA) are a model of concurrency that can express
most of the traditional partial order models like Mazurkiewicz traces, pomsets,
event structures, or Petri nets. Modal logics, interpreted over Kripke
structures, are the logics for reasoning about sequential behavior and
interleaved concurrency. Modal logic is a well behaved subset of first-order
logic; many variants of modal logic are decidable. However, there are no
modal-like logics for the more expressive HDA models. In this paper we
introduce and investigate a modal logic over HDAs which incorporates two
modalities for reasoning about "during" and "after". We prove that this general
higher dimensional modal logic (HDML) is decidable and we define an axiomatic
system for it. We also show how, when the HDA model is restricted to Kripke
structures, a syntactic restriction of HDML becomes the standard modal logic.
Then we isolate the class of HDAs that encode Mazurkiewicz traces and show how
HDML, with natural definitions of corresponding Until operators, can be
restricted to LTrL (the linear time temporal logic over Mazurkiewicz traces) or
the branching time ISTL. We also study the expressiveness of the basic HDML
language wrt. bisimulations and conclude that HDML captures the
split-bisimulation.
| [
{
"created": "Fri, 16 May 2014 09:11:38 GMT",
"version": "v1"
}
] | 2014-05-19 | [
[
"Prisacariu",
"Cristian",
""
]
] | Higher dimensional automata (HDA) are a model of concurrency that can express most of the traditional partial order models like Mazurkiewicz traces, pomsets, event structures, or Petri nets. Modal logics, interpreted over Kripke structures, are the logics for reasoning about sequential behavior and interleaved concurrency. Modal logic is a well behaved subset of first-order logic; many variants of modal logic are decidable. However, there are no modal-like logics for the more expressive HDA models. In this paper we introduce and investigate a modal logic over HDAs which incorporates two modalities for reasoning about "during" and "after". We prove that this general higher dimensional modal logic (HDML) is decidable and we define an axiomatic system for it. We also show how, when the HDA model is restricted to Kripke structures, a syntactic restriction of HDML becomes the standard modal logic. Then we isolate the class of HDAs that encode Mazurkiewicz traces and show how HDML, with natural definitions of corresponding Until operators, can be restricted to LTrL (the linear time temporal logic over Mazurkiewicz traces) or the branching time ISTL. We also study the expressiveness of the basic HDML language wrt. bisimulations and conclude that HDML captures the split-bisimulation. |
1704.08045 | Quynh Nguyen | Quynh Nguyen and Matthias Hein | The loss surface of deep and wide neural networks | ICML 2017. Main results now hold for larger classes of loss functions | null | null | null | cs.LG cs.AI cs.CV cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the optimization problem behind deep neural networks is highly
non-convex, it is frequently observed in practice that training deep networks
seems possible without getting stuck in suboptimal points. It has been argued
that this is the case as all local minima are close to being globally optimal.
We show that this is (almost) true, in fact almost all local minima are
globally optimal, for a fully connected network with squared loss and analytic
activation function given that the number of hidden units of one layer of the
network is larger than the number of training points and the network structure
from this layer on is pyramidal.
| [
{
"created": "Wed, 26 Apr 2017 10:24:54 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jun 2017 19:43:39 GMT",
"version": "v2"
}
] | 2017-06-14 | [
[
"Nguyen",
"Quynh",
""
],
[
"Hein",
"Matthias",
""
]
] | While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points. It has been argued that this is the case as all local minima are close to being globally optimal. We show that this is (almost) true, in fact almost all local minima are globally optimal, for a fully connected network with squared loss and analytic activation function given that the number of hidden units of one layer of the network is larger than the number of training points and the network structure from this layer on is pyramidal. |
1911.09845 | Jun Gao | Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, Guodong Zhou, Shuming Shi | A Discrete CVAE for Response Generation on Short-Text Conversation | Accepted for publication at EMNLP 2019 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural conversation models such as encoder-decoder models tend to
generate bland and generic responses. Some researchers propose to use the
conditional variational autoencoder (CVAE), which maximizes the lower bound on
the conditional log-likelihood on a continuous latent variable. With different
sampled latent variables, the model is expected to generate diverse responses.
Although the CVAE-based models have shown tremendous potential, their
improvement of generating high-quality responses is still unsatisfactory. In
this paper, we introduce a discrete latent variable with an explicit semantic
meaning to improve the CVAE on short-text conversation. A major advantage of
our model is that we can exploit the semantic distance between the latent
variables to maintain good diversity between the sampled latent variables.
Accordingly, we propose a two-stage sampling approach to enable efficient
diverse variable selection from a large latent space assumed in the short-text
conversation task. Experimental results indicate that our model outperforms
various kinds of generation models under both automatic and human evaluations
and generates more diverse and informative responses.
| [
{
"created": "Fri, 22 Nov 2019 04:14:31 GMT",
"version": "v1"
}
] | 2019-11-25 | [
[
"Gao",
"Jun",
""
],
[
"Bi",
"Wei",
""
],
[
"Liu",
"Xiaojiang",
""
],
[
"Li",
"Junhui",
""
],
[
"Zhou",
"Guodong",
""
],
[
"Shi",
"Shuming",
""
]
] | Neural conversation models such as encoder-decoder models tend to generate bland and generic responses. Some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional log-likelihood on a continuous latent variable. With different sampled latent variables, the model is expected to generate diverse responses. Although the CVAE-based models have shown tremendous potential, their improvement of generating high-quality responses is still unsatisfactory. In this paper, we introduce a discrete latent variable with an explicit semantic meaning to improve the CVAE on short-text conversation. A major advantage of our model is that we can exploit the semantic distance between the latent variables to maintain good diversity between the sampled latent variables. Accordingly, we propose a two-stage sampling approach to enable efficient diverse variable selection from a large latent space assumed in the short-text conversation task. Experimental results indicate that our model outperforms various kinds of generation models under both automatic and human evaluations and generates more diverse and informative responses. |
1701.04551 | Mingchao Yu | Mingchao Yu, Parastoo Sadeghi | Approximating Throughput and Packet Decoding Delay in Linear Network
Coded Wireless Broadcast | 5 pages, 2 figures, 1 table, submitted to ISIT2017 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study a wireless packet broadcast system that uses linear
network coding (LNC) to help receivers recover data packets that are missing
due to packet erasures. We study two intertwined performance metrics, namely
throughput and average packet decoding delay (APDD) and establish strong/weak
approximation relations based on whether the approximation holds for the
performance of every receiver (strong) or for the average performance across
all receivers (weak). We prove an equivalence between strong throughput
approximation and strong APDD approximation. We prove that throughput-optimal
LNC techniques can strongly approximate APDD, and partition-based LNC
techniques may weakly approximate throughput. We also prove that memoryless LNC
techniques, including instantly decodable network coding techniques, are
neither strong throughput and APDD approximation techniques nor weak
throughput approximation techniques.
| [
{
"created": "Tue, 17 Jan 2017 07:26:00 GMT",
"version": "v1"
}
] | 2017-01-18 | [
[
"Yu",
"Mingchao",
""
],
[
"Sadeghi",
"Parastoo",
""
]
] | In this paper, we study a wireless packet broadcast system that uses linear network coding (LNC) to help receivers recover data packets that are missing due to packet erasures. We study two intertwined performance metrics, namely throughput and average packet decoding delay (APDD) and establish strong/weak approximation relations based on whether the approximation holds for the performance of every receiver (strong) or for the average performance across all receivers (weak). We prove an equivalence between strong throughput approximation and strong APDD approximation. We prove that throughput-optimal LNC techniques can strongly approximate APDD, and partition-based LNC techniques may weakly approximate throughput. We also prove that memoryless LNC techniques, including instantly decodable network coding techniques, are neither strong throughput and APDD approximation techniques nor weak throughput approximation techniques. |
2404.00367 | Yan Zhang Main | Bin Wang, Yan Zhang, Yan Ma, Yaohui Jin, Yanyan Xu | SA-LSPL: Sequence-Aware Long- and Short-Term Preference Learning for
next POI recommendation | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The next Point of Interest (POI) recommendation aims to recommend the next
POI for users at a specific time. As users' check-in records can be viewed as a
long sequence, methods based on Recurrent Neural Networks (RNNs) have recently
shown good applicability to this task. However, existing methods often struggle
to fully explore the spatio-temporal correlations and dependencies at the
sequence level, and do not fully consider the various factors
influencing users' preferences. To address these issues, we propose a novel
approach called Sequence-Aware Long- and Short-Term Preference Learning
(SA-LSPL) for next-POI recommendation. We combine various information features
to effectively model users' long-term preferences. Specifically, our proposed
model uses a multi-modal embedding module to embed diverse check-in details,
taking into account both user's personalized preferences and social influences
comprehensively. Additionally, we consider explicit spatio-temporal
correlations at the sequence level and implicit sequence dependencies.
Furthermore, SA-LSPL learns the spatio-temporal correlations of consecutive and
non-consecutive visits in the current check-in sequence, as well as transition
dependencies between categories, providing a comprehensive capture of user's
short-term preferences. Extensive experiments on two real-world datasets
demonstrate the superiority of SA-LSPL over state-of-the-art baseline methods.
| [
{
"created": "Sat, 30 Mar 2024 13:40:25 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Wang",
"Bin",
""
],
[
"Zhang",
"Yan",
""
],
[
"Ma",
"Yan",
""
],
[
"Jin",
"Yaohui",
""
],
[
"Xu",
"Yanyan",
""
]
] | The next Point of Interest (POI) recommendation aims to recommend the next POI for users at a specific time. As users' check-in records can be viewed as a long sequence, methods based on Recurrent Neural Networks (RNNs) have recently shown good applicability to this task. However, existing methods often struggle to fully explore the spatio-temporal correlations and dependencies at the sequence level, and do not fully consider the various factors influencing users' preferences. To address these issues, we propose a novel approach called Sequence-Aware Long- and Short-Term Preference Learning (SA-LSPL) for next-POI recommendation. We combine various information features to effectively model users' long-term preferences. Specifically, our proposed model uses a multi-modal embedding module to embed diverse check-in details, taking into account both user's personalized preferences and social influences comprehensively. Additionally, we consider explicit spatio-temporal correlations at the sequence level and implicit sequence dependencies. Furthermore, SA-LSPL learns the spatio-temporal correlations of consecutive and non-consecutive visits in the current check-in sequence, as well as transition dependencies between categories, providing a comprehensive capture of user's short-term preferences. Extensive experiments on two real-world datasets demonstrate the superiority of SA-LSPL over state-of-the-art baseline methods. |
1303.5759 | Hong Xu | Hong Xu | An Efficient Implementation of Belief Function Propagation | Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991) | null | null | UAI-P-1991-PG-425-432 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The local computation technique (Shafer et al. 1987, Shafer and Shenoy 1988,
Shenoy and Shafer 1986) is used for propagating belief functions in a so-called
Markov Tree. In this paper, we describe an efficient implementation of belief
function propagation on the basis of the local computation technique. The
presented method avoids all the redundant computations in the propagation
process, and so makes the computational complexity decrease with respect to
other existing implementations (Hsia and Shenoy 1989, Zarley et al. 1988). We
also give a combined algorithm for both propagation and re-propagation which
makes the re-propagation process more efficient when one or more of the prior
belief functions are changed.
| [
{
"created": "Wed, 20 Mar 2013 15:34:07 GMT",
"version": "v1"
}
] | 2013-03-26 | [
[
"Xu",
"Hong",
""
]
] | The local computation technique (Shafer et al. 1987, Shafer and Shenoy 1988, Shenoy and Shafer 1986) is used for propagating belief functions in a so-called Markov Tree. In this paper, we describe an efficient implementation of belief function propagation on the basis of the local computation technique. The presented method avoids all the redundant computations in the propagation process, thereby reducing the computational complexity with respect to other existing implementations (Hsia and Shenoy 1989, Zarley et al. 1988). We also give a combined algorithm for both propagation and re-propagation which makes the re-propagation process more efficient when one or more of the prior belief functions are changed. |
2204.07936 | Jessica Leu | Jessica Leu, Yujiao Cheng, Changliu Liu, Masayoshi Tomizuka | Robust Task Planning for Assembly Lines with Human-Robot Collaboration | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Efficient and robust task planning for a human-robot collaboration (HRC)
system remains challenging. The human-aware task planner needs to assign jobs
to both robots and human workers so that they can work collaboratively to
achieve better time efficiency. However, the complexity of the tasks and the
stochastic nature of the human collaborators bring challenges to such task
planning. To reduce the complexity of the planning problem, we utilize the
hierarchical task model, which explicitly captures the sequential and parallel
relationships of the task. We model human movements with the sigma-lognormal
functions to account for human-induced uncertainties. A human action model
adaptation scheme is applied during run-time, and it provides a measure for
modeling the human-induced uncertainties. We propose a sampling-based method to
estimate human job completion time uncertainties. Next, we propose a robust
task planner, which formulates the planning problem as a robust optimization
problem by considering the task structure and the uncertainties. We conduct
simulations of a robot arm collaborating with a human worker in an electronics
assembly setting. The results show that our proposed planner can reduce task
completion time when human-induced uncertainties occur compared to the baseline
planner.
| [
{
"created": "Sun, 17 Apr 2022 05:54:01 GMT",
"version": "v1"
}
] | 2022-04-19 | [
[
"Leu",
"Jessica",
""
],
[
"Cheng",
"Yujiao",
""
],
[
"Liu",
"Changliu",
""
],
[
"Tomizuka",
"Masayoshi",
""
]
] | Efficient and robust task planning for a human-robot collaboration (HRC) system remains challenging. The human-aware task planner needs to assign jobs to both robots and human workers so that they can work collaboratively to achieve better time efficiency. However, the complexity of the tasks and the stochastic nature of the human collaborators bring challenges to such task planning. To reduce the complexity of the planning problem, we utilize the hierarchical task model, which explicitly captures the sequential and parallel relationships of the task. We model human movements with the sigma-lognormal functions to account for human-induced uncertainties. A human action model adaptation scheme is applied during run-time, and it provides a measure for modeling the human-induced uncertainties. We propose a sampling-based method to estimate human job completion time uncertainties. Next, we propose a robust task planner, which formulates the planning problem as a robust optimization problem by considering the task structure and the uncertainties. We conduct simulations of a robot arm collaborating with a human worker in an electronics assembly setting. The results show that our proposed planner can reduce task completion time when human-induced uncertainties occur compared to the baseline planner. |
2104.10715 | Rishab Khincha | Utkarsh Sarawgi, Rishab Khincha, Wazeer Zulfikar, Satrajit Ghosh,
Pattie Maes | Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings | Accepted at IJCNN 2021, to appear in IEEE proceedings. Equal
contributions from US, RK and WZ | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reliability of machine learning (ML) systems is crucial in safety-critical
applications such as healthcare, and uncertainty estimation is a widely
researched method to highlight the confidence of ML systems in deployment.
Sequential and parallel ensemble techniques have shown improved performance of
ML systems in multi-modal settings by leveraging the feature sets together. We
propose an uncertainty-aware boosting technique for multi-modal ensembling in
order to focus on the data points with higher associated uncertainty estimates,
rather than the ones with higher loss values. We evaluate this method on
healthcare tasks related to Dementia and Parkinson's disease which involve
real-world multi-modal speech and text data, wherein our method shows an
improved performance. Additional analysis suggests that introducing
uncertainty-awareness into the boosted ensembles decreases the overall entropy
of the system, making it more robust to heteroscedasticity in the data, as well
as better calibrating each of the modalities along with high quality prediction
intervals. We open-source our entire codebase at
https://github.com/usarawgi911/Uncertainty-aware-boosting
| [
{
"created": "Wed, 21 Apr 2021 18:28:13 GMT",
"version": "v1"
}
] | 2021-04-23 | [
[
"Sarawgi",
"Utkarsh",
""
],
[
"Khincha",
"Rishab",
""
],
[
"Zulfikar",
"Wazeer",
""
],
[
"Ghosh",
"Satrajit",
""
],
[
"Maes",
"Pattie",
""
]
] | Reliability of machine learning (ML) systems is crucial in safety-critical applications such as healthcare, and uncertainty estimation is a widely researched method to highlight the confidence of ML systems in deployment. Sequential and parallel ensemble techniques have shown improved performance of ML systems in multi-modal settings by leveraging the feature sets together. We propose an uncertainty-aware boosting technique for multi-modal ensembling in order to focus on the data points with higher associated uncertainty estimates, rather than the ones with higher loss values. We evaluate this method on healthcare tasks related to Dementia and Parkinson's disease which involve real-world multi-modal speech and text data, wherein our method shows an improved performance. Additional analysis suggests that introducing uncertainty-awareness into the boosted ensembles decreases the overall entropy of the system, making it more robust to heteroscedasticity in the data, as well as better calibrating each of the modalities along with high quality prediction intervals. We open-source our entire codebase at https://github.com/usarawgi911/Uncertainty-aware-boosting |
2005.03210 | Dylan Losey | Hong Jun Jeon, Dylan P. Losey, Dorsa Sadigh | Shared Autonomy with Learned Latent Actions | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Assistive robots enable people with disabilities to conduct everyday tasks on
their own. However, these tasks can be complex, containing both coarse reaching
motions and fine-grained manipulation. For example, when eating, not only does
one need to move to the correct food item, but they must also precisely
manipulate the food in different ways (e.g., cutting, stabbing, scooping).
Shared autonomy methods make robot teleoperation safer and more precise by
arbitrating user inputs with robot controls. However, these works have focused
mainly on the high-level task of reaching a goal from a discrete set, while
largely ignoring manipulation of objects at that goal. Meanwhile,
dimensionality reduction techniques for teleoperation map useful
high-dimensional robot actions into an intuitive low-dimensional controller,
but it is unclear if these methods can achieve the requisite precision for
tasks like eating. Our insight is that---by combining intuitive embeddings from
learned latent actions with robotic assistance from shared autonomy---we can
enable precise assistive manipulation. In this work, we adopt learned latent
actions for shared autonomy by proposing a new model structure that changes the
meaning of the human's input based on the robot's confidence of the goal. We
show convergence bounds on the robot's distance to the most likely goal, and
develop a training procedure to learn a controller that is able to move between
goals even in the presence of shared autonomy. We evaluate our method in
simulations and an eating user study. See videos of our experiments here:
https://youtu.be/7BouKojzVyk
| [
{
"created": "Thu, 7 May 2020 02:39:56 GMT",
"version": "v1"
},
{
"created": "Mon, 11 May 2020 16:50:25 GMT",
"version": "v2"
}
] | 2020-05-12 | [
[
"Jeon",
"Hong Jun",
""
],
[
"Losey",
"Dylan P.",
""
],
[
"Sadigh",
"Dorsa",
""
]
] | Assistive robots enable people with disabilities to conduct everyday tasks on their own. However, these tasks can be complex, containing both coarse reaching motions and fine-grained manipulation. For example, when eating, not only does one need to move to the correct food item, but they must also precisely manipulate the food in different ways (e.g., cutting, stabbing, scooping). Shared autonomy methods make robot teleoperation safer and more precise by arbitrating user inputs with robot controls. However, these works have focused mainly on the high-level task of reaching a goal from a discrete set, while largely ignoring manipulation of objects at that goal. Meanwhile, dimensionality reduction techniques for teleoperation map useful high-dimensional robot actions into an intuitive low-dimensional controller, but it is unclear if these methods can achieve the requisite precision for tasks like eating. Our insight is that---by combining intuitive embeddings from learned latent actions with robotic assistance from shared autonomy---we can enable precise assistive manipulation. In this work, we adopt learned latent actions for shared autonomy by proposing a new model structure that changes the meaning of the human's input based on the robot's confidence of the goal. We show convergence bounds on the robot's distance to the most likely goal, and develop a training procedure to learn a controller that is able to move between goals even in the presence of shared autonomy. We evaluate our method in simulations and an eating user study. See videos of our experiments here: https://youtu.be/7BouKojzVyk |
cs/0605060 | Rajiv Ranjan Mr. | Rajiv Ranjan, Aaron Harwood and Rajkumar Buyya | A Case for Cooperative and Incentive-Based Coupling of Distributed
Clusters | 22 pages, extended version of the conference paper published at IEEE
Cluster'05, Boston, MA | In Proceedings of the 7th IEEE International Conference on Cluster
Computing (Cluster 2005), IEEE Computer Society Press, September 27 - 30,
2005, Boston, Massachusetts, USA. | 10.1109/CLUSTR.2005.347038 | null | cs.DC | null | Research interest in Grid computing has grown significantly over the past
five years. Management of distributed resources is one of the key issues in
Grid computing. Central to management of resources is the effectiveness of
resource allocation as it determines the overall utility of the system. The
current approaches to superscheduling in a grid environment are non-coordinated
since application level schedulers or brokers make scheduling decisions
independently of the others in the system. Clearly, this can exacerbate the
load sharing and utilization problems of distributed resources due to
suboptimal schedules that are likely to occur. To overcome these limitations,
we propose a mechanism for coordinated sharing of distributed clusters based on
computational economy. The resulting environment, called
\emph{Grid-Federation}, allows the transparent use of resources from the
federation when local resources are insufficient to meet its users'
requirements. The use of computational economy methodology in coordinating
resource allocation not only facilitates the QoS based scheduling, but also
enhances utility delivered by resources.
| [
{
"created": "Mon, 15 May 2006 10:21:22 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Ranjan",
"Rajiv",
""
],
[
"Harwood",
"Aaron",
""
],
[
"Buyya",
"Rajkumar",
""
]
] | Research interest in Grid computing has grown significantly over the past five years. Management of distributed resources is one of the key issues in Grid computing. Central to management of resources is the effectiveness of resource allocation as it determines the overall utility of the system. The current approaches to superscheduling in a grid environment are non-coordinated since application level schedulers or brokers make scheduling decisions independently of the others in the system. Clearly, this can exacerbate the load sharing and utilization problems of distributed resources due to suboptimal schedules that are likely to occur. To overcome these limitations, we propose a mechanism for coordinated sharing of distributed clusters based on computational economy. The resulting environment, called \emph{Grid-Federation}, allows the transparent use of resources from the federation when local resources are insufficient to meet its users' requirements. The use of computational economy methodology in coordinating resource allocation not only facilitates the QoS based scheduling, but also enhances utility delivered by resources. |
1507.06462 | Matteo Sammartino | Roberto Bruni, Ugo Montanari, Matteo Sammartino | A coalgebraic semantics for causality in Petri nets | Accepted by Journal of Logical and Algebraic Methods in Programming | null | 10.1016/j.jlamp.2015.07.003 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | In this paper we revisit some pioneering efforts to equip Petri nets with
compact operational models for expressing causality. The models we propose have
a bisimilarity relation and a minimal representative for each equivalence
class, and they can be fully explained as coalgebras on a presheaf category on
an index category of partial orders. First, we provide a set-theoretic model in
the form of a causal case graph, that is, a labeled transition system where
states and transitions represent markings and firings of the net, respectively,
and are equipped with causal information. Most importantly, each state has a
poset representing causal dependencies among past events. Our first result
shows the correspondence with behavior structure semantics as proposed by
Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and
have infinitely many states, but we show how they can be refined to get an
equivalent finitely-branching model. In it, states are equipped with
symmetries, which are essential for the existence of a minimal, often
finite-state, model. The next step is constructing a coalgebraic model. We
exploit the fact that events can be represented as names, and event generation
as name generation. Thus we can apply the Fiore-Turi framework: we model causal
relations as a suitable category of posets with action labels, and generation
of new events with causal dependencies as an endofunctor on this category. Then
we define a well-behaved category of coalgebras. Our coalgebraic model is still
infinite-state, but we exploit the equivalence between coalgebras over a class
of presheaves and History Dependent automata to derive a compact
representation, which is equivalent to our set-theoretical compact model.
Remarkably, state reduction is automatically performed along the equivalence.
| [
{
"created": "Thu, 23 Jul 2015 12:08:22 GMT",
"version": "v1"
}
] | 2015-07-24 | [
[
"Bruni",
"Roberto",
""
],
[
"Montanari",
"Ugo",
""
],
[
"Sammartino",
"Matteo",
""
]
] | In this paper we revisit some pioneering efforts to equip Petri nets with compact operational models for expressing causality. The models we propose have a bisimilarity relation and a minimal representative for each equivalence class, and they can be fully explained as coalgebras on a presheaf category on an index category of partial orders. First, we provide a set-theoretic model in the form of a causal case graph, that is, a labeled transition system where states and transitions represent markings and firings of the net, respectively, and are equipped with causal information. Most importantly, each state has a poset representing causal dependencies among past events. Our first result shows the correspondence with behavior structure semantics as proposed by Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and have infinitely many states, but we show how they can be refined to get an equivalent finitely-branching model. In it, states are equipped with symmetries, which are essential for the existence of a minimal, often finite-state, model. The next step is constructing a coalgebraic model. We exploit the fact that events can be represented as names, and event generation as name generation. Thus we can apply the Fiore-Turi framework: we model causal relations as a suitable category of posets with action labels, and generation of new events with causal dependencies as an endofunctor on this category. Then we define a well-behaved category of coalgebras. Our coalgebraic model is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, which is equivalent to our set-theoretical compact model. Remarkably, state reduction is automatically performed along the equivalence.
2205.01643 | Jinze Yu | Jinze Yu, Jiaming Liu, Xiaobao Wei, Haoyi Zhou, Yohei Nakata, Denis
Gudovskiy, Tomoyuki Okuno, Jianxin Li, Kurt Keutzer, Shanghang Zhang | MTTrans: Cross-Domain Object Detection with Mean-Teacher Transformer | Accepted by ECCV 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recently, DEtection TRansformer (DETR), an end-to-end object detection
pipeline, has achieved promising performance. However, it requires large-scale
labeled data and suffers from domain shift, especially when no labeled data is
available in the target domain. To solve this problem, we propose an end-to-end
cross-domain detection Transformer based on the mean teacher framework,
MTTrans, which can fully exploit unlabeled target domain data in object
detection training and transfer knowledge between domains via pseudo labels. We
further propose the comprehensive multi-level feature alignment to improve the
pseudo labels generated by the mean teacher framework taking advantage of the
cross-scale self-attention mechanism in Deformable DETR. Image and object
features are aligned at the local, global, and instance levels with domain
query-based feature alignment (DQFA), bi-level graph-based prototype alignment
(BGPA), and token-wise image feature alignment (TIFA). On the other hand, the
unlabeled target domain data pseudo-labeled and available for the object
detection training by the mean teacher framework can lead to better feature
extraction and alignment. Thus, the mean teacher framework and the
comprehensive multi-level feature alignment can be optimized iteratively and
mutually based on the architecture of Transformers. Extensive experiments
demonstrate that our proposed method achieves state-of-the-art performance in
three domain adaptation scenarios, especially the result of Sim10k to
Cityscapes scenario is remarkably improved from 52.6 mAP to 57.9 mAP. Code will
be released.
| [
{
"created": "Tue, 3 May 2022 17:11:55 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Aug 2022 09:55:23 GMT",
"version": "v2"
}
] | 2022-08-17 | [
[
"Yu",
"Jinze",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Wei",
"Xiaobao",
""
],
[
"Zhou",
"Haoyi",
""
],
[
"Nakata",
"Yohei",
""
],
[
"Gudovskiy",
"Denis",
""
],
[
"Okuno",
"Tomoyuki",
""
],
[
"Li",
"Jianxin",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Zhang",
"Shanghang",
""
]
] | Recently, DEtection TRansformer (DETR), an end-to-end object detection pipeline, has achieved promising performance. However, it requires large-scale labeled data and suffers from domain shift, especially when no labeled data is available in the target domain. To solve this problem, we propose an end-to-end cross-domain detection Transformer based on the mean teacher framework, MTTrans, which can fully exploit unlabeled target domain data in object detection training and transfer knowledge between domains via pseudo labels. We further propose the comprehensive multi-level feature alignment to improve the pseudo labels generated by the mean teacher framework taking advantage of the cross-scale self-attention mechanism in Deformable DETR. Image and object features are aligned at the local, global, and instance levels with domain query-based feature alignment (DQFA), bi-level graph-based prototype alignment (BGPA), and token-wise image feature alignment (TIFA). On the other hand, the unlabeled target domain data pseudo-labeled and available for the object detection training by the mean teacher framework can lead to better feature extraction and alignment. Thus, the mean teacher framework and the comprehensive multi-level feature alignment can be optimized iteratively and mutually based on the architecture of Transformers. Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance in three domain adaptation scenarios, especially the result of Sim10k to Cityscapes scenario is remarkably improved from 52.6 mAP to 57.9 mAP. Code will be released. |
2006.01304 | Kyungmi Lee | Kyungmi Lee, Anantha P. Chandrakasan | Rethinking Empirical Evaluation of Adversarial Robustness Using
First-Order Attack Methods | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We identify three common cases that lead to overestimation of adversarial
accuracy against bounded first-order attack methods, which is popularly used as
a proxy for adversarial robustness in empirical studies. For each case, we
propose compensation methods that either address sources of inaccurate gradient
computation, such as numerical instability near zero and non-differentiability,
or reduce the total number of back-propagations for iterative attacks by
approximating second-order information. These compensation methods can be
combined with existing attack methods for a more precise empirical evaluation
metric. We illustrate the impact of these three cases with examples of
practical interest, such as benchmarking model capacity and regularization
techniques for robustness. Overall, our work shows that overestimated
adversarial accuracy that is not indicative of robustness is prevalent even for
conventionally trained deep neural networks, and highlights cautions of using
empirical evaluation without guaranteed bounds.
| [
{
"created": "Mon, 1 Jun 2020 22:55:09 GMT",
"version": "v1"
}
] | 2020-06-03 | [
[
"Lee",
"Kyungmi",
""
],
[
"Chandrakasan",
"Anantha P.",
""
]
] | We identify three common cases that lead to overestimation of adversarial accuracy against bounded first-order attack methods, which is popularly used as a proxy for adversarial robustness in empirical studies. For each case, we propose compensation methods that either address sources of inaccurate gradient computation, such as numerical instability near zero and non-differentiability, or reduce the total number of back-propagations for iterative attacks by approximating second-order information. These compensation methods can be combined with existing attack methods for a more precise empirical evaluation metric. We illustrate the impact of these three cases with examples of practical interest, such as benchmarking model capacity and regularization techniques for robustness. Overall, our work shows that overestimated adversarial accuracy that is not indicative of robustness is prevalent even for conventionally trained deep neural networks, and highlights cautions of using empirical evaluation without guaranteed bounds. |
1511.03518 | Ya-Hui An | Ya-Hui An, Qiang Dong, Chong-Jing Sun, Da-Cheng Nie and Yan Fu | Diffusion-like recommendation with enhanced similarity of objects | null | Physica A: Statistical Mechanics and its Applications 461 (2016)
708-715 | 10.1016/j.physa.2016.06.027 | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent decades, diversity and accuracy have been regarded as two important
measures in evaluating a recommendation model. However, a clear concern is that
a model focusing excessively on one measure will put the other one at risk,
thus it is not easy to greatly improve diversity and accuracy simultaneously.
In this paper, we propose to enhance the Resource-Allocation (RA) similarity in
resource transfer equations of diffusion-like models, by giving a tunable
exponent to the RA similarity, and traversing the value of the exponent to
achieve the optimal recommendation results. In this way, we can increase the
recommendation scores (allocated resource) of many unpopular objects.
Experiments on three benchmark data sets, MovieLens, Netflix, and RateYourMusic
show that the modified models can yield remarkable performance improvement
compared with the original ones.
| [
{
"created": "Wed, 11 Nov 2015 14:43:32 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Oct 2018 06:13:15 GMT",
"version": "v2"
}
] | 2018-10-17 | [
[
"An",
"Ya-Hui",
""
],
[
"Dong",
"Qiang",
""
],
[
"Sun",
"Chong-Jing",
""
],
[
"Nie",
"Da-Cheng",
""
],
[
"Fu",
"Yan",
""
]
] | In recent decades, diversity and accuracy have been regarded as two important measures in evaluating a recommendation model. However, a clear concern is that a model focusing excessively on one measure will put the other one at risk, thus it is not easy to greatly improve diversity and accuracy simultaneously. In this paper, we propose to enhance the Resource-Allocation (RA) similarity in resource transfer equations of diffusion-like models, by giving a tunable exponent to the RA similarity, and traversing the value of the exponent to achieve the optimal recommendation results. In this way, we can increase the recommendation scores (allocated resource) of many unpopular objects. Experiments on three benchmark data sets, MovieLens, Netflix, and RateYourMusic show that the modified models can yield remarkable performance improvement compared with the original ones.
2003.09986 | Yao Qiang | Yao Qiang, Xin Li, Dongxiao Zhu | Toward Tag-free Aspect Based Sentiment Analysis: A Multiple Attention
Network Approach | to appear in the proceedings of IJCNN'20 | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing aspect based sentiment analysis (ABSA) approaches leverage various
neural network models to extract the aspect sentiments via learning
aspect-specific feature representations. However, these approaches heavily rely
on manual tagging of user reviews according to the predefined aspects as the
input, a laborious and time-consuming process. Moreover, the underlying methods
do not explain how and why the opposing aspect level polarities in a user
review lead to the overall polarity. In this paper, we tackle these two
problems by designing and implementing a new Multiple-Attention Network (MAN)
approach for more powerful ABSA without the need for aspect tags using two new
tag-free data sets crawled directly from TripAdvisor
({https://www.tripadvisor.com}). With the Self- and Position-Aware attention
mechanism, MAN is capable of extracting both aspect level and overall
sentiments from the text reviews using the aspect level and overall customer
ratings, and it can also detect the vital aspect(s) leading to the overall
sentiment polarity among different aspects via a new aspect ranking scheme. We
carry out extensive experiments to demonstrate the strong performance of MAN
compared to other state-of-the-art ABSA approaches and the explainability of
our approach by visualizing and interpreting attention weights in case studies.
| [
{
"created": "Sun, 22 Mar 2020 20:18:20 GMT",
"version": "v1"
}
] | 2020-03-24 | [
[
"Qiang",
"Yao",
""
],
[
"Li",
"Xin",
""
],
[
"Zhu",
"Dongxiao",
""
]
] | Existing aspect based sentiment analysis (ABSA) approaches leverage various neural network models to extract the aspect sentiments via learning aspect-specific feature representations. However, these approaches heavily rely on manual tagging of user reviews according to the predefined aspects as the input, a laborious and time-consuming process. Moreover, the underlying methods do not explain how and why the opposing aspect level polarities in a user review lead to the overall polarity. In this paper, we tackle these two problems by designing and implementing a new Multiple-Attention Network (MAN) approach for more powerful ABSA without the need for aspect tags using two new tag-free data sets crawled directly from TripAdvisor ({https://www.tripadvisor.com}). With the Self- and Position-Aware attention mechanism, MAN is capable of extracting both aspect level and overall sentiments from the text reviews using the aspect level and overall customer ratings, and it can also detect the vital aspect(s) leading to the overall sentiment polarity among different aspects via a new aspect ranking scheme. We carry out extensive experiments to demonstrate the strong performance of MAN compared to other state-of-the-art ABSA approaches and the explainability of our approach by visualizing and interpreting attention weights in case studies. |
2103.10241 | Ke Lai | Ke Lai, Jing Lei, Yansha Deng, Lei Wen, Gaojie Chen | Analyzing Uplink Grant-free Sparse Code Multiple Access System in
Massive IoT Networks | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grant-free sparse code multiple access (GF-SCMA) is considered to be a
promising multiple access candidate for future wireless networks. In this
paper, we focus on characterizing the performance of uplink GF-SCMA schemes in
a network with ubiquitous connections, such as the Internet of Things (IoT)
networks. To provide a tractable approach to evaluate the performance of
GF-SCMA, we first develop a theoretical model taking into account the property
of multi-user detection (MUD) in the SCMA system. We then analyze the error
rate performance of GF-SCMA in the case of codebook collision to investigate
the reliability of GF-SCMA when reusing codebook in massive IoT networks. For
performance evaluation, accurate approximations for both success probability
and average symbol error probability (ASEP) are derived. To elaborate further,
we utilize the analytical results to discuss the impact of codeword sparse
degree in GF-SCMA. After that, we conduct a comparative study between SCMA and
its variant, dense code multiple access (DCMA), with GF transmission to offer
insights into the effectiveness of these two schemes. This facilitates the
GF-SCMA system design in practical implementation. Simulation results show that
denser codebooks can help to support more UEs and increase the reliability of
data transmission in a GF-SCMA network. Moreover, a higher success probability
can be achieved by GF-SCMA with denser UE deployment at low detection thresholds
since SCMA can achieve overloading gain.
| [
{
"created": "Thu, 18 Mar 2021 13:23:23 GMT",
"version": "v1"
}
] | 2021-03-19 | [
[
"Lai",
"Ke",
""
],
[
"Lei",
"Jing",
""
],
[
"Deng",
"Yansha",
""
],
[
"Wen",
"Lei",
""
],
[
"Chen",
"Gaojie",
""
]
] | Grant-free sparse code multiple access (GF-SCMA) is considered to be a promising multiple access candidate for future wireless networks. In this paper, we focus on characterizing the performance of uplink GF-SCMA schemes in a network with ubiquitous connections, such as the Internet of Things (IoT) networks. To provide a tractable approach to evaluate the performance of GF-SCMA, we first develop a theoretical model taking into account the property of multi-user detection (MUD) in the SCMA system. We then analyze the error rate performance of GF-SCMA in the case of codebook collision to investigate the reliability of GF-SCMA when reusing codebook in massive IoT networks. For performance evaluation, accurate approximations for both success probability and average symbol error probability (ASEP) are derived. To elaborate further, we utilize the analytical results to discuss the impact of codeword sparse degree in GF-SCMA. After that, we conduct a comparative study between SCMA and its variant, dense code multiple access (DCMA), with GF transmission to offer insights into the effectiveness of these two schemes. This facilitates the GF-SCMA system design in practical implementation. Simulation results show that denser codebooks can help to support more UEs and increase the reliability of data transmission in a GF-SCMA network. Moreover, a higher success probability can be achieved by GF-SCMA with denser UE deployment at low detection thresholds since SCMA can achieve overloading gain.
0710.3279 | Meixia Tao | Meixia Tao, Ying-Chang Liang and Fan Zhang | Resource Allocation for Delay Differentiated Traffic in Multiuser OFDM
Systems | 29 pages, 8 figures, submitted to IEEE Transactions on Wireless
Communications | null | 10.1109/ICC.2006.255331 | null | cs.NI cs.IT math.IT | null | Most existing work on adaptive allocation of subcarriers and power in
multiuser orthogonal frequency division multiplexing (OFDM) systems has focused
on homogeneous traffic consisting solely of either delay-constrained data
(guaranteed service) or non-delay-constrained data (best-effort service). In
this paper, we investigate the resource allocation problem in a heterogeneous
multiuser OFDM system with both delay-constrained (DC) and
non-delay-constrained (NDC) traffic. The objective is to maximize the sum-rate
of all the users with NDC traffic while maintaining guaranteed rates for the
users with DC traffic under a total transmit power constraint. Through our
analysis we show that the optimal power allocation over subcarriers follows a
multi-level water-filling principle; moreover, the valid candidates competing
for each subcarrier include only one NDC user but all DC users. By converting
this combinatorial problem with exponential complexity into a convex problem or
showing that it can be solved in the dual domain, efficient iterative
algorithms are proposed to find the optimal solutions. To further reduce the
computational cost, a low-complexity suboptimal algorithm is also developed.
Numerical studies are conducted to evaluate the performance of the proposed
algorithms in terms of service outage probability, achievable transmission rate
pairs for DC and NDC traffic, and multiuser diversity.
| [
{
"created": "Wed, 17 Oct 2007 12:04:34 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Tao",
"Meixia",
""
],
[
"Liang",
"Ying-Chang",
""
],
[
"Zhang",
"Fan",
""
]
] | Most existing work on adaptive allocation of subcarriers and power in multiuser orthogonal frequency division multiplexing (OFDM) systems has focused on homogeneous traffic consisting solely of either delay-constrained data (guaranteed service) or non-delay-constrained data (best-effort service). In this paper, we investigate the resource allocation problem in a heterogeneous multiuser OFDM system with both delay-constrained (DC) and non-delay-constrained (NDC) traffic. The objective is to maximize the sum-rate of all the users with NDC traffic while maintaining guaranteed rates for the users with DC traffic under a total transmit power constraint. Through our analysis we show that the optimal power allocation over subcarriers follows a multi-level water-filling principle; moreover, the valid candidates competing for each subcarrier include only one NDC user but all DC users. By converting this combinatorial problem with exponential complexity into a convex problem or showing that it can be solved in the dual domain, efficient iterative algorithms are proposed to find the optimal solutions. To further reduce the computational cost, a low-complexity suboptimal algorithm is also developed. Numerical studies are conducted to evaluate the performance of the proposed algorithms in terms of service outage probability, achievable transmission rate pairs for DC and NDC traffic, and multiuser diversity. |
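The multi-level water-filling principle this abstract refers to builds on plain water-filling: allocate power p_i = max(0, mu - 1/g_i) across channels with gains g_i, choosing the water level mu so the powers sum to the budget. The sketch below (function name and bisection scheme are illustrative, not the paper's algorithm, and it omits the multi-level structure and the DC/NDC user competition) shows the single-level building block:

```python
def water_filling(gains, total_power, tol=1e-9):
    """Single-level water-filling: p_i = max(0, mu - 1/g_i), with the
    water level mu found by bisection so that sum(p_i) == total_power."""
    # Upper bound on mu: even if all power went to the weakest channel.
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu  # water level too high, spend less
        else:
            lo = mu  # water level too low, spend more
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

With gains [1.0, 0.5] and a unit power budget, the stronger channel absorbs all the power, as expected from the water-filling picture.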
cs/0110052 | Nandlal L. Sarda | N. L. Sarda and Ankur Jain | Mragyati : A System for Keyword-based Searching in Databases | null | null | null | null | cs.DB | null | The web, through many search engine sites, has popularized the keyword-based
search paradigm, where a user can specify a string of keywords and expect to
retrieve relevant documents, possibly ranked by their relevance to the query.
Since a lot of information is stored in databases (and not as HTML documents),
it is important to provide a similar search paradigm for databases, where users
can query a database without knowing the database schema and database query
languages such as SQL. In this paper, we propose such a database search system,
which accepts a free-form query as a collection of keywords, translates it into
queries on the database using the database metadata, and presents query results
in a well-structured and browsable form. The system maps keywords onto the
database schema and uses inter-relationships (i.e., data semantics) among the
referred tables to generate meaningful query results. We also describe our
prototype for database search, called Mragyati. The approach proposed here is
scalable, as it does not build an in-memory graph of the entire database for
searching for relationships among the objects selected by the user's query.
| [
{
"created": "Thu, 25 Oct 2001 08:55:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Sarda",
"N. L.",
""
],
[
"Jain",
"Ankur",
""
]
] | The web, through many search engine sites, has popularized the keyword-based search paradigm, where a user can specify a string of keywords and expect to retrieve relevant documents, possibly ranked by their relevance to the query. Since a lot of information is stored in databases (and not as HTML documents), it is important to provide a similar search paradigm for databases, where users can query a database without knowing the database schema and database query languages such as SQL. In this paper, we propose such a database search system, which accepts a free-form query as a collection of keywords, translates it into queries on the database using the database metadata, and presents query results in a well-structured and browsable form. The system maps keywords onto the database schema and uses inter-relationships (i.e., data semantics) among the referred tables to generate meaningful query results. We also describe our prototype for database search, called Mragyati. The approach proposed here is scalable, as it does not build an in-memory graph of the entire database for searching for relationships among the objects selected by the user's query. |
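The keyword-to-query translation described in this abstract can be sketched roughly as follows. The index, table, and column names are invented for illustration; Mragyati's actual metadata handling and its join generation via data semantics are more involved than this:

```python
# Hypothetical keyword -> (table, column) index, standing in for the
# database-metadata lookup the system performs; all names are invented.
INDEX = {
    "sarda": ("authors", "name"),
    "databases": ("papers", "title"),
}

def keywords_to_sql(keywords):
    """Map each keyword onto the schema and emit a SELECT over the
    referenced tables (join conditions via data semantics omitted)."""
    hits = [(kw, *INDEX[kw.lower()]) for kw in keywords if kw.lower() in INDEX]
    if not hits:
        return None  # no keyword matched the schema metadata
    tables = ", ".join(sorted({t for _, t, _ in hits}))
    preds = " AND ".join(f"{t}.{c} LIKE '%{kw}%'" for kw, t, c in hits)
    return f"SELECT * FROM {tables} WHERE {preds}"
```

For example, the keywords ["Sarda", "databases"] produce one query touching both the `authors` and `papers` tables, without the user knowing SQL or the schema.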
2407.12101 | Marc Pickett | Marc Pickett, Jeremy Hartman, Ayan Kumar Bhowmick, Raquib-ul Alam,
Aditya Vempaty | Better RAG using Relevant Information Gain | 4 page paper submitted to EMNLP | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | A common way to extend the memory of large language models (LLMs) is by
retrieval augmented generation (RAG), which inserts text retrieved from a
larger memory into an LLM's context window. However, the context window is
typically limited to several thousand tokens, which limits the number of
retrieved passages that can inform a model's response. For this reason, it's
important to avoid occupying context window space with redundant information by
ensuring a degree of diversity among retrieved passages. At the same time, the
information should also be relevant to the current task. Most prior methods
that encourage diversity among retrieved results, such as Maximal Marginal
Relevance (MMR), do so by incorporating an objective that explicitly trades off
diversity and relevance. We propose a novel simple optimization metric based on
relevant information gain, a probabilistic measure of the total information
relevant to a query for a set of retrieved results. By optimizing this metric,
diversity organically emerges from our system. When used as a drop-in
replacement for the retrieval component of a RAG system, this method yields
state-of-the-art performance on question answering tasks from the Retrieval
Augmented Generation Benchmark (RGB), outperforming existing metrics that
directly optimize for relevance and diversity.
| [
{
"created": "Tue, 16 Jul 2024 18:09:21 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Pickett",
"Marc",
""
],
[
"Hartman",
"Jeremy",
""
],
[
"Bhowmick",
"Ayan Kumar",
""
],
[
"Alam",
"Raquib-ul",
""
],
[
"Vempaty",
"Aditya",
""
]
] | A common way to extend the memory of large language models (LLMs) is by retrieval augmented generation (RAG), which inserts text retrieved from a larger memory into an LLM's context window. However, the context window is typically limited to several thousand tokens, which limits the number of retrieved passages that can inform a model's response. For this reason, it's important to avoid occupying context window space with redundant information by ensuring a degree of diversity among retrieved passages. At the same time, the information should also be relevant to the current task. Most prior methods that encourage diversity among retrieved results, such as Maximal Marginal Relevance (MMR), do so by incorporating an objective that explicitly trades off diversity and relevance. We propose a novel simple optimization metric based on relevant information gain, a probabilistic measure of the total information relevant to a query for a set of retrieved results. By optimizing this metric, diversity organically emerges from our system. When used as a drop-in replacement for the retrieval component of a RAG system, this method yields state-of-the-art performance on question answering tasks from the Retrieval Augmented Generation Benchmark (RGB), outperforming existing metrics that directly optimize for relevance and diversity. |
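The MMR baseline this abstract contrasts against greedily trades off relevance and redundancy with an explicit lambda weight. A minimal sketch (names and the toy scores are illustrative; the paper's relevant-information-gain metric itself is not reproduced here):

```python
def mmr_select(relevance, similarity, k, lam=0.7):
    """Greedy Maximal Marginal Relevance: at each step pick the item
    maximizing lam * relevance[i] - (1 - lam) * (max similarity to the
    items already selected)."""
    candidates = set(range(len(relevance)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With two near-duplicate highly relevant passages and one dissimilar weaker one, MMR picks one of the duplicates and then the dissimilar passage, which is exactly the diversity behavior the abstract describes.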
2002.06075 | David Aparicio | David Apar\'icio, Ricardo Barata, Jo\~ao Bravo, Jo\~ao Tiago
Ascens\~ao, Pedro Bizarro | ARMS: Automated rules management system for fraud detection | 11 pages, 12 figures, submitted to KDD '20 Applied Data Science Track | null | null | null | cs.LG cs.AI cs.DB stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fraud detection is essential in financial services, with the potential of
greatly reducing criminal activities and saving considerable resources for
businesses and customers. We address online fraud detection, which consists of
classifying incoming transactions as either legitimate or fraudulent in
real-time. Modern fraud detection systems consist of a machine learning model
and rules defined by human experts. Often, the rules performance degrades over
time due to concept drift, especially of adversarial nature. Furthermore, they
can be costly to maintain, either because they are computationally expensive or
because they send transactions for manual review. We propose ARMS, an automated
rules management system that evaluates the contribution of individual rules and
optimizes the set of active rules using heuristic search and a user-defined
loss-function. It complies with critical domain-specific requirements, such as
handling different actions (e.g., accept, alert, and decline), priorities,
blacklists, and large datasets (i.e., hundreds of rules and millions of
transactions). We use ARMS to optimize the rule-based systems of two real-world
clients. Results show that it can maintain the original systems' performance
(e.g., recall, or false-positive rate) using only a fraction of the original
rules (~ 50% in one case, and ~ 20% in the other).
| [
{
"created": "Fri, 14 Feb 2020 15:29:59 GMT",
"version": "v1"
}
] | 2020-02-17 | [
[
"Aparício",
"David",
""
],
[
"Barata",
"Ricardo",
""
],
[
"Bravo",
"João",
""
],
[
"Ascensão",
"João Tiago",
""
],
[
"Bizarro",
"Pedro",
""
]
] | Fraud detection is essential in financial services, with the potential of greatly reducing criminal activities and saving considerable resources for businesses and customers. We address online fraud detection, which consists of classifying incoming transactions as either legitimate or fraudulent in real-time. Modern fraud detection systems consist of a machine learning model and rules defined by human experts. Often, the rules performance degrades over time due to concept drift, especially of adversarial nature. Furthermore, they can be costly to maintain, either because they are computationally expensive or because they send transactions for manual review. We propose ARMS, an automated rules management system that evaluates the contribution of individual rules and optimizes the set of active rules using heuristic search and a user-defined loss-function. It complies with critical domain-specific requirements, such as handling different actions (e.g., accept, alert, and decline), priorities, blacklists, and large datasets (i.e., hundreds of rules and millions of transactions). We use ARMS to optimize the rule-based systems of two real-world clients. Results show that it can maintain the original systems' performance (e.g., recall, or false-positive rate) using only a fraction of the original rules (~ 50% in one case, and ~ 20% in the other). |
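The heuristic search over rule sets that ARMS performs can be sketched as greedy backward elimination under a user-supplied loss function. This is only an illustrative sketch: the actual system also handles action priorities, blacklists, and large-scale evaluation, none of which appear below:

```python
def greedy_rule_pruning(rules, loss):
    """Greedily drop the rule whose removal most reduces a user-defined
    loss over the active set; stop when no single removal helps."""
    active = set(rules)
    best = loss(active)
    improved = True
    while improved and active:
        improved = False
        for r in sorted(active):  # sorted for deterministic order
            trial = active - {r}
            trial_loss = loss(trial)
            if trial_loss < best:
                best, active, improved = trial_loss, trial, True
                break  # restart the scan from the new active set
    return active, best
```

With a toy loss that rewards keeping rules "a" and "c" but penalizes "b", the search drops "b" and keeps the rest.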
1505.00346 | Azra Abtahi | Azra Abtahi, M. Modarres-Hashemi, Farokh Marvasti, and Foroogh S.
Tabataba | Power Allocation and Measurement Matrix Design for Block CS-Based
Distributed MIMO Radars | The paper is accepted in Elseveir Aerospace Science and Technology | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple-input multiple-output (MIMO) radars offer higher resolution, better
target detection, and more accurate target parameter estimation. Due to the
sparsity of the targets in space-velocity domain, we can exploit Compressive
Sensing (CS) to improve the performance of MIMO radars when the sampling rate
is much less than the Nyquist rate. In distributed MIMO radars, block CS
methods can be used instead of classical CS ones for more performance
improvement, because the received signal in this group of MIMO radars is a
block sparse signal in a basis. In this paper, two new methods are proposed to
improve the performance of the block CS-based distributed MIMO radars. The
first one is a new method for optimal energy allocation to the transmitters,
and the other one is a new method for optimal design of the measurement matrix.
These methods are based on the minimization of an upper bound of the sensing
matrix block-coherence. Simulation results show an increase in the accuracy of
multiple targets parameters estimation for both proposed methods.
| [
{
"created": "Sat, 2 May 2015 14:49:50 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Oct 2015 15:33:49 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Mar 2016 19:21:16 GMT",
"version": "v3"
}
] | 2016-03-09 | [
[
"Abtahi",
"Azra",
""
],
[
"Modarres-Hashemi",
"M.",
""
],
[
"Marvasti",
"Farokh",
""
],
[
"Tabataba",
"Foroogh S.",
""
]
] | Multiple-input multiple-output (MIMO) radars offer higher resolution, better target detection, and more accurate target parameter estimation. Due to the sparsity of the targets in space-velocity domain, we can exploit Compressive Sensing (CS) to improve the performance of MIMO radars when the sampling rate is much less than the Nyquist rate. In distributed MIMO radars, block CS methods can be used instead of classical CS ones for more performance improvement, because the received signal in this group of MIMO radars is a block sparse signal in a basis. In this paper, two new methods are proposed to improve the performance of the block CS-based distributed MIMO radars. The first one is a new method for optimal energy allocation to the transmitters, and the other one is a new method for optimal design of the measurement matrix. These methods are based on the minimization of an upper bound of the sensing matrix block-coherence. Simulation results show an increase in the accuracy of multiple targets parameters estimation for both proposed methods. |
1304.5940 | Emil Bj\"ornson | Nafiseh Shariati and Emil Bj\"ornson and Mats Bengtsson and M\'erouane
Debbah | Low-Complexity Channel Estimation in Large-Scale MIMO using Polynomial
Expansion | Published at IEEE International Symposium on Personal, Indoor and
Mobile Radio Communications (PIMRC 2013), 8-11 September 2013, 6 pages, 4
figures, 1 table | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers pilot-based channel estimation in large-scale
multiple-input multiple-output (MIMO) communication systems, also known as
"massive MIMO". Unlike previous works on this topic, which mainly considered
the impact of inter-cell disturbance due to pilot reuse (so-called pilot
contamination), we are concerned with the computational complexity. The
conventional minimum mean square error (MMSE) and minimum variance unbiased
(MVU) channel estimators rely on inverting covariance matrices, which has cubic
complexity in the multiplication of number of antennas at each side. Since this
is extremely expensive when there are hundreds of antennas, we propose to
approximate the inversion by an L-order matrix polynomial. A set of
low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel
(PEACH) estimators, are introduced. The coefficients of the polynomials are
optimized to yield small mean square error (MSE). We show numerically that
near-optimal performance is achieved with low polynomial orders. In practice,
the order L can be selected to balance between complexity and MSE.
Interestingly, pilot contamination is beneficial to the PEACH estimators in the
sense that smaller L can be used to achieve near-optimal MSEs.
| [
{
"created": "Mon, 22 Apr 2013 13:17:32 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2013 07:42:18 GMT",
"version": "v2"
}
] | 2013-07-18 | [
[
"Shariati",
"Nafiseh",
""
],
[
"Björnson",
"Emil",
""
],
[
"Bengtsson",
"Mats",
""
],
[
"Debbah",
"Mérouane",
""
]
] | This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as "massive MIMO". Unlike previous works on this topic, which mainly considered the impact of inter-cell disturbance due to pilot reuse (so-called pilot contamination), we are concerned with the computational complexity. The conventional minimum mean square error (MMSE) and minimum variance unbiased (MVU) channel estimators rely on inverting covariance matrices, which has cubic complexity in the multiplication of number of antennas at each side. Since this is extremely expensive when there are hundreds of antennas, we propose to approximate the inversion by an L-order matrix polynomial. A set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced. The coefficients of the polynomials are optimized to yield small mean square error (MSE). We show numerically that near-optimal performance is achieved with low polynomial orders. In practice, the order L can be selected to balance between complexity and MSE. Interestingly, pilot contamination is beneficial to the PEACH estimators in the sense that smaller L can be used to achieve near-optimal MSEs. |
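The polynomial-expansion idea can be illustrated with its simplest instance, a truncated Neumann series A^{-1} ≈ alpha * sum_{l=0}^{L} (I - alpha*A)^l, which converges when the eigenvalues of I - alpha*A lie inside the unit circle. The PEACH estimators instead optimize the polynomial coefficients for minimum MSE; the sketch below only shows why a low-order polynomial can approximate a matrix inverse:

```python
def matmul(A, B):
    """Plain nested-loop matrix product for small dense matrices."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def neumann_inverse(A, L, alpha):
    """Approximate A^{-1} by alpha * sum_{l=0}^{L} (I - alpha*A)^l."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    M = [[I[i][j] - alpha * A[i][j] for j in range(n)] for i in range(n)]
    term = I                      # current power M^l, starting at M^0
    acc = [row[:] for row in I]   # running sum of powers
    for _ in range(L):
        term = matmul(term, M)
        acc = [[acc[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return [[alpha * acc[i][j] for j in range(n)] for i in range(n)]
```

For A = diag(2, 4) and alpha = 0.25, thirty terms recover the exact inverse diag(0.5, 0.25) to high precision, mirroring the paper's observation that modest orders L already come close to the exact (cubic-cost) inverse.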
2007.08147 | Michel Rigo | E. Charlier, A. Massuir, M. Rigo, E. Rowland | Ultimate periodicity problem for linear numeration systems | 39 pages, 2 figures. This is an improved version of the original
submission. It clarifies some arguments taking into account several comments
from reviews | International Journal of Algebra and Computation 32 (2022) 561-596 | 10.1142/S0218196722500254 | null | cs.DM math.CO math.NT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the following decision problem. Given a numeration system $U$ and
a $U$-recognizable set $X\subseteq\mathbb{N}$, i.e. the set of its greedy
$U$-representations is recognized by a finite automaton, decide whether or not
$X$ is ultimately periodic. We prove that this problem is decidable for a large
class of numeration systems built on linearly recurrent sequences. Based on
arithmetical considerations about the recurrence equation and on $p$-adic
methods, the DFA given as input provides a bound on the admissible periods to
test.
| [
{
"created": "Thu, 16 Jul 2020 07:12:39 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Dec 2021 12:08:35 GMT",
"version": "v2"
}
] | 2023-09-04 | [
[
"Charlier",
"E.",
""
],
[
"Massuir",
"A.",
""
],
[
"Rigo",
"M.",
""
],
[
"Rowland",
"E.",
""
]
] | We address the following decision problem. Given a numeration system $U$ and a $U$-recognizable set $X\subseteq\mathbb{N}$, i.e. the set of its greedy $U$-representations is recognized by a finite automaton, decide whether or not $X$ is ultimately periodic. We prove that this problem is decidable for a large class of numeration systems built on linearly recurrent sequences. Based on arithmetical considerations about the recurrence equation and on $p$-adic methods, the DFA given as input provides a bound on the admissible periods to test. |
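The greedy U-representations at the heart of the problem statement can be computed as below; with the Fibonacci numbers as U this is the Zeckendorf representation. The decidability argument itself (the p-adic bound on admissible periods) is of course not reproduced by this sketch:

```python
def greedy_representation(n, U):
    """Greedy representation of n in numeration system U (ascending list):
    repeatedly take as many copies of the largest U_i <= remainder as fit.
    Digits are returned most-significant first, plus any leftover remainder."""
    digits, rem = [], n
    for u in sorted((u for u in U if u <= n), reverse=True):
        d, rem = divmod(rem, u)
        digits.append(d)
    return digits, rem
```

For n = 11 with the Fibonacci numbers [1, 2, 3, 5, 8, 13], the greedy algorithm yields 11 = 8 + 3, i.e. digit string 10100.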
2103.01169 | Luca Maria Aiello | Sanja Scepanovic, Luca Maria Aiello, Ke Zhou, Sagar Joglekar, Daniele
Quercia | The Healthy States of America: Creating a Health Taxonomy with Social
Media | In proceedings of the International Conference on Web and Social
Media (ICWSM'21) | null | null | null | cs.CY cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the uptake of social media, researchers have mined online discussions
to track the outbreak and evolution of specific diseases or chronic conditions
such as influenza or depression. To broaden the set of diseases under study, we
developed a Deep Learning tool for Natural Language Processing that extracts
mentions of virtually any medical condition or disease from unstructured social
media text. With that tool at hand, we processed Reddit and Twitter posts,
analyzed the clusters of the two resulting co-occurrence networks of
conditions, and discovered that they correspond to well-defined categories of
medical conditions. This resulted in the creation of the first comprehensive
taxonomy of medical conditions automatically derived from online discussions.
We validated the structure of our taxonomy against the official International
Statistical Classification of Diseases and Related Health Problems (ICD-11),
finding matches of our clusters with 20 official categories, out of 22. Based
on the mentions of our taxonomy's sub-categories on Reddit posts geo-referenced
in the U.S., we were then able to compute disease-specific health scores. As
opposed to counts of disease mentions or counts with no knowledge of our
taxonomy's structure, we found that our disease-specific health scores are
causally linked with the officially reported prevalence of 18 conditions.
| [
{
"created": "Mon, 1 Mar 2021 18:07:47 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Scepanovic",
"Sanja",
""
],
[
"Aiello",
"Luca Maria",
""
],
[
"Zhou",
"Ke",
""
],
[
"Joglekar",
"Sagar",
""
],
[
"Quercia",
"Daniele",
""
]
] | Since the uptake of social media, researchers have mined online discussions to track the outbreak and evolution of specific diseases or chronic conditions such as influenza or depression. To broaden the set of diseases under study, we developed a Deep Learning tool for Natural Language Processing that extracts mentions of virtually any medical condition or disease from unstructured social media text. With that tool at hand, we processed Reddit and Twitter posts, analyzed the clusters of the two resulting co-occurrence networks of conditions, and discovered that they correspond to well-defined categories of medical conditions. This resulted in the creation of the first comprehensive taxonomy of medical conditions automatically derived from online discussions. We validated the structure of our taxonomy against the official International Statistical Classification of Diseases and Related Health Problems (ICD-11), finding matches of our clusters with 20 official categories, out of 22. Based on the mentions of our taxonomy's sub-categories on Reddit posts geo-referenced in the U.S., we were then able to compute disease-specific health scores. As opposed to counts of disease mentions or counts with no knowledge of our taxonomy's structure, we found that our disease-specific health scores are causally linked with the officially reported prevalence of 18 conditions. |
2406.04829 | Zijia An | Zijia An, Boyu Diao, Libo Huang, Ruiqi Liu, Zhulin An, Yongjun Xu | EGOR: Efficient Generated Objects Replay for incremental object
detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incremental object detection aims to simultaneously maintain old-class
accuracy and detect emerging new-class objects in incremental data. Most
existing distillation-based methods underperform when unlabeled old-class
objects are absent in the incremental dataset. While the absence can be
mitigated by generating old-class samples, it also incurs high computational
costs. In this paper, we argue that the extra computational cost stems from the
inconsistency between the detector and the generative model, along with
redundant generation. To overcome this problem, we propose Efficient Generated
Object Replay (EGOR). Specifically, we generate old-class samples by inverting
the original detectors, thus eliminating the necessity of training and storing
additional generative models. We also propose augmented replay to reuse the
objects in generated samples, thereby reducing the redundant generation. In
addition, we propose high-response knowledge distillation focusing on the
knowledge related to the old class, which transfers the knowledge in generated
objects to the incremental detector. With the addition of the generated objects
and losses, we observe a bias towards old classes in the detector. We balance
the losses for old and new classes to alleviate the bias, thereby increasing
the overall detection accuracy. Extensive experiments conducted on MS COCO 2017
demonstrate that our method can efficiently improve detection performance in
the absence of old-class objects.
| [
{
"created": "Fri, 7 Jun 2024 10:54:40 GMT",
"version": "v1"
}
] | 2024-06-10 | [
[
"An",
"Zijia",
""
],
[
"Diao",
"Boyu",
""
],
[
"Huang",
"Libo",
""
],
[
"Liu",
"Ruiqi",
""
],
[
"An",
"Zhulin",
""
],
[
"Xu",
"Yongjun",
""
]
] | Incremental object detection aims to simultaneously maintain old-class accuracy and detect emerging new-class objects in incremental data. Most existing distillation-based methods underperform when unlabeled old-class objects are absent in the incremental dataset. While the absence can be mitigated by generating old-class samples, it also incurs high computational costs. In this paper, we argue that the extra computational cost stems from the inconsistency between the detector and the generative model, along with redundant generation. To overcome this problem, we propose Efficient Generated Object Replay (EGOR). Specifically, we generate old-class samples by inverting the original detectors, thus eliminating the necessity of training and storing additional generative models. We also propose augmented replay to reuse the objects in generated samples, thereby reducing the redundant generation. In addition, we propose high-response knowledge distillation focusing on the knowledge related to the old class, which transfers the knowledge in generated objects to the incremental detector. With the addition of the generated objects and losses, we observe a bias towards old classes in the detector. We balance the losses for old and new classes to alleviate the bias, thereby increasing the overall detection accuracy. Extensive experiments conducted on MS COCO 2017 demonstrate that our method can efficiently improve detection performance in the absence of old-class objects. |
2209.02298 | Dipankar Chaki | Manan Choksi, Dipankar Chaki, Abdallah Lakhdari, Athman Bouguettaya | You Are What You Use: Usage-based Profiling in IoT Environments | 4 pages, 2 figures | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Habit extraction is essential to automate services and provide appliance
usage insights in the smart home environment. However, habit extraction comes
with plenty of challenges in viewing typical start and end times for particular
activities. This paper introduces a novel way of identifying habits using an
ensemble of unsupervised clustering techniques. We use different clustering
algorithms to extract habits based on how static or dynamic they are.
Silhouette coefficients and a novel noise metric are utilized to extract habits
appropriately. Furthermore, we associate the extracted habits with time
intervals and a confidence score to denote how confident we are that a habit is
likely to occur at that time.
| [
{
"created": "Tue, 6 Sep 2022 08:51:08 GMT",
"version": "v1"
}
] | 2022-09-07 | [
[
"Choksi",
"Manan",
""
],
[
"Chaki",
"Dipankar",
""
],
[
"Lakhdari",
"Abdallah",
""
],
[
"Bouguettaya",
"Athman",
""
]
] | Habit extraction is essential to automate services and provide appliance usage insights in the smart home environment. However, habit extraction comes with plenty of challenges in viewing typical start and end times for particular activities. This paper introduces a novel way of identifying habits using an ensemble of unsupervised clustering techniques. We use different clustering algorithms to extract habits based on how static or dynamic they are. Silhouette coefficients and a novel noise metric are utilized to extract habits appropriately. Furthermore, we associate the extracted habits with time intervals and a confidence score to denote how confident we are that a habit is likely to occur at that time. |
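The silhouette coefficient used in this abstract to choose among clusterings can be computed as below for 1-D points such as times of day (assuming at least two clusters; the paper's noise metric and its ensemble of clustering algorithms are not reproduced):

```python
def silhouette(points, labels):
    """Mean silhouette s(i) = (b - a) / max(a, b), where a is the mean
    intra-cluster distance of point i and b is the lowest mean distance
    to any other cluster. 1-D points, absolute-value distance."""
    scores = []
    for i, (p, l) in enumerate(zip(points, labels)):
        same = [abs(p - points[j]) for j in range(len(points))
                if j != i and labels[j] == l]
        if not same:              # singleton cluster: convention s = 0
            scores.append(0.0)
            continue
        a = sum(same) / len(same)
        other = {}
        for j in range(len(points)):
            if labels[j] != l:
                other.setdefault(labels[j], []).append(abs(p - points[j]))
        b = min(sum(d) / len(d) for d in other.values())
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

Two tight, well-separated groups score near 1, while a labeling that splits each group across clusters scores negative, which is how the coefficient flags a bad choice of clustering.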
2111.01909 | Wang Guangjun | Guangjun Wang | Geometrical holographic display | null | null | null | null | cs.GR physics.optics | http://creativecommons.org/licenses/by/4.0/ | Is it possible to realize a holographic display with commercial available
components and devices? Is it possible to manipulate light to reconstruct light
field without using coherent light and complicated optical components? Is it
possible to minimize the amount of data involved in building 3D scenes? Is it
possible to design a holographic video display that doesn't require huge
computational cost? Is it possible to realize a holographic video display with
portable form like a flat-panel display commercialized nowadays? This research
gives yes answers to all the above questions. A novel geometrical holographic
display was proposed, which uses geometrical optical principle to reproduce
realistic 3D images with all human visual cues and without visual side effects.
In addition, a least necessary light field representation was introduced which
can provide guidance for how to minimize the amount of data involved when
designing a FP3D or building 3D scenes. Finally, a proof-of-concept prototype
is set up which can provide true 3D scenes with depth range larger than 5m.
| [
{
"created": "Mon, 1 Nov 2021 11:56:23 GMT",
"version": "v1"
}
] | 2021-11-04 | [
[
"Wang",
"Guangjun",
""
]
] | Is it possible to realize a holographic display with commercially available components and devices? Is it possible to manipulate light to reconstruct light field without using coherent light and complicated optical components? Is it possible to minimize the amount of data involved in building 3D scenes? Is it possible to design a holographic video display that doesn't require huge computational cost? Is it possible to realize a holographic video display with portable form like a flat-panel display commercialized nowadays? This research gives yes answers to all the above questions. A novel geometrical holographic display was proposed, which uses geometrical optical principle to reproduce realistic 3D images with all human visual cues and without visual side effects. In addition, a least necessary light field representation was introduced which can provide guidance for how to minimize the amount of data involved when designing a FP3D or building 3D scenes. Finally, a proof-of-concept prototype is set up which can provide true 3D scenes with depth range larger than 5m. |
2405.11210 | Emilio Mart\'inez-Pa\~neda | C. Cui, P. Bortot, M. Ortolani, E. Mart\'inez-Pa\~neda | Computational predictions of hydrogen-assisted fatigue crack growth | null | null | null | null | cs.CE cond-mat.mtrl-sci physics.app-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new model is presented to predict hydrogen-assisted fatigue. The model
combines a phase field description of fracture and fatigue, stress-assisted
hydrogen diffusion, and a toughness degradation formulation with cyclic and
hydrogen contributions. Hydrogen-assisted fatigue crack growth predictions
exhibit an excellent agreement with experiments over all the scenarios
considered, spanning multiple load ratios, H2 pressures and loading
frequencies. These are obtained without any calibration with hydrogen-assisted
fatigue data, taking as input only mechanical and hydrogen transport material
properties, the material's fatigue characteristics (from a single test in air),
and the sensitivity of fracture toughness to hydrogen content. Furthermore, the
model is used to determine: (i) what are suitable test loading frequencies to
obtain conservative data, and (ii) the underestimation made when not
pre-charging samples. The model can handle both laboratory specimens and
large-scale engineering components, enabling the Virtual Testing paradigm in
infrastructure exposed to hydrogen environments and cyclic loading.
| [
{
"created": "Sat, 18 May 2024 07:34:48 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Cui",
"C.",
""
],
[
"Bortot",
"P.",
""
],
[
"Ortolani",
"M.",
""
],
[
"Martínez-Pañeda",
"E.",
""
]
] | A new model is presented to predict hydrogen-assisted fatigue. The model combines a phase field description of fracture and fatigue, stress-assisted hydrogen diffusion, and a toughness degradation formulation with cyclic and hydrogen contributions. Hydrogen-assisted fatigue crack growth predictions exhibit an excellent agreement with experiments over all the scenarios considered, spanning multiple load ratios, H2 pressures and loading frequencies. These are obtained without any calibration with hydrogen-assisted fatigue data, taking as input only mechanical and hydrogen transport material properties, the material's fatigue characteristics (from a single test in air), and the sensitivity of fracture toughness to hydrogen content. Furthermore, the model is used to determine: (i) what are suitable test loading frequencies to obtain conservative data, and (ii) the underestimation made when not pre-charging samples. The model can handle both laboratory specimens and large-scale engineering components, enabling the Virtual Testing paradigm in infrastructure exposed to hydrogen environments and cyclic loading. |
1006.5686 | Steven Weber | Nan Xie, Steven Weber | Geometric Approximations of Some Aloha-like Stability Regions | Presented at IEEE ISIT 2010 (Austin, TX) | null | 10.1109/ISIT.2010.5513425 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most bounds on the stability region of Aloha give necessary and sufficient
conditions for the stability of an arrival rate vector under a specific
contention probability (control) vector. But such results do not yield
easy-to-check bounds on the overall Aloha stability region because they
potentially require checking membership in an uncountably infinite number of
sets parameterized by each possible control vector. In this paper we consider
an important specific inner bound on Aloha that has this property of difficulty
to check membership in the set. We provide ellipsoids (for which membership is
easy-to-check) that we conjecture are inner and outer bounds on this set. We
also study the set of controls that stabilize a fixed arrival rate vector; this
set is shown to be a convex set.
| [
{
"created": "Tue, 29 Jun 2010 17:32:13 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Xie",
"Nan",
""
],
[
"Weber",
"Steven",
""
]
] | Most bounds on the stability region of Aloha give necessary and sufficient conditions for the stability of an arrival rate vector under a specific contention probability (control) vector. But such results do not yield easy-to-check bounds on the overall Aloha stability region because they potentially require checking membership in an uncountably infinite number of sets parameterized by each possible control vector. In this paper we consider an important specific inner bound on Aloha that has this property of difficulty to check membership in the set. We provide ellipsoids (for which membership is easy-to-check) that we conjecture are inner and outer bounds on this set. We also study the set of controls that stabilize a fixed arrival rate vector; this set is shown to be a convex set. |
1507.04438 | Akbar Rafiey | Binay Bhattacharya, Ante \'Custi\'c, Akbar Rafiey, Arash Rafiey,
Vladyslav Sokol | Approximation Algorithms for Generalized MST and TSP in Grid Clusters | null | null | null | null | cs.DM cs.CG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a special case of the generalized minimum spanning tree problem
(GMST) and the generalized travelling salesman problem (GTSP) where we are
given a set of points inside the integer grid (in Euclidean plane) where each
grid cell is $1 \times 1$. In the MST version of the problem, the goal is to
find a minimum tree that contains exactly one point from each non-empty grid
cell (cluster). Similarly, in the TSP version of the problem, the goal is to
find a minimum weight cycle containing one point from each non-empty grid cell.
We give a $(1+4\sqrt{2}+\epsilon)$ and $(1.5+8\sqrt{2}+\epsilon)$-approximation
algorithms for these two problems in the described setting, respectively.
Our motivation is based on the problem posed in [7] for a constant
approximation algorithm. The authors designed a PTAS for the more special case
of the GMST where non-empty cells are connected and dense enough. However,
their algorithm heavily relies on this connectivity restriction and is
impractical. Our results develop the topic further.
| [
{
"created": "Thu, 16 Jul 2015 03:00:41 GMT",
"version": "v1"
}
] | 2015-07-17 | [
[
"Bhattacharya",
"Binay",
""
],
[
"Ćustić",
"Ante",
""
],
[
"Rafiey",
"Akbar",
""
],
[
"Rafiey",
"Arash",
""
],
[
"Sokol",
"Vladyslav",
""
]
] | We consider a special case of the generalized minimum spanning tree problem (GMST) and the generalized travelling salesman problem (GTSP) where we are given a set of points inside the integer grid (in Euclidean plane) where each grid cell is $1 \times 1$. In the MST version of the problem, the goal is to find a minimum tree that contains exactly one point from each non-empty grid cell (cluster). Similarly, in the TSP version of the problem, the goal is to find a minimum weight cycle containing one point from each non-empty grid cell. We give a $(1+4\sqrt{2}+\epsilon)$ and $(1.5+8\sqrt{2}+\epsilon)$-approximation algorithms for these two problems in the described setting, respectively. Our motivation is based on the problem posed in [7] for a constant approximation algorithm. The authors designed a PTAS for the more special case of the GMST where non-empty cells are connected and dense enough. However, their algorithm heavily relies on this connectivity restriction and is impractical. Our results develop the topic further. |
2404.03225 | Xu Wang | Xu Wang, Tian Ye, Rajgopal Kannan, Viktor Prasanna | FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR
Image Classification | 2024 IEEE Radar Conference | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Learning (DL) Models for Synthetic Aperture Radar (SAR) Automatic Target
Recognition (ATR), while delivering improved performance, have been shown to be
quite vulnerable to adversarial attacks. Existing works improve robustness by
training models on adversarial samples. However, by focusing mostly on attacks
that manipulate images randomly, they neglect the real-world feasibility of
such attacks. In this paper, we propose FACTUAL, a novel Contrastive Learning
framework for Adversarial Training and robust SAR classification. FACTUAL
consists of two components: (1) Differing from existing works, a novel
perturbation scheme that incorporates realistic physical adversarial attacks
(such as OTSA) to build a supervised adversarial pre-training network. This
network utilizes class labels for clustering clean and perturbed images
together into a more informative feature space. (2) A linear classifier
cascaded after the encoder to use the computed representations to predict the
target labels. By pre-training and fine-tuning our model on both clean and
adversarial samples, we show that our model achieves high prediction accuracy
in both cases. Our model achieves 99.7% accuracy on clean samples, and 89.6% on
perturbed samples, both outperforming previous state-of-the-art methods.
| [
{
"created": "Thu, 4 Apr 2024 06:20:22 GMT",
"version": "v1"
}
] | 2024-04-05 | [
[
"Wang",
"Xu",
""
],
[
"Ye",
"Tian",
""
],
[
"Kannan",
"Rajgopal",
""
],
[
"Prasanna",
"Viktor",
""
]
] | Deep Learning (DL) Models for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR), while delivering improved performance, have been shown to be quite vulnerable to adversarial attacks. Existing works improve robustness by training models on adversarial samples. However, by focusing mostly on attacks that manipulate images randomly, they neglect the real-world feasibility of such attacks. In this paper, we propose FACTUAL, a novel Contrastive Learning framework for Adversarial Training and robust SAR classification. FACTUAL consists of two components: (1) Differing from existing works, a novel perturbation scheme that incorporates realistic physical adversarial attacks (such as OTSA) to build a supervised adversarial pre-training network. This network utilizes class labels for clustering clean and perturbed images together into a more informative feature space. (2) A linear classifier cascaded after the encoder to use the computed representations to predict the target labels. By pre-training and fine-tuning our model on both clean and adversarial samples, we show that our model achieves high prediction accuracy in both cases. Our model achieves 99.7% accuracy on clean samples, and 89.6% on perturbed samples, both outperforming previous state-of-the-art methods. |
2003.09148 | Nico Messikommer | Nico Messikommer, Daniel Gehrig, Antonio Loquercio, Davide Scaramuzza | Event-based Asynchronous Sparse Convolutional Networks | null | European Conference on Computer Vision (ECCV), 2020 | null | null | cs.CV cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras are bio-inspired sensors that respond to per-pixel brightness
changes in the form of asynchronous and sparse "events". Recently, pattern
recognition algorithms, such as learning-based methods, have made significant
progress with event cameras by converting events into synchronous dense,
image-like representations and applying traditional machine learning methods
developed for standard cameras. However, these approaches discard the spatial
and temporal sparsity inherent in event data at the cost of higher
computational complexity and latency. In this work, we present a general
framework for converting models trained on synchronous image-like event
representations into asynchronous models with identical output, thus directly
leveraging the intrinsic asynchronous and sparse nature of the event data. We
show both theoretically and experimentally that this drastically reduces the
computational complexity and latency of high-capacity, synchronous neural
networks without sacrificing accuracy. In addition, our framework has several
desirable characteristics: (i) it exploits spatio-temporal sparsity of events
explicitly, (ii) it is agnostic to the event representation, network
architecture, and task, and (iii) it does not require any train-time change,
since it is compatible with the standard neural networks' training process. We
thoroughly validate the proposed framework on two computer vision tasks: object
detection and object recognition. In these tasks, we reduce the computational
complexity up to 20 times with respect to high-latency neural networks. At the
same time, we outperform state-of-the-art asynchronous approaches up to 24% in
prediction accuracy.
| [
{
"created": "Fri, 20 Mar 2020 08:39:49 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jul 2020 15:52:12 GMT",
"version": "v2"
}
] | 2020-07-20 | [
[
"Messikommer",
"Nico",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Loquercio",
"Antonio",
""
],
[
"Scaramuzza",
"Davide",
""
]
] | Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events". Recently, pattern recognition algorithms, such as learning-based methods, have made significant progress with event cameras by converting events into synchronous dense, image-like representations and applying traditional machine learning methods developed for standard cameras. However, these approaches discard the spatial and temporal sparsity inherent in event data at the cost of higher computational complexity and latency. In this work, we present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output, thus directly leveraging the intrinsic asynchronous and sparse nature of the event data. We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks without sacrificing accuracy. In addition, our framework has several desirable characteristics: (i) it exploits spatio-temporal sparsity of events explicitly, (ii) it is agnostic to the event representation, network architecture, and task, and (iii) it does not require any train-time change, since it is compatible with the standard neural networks' training process. We thoroughly validate the proposed framework on two computer vision tasks: object detection and object recognition. In these tasks, we reduce the computational complexity up to 20 times with respect to high-latency neural networks. At the same time, we outperform state-of-the-art asynchronous approaches up to 24% in prediction accuracy. |
2203.12412 | Ahmet Caner Y\"uz\"ug\"uler | Ahmet Caner Y\"uz\"ug\"uler, Nikolaos Dimitriadis, Pascal Frossard | U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture
Search | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Optimizing resource utilization in target platforms is key to achieving high
performance during DNN inference. While optimizations have been proposed for
inference latency, memory footprint, and energy consumption, prior
hardware-aware neural architecture search (NAS) methods have omitted resource
utilization, preventing DNNs from taking full advantage of the target inference
platforms. Modeling resource utilization efficiently and accurately is
challenging, especially for widely-used array-based inference accelerators such
as Google TPU. In this work, we propose a novel hardware-aware NAS framework
that optimizes not only for task accuracy and inference latency, but also
for resource utilization. We also propose and validate a new computational
model for resource utilization in inference accelerators. By using the proposed
NAS framework and the proposed resource utilization model, we achieve 2.8 - 4x
speedup for DNN inference compared to prior hardware-aware NAS methods while
attaining similar or improved accuracy in image classification on CIFAR-10 and
Imagenet-100 datasets.
| [
{
"created": "Wed, 23 Mar 2022 13:44:15 GMT",
"version": "v1"
}
] | 2022-03-24 | [
[
"Yüzügüler",
"Ahmet Caner",
""
],
[
"Dimitriadis",
"Nikolaos",
""
],
[
"Frossard",
"Pascal",
""
]
] | Optimizing resource utilization in target platforms is key to achieving high performance during DNN inference. While optimizations have been proposed for inference latency, memory footprint, and energy consumption, prior hardware-aware neural architecture search (NAS) methods have omitted resource utilization, preventing DNNs from taking full advantage of the target inference platforms. Modeling resource utilization efficiently and accurately is challenging, especially for widely-used array-based inference accelerators such as Google TPU. In this work, we propose a novel hardware-aware NAS framework that optimizes not only for task accuracy and inference latency, but also for resource utilization. We also propose and validate a new computational model for resource utilization in inference accelerators. By using the proposed NAS framework and the proposed resource utilization model, we achieve 2.8 - 4x speedup for DNN inference compared to prior hardware-aware NAS methods while attaining similar or improved accuracy in image classification on CIFAR-10 and Imagenet-100 datasets. |
1608.04309 | Yasin Yazicioglu | A. Yasin Yazicioglu, Waseem Abbas, and Magnus Egerstedt | Graph Distances and Controllability of Networks | Accepted to the IEEE Transactions on Automatic Control | null | 10.1109/TAC.2016.2546180 | null | cs.SY cs.SI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this technical note, we study the controllability of diffusively coupled
networks from a graph theoretic perspective. We consider leader-follower
networks, where the external control inputs are injected to only some of the
agents, namely the leaders. Our main result relates the controllability of such
systems to the graph distances between the agents. More specifically, we
present a graph topological lower bound on the rank of the controllability
matrix. This lower bound is tight, and it is applicable to systems with
arbitrary network topologies, coupling weights, and number of leaders. An
algorithm for computing the lower bound is also provided. Furthermore, as a
prominent application, we present how the proposed bound can be utilized to
select a minimal set of leaders for achieving controllability, even when the
coupling weights are unknown.
| [
{
"created": "Mon, 15 Aug 2016 15:51:42 GMT",
"version": "v1"
}
] | 2016-08-17 | [
[
"Yazicioglu",
"A. Yasin",
""
],
[
"Abbas",
"Waseem",
""
],
[
"Egerstedt",
"Magnus",
""
]
] | In this technical note, we study the controllability of diffusively coupled networks from a graph theoretic perspective. We consider leader-follower networks, where the external control inputs are injected to only some of the agents, namely the leaders. Our main result relates the controllability of such systems to the graph distances between the agents. More specifically, we present a graph topological lower bound on the rank of the controllability matrix. This lower bound is tight, and it is applicable to systems with arbitrary network topologies, coupling weights, and number of leaders. An algorithm for computing the lower bound is also provided. Furthermore, as a prominent application, we present how the proposed bound can be utilized to select a minimal set of leaders for achieving controllability, even when the coupling weights are unknown. |
1911.06489 | Yanjie Gou | Yanjie Gou, Yinjie Lei, Lingqiao Liu, Pingping Zhang, Xi Peng | Improving Distant Supervised Relation Extraction by Dynamic Neural
Network | 29 pages, 8 figures | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distant Supervised Relation Extraction (DSRE) is usually formulated as a
problem of classifying a bag of sentences that contain two query entities into
the predefined relation classes. Most existing methods consider those relation
classes as distinct semantic categories while ignoring their potential
connection to query entities. In this paper, we propose to leverage this
connection to improve the relation extraction accuracy. Our key ideas are
twofold: (1) For sentences belonging to the same relation class, the expression
style, i.e. word choice, can vary according to the query entities. To account
for this style shift, the model should adjust its parameters in accordance with
entity types. (2) Some relation classes are semantically similar, and the
entity types that appear in one relation may also appear in others. Therefore,
the model can be trained across different relation classes and further enhance those
classes with few samples, i.e., long-tail classes. To unify these two
arguments, we developed a novel Dynamic Neural Network for Relation Extraction
(DNNRE). The network adopts a novel dynamic parameter generator that
dynamically generates the network parameters according to the query entity
types and relation classes. By using this mechanism, the network can
simultaneously handle the style shift problem and enhance the prediction
accuracy for long-tail classes. Through our experimental study, we demonstrate
the effectiveness of the proposed method and show that it can achieve superior
performance over the state-of-the-art methods.
| [
{
"created": "Fri, 15 Nov 2019 06:31:13 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Dec 2019 04:29:41 GMT",
"version": "v2"
}
] | 2019-12-16 | [
[
"Gou",
"Yanjie",
""
],
[
"Lei",
"Yinjie",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Zhang",
"Pingping",
""
],
[
"Peng",
"Xi",
""
]
] | Distant Supervised Relation Extraction (DSRE) is usually formulated as a problem of classifying a bag of sentences that contain two query entities into the predefined relation classes. Most existing methods consider those relation classes as distinct semantic categories while ignoring their potential connection to query entities. In this paper, we propose to leverage this connection to improve the relation extraction accuracy. Our key ideas are twofold: (1) For sentences belonging to the same relation class, the expression style, i.e. word choice, can vary according to the query entities. To account for this style shift, the model should adjust its parameters in accordance with entity types. (2) Some relation classes are semantically similar, and the entity types that appear in one relation may also appear in others. Therefore, the model can be trained across different relation classes and further enhance those classes with few samples, i.e., long-tail classes. To unify these two arguments, we developed a novel Dynamic Neural Network for Relation Extraction (DNNRE). The network adopts a novel dynamic parameter generator that dynamically generates the network parameters according to the query entity types and relation classes. By using this mechanism, the network can simultaneously handle the style shift problem and enhance the prediction accuracy for long-tail classes. Through our experimental study, we demonstrate the effectiveness of the proposed method and show that it can achieve superior performance over the state-of-the-art methods. |
2303.01277 | Meng Zhang | Meng Zhang, Qinghao Hu, Peng Sun, Yonggang Wen, Tianwei Zhang | Boosting Distributed Full-graph GNN Training with Asynchronous One-bit
Communication | null | null | null | null | cs.DC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training Graph Neural Networks (GNNs) on large graphs is challenging due to
the conflict between the high memory demand and limited GPU memory. Recently,
distributed full-graph GNN training has been widely adopted to tackle this
problem. However, the substantial inter-GPU communication overhead can cause
severe throughput degradation. Existing communication compression techniques
mainly focus on traditional DNN training, whose bottleneck lies in
synchronizing gradients and parameters. We find they do not work well in
distributed GNN training as the barrier is the layer-wise communication of
features during the forward pass & feature gradients during the backward pass.
To this end, we propose an efficient distributed GNN training framework Sylvie,
which employs a one-bit quantization technique in GNNs and further pipelines the
curtailed communication with computation to enormously shrink the overhead
while maintaining the model quality. In detail, Sylvie provides a lightweight
Low-bit Module to quantize the sent data and dequantize the received data back
to full precision values in each layer. Additionally, we propose a Bounded
Staleness Adaptor to control the introduced staleness to achieve further
performance enhancement. We conduct theoretical convergence analysis and
extensive experiments on various models & datasets to demonstrate Sylvie can
considerably boost the training throughput by up to 28.1x.
| [
{
"created": "Thu, 2 Mar 2023 14:02:39 GMT",
"version": "v1"
}
] | 2023-03-03 | [
[
"Zhang",
"Meng",
""
],
[
"Hu",
"Qinghao",
""
],
[
"Sun",
"Peng",
""
],
[
"Wen",
"Yonggang",
""
],
[
"Zhang",
"Tianwei",
""
]
] | Training Graph Neural Networks (GNNs) on large graphs is challenging due to the conflict between the high memory demand and limited GPU memory. Recently, distributed full-graph GNN training has been widely adopted to tackle this problem. However, the substantial inter-GPU communication overhead can cause severe throughput degradation. Existing communication compression techniques mainly focus on traditional DNN training, whose bottleneck lies in synchronizing gradients and parameters. We find they do not work well in distributed GNN training as the barrier is the layer-wise communication of features during the forward pass & feature gradients during the backward pass. To this end, we propose an efficient distributed GNN training framework Sylvie, which employs a one-bit quantization technique in GNNs and further pipelines the curtailed communication with computation to enormously shrink the overhead while maintaining the model quality. In detail, Sylvie provides a lightweight Low-bit Module to quantize the sent data and dequantize the received data back to full precision values in each layer. Additionally, we propose a Bounded Staleness Adaptor to control the introduced staleness to achieve further performance enhancement. We conduct theoretical convergence analysis and extensive experiments on various models & datasets to demonstrate Sylvie can considerably boost the training throughput by up to 28.1x. |
2001.00182 | Francesco Malandrino | Christian Vitale and Carla Fabiana Chiasserini and Francesco
Malandrino and Senay Semu Tadesse | Characterizing Delay and Control Traffic of the Cellular MME with IoT
Support | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main use cases for advanced cellular networks is represented by
massive Internet-of-things (MIoT), i.e., an enormous number of IoT devices that
transmit data toward the cellular network infrastructure. To make cellular MIoT
a reality, data transfer and control procedures specifically designed for the
support of IoT are needed. For this reason, 3GPP has introduced the Control
Plane Cellular IoT optimization, which foresees a simplified bearer
instantiation, with the Mobility Management Entity (MME) handling both control
and data traffic. The performance of the MME has therefore become critical, and
properly scaling its computational capability can determine the ability of the
whole network to tackle MIoT effectively. In particular, considering
virtualized networks and the need for an efficient allocation of computing
resources, it is paramount to characterize the MME performance as the MIoT
traffic load changes. We address this need by presenting compact, closed-form
expressions linking the number of IoT sources with the rate at which bearers
are requested, and such a rate with the delay incurred by the IoT data. We show
that our analysis, supported by testbed experiments and verified through
large-scale simulations, represents a valuable tool to make effective scaling
decisions in virtualized cellular core networks.
| [
{
"created": "Wed, 1 Jan 2020 10:14:54 GMT",
"version": "v1"
}
] | 2020-01-03 | [
[
"Vitale",
"Christian",
""
],
[
"Chiasserini",
"Carla Fabiana",
""
],
[
"Malandrino",
"Francesco",
""
],
[
"Tadesse",
"Senay Semu",
""
]
] | One of the main use cases for advanced cellular networks is represented by massive Internet-of-things (MIoT), i.e., an enormous number of IoT devices that transmit data toward the cellular network infrastructure. To make cellular MIoT a reality, data transfer and control procedures specifically designed for the support of IoT are needed. For this reason, 3GPP has introduced the Control Plane Cellular IoT optimization, which foresees a simplified bearer instantiation, with the Mobility Management Entity (MME) handling both control and data traffic. The performance of the MME has therefore become critical, and properly scaling its computational capability can determine the ability of the whole network to tackle MIoT effectively. In particular, considering virtualized networks and the need for an efficient allocation of computing resources, it is paramount to characterize the MME performance as the MIoT traffic load changes. We address this need by presenting compact, closed-form expressions linking the number of IoT sources with the rate at which bearers are requested, and such a rate with the delay incurred by the IoT data. We show that our analysis, supported by testbed experiments and verified through large-scale simulations, represents a valuable tool to make effective scaling decisions in virtualized cellular core networks. |
1412.2824 | Yu Zhang | Vignesh Narayanan, Yu Zhang, Nathaniel Mendoza and Subbarao
Kambhampati | Plan or not: Remote Human-robot Teaming with Incomplete Task Information | null | null | null | null | cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human-robot interaction can be divided into two categories based on the
physical distance between the human and robot: remote and proximal. In proximal
interaction, the human and robot often engage in close coordination; in remote
interaction, the human and robot are less coupled due to communication
constraints. As a result, providing automation for the robot in remote
interaction becomes more important. Thus far, human factor studies on
automation in remote human-robot interaction have been restricted to various
forms of supervision, in which the robot is essentially being used as a smart
mobile manipulation platform with sensing capabilities. In this paper, we
investigate the incorporation of general planning capability into the robot to
facilitate peer-to-peer human-robot teaming, in which the human and robot are
viewed as teammates that are physically separated. The human and robot share
the same global goal and collaborate to achieve it. Note that humans may feel
uncomfortable at such robot autonomy, which can potentially reduce teaming
performance. One important difference between peer-to-peer teaming and
supervised teaming is that an autonomous robot in peer-to-peer teaming can
achieve the goal alone when the task information is completely specified.
However, incompleteness often exists, which implies information asymmetry.
While information asymmetry can be desirable sometimes, it may also lead to the
robot choosing improper actions that negatively influence the teaming
performance. We aim to investigate the various trade-offs, e.g., mental
workload and situation awareness, between these two types of remote human-robot
teaming.
| [
{
"created": "Tue, 9 Dec 2014 01:05:59 GMT",
"version": "v1"
}
] | 2014-12-10 | [
[
"Narayanan",
"Vignesh",
""
],
[
"Zhang",
"Yu",
""
],
[
"Mendoza",
"Nathaniel",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] | Human-robot interaction can be divided into two categories based on the physical distance between the human and robot: remote and proximal. In proximal interaction, the human and robot often engage in close coordination; in remote interaction, the human and robot are less coupled due to communication constraints. As a result, providing automation for the robot in remote interaction becomes more important. Thus far, human factor studies on automation in remote human-robot interaction have been restricted to various forms of supervision, in which the robot is essentially being used as a smart mobile manipulation platform with sensing capabilities. In this paper, we investigate the incorporation of general planning capability into the robot to facilitate peer-to-peer human-robot teaming, in which the human and robot are viewed as teammates that are physically separated. The human and robot share the same global goal and collaborate to achieve it. Note that humans may feel uncomfortable at such robot autonomy, which can potentially reduce teaming performance. One important difference between peer-to-peer teaming and supervised teaming is that an autonomous robot in peer-to-peer teaming can achieve the goal alone when the task information is completely specified. However, incompleteness often exists, which implies information asymmetry. While information asymmetry can be desirable sometimes, it may also lead to the robot choosing improper actions that negatively influence the teaming performance. We aim to investigate the various trade-offs, e.g., mental workload and situation awareness, between these two types of remote human-robot teaming. |
2405.00251 | Dylan Green | Dylan Green, William Harvey, Saeid Naderiparizi, Matthew Niedoba,
Yunpeng Liu, Xiaoxuan Liang, Jonathan Lavington, Ke Zhang, Vasileios Lioutas,
Setareh Dabiri, Adam Scibior, Berend Zwartsenberg, Frank Wood | Semantically Consistent Video Inpainting with Conditional Diffusion
Models | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Current state-of-the-art methods for video inpainting typically rely on
optical flow or attention-based approaches to inpaint masked regions by
propagating visual information across frames. While such approaches have led to
significant progress on standard benchmarks, they struggle with tasks that
require the synthesis of novel content that is not present in other frames. In
this paper we reframe video inpainting as a conditional generative modeling
problem and present a framework for solving such problems with conditional
video diffusion models. We highlight the advantages of using a generative
approach for this task, showing that our method is capable of generating
diverse, high-quality inpaintings and synthesizing new content that is
spatially, temporally, and semantically consistent with the provided context.
| [
{
"created": "Tue, 30 Apr 2024 23:49:26 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"Green",
"Dylan",
""
],
[
"Harvey",
"William",
""
],
[
"Naderiparizi",
"Saeid",
""
],
[
"Niedoba",
"Matthew",
""
],
[
"Liu",
"Yunpeng",
""
],
[
"Liang",
"Xiaoxuan",
""
],
[
"Lavington",
"Jonathan",
""
],
[
"Zhang",
"Ke",
""
],
[
"Lioutas",
"Vasileios",
""
],
[
"Dabiri",
"Setareh",
""
],
[
"Scibior",
"Adam",
""
],
[
"Zwartsenberg",
"Berend",
""
],
[
"Wood",
"Frank",
""
]
] | Current state-of-the-art methods for video inpainting typically rely on optical flow or attention-based approaches to inpaint masked regions by propagating visual information across frames. While such approaches have led to significant progress on standard benchmarks, they struggle with tasks that require the synthesis of novel content that is not present in other frames. In this paper we reframe video inpainting as a conditional generative modeling problem and present a framework for solving such problems with conditional video diffusion models. We highlight the advantages of using a generative approach for this task, showing that our method is capable of generating diverse, high-quality inpaintings and synthesizing new content that is spatially, temporally, and semantically consistent with the provided context. |
2405.07481 | Xiaoyi Zhang | Tianci Bi, Xiaoyi Zhang, Zhizheng Zhang, Wenxuan Xie, Cuiling Lan, Yan
Lu and Nanning Zheng | Text Grouping Adapter: Adapting Pre-trained Text Detector for Layout
Analysis | Accepted to CVPR 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Significant progress has been made in scene text detection models since the
rise of deep learning, but scene text layout analysis, which aims to group
detected text instances as paragraphs, has not kept pace. Previous works either treat text detection and grouping with separate models, or train a unified model from scratch. None of them makes full use of already well-trained text detectors and easily obtainable detection
datasets. In this paper, we present Text Grouping Adapter (TGA), a module that
can enable the utilization of various pre-trained text detectors to learn
layout analysis, allowing us to adopt a well-trained text detector right off
the shelf or just fine-tune it efficiently. Designed to be compatible with
various text detector architectures, TGA takes detected text regions and image
features as universal inputs to assemble text instance features. To capture
broader contextual information for layout analysis, we propose to predict text
group masks from text instance features by one-to-many assignment. Our
comprehensive experiments demonstrate that, even with frozen pre-trained
models, incorporating our TGA into various pre-trained text detectors and text
spotters can achieve superior layout analysis performance, simultaneously
inheriting generalized text detection ability from pre-training. In the case of
full parameter fine-tuning, we can further improve layout analysis performance.
| [
{
"created": "Mon, 13 May 2024 05:48:35 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Bi",
"Tianci",
""
],
[
"Zhang",
"Xiaoyi",
""
],
[
"Zhang",
"Zhizheng",
""
],
[
"Xie",
"Wenxuan",
""
],
[
"Lan",
"Cuiling",
""
],
[
"Lu",
"Yan",
""
],
[
"Zheng",
"Nanning",
""
]
] | Significant progress has been made in scene text detection models since the rise of deep learning, but scene text layout analysis, which aims to group detected text instances as paragraphs, has not kept pace. Previous works either treat text detection and grouping with separate models, or train a unified model from scratch. None of them makes full use of already well-trained text detectors and easily obtainable detection datasets. In this paper, we present Text Grouping Adapter (TGA), a module that can enable the utilization of various pre-trained text detectors to learn layout analysis, allowing us to adopt a well-trained text detector right off the shelf or just fine-tune it efficiently. Designed to be compatible with various text detector architectures, TGA takes detected text regions and image features as universal inputs to assemble text instance features. To capture broader contextual information for layout analysis, we propose to predict text group masks from text instance features by one-to-many assignment. Our comprehensive experiments demonstrate that, even with frozen pre-trained models, incorporating our TGA into various pre-trained text detectors and text spotters can achieve superior layout analysis performance, simultaneously inheriting generalized text detection ability from pre-training. In the case of full parameter fine-tuning, we can further improve layout analysis performance. |
2308.16635 | Jin Liu | Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong
Han | MFR-Net: Multi-faceted Responsive Listening Head Generation via
Denoising Diffusion Model | Accepted by ACM MM 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Face-to-face communication is a common scenario including roles of speakers
and listeners. Most existing research methods focus on producing speaker
videos, while the generation of listener heads remains largely overlooked.
Responsive listening head generation is an important task that aims to model
face-to-face communication scenarios by generating a listener head video given
a speaker video and a listener head image. An ideal generated responsive
listening video should express attitudes or viewpoints in response to the speaker while maintaining diversity in interaction patterns and accuracy in
listener identity information. To achieve this goal, we propose the
\textbf{M}ulti-\textbf{F}aceted \textbf{R}esponsive Listening Head Generation
Network (MFR-Net). Specifically, MFR-Net employs the probabilistic denoising
diffusion model to predict diverse head pose and expression features. In order
to produce multi-faceted responses to the speaker video while maintaining
accurate listener identity preservation, we design the Feature Aggregation
Module to boost listener identity features and fuse them with other
speaker-related features. Finally, a renderer finetuned with identity
consistency loss produces the final listening head videos. Our extensive
experiments demonstrate that MFR-Net achieves multi-faceted responses not only in diversity and speaker identity information but also in attitude and viewpoint expression.
| [
{
"created": "Thu, 31 Aug 2023 11:10:28 GMT",
"version": "v1"
}
] | 2023-09-01 | [
[
"Liu",
"Jin",
""
],
[
"Wang",
"Xi",
""
],
[
"Fu",
"Xiaomeng",
""
],
[
"Chai",
"Yesheng",
""
],
[
"Yu",
"Cai",
""
],
[
"Dai",
"Jiao",
""
],
[
"Han",
"Jizhong",
""
]
] | Face-to-face communication is a common scenario including roles of speakers and listeners. Most existing research methods focus on producing speaker videos, while the generation of listener heads remains largely overlooked. Responsive listening head generation is an important task that aims to model face-to-face communication scenarios by generating a listener head video given a speaker video and a listener head image. An ideal generated responsive listening video should express attitudes or viewpoints in response to the speaker while maintaining diversity in interaction patterns and accuracy in listener identity information. To achieve this goal, we propose the \textbf{M}ulti-\textbf{F}aceted \textbf{R}esponsive Listening Head Generation Network (MFR-Net). Specifically, MFR-Net employs the probabilistic denoising diffusion model to predict diverse head pose and expression features. In order to produce multi-faceted responses to the speaker video while maintaining accurate listener identity preservation, we design the Feature Aggregation Module to boost listener identity features and fuse them with other speaker-related features. Finally, a renderer finetuned with identity consistency loss produces the final listening head videos. Our extensive experiments demonstrate that MFR-Net achieves multi-faceted responses not only in diversity and speaker identity information but also in attitude and viewpoint expression. |
2209.12248 | Rui He | Rui He, Zehua Fu, Qingjie Liu, Yunhong Wang, Xunxun Chen | D$^{\bf{3}}$: Duplicate Detection Decontaminator for Multi-Athlete
Tracking in Sports Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tracking multiple athletes in sports videos is a very challenging
Multi-Object Tracking (MOT) task, since athletes often share similar appearances and closely occlude one another, turning a common occlusion problem into severe duplicate detection. In this paper, duplicate
detection is newly and precisely defined as occlusion misreporting on the same
athlete by multiple detection boxes in one frame. To address this problem, we
meticulously design a novel transformer-based Duplicate Detection
Decontaminator (D$^3$) for training, and a specific algorithm Rally-Hungarian
(RH) for matching. Once duplicate detection occurs, D$^3$ immediately modifies the procedure by generating enhanced box losses. RH, triggered by the team
sports substitution rules, is exceedingly suitable for sports videos. Moreover,
to complement tracking datasets that lack shot changes, we release a new dataset based on sports videos, named RallyTrack. Extensive experiments on
RallyTrack show that combining D$^3$ and RH can dramatically improve the
tracking performance, by 9.2 points in MOTA and 4.5 in HOTA. Meanwhile, experiments on the MOT series and DanceTrack show that D$^3$ can accelerate convergence during training, saving up to 80 percent of the original training time on MOT17. Finally, our model, trained only on volleyball videos, can be applied directly to basketball and soccer videos for multi-athlete tracking (MAT), which shows the superiority of our method. Our dataset is available at
https://github.com/heruihr/rallytrack.
| [
{
"created": "Sun, 25 Sep 2022 15:46:39 GMT",
"version": "v1"
}
] | 2022-09-27 | [
[
"He",
"Rui",
""
],
[
"Fu",
"Zehua",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Wang",
"Yunhong",
""
],
[
"Chen",
"Xunxun",
""
]
] | Tracking multiple athletes in sports videos is a very challenging Multi-Object Tracking (MOT) task, since athletes often share similar appearances and closely occlude one another, turning a common occlusion problem into severe duplicate detection. In this paper, duplicate detection is newly and precisely defined as occlusion misreporting on the same athlete by multiple detection boxes in one frame. To address this problem, we meticulously design a novel transformer-based Duplicate Detection Decontaminator (D$^3$) for training, and a specific algorithm Rally-Hungarian (RH) for matching. Once duplicate detection occurs, D$^3$ immediately modifies the procedure by generating enhanced box losses. RH, triggered by the team sports substitution rules, is exceedingly suitable for sports videos. Moreover, to complement tracking datasets that lack shot changes, we release a new dataset based on sports videos, named RallyTrack. Extensive experiments on RallyTrack show that combining D$^3$ and RH can dramatically improve the tracking performance, by 9.2 points in MOTA and 4.5 in HOTA. Meanwhile, experiments on the MOT series and DanceTrack show that D$^3$ can accelerate convergence during training, saving up to 80 percent of the original training time on MOT17. Finally, our model, trained only on volleyball videos, can be applied directly to basketball and soccer videos for multi-athlete tracking (MAT), which shows the superiority of our method. Our dataset is available at https://github.com/heruihr/rallytrack. |
1802.01958 | Philipp Harzig | Philipp Harzig, Stephan Brehm, Rainer Lienhart, Carolin Kaiser, Ren\'e
Schallner | Multimodal Image Captioning for Marketing Analysis | 4 pages, 1 figure, accepted at MIPR2018 | null | 10.1109/MIPR.2018.00035 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically captioning images with natural language sentences is an
important research topic. State of the art models are able to produce
human-like sentences. These models typically describe the depicted scene as a
whole and do not target specific objects of interest or emotional relationships
between these objects in the image. However, marketing companies need to describe these important attributes of a given scene. In our case, objects of
interest are consumer goods, which are usually identifiable by a product logo
and are associated with certain brands. From a marketing point of view, it is
desirable to also evaluate the emotional context of a trademarked product,
i.e., whether it appears in a positive or a negative connotation. We address
the problem of finding brands in images and deriving corresponding captions by
introducing a modified image captioning network. We also add a third output
modality, which simultaneously produces real-valued image ratings. Our network
is trained using a classification-aware loss function in order to stimulate the
generation of sentences with an emphasis on words identifying the brand of a
product. We evaluate our model on a dataset of images depicting interactions
between humans and branded products. The introduced network improves mean class
accuracy by 24.5 percent. Thanks to adding the third output modality, it also
considerably improves the quality of generated captions for images depicting
branded products.
| [
{
"created": "Tue, 6 Feb 2018 14:23:32 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2019 10:35:53 GMT",
"version": "v2"
}
] | 2019-08-07 | [
[
"Harzig",
"Philipp",
""
],
[
"Brehm",
"Stephan",
""
],
[
"Lienhart",
"Rainer",
""
],
[
"Kaiser",
"Carolin",
""
],
[
"Schallner",
"René",
""
]
] | Automatically captioning images with natural language sentences is an important research topic. State of the art models are able to produce human-like sentences. These models typically describe the depicted scene as a whole and do not target specific objects of interest or emotional relationships between these objects in the image. However, marketing companies need to describe these important attributes of a given scene. In our case, objects of interest are consumer goods, which are usually identifiable by a product logo and are associated with certain brands. From a marketing point of view, it is desirable to also evaluate the emotional context of a trademarked product, i.e., whether it appears in a positive or a negative connotation. We address the problem of finding brands in images and deriving corresponding captions by introducing a modified image captioning network. We also add a third output modality, which simultaneously produces real-valued image ratings. Our network is trained using a classification-aware loss function in order to stimulate the generation of sentences with an emphasis on words identifying the brand of a product. We evaluate our model on a dataset of images depicting interactions between humans and branded products. The introduced network improves mean class accuracy by 24.5 percent. Thanks to adding the third output modality, it also considerably improves the quality of generated captions for images depicting branded products. |
2007.11348 | Ori Shapira | Ori Shapira and Ran Levy | Massive Multi-Document Summarization of Product Reviews with Weak
Supervision | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Product reviews summarization is a type of Multi-Document Summarization (MDS)
task in which the summarized document sets are often far larger than in
traditional MDS (up to tens of thousands of reviews). We highlight this
difference and coin the term "Massive Multi-Document Summarization" (MMDS) to
denote an MDS task that involves hundreds of documents or more. Prior work on
product reviews summarization considered small samples of the reviews, mainly
due to the difficulty of handling massive document sets. We show that
summarizing small samples can result in loss of important information and
provide misleading evaluation results. We propose a schema for summarizing a
massive set of reviews on top of a standard summarization algorithm. Since
writing large volumes of reference summaries needed for advanced neural network
models is impractical, our solution relies on weak supervision. Finally, we
propose an evaluation scheme that is based on multiple crowdsourced reference
summaries and aims to capture the massive review collection. We show that an
initial implementation of our schema significantly improves over several
baselines in ROUGE scores, and exhibits strong coherence in a manual linguistic
quality assessment.
| [
{
"created": "Wed, 22 Jul 2020 11:22:57 GMT",
"version": "v1"
}
] | 2020-07-23 | [
[
"Shapira",
"Ori",
""
],
[
"Levy",
"Ran",
""
]
] | Product reviews summarization is a type of Multi-Document Summarization (MDS) task in which the summarized document sets are often far larger than in traditional MDS (up to tens of thousands of reviews). We highlight this difference and coin the term "Massive Multi-Document Summarization" (MMDS) to denote an MDS task that involves hundreds of documents or more. Prior work on product reviews summarization considered small samples of the reviews, mainly due to the difficulty of handling massive document sets. We show that summarizing small samples can result in loss of important information and provide misleading evaluation results. We propose a schema for summarizing a massive set of reviews on top of a standard summarization algorithm. Since writing large volumes of reference summaries needed for advanced neural network models is impractical, our solution relies on weak supervision. Finally, we propose an evaluation scheme that is based on multiple crowdsourced reference summaries and aims to capture the massive review collection. We show that an initial implementation of our schema significantly improves over several baselines in ROUGE scores, and exhibits strong coherence in a manual linguistic quality assessment. |
1503.05157 | Jeremy Debattista | Jeremy Debattista, Santiago Londo\~no, Christoph Lange, S\"oren Auer | Quality Assessment of Linked Datasets using Probabilistic Approximation | 15 pages, 2 figures, To appear in ESWC 2015 proceedings | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing application of Linked Open Data, assessing the quality of
datasets by computing quality metrics becomes an issue of crucial importance.
For large and evolving datasets, an exact, deterministic computation of the
quality metrics is too time consuming or expensive. We employ probabilistic
techniques such as Reservoir Sampling, Bloom Filters and Clustering Coefficient
estimation for implementing a broad set of data quality metrics in an
approximate but sufficiently accurate way. Our implementation is integrated in
the comprehensive data quality assessment framework Luzzu. We evaluated its
performance and accuracy on Linked Open Datasets of broad relevance.
| [
{
"created": "Tue, 17 Mar 2015 18:39:22 GMT",
"version": "v1"
}
] | 2015-03-18 | [
[
"Debattista",
"Jeremy",
""
],
[
"Londoño",
"Santiago",
""
],
[
"Lange",
"Christoph",
""
],
[
"Auer",
"Sören",
""
]
] | With the increasing application of Linked Open Data, assessing the quality of datasets by computing quality metrics becomes an issue of crucial importance. For large and evolving datasets, an exact, deterministic computation of the quality metrics is too time consuming or expensive. We employ probabilistic techniques such as Reservoir Sampling, Bloom Filters and Clustering Coefficient estimation for implementing a broad set of data quality metrics in an approximate but sufficiently accurate way. Our implementation is integrated in the comprehensive data quality assessment framework Luzzu. We evaluated its performance and accuracy on Linked Open Datasets of broad relevance. |
2102.12606 | Nahyun Kwon | Nahyun Kwon, Chen Liang and Jeeeun Kim | 3D4ALL: Toward an Inclusive Pipeline to Classify 3D Contents | 9 pages, 2 figures, TExSS, ACM IUI 2021 Workshops | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Algorithmic content moderation manages an explosive amount of user-created
content shared online every day. Despite the massive number of 3D designs that are
free to be downloaded, shared, and 3D printed by the users, detecting
sensitivity with transparency and fairness has been controversial. Although
sensitive 3D content might have a greater impact than other media due to its
possible reproducibility and replicability without restriction, prevailing unawareness has resulted in the proliferation of sensitive 3D models online and a lack of discussion on transparent and fair 3D content moderation. As 3D content
exists as a document on the web mainly consisting of text and images, we first
study the existing algorithmic efforts based on text and images and the prior
endeavors to encompass transparency and fairness in moderation, which can also
be useful in a 3D printing domain. At the same time, we identify 3D specific
features that should be addressed to advance a 3D specialized algorithmic
moderation. As a potential solution, we suggest a human-in-the-loop pipeline
using augmented learning, powered by various stakeholders with different
backgrounds and perspectives in understanding the content. Our pipeline aims to
minimize personal biases by enabling diverse stakeholders to be vocal in
reflecting various factors to interpret the content. We add our initial
proposal for redesigning metadata of open 3D repositories, to invoke users'
responsible actions of being granted consent from the subject upon sharing
contents for free in the public spaces.
| [
{
"created": "Wed, 24 Feb 2021 23:58:07 GMT",
"version": "v1"
}
] | 2021-02-26 | [
[
"Kwon",
"Nahyun",
""
],
[
"Liang",
"Chen",
""
],
[
"Kim",
"Jeeeun",
""
]
] | Algorithmic content moderation manages an explosive amount of user-created content shared online every day. Despite the massive number of 3D designs that are free to be downloaded, shared, and 3D printed by the users, detecting sensitivity with transparency and fairness has been controversial. Although sensitive 3D content might have a greater impact than other media due to its possible reproducibility and replicability without restriction, prevailing unawareness has resulted in the proliferation of sensitive 3D models online and a lack of discussion on transparent and fair 3D content moderation. As 3D content exists as a document on the web mainly consisting of text and images, we first study the existing algorithmic efforts based on text and images and the prior endeavors to encompass transparency and fairness in moderation, which can also be useful in a 3D printing domain. At the same time, we identify 3D specific features that should be addressed to advance a 3D specialized algorithmic moderation. As a potential solution, we suggest a human-in-the-loop pipeline using augmented learning, powered by various stakeholders with different backgrounds and perspectives in understanding the content. Our pipeline aims to minimize personal biases by enabling diverse stakeholders to be vocal in reflecting various factors to interpret the content. We add our initial proposal for redesigning metadata of open 3D repositories, to invoke users' responsible actions of being granted consent from the subject upon sharing contents for free in the public spaces. |
2204.11965 | Mohamed Raed El Aoun | Mohamed Raed El aoun, Heng Li, Foutse Khomh, Lionel Tidjon | Bug Characteristics in Quantum Software Ecosystem | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advance in quantum computing in recent years, quantum software
becomes vital for exploring the full potential of quantum computing systems.
Quantum programming is different from classical programming, for example, the
state of a quantum program is probabilistic in nature, and a quantum computer
is error-prone due to the instability of quantum mechanisms. Therefore, the
characteristics of bugs in quantum software projects may be very different from
that of classical software projects. This work aims to understand the
characteristics of bugs in quantum software projects, in order to provide
insights to help devise effective testing and debugging mechanisms. To achieve
this goal, we conduct an empirical study on the bug reports of 125 quantum
software projects. We observe that quantum software projects are more buggy
than classical software projects and that quantum project bugs are more costly
to fix than classical project bugs. We also identify the types of the bugs and
the quantum programming components where they occurred. Our study shows that
the bugs are spread across different components, but quantum-specific bugs
particularly appear in the compiler, gate operation, and state preparation
components. The three most occurring types of bugs are Program anomaly bugs,
Configuration bugs, and Data type and structure bugs. Our study highlights some
particularly challenging areas in quantum software development, such as the
lack of scientific quantum computation libraries that implement comprehensive
mathematical functions for quantum computing. Quantum developers also seek
specialized data manipulation libraries for quantum software engineering like
Numpy for quantum computing. Our findings also provide insights for future work
to advance the quantum program development, testing, and debugging of quantum
software, such as providing tooling support for debugging low-level circuits.
| [
{
"created": "Mon, 25 Apr 2022 20:59:46 GMT",
"version": "v1"
}
] | 2022-04-27 | [
[
"aoun",
"Mohamed Raed El",
""
],
[
"Li",
"Heng",
""
],
[
"Khomh",
"Foutse",
""
],
[
"Tidjon",
"Lionel",
""
]
] | With the advance in quantum computing in recent years, quantum software becomes vital for exploring the full potential of quantum computing systems. Quantum programming is different from classical programming, for example, the state of a quantum program is probabilistic in nature, and a quantum computer is error-prone due to the instability of quantum mechanisms. Therefore, the characteristics of bugs in quantum software projects may be very different from that of classical software projects. This work aims to understand the characteristics of bugs in quantum software projects, in order to provide insights to help devise effective testing and debugging mechanisms. To achieve this goal, we conduct an empirical study on the bug reports of 125 quantum software projects. We observe that quantum software projects are more buggy than classical software projects and that quantum project bugs are more costly to fix than classical project bugs. We also identify the types of the bugs and the quantum programming components where they occurred. Our study shows that the bugs are spread across different components, but quantum-specific bugs particularly appear in the compiler, gate operation, and state preparation components. The three most occurring types of bugs are Program anomaly bugs, Configuration bugs, and Data type and structure bugs. Our study highlights some particularly challenging areas in quantum software development, such as the lack of scientific quantum computation libraries that implement comprehensive mathematical functions for quantum computing. Quantum developers also seek specialized data manipulation libraries for quantum software engineering like Numpy for quantum computing. Our findings also provide insights for future work to advance the quantum program development, testing, and debugging of quantum software, such as providing tooling support for debugging low-level circuits. |
2307.09742 | Guanbin Li | Ganlong Zhao, Guanbin Li, Yipeng Qin, Yizhou Yu | Improved Distribution Matching for Dataset Condensation | CVPR2023 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset Condensation aims to condense a large dataset into a smaller one
while maintaining its ability to train a well-performing model, thus reducing
the storage cost and training effort in deep learning applications. However,
conventional dataset condensation methods are optimization-oriented and
condense the dataset by performing gradient or parameter matching during model
optimization, which is computationally intensive even on small datasets and
models. In this paper, we propose a novel dataset condensation method based on
distribution matching, which is more efficient and promising. Specifically, we
identify two important shortcomings of naive distribution matching (i.e.,
imbalanced feature numbers and unvalidated embeddings for distance computation)
and address them with three novel techniques (i.e., partitioning and expansion
augmentation, efficient and enriched model sampling, and class-aware
distribution regularization). Our simple yet effective method outperforms most
previous optimization-oriented methods with much fewer computational resources,
thereby scaling data condensation to larger datasets and models. Extensive
experiments demonstrate the effectiveness of our method. Codes are available at
https://github.com/uitrbn/IDM
| [
{
"created": "Wed, 19 Jul 2023 04:07:33 GMT",
"version": "v1"
}
] | 2023-07-20 | [
[
"Zhao",
"Ganlong",
""
],
[
"Li",
"Guanbin",
""
],
[
"Qin",
"Yipeng",
""
],
[
"Yu",
"Yizhou",
""
]
] | Dataset Condensation aims to condense a large dataset into a smaller one while maintaining its ability to train a well-performing model, thus reducing the storage cost and training effort in deep learning applications. However, conventional dataset condensation methods are optimization-oriented and condense the dataset by performing gradient or parameter matching during model optimization, which is computationally intensive even on small datasets and models. In this paper, we propose a novel dataset condensation method based on distribution matching, which is more efficient and promising. Specifically, we identify two important shortcomings of naive distribution matching (i.e., imbalanced feature numbers and unvalidated embeddings for distance computation) and address them with three novel techniques (i.e., partitioning and expansion augmentation, efficient and enriched model sampling, and class-aware distribution regularization). Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources, thereby scaling data condensation to larger datasets and models. Extensive experiments demonstrate the effectiveness of our method. Codes are available at https://github.com/uitrbn/IDM |
1406.1516 | Zhiguo Ding | Zhiguo Ding and Zheng Yang and Pingzhi Fan and H. Vincent Poor | On the Performance of Non-Orthogonal Multiple Access in 5G Systems with
Randomly Deployed Users | null | null | 10.1109/LSP.2014.2343971 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this letter, the performance of non-orthogonal multiple access (NOMA) is
investigated in a cellular downlink scenario with randomly deployed users. The
developed analytical results show that NOMA can achieve superior performance in
terms of ergodic sum rates; however, the outage performance of NOMA depends
critically on the choices of the users' targeted data rates and allocated
power. In particular, a wrong choice of the targeted data rates and allocated
power can lead to a situation in which the user's outage probability is always
one, i.e. the user's targeted quality of service will never be met.
| [
{
"created": "Thu, 5 Jun 2014 20:36:58 GMT",
"version": "v1"
}
] | 2015-06-19 | [
[
"Ding",
"Zhiguo",
""
],
[
"Yang",
"Zheng",
""
],
[
"Fan",
"Pingzhi",
""
],
[
"Poor",
"H. Vincent",
""
]
] | In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met. |
1906.05797 | Julian Straub | Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans,
Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton
Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang
Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias
Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat,
Renzo De Nardi, Michael Goesele, Steven Lovegrove, Richard Newcombe | The Replica Dataset: A Digital Replica of Indoor Spaces | null | null | null | null | cs.CV cs.GR eess.IV | http://creativecommons.org/licenses/by/4.0/ | We introduce Replica, a dataset of 18 highly photo-realistic 3D indoor scene
reconstructions at room and building scale. Each scene consists of a dense
mesh, high-resolution high-dynamic-range (HDR) textures, per-primitive semantic
class and instance information, and planar mirror and glass reflectors. The
goal of Replica is to enable machine learning (ML) research that relies on
visually, geometrically, and semantically realistic generative models of the
world - for instance, egocentric computer vision, semantic segmentation in 2D
and 3D, geometric inference, and the development of embodied agents (virtual
robots) performing navigation, instruction following, and question answering.
Due to the high level of realism of the renderings from Replica, there is hope
that ML systems trained on Replica may transfer directly to real world image
and video data. Together with the data, we are releasing a minimal C++ SDK as a
starting point for working with the Replica dataset. In addition, Replica is
`Habitat-compatible', i.e. can be natively used with AI Habitat for training
and testing embodied agents.
| [
{
"created": "Thu, 13 Jun 2019 16:29:58 GMT",
"version": "v1"
}
] | 2019-06-14 | [
[
"Straub",
"Julian",
""
],
[
"Whelan",
"Thomas",
""
],
[
"Ma",
"Lingni",
""
],
[
"Chen",
"Yufan",
""
],
[
"Wijmans",
"Erik",
""
],
[
"Green",
"Simon",
""
],
[
"Engel",
"Jakob J.",
""
],
[
"Mur-Artal",
"Raul",
""
],
[
"Ren",
"Carl",
""
],
[
"Verma",
"Shobhit",
""
],
[
"Clarkson",
"Anton",
""
],
[
"Yan",
"Mingfei",
""
],
[
"Budge",
"Brian",
""
],
[
"Yan",
"Yajie",
""
],
[
"Pan",
"Xiaqing",
""
],
[
"Yon",
"June",
""
],
[
"Zou",
"Yuyang",
""
],
[
"Leon",
"Kimberly",
""
],
[
"Carter",
"Nigel",
""
],
[
"Briales",
"Jesus",
""
],
[
"Gillingham",
"Tyler",
""
],
[
"Mueggler",
"Elias",
""
],
[
"Pesqueira",
"Luis",
""
],
[
"Savva",
"Manolis",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Strasdat",
"Hauke M.",
""
],
[
"De Nardi",
"Renzo",
""
],
[
"Goesele",
"Michael",
""
],
[
"Lovegrove",
"Steven",
""
],
[
"Newcombe",
"Richard",
""
]
] | We introduce Replica, a dataset of 18 highly photo-realistic 3D indoor scene reconstructions at room and building scale. Each scene consists of a dense mesh, high-resolution high-dynamic-range (HDR) textures, per-primitive semantic class and instance information, and planar mirror and glass reflectors. The goal of Replica is to enable machine learning (ML) research that relies on visually, geometrically, and semantically realistic generative models of the world - for instance, egocentric computer vision, semantic segmentation in 2D and 3D, geometric inference, and the development of embodied agents (virtual robots) performing navigation, instruction following, and question answering. Due to the high level of realism of the renderings from Replica, there is hope that ML systems trained on Replica may transfer directly to real world image and video data. Together with the data, we are releasing a minimal C++ SDK as a starting point for working with the Replica dataset. In addition, Replica is `Habitat-compatible', i.e. can be natively used with AI Habitat for training and testing embodied agents. |
2308.07783 | Mohammad Baradaran | Mohammad Baradaran, Robert Bergevin | Future Video Prediction from a Single Frame for Video Anomaly Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video anomaly detection (VAD) is an important but challenging task in
computer vision. The main challenge arises due to the rarity of training samples
to model all anomaly cases. Hence, semi-supervised anomaly detection methods
have received more attention, since they focus on modeling normal behavior and
detect anomalies by measuring the deviations from normal patterns. Despite
impressive advances of these methods in modeling normal motion and appearance,
long-term motion modeling has not been effectively explored so far. Inspired by
the abilities of the future frame prediction proxy-task, we introduce the task
of future video prediction from a single frame, as a novel proxy-task for video
anomaly detection. This proxy-task alleviates the challenges of previous
methods in learning longer motion patterns. Moreover, we replace the initial
and future raw frames with their corresponding semantic segmentation map, which
not only makes the method aware of object class but also makes the prediction
task less complex for the model. Extensive experiments on the benchmark
datasets (ShanghaiTech, UCSD-Ped1, and UCSD-Ped2) show the effectiveness of the
method and the superiority of its performance compared to SOTA prediction-based
VAD methods.
| [
{
"created": "Tue, 15 Aug 2023 14:04:50 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Baradaran",
"Mohammad",
""
],
[
"Bergevin",
"Robert",
""
]
] | Video anomaly detection (VAD) is an important but challenging task in computer vision. The main challenge arises due to the rarity of training samples to model all anomaly cases. Hence, semi-supervised anomaly detection methods have received more attention, since they focus on modeling normal behavior and detect anomalies by measuring the deviations from normal patterns. Despite impressive advances of these methods in modeling normal motion and appearance, long-term motion modeling has not been effectively explored so far. Inspired by the abilities of the future frame prediction proxy-task, we introduce the task of future video prediction from a single frame, as a novel proxy-task for video anomaly detection. This proxy-task alleviates the challenges of previous methods in learning longer motion patterns. Moreover, we replace the initial and future raw frames with their corresponding semantic segmentation map, which not only makes the method aware of object class but also makes the prediction task less complex for the model. Extensive experiments on the benchmark datasets (ShanghaiTech, UCSD-Ped1, and UCSD-Ped2) show the effectiveness of the method and the superiority of its performance compared to SOTA prediction-based VAD methods. |
1809.00251 | Adrian Viera | Leonardo Le\'on, Felipe Moreno-Vera, Renato Castro, Jos\'e Nav\'io,
Marco Capcha | Car Monitoring System in Apartment Garages by Small Autonomous Car using
Deep Learning | 13 pages, 12 figures, Version 1 accepted in SimBig 2018. Improving to
get better results | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/publicdomain/zero/1.0/ | Currently, there is an increase in the number of Peruvian families living in
apartments instead of houses because of the many advantages; however, in some cases
there are problems such as robberies of goods that are usually left in the
parking lots or the entrance of strangers who use the tenants' parking lots
(this last problem is sometimes related to kidnappings or robberies in building
apartments). Due to these problems, the use of a self-driving mini-car is
proposed to implement a monitoring system of license plates in an underground
garage inside a building using a deep learning model with the aim of recording
the vehicles and identifying their owners if they were tenants or not. In
addition, the small robot has its own location system using beacons that allow
us to identify the position of the parking lot corresponding to each tenant of
the building while the mini-car is on its way. Finally, one of the objectives
of this work is to build a low-cost mini-robot that would replace expensive
cameras or work together in order to keep safe the goods of tenants.
| [
{
"created": "Sat, 1 Sep 2018 21:00:58 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Sep 2018 01:00:32 GMT",
"version": "v2"
},
{
"created": "Sat, 14 Sep 2019 21:40:42 GMT",
"version": "v3"
}
] | 2019-09-17 | [
[
"León",
"Leonardo",
""
],
[
"Moreno-Vera",
"Felipe",
""
],
[
"Castro",
"Renato",
""
],
[
"Navío",
"José",
""
],
[
"Capcha",
"Marco",
""
]
] | Currently, there is an increase in the number of Peruvian families living in apartments instead of houses because of the many advantages; however, in some cases there are problems such as robberies of goods that are usually left in the parking lots or the entrance of strangers who use the tenants' parking lots (this last problem is sometimes related to kidnappings or robberies in building apartments). Due to these problems, the use of a self-driving mini-car is proposed to implement a monitoring system of license plates in an underground garage inside a building using a deep learning model with the aim of recording the vehicles and identifying their owners if they were tenants or not. In addition, the small robot has its own location system using beacons that allow us to identify the position of the parking lot corresponding to each tenant of the building while the mini-car is on its way. Finally, one of the objectives of this work is to build a low-cost mini-robot that would replace expensive cameras or work together in order to keep safe the goods of tenants. |
2204.06736 | EPTCS | Alexander Bolotov (University of Westminster) | On the Expressive Power of the Normal Form for Branching-Time Temporal
Logics | In Proceedings NCL 2022, arXiv:2204.06359 | EPTCS 358, 2022, pp. 254-269 | 10.4204/EPTCS.358.19 | null | cs.FL cs.LO | http://creativecommons.org/licenses/by/4.0/ | With the emerging applications that involve complex distributed systems
branching-time specifications are specifically important as they reflect
dynamic and non-deterministic nature of such applications. We describe the
expressive power of a simple yet powerful branching-time specification
framework -- branching-time normal form (BNF), which has been developed as part
of clausal resolution for branching-time temporal logics. We show the encoding
of Buchi Tree Automata in the language of the normal form, thus representing,
syntactically, tree automata in a high-level way. Thus we can treat BNF as a
normal form for the latter. These results enable us (1) to translate given
problem specifications into the normal form and apply as a verification method
a deductive reasoning technique -- the clausal temporal resolution; (2) to
apply one of the core components of the resolution method -- the loop searching
to extract, syntactically, hidden invariants in a wide range of complex
temporal specifications.
| [
{
"created": "Thu, 14 Apr 2022 03:28:29 GMT",
"version": "v1"
}
] | 2022-04-15 | [
[
"Bolotov",
"Alexander",
"",
"University of Westminster"
]
] | With the emerging applications that involve complex distributed systems branching-time specifications are specifically important as they reflect dynamic and non-deterministic nature of such applications. We describe the expressive power of a simple yet powerful branching-time specification framework -- branching-time normal form (BNF), which has been developed as part of clausal resolution for branching-time temporal logics. We show the encoding of Buchi Tree Automata in the language of the normal form, thus representing, syntactically, tree automata in a high-level way. Thus we can treat BNF as a normal form for the latter. These results enable us (1) to translate given problem specifications into the normal form and apply as a verification method a deductive reasoning technique -- the clausal temporal resolution; (2) to apply one of the core components of the resolution method -- the loop searching to extract, syntactically, hidden invariants in a wide range of complex temporal specifications. |
2103.08720 | Weilong Ren | Weilong Ren, Xiang Lian, Kambiz Ghazinour | Online Topic-Aware Entity Resolution Over Incomplete Data Streams
(Technical Report) | Technical report of the paper entitled "Online Topic-Aware Entity
Resolution Over Incomplete Data Streams", published on SIGMOD 2021 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real applications such as data integration, social network
analysis, and the Semantic Web, entity resolution (ER) is an important and
fundamental problem, which identifies and links the same real-world entities
from various data sources. While prior works usually consider ER over static
and complete data, in practice, application data are usually collected in a
streaming fashion, and often incur missing attributes (due to the inaccuracy of
data extraction techniques). Therefore, in this paper, we will formulate and
tackle a novel problem, topic-aware entity resolution over incomplete data
streams (TER-iDS), which online imputes incomplete tuples and detects pairs of
topic-related matching entities from incomplete data streams. In order to
effectively and efficiently tackle the TER-iDS problem, we propose an effective
imputation strategy, carefully design effective pruning strategies, as well as
indexes/synopsis, and develop an efficient TER-iDS algorithm via index joins.
Extensive experiments have been conducted to evaluate the effectiveness and
efficiency of our proposed TER-iDS approach over real data sets.
| [
{
"created": "Mon, 15 Mar 2021 21:06:12 GMT",
"version": "v1"
}
] | 2021-03-17 | [
[
"Ren",
"Weilong",
""
],
[
"Lian",
"Xiang",
""
],
[
"Ghazinour",
"Kambiz",
""
]
] | In many real applications such as data integration, social network analysis, and the Semantic Web, entity resolution (ER) is an important and fundamental problem, which identifies and links the same real-world entities from various data sources. While prior works usually consider ER over static and complete data, in practice, application data are usually collected in a streaming fashion, and often incur missing attributes (due to the inaccuracy of data extraction techniques). Therefore, in this paper, we will formulate and tackle a novel problem, topic-aware entity resolution over incomplete data streams (TER-iDS), which online imputes incomplete tuples and detects pairs of topic-related matching entities from incomplete data streams. In order to effectively and efficiently tackle the TER-iDS problem, we propose an effective imputation strategy, carefully design effective pruning strategies, as well as indexes/synopsis, and develop an efficient TER-iDS algorithm via index joins. Extensive experiments have been conducted to evaluate the effectiveness and efficiency of our proposed TER-iDS approach over real data sets. |
2109.09300 | Jieming Zhou | Jieming Zhou, Tong Zhang, Pengfei Fang, Lars Petersson, Mehrtash
Harandi | Feature Correlation Aggregation: on the Path to Better Graph Neural
Networks | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prior to the introduction of Graph Neural Networks (GNNs), modeling and
analyzing irregular data, particularly graphs, was thought to be the Achilles'
heel of deep learning. The core concept of GNNs is to find a representation by
recursively aggregating the representations of a central node and those of its
neighbors, and its success has been demonstrated by many GNNs' designs. However, most of
them only focus on using the first-order information between a node and its
neighbors. In this paper, we introduce a central node permutation variant
function through a frustratingly simple and innocent-looking modification to
the core operation of a GNN, namely the Feature cOrrelation aGgregation (FOG)
module which learns the second-order information from feature correlation
between a node and its neighbors in the pipeline. By adding FOG into existing
variants of GNNs, we empirically verify this second-order information
complements the features generated by original GNNs across a broad set of
benchmarks. A tangible boost in performance of the model is observed where the
model surpasses previous state-of-the-art results by a significant margin while
employing fewer parameters (e.g., a 33.116% improvement on a real-world
molecular dataset using graph convolutional networks).
| [
{
"created": "Mon, 20 Sep 2021 05:04:26 GMT",
"version": "v1"
}
] | 2021-09-22 | [
[
"Zhou",
"Jieming",
""
],
[
"Zhang",
"Tong",
""
],
[
"Fang",
"Pengfei",
""
],
[
"Petersson",
"Lars",
""
],
[
"Harandi",
"Mehrtash",
""
]
] | Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning. The core concept of GNNs is to find a representation by recursively aggregating the representations of a central node and those of its neighbors, and its success has been demonstrated by many GNNs' designs. However, most of them only focus on using the first-order information between a node and its neighbors. In this paper, we introduce a central node permutation variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN, namely the Feature cOrrelation aGgregation (FOG) module which learns the second-order information from feature correlation between a node and its neighbors in the pipeline. By adding FOG into existing variants of GNNs, we empirically verify this second-order information complements the features generated by original GNNs across a broad set of benchmarks. A tangible boost in performance of the model is observed where the model surpasses previous state-of-the-art results by a significant margin while employing fewer parameters (e.g., a 33.116% improvement on a real-world molecular dataset using graph convolutional networks). |
2311.14704 | Ezequiel Santos | David de Oliveira Lemes, Ezequiel Fran\c{c}a dos Santos, Eduardo
Romanek, Celso Fujimoto, Adriano Felix Valente | An\'alise e modelagem de jogos digitais: Relato de uma experi\^encia
educacional utlizando PBL em um grupo multidisciplinar | in Portuguese language | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Traditional software engineering education generally emphasizes strict
collaboration and technical skills. However, active teaching strategies, where
students actively engage with the material, transitioning from passive observers
to active manipulators of real-world tools, have shown effectiveness in software
engineering. The evolving market demands new skills in the context of digital
transformation, presenting challenges such as modeling complex business
scenarios and navigating the interconnections between people, systems, and
technologies. Shifting from conventional software engineering instruction to
active methodologies like Problem-Based Learning (PBL) has proven to bring
real-world market challenges and realities into the classroom. This article
details an experience from the Digital Games Analysis and Modeling course in
the Digital Games Masters program at Pontifical Catholic University of Sao
Paulo. It covers the discussed concepts, case study, role-based work method, and
steps of the meetings. We also present examples of outcomes, like requirement
diagrams, context diagrams, use case diagrams, class diagrams, interviews, and
others that contributed to the Game Design Document (GDD). These were created by
each group during the meetings, alongside their game prototypes. Additionally, a
discussion on the developed capabilities is included.
| [
{
"created": "Sat, 11 Nov 2023 20:28:51 GMT",
"version": "v1"
}
] | 2023-11-28 | [
[
"Lemes",
"David de Oliveira",
""
],
[
"Santos",
"Ezequiel França dos",
""
],
[
"Romanek",
"Eduardo",
""
],
[
"Fujimoto",
"Celso",
""
],
[
"Valente",
"Adriano Felix",
""
]
] | Traditional software engineering education generally emphasizes strict collaboration and technical skills. However, active teaching strategies, where students actively engage with the material, transitioning from passive observers to active manipulators of real-world tools, have shown effectiveness in software engineering. The evolving market demands new skills in the context of digital transformation, presenting challenges such as modeling complex business scenarios and navigating the interconnections between people, systems, and technologies. Shifting from conventional software engineering instruction to active methodologies like Problem-Based Learning (PBL) has proven to bring real-world market challenges and realities into the classroom. This article details an experience from the Digital Games Analysis and Modeling course in the Digital Games Masters program at Pontifical Catholic University of Sao Paulo. It covers the discussed concepts, case study, role-based work method, and steps of the meetings. We also present examples of outcomes, like requirement diagrams, context diagrams, use case diagrams, class diagrams, interviews, and others that contributed to the Game Design Document (GDD). These were created by each group during the meetings, alongside their game prototypes. Additionally, a discussion on the developed capabilities is included. |
2204.11046 | Yueqi Xie | Yueqi Xie, Peilin Zhou, Sunghun Kim | Decoupled Side Information Fusion for Sequential Recommendation | Accepted to SIGIR 2022 | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Side information fusion for sequential recommendation (SR) aims to
effectively leverage various side information to enhance the performance of
next-item prediction. Most state-of-the-art methods build on self-attention
networks and focus on exploring various solutions to integrate the item
embedding and side information embeddings before the attention layer. However,
our analysis shows that the early integration of various types of embeddings
limits the expressiveness of attention matrices due to a rank bottleneck and
constrains the flexibility of gradients. Also, it involves mixed correlations
among the different heterogeneous information resources, which brings extra
disturbance to attention calculation. Motivated by this, we propose Decoupled
Side Information Fusion for Sequential Recommendation (DIF-SR), which moves the
side information from the input to the attention layer and decouples the
attention calculation of various side information and item representation. We
theoretically and empirically show that the proposed solution allows
higher-rank attention matrices and flexible gradients to enhance the modeling
capacity of side information fusion. Also, auxiliary attribute predictors are
proposed to further activate the beneficial interaction between side
information and item representation learning. Extensive experiments on four
real-world datasets demonstrate that our proposed solution stably outperforms
state-of-the-art SR models. Further studies show that our proposed solution can
be readily incorporated into current attention-based SR models and
significantly boost performance. Our source code is available at
https://github.com/AIM-SE/DIF-SR.
| [
{
"created": "Sat, 23 Apr 2022 10:53:36 GMT",
"version": "v1"
}
] | 2022-04-26 | [
[
"Xie",
"Yueqi",
""
],
[
"Zhou",
"Peilin",
""
],
[
"Kim",
"Sunghun",
""
]
] | Side information fusion for sequential recommendation (SR) aims to effectively leverage various side information to enhance the performance of next-item prediction. Most state-of-the-art methods build on self-attention networks and focus on exploring various solutions to integrate the item embedding and side information embeddings before the attention layer. However, our analysis shows that the early integration of various types of embeddings limits the expressiveness of attention matrices due to a rank bottleneck and constrains the flexibility of gradients. Also, it involves mixed correlations among the different heterogeneous information resources, which brings extra disturbance to attention calculation. Motivated by this, we propose Decoupled Side Information Fusion for Sequential Recommendation (DIF-SR), which moves the side information from the input to the attention layer and decouples the attention calculation of various side information and item representation. We theoretically and empirically show that the proposed solution allows higher-rank attention matrices and flexible gradients to enhance the modeling capacity of side information fusion. Also, auxiliary attribute predictors are proposed to further activate the beneficial interaction between side information and item representation learning. Extensive experiments on four real-world datasets demonstrate that our proposed solution stably outperforms state-of-the-art SR models. Further studies show that our proposed solution can be readily incorporated into current attention-based SR models and significantly boost performance. Our source code is available at https://github.com/AIM-SE/DIF-SR. |
1012.5248 | Sergey Verlan | Ion Petre, Sergey Verlan | Matrix Insertion-Deletion Systems | null | null | null | null | cs.FL cs.CC cs.CL cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we consider for the first time the operations of insertion
and deletion working in a matrix controlled manner. We show that, similarly as
in the case of context-free productions, the computational power is strictly
increased when using a matrix control: computational completeness can be
obtained by systems with insertion or deletion rules involving at most two
symbols in a contextual or in a context-free manner and using only binary
matrices.
| [
{
"created": "Thu, 23 Dec 2010 17:00:40 GMT",
"version": "v1"
}
] | 2010-12-24 | [
[
"Petre",
"Ion",
""
],
[
"Verlan",
"Sergey",
""
]
] | In this article, we consider for the first time the operations of insertion and deletion working in a matrix controlled manner. We show that, similarly as in the case of context-free productions, the computational power is strictly increased when using a matrix control: computational completeness can be obtained by systems with insertion or deletion rules involving at most two symbols in a contextual or in a context-free manner and using only binary matrices. |
2401.09234 | Alfredo Go\~ni Sarriguren | Alfredo Go\~ni Sarriguren | SARRIGUREN: a polynomial-time complete algorithm for random $k$-SAT with
relatively dense clauses | 23 pages, 2 figures, 8 tables, algorithms, results and data in
http://bdi.si.ehu.es/bdi/sarriguren | null | null | null | cs.DS cs.CC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | SARRIGUREN, a new complete algorithm for SAT based on counting clauses (which
is valid also for Unique-SAT and #SAT) is described, analyzed and tested.
Although existing complete algorithms for SAT perform slower with clauses with
many literals, that is an advantage for SARRIGUREN, because the more literals
are in the clauses the bigger is the probability of overlapping among clauses,
a property that makes the clause counting process more efficient. Actually, it
provides a $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances of
$n$ variables and $m$ relatively dense clauses, where that density level is
relative to the number of variables $n$, that is, clauses are relatively dense
when $k\geq7\sqrt{n}$. Although theoretically there could be worst-cases with
exponential complexity, the probability of those cases to happen in random
$k$-SAT with relatively dense clauses is practically zero. The algorithm has
been empirically tested, and that polynomial time complexity also holds for
$k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). That density
could, for example, be as low as 0.049 when working with $n=20000$ variables and
$k=989$ literals. In addition, two more complementary algorithms are presented
that provide the solutions to $k$-SAT instances and valuable
information about the number of solutions for each literal. Although this algorithm
does not solve the NP=P problem (it is not a polynomial algorithm for 3-SAT),
it broadens the knowledge about that subject, because $k$-SAT with $k>3$ and
dense clauses is not harder than 3-SAT. Moreover, the Python implementation of
the algorithms, and all the input datasets and obtained results in the
experiments are made available.
| [
{
"created": "Wed, 17 Jan 2024 14:23:55 GMT",
"version": "v1"
}
] | 2024-01-18 | [
[
"Sarriguren",
"Alfredo Goñi",
""
]
] | SARRIGUREN, a new complete algorithm for SAT based on counting clauses (which is valid also for Unique-SAT and #SAT) is described, analyzed and tested. Although existing complete algorithms for SAT perform slower with clauses with many literals, that is an advantage for SARRIGUREN, because the more literals are in the clauses the bigger is the probability of overlapping among clauses, a property that makes the clause counting process more efficient. Actually, it provides a $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances of $n$ variables and $m$ relatively dense clauses, where that density level is relative to the number of variables $n$, that is, clauses are relatively dense when $k\geq7\sqrt{n}$. Although theoretically there could be worst-cases with exponential complexity, the probability of those cases to happen in random $k$-SAT with relatively dense clauses is practically zero. The algorithm has been empirically tested, and that polynomial time complexity also holds for $k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). That density could, for example, be as low as 0.049 when working with $n=20000$ variables and $k=989$ literals. In addition, two more complementary algorithms are presented that provide the solutions to $k$-SAT instances and valuable information about the number of solutions for each literal. Although this algorithm does not solve the NP=P problem (it is not a polynomial algorithm for 3-SAT), it broadens the knowledge about that subject, because $k$-SAT with $k>3$ and dense clauses is not harder than 3-SAT. Moreover, the Python implementation of the algorithms, and all the input datasets and obtained results in the experiments are made available. |
2002.09107 | Iretiayo Akinola | Iretiayo Akinola, Jacob Varley and Dmitry Kalashnikov | Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras | Accepted at International Conference on Robotics and Automation (ICRA
2020) | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present an effective multi-view approach to closed-loop
end-to-end learning of precise manipulation tasks that are 3D in nature. Our
method learns to accomplish these tasks using multiple statically placed but
uncalibrated RGB camera views without building an explicit 3D representation
such as a pointcloud or voxel grid. This multi-camera approach achieves
superior task performance on difficult stacking and insertion tasks compared to
single-view baselines. Single-view robotic agents struggle with occlusion and
challenges in estimating relative poses between points of interest. While full
3D scene representations (voxels or pointclouds) are obtainable from registered
output of multiple depth sensors, several challenges complicate operating off
such explicit 3D representations. These challenges include imperfect camera
calibration, poor depth maps due to object properties such as reflective
surfaces, and slower inference speeds over 3D representations compared to 2D
images. Our use of static but uncalibrated cameras does not require
camera-robot or camera-camera calibration, making the proposed approach easy to
set up, and our use of \textit{sensor dropout} during training makes it resilient
to the loss of camera-views after deployment.
| [
{
"created": "Fri, 21 Feb 2020 03:28:42 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 18:48:24 GMT",
"version": "v2"
}
] | 2021-04-02 | [
[
"Akinola",
"Iretiayo",
""
],
[
"Varley",
"Jacob",
""
],
[
"Kalashnikov",
"Dmitry",
""
]
] | In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature. Our method learns to accomplish these tasks using multiple statically placed but uncalibrated RGB camera views without building an explicit 3D representation such as a pointcloud or voxel grid. This multi-camera approach achieves superior task performance on difficult stacking and insertion tasks compared to single-view baselines. Single-view robotic agents struggle with occlusion and challenges in estimating relative poses between points of interest. While full 3D scene representations (voxels or pointclouds) are obtainable from registered output of multiple depth sensors, several challenges complicate operating off such explicit 3D representations. These challenges include imperfect camera calibration, poor depth maps due to object properties such as reflective surfaces, and slower inference speeds over 3D representations compared to 2D images. Our use of static but uncalibrated cameras does not require camera-robot or camera-camera calibration, making the proposed approach easy to set up, and our use of \textit{sensor dropout} during training makes it resilient to the loss of camera-views after deployment. |
2011.11545 | Xuhong Wang | Xuhong Wang, Ding Lyu, Mengjian Li, Yang Xia, Qi Yang, Xinwen Wang,
Xinguang Wang, Ping Cui, Yupu Yang, Bowen Sun, Zhenyu Guo | APAN: Asynchronous Propagation Attention Network for Real-time Temporal
Graph Embedding | In Proceedings of the 2021 International Conference on Management of
Data (SIGMOD/PODS '21) | null | 10.1145/3448016.3457564 | null | cs.AI cs.DB cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Limited by the time complexity of querying k-hop neighbors in a graph
database, most graph algorithms cannot be deployed online to perform
millisecond-level inference. This problem dramatically limits the potential of
applying graph algorithms in certain areas, such as financial fraud detection.
Therefore, we propose Asynchronous Propagation Attention Network, an
asynchronous continuous time dynamic graph algorithm for real-time temporal
graph embedding. Traditional graph models usually execute two serial
operations: first graph computation and then model inference. We decouple the
model inference and graph computation steps so that heavy graph query operations
do not slow down model inference. Extensive experiments demonstrate
that the proposed method achieves competitive performance while improving
inference speed by 8.7 times.
| [
{
"created": "Mon, 23 Nov 2020 16:58:50 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Nov 2020 03:12:47 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Dec 2020 02:35:09 GMT",
"version": "v3"
},
{
"created": "Fri, 26 Mar 2021 05:42:05 GMT",
"version": "v4"
}
] | 2021-07-06 | [
[
"Wang",
"Xuhong",
""
],
[
"Lyu",
"Ding",
""
],
[
"Li",
"Mengjian",
""
],
[
"Xia",
"Yang",
""
],
[
"Yang",
"Qi",
""
],
[
"Wang",
"Xinwen",
""
],
[
"Wang",
"Xinguang",
""
],
[
"Cui",
"Ping",
""
],
[
"Yang",
"Yupu",
""
],
[
"Sun",
"Bowen",
""
],
[
"Guo",
"Zhenyu",
""
]
] | Limited by the time complexity of querying k-hop neighbors in a graph database, most graph algorithms cannot be deployed online to perform millisecond-level inference. This problem dramatically limits the potential of applying graph algorithms in certain areas, such as financial fraud detection. Therefore, we propose Asynchronous Propagation Attention Network, an asynchronous continuous time dynamic graph algorithm for real-time temporal graph embedding. Traditional graph models usually execute two serial operations: first graph computation and then model inference. We decouple the model inference and graph computation steps so that heavy graph query operations do not slow down model inference. Extensive experiments demonstrate that the proposed method achieves competitive performance while improving inference speed by 8.7 times. |
1908.05905 | Omar Sami Oubbati | Omar Sami Oubbati and Noureddine Chaib and Abderrahmane Lakas and
Salim Bitam | On-Demand Routing for Urban VANETs using Cooperating UAVs | 6 pages, 7 figures, conference | 2018 International Conference on Smart Communications in Network
Technologies (SaCoNeT) | 10.1109/SaCoNeT.2018.8585453 | null | cs.NI cs.IT cs.SI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicular ad hoc networks (VANETs) are characterized by frequent routing path
failures due to the high mobility caused by sudden changes in the direction
of vehicles. The routing paths between two different vehicles should be
established with this challenge in mind. Stability and connectedness are
mandatory conditions to ensure robust and reliable data delivery. The idea
behind this work is to exploit a new reactive routing technique to provide
regulated and well-connected routing paths. Unmanned Aerial Vehicles (UAVs),
or what are referred to as drones, can both be involved in the discovery process
and be full members of these discovered paths in order to avoid possible
disconnections on the ground when the network is sparsely connected. The
different tests of this technique are performed using the NS-2 simulator, and
the outcomes are compared with those of related on-demand routing protocols
dedicated to VANETs. The results show a reduced
end-to-end delay and a high delivery ratio, proving that this
heterogeneous communication between vehicles and UAVs is able to extend
network connectivity.
| [
{
"created": "Fri, 16 Aug 2019 09:21:05 GMT",
"version": "v1"
}
] | 2019-08-19 | [
[
"Oubbati",
"Omar Sami",
""
],
[
"Chaib",
"Noureddine",
""
],
[
"Lakas",
"Abderrahmane",
""
],
[
"Bitam",
"Salim",
""
]
] | Vehicular ad hoc networks (VANETs) are characterized by frequent routing path failures due to the high mobility caused by sudden changes in the direction of vehicles. The routing paths between two different vehicles should be established with this challenge in mind. Stability and connectedness are mandatory conditions to ensure robust and reliable data delivery. The idea behind this work is to exploit a new reactive routing technique to provide regulated and well-connected routing paths. Unmanned Aerial Vehicles (UAVs), or what are referred to as drones, can both be involved in the discovery process and be full members of these discovered paths in order to avoid possible disconnections on the ground when the network is sparsely connected. The different tests of this technique are performed using the NS-2 simulator, and the outcomes are compared with those of related on-demand routing protocols dedicated to VANETs. The results show a reduced end-to-end delay and a high delivery ratio, proving that this heterogeneous communication between vehicles and UAVs is able to extend network connectivity. |