id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2206.05454 | Arezou Rezazadeh | Arezou Rezazadeh | A General framework for PAC-Bayes Bounds for Meta-Learning | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | Meta-learning automatically infers an inductive bias, which includes the
hyperparameters of the base-learning algorithm, by observing data from a finite
number of related tasks. This paper studies PAC-Bayes bounds on the
meta-generalization gap. The meta-generalization gap comprises two sources of
generalization error: the environment-level and task-level gaps, resulting from
the observation of a finite number of tasks and of data samples per task,
respectively. By upper bounding arbitrary convex functions, which link the
expected and empirical losses at the environment and per-task levels, we obtain
new PAC-Bayes bounds. Using these bounds, we develop new PAC-Bayes
meta-learning algorithms. Numerical examples demonstrate the merits of the
proposed bounds and algorithms in comparison to prior PAC-Bayes bounds for
meta-learning.
| [
{
"created": "Sat, 11 Jun 2022 07:45:25 GMT",
"version": "v1"
}
] | 2022-06-14 | [
[
"Rezazadeh",
"Arezou",
""
]
] | Meta-learning automatically infers an inductive bias, which includes the hyperparameters of the base-learning algorithm, by observing data from a finite number of related tasks. This paper studies PAC-Bayes bounds on the meta-generalization gap. The meta-generalization gap comprises two sources of generalization error: the environment-level and task-level gaps, resulting from the observation of a finite number of tasks and of data samples per task, respectively. By upper bounding arbitrary convex functions, which link the expected and empirical losses at the environment and per-task levels, we obtain new PAC-Bayes bounds. Using these bounds, we develop new PAC-Bayes meta-learning algorithms. Numerical examples demonstrate the merits of the proposed bounds and algorithms in comparison to prior PAC-Bayes bounds for meta-learning. |
1509.06035 | Karim Afdel | H.Ouahi and K.Afdel and M.Machkour | Image Retrieval Based on LBP Pyramidal Multiresolution using Reversible
Watermarking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the medical field, images are increasingly used to facilitate the diagnosis
of diseases. These images are stored in multimedia databases accompanied by
doctors' prescriptions and other information related to patients. Searching for
medical images has become an essential tool for clinical applications, bringing
effective aid to diagnosis. Content-Based Image Retrieval (CBIR) is one of the
possible solutions to effectively manage these databases. Our contribution is
to define a relevant descriptor to retrieve images based on a multiresolution
analysis of texture using the Local Binary Pattern (LBP). This descriptor, once
calculated, together with information relating to the patient, is embedded in
the image using the technique of reversible watermarking. Thereby, the image,
the descriptor of its contents, the BFILE locator, and the patient-related
information become a single entity, so that even the administrator cannot
access the patient's private data.
| [
{
"created": "Sun, 20 Sep 2015 18:14:04 GMT",
"version": "v1"
}
] | 2015-09-22 | [
[
"Ouahi",
"H.",
""
],
[
"Afdel",
"K.",
""
],
[
"Machkour",
"M.",
""
]
] | In the medical field, images are increasingly used to facilitate the diagnosis of diseases. These images are stored in multimedia databases accompanied by doctors' prescriptions and other information related to patients. Searching for medical images has become an essential tool for clinical applications, bringing effective aid to diagnosis. Content-Based Image Retrieval (CBIR) is one of the possible solutions to effectively manage these databases. Our contribution is to define a relevant descriptor to retrieve images based on a multiresolution analysis of texture using the Local Binary Pattern (LBP). This descriptor, once calculated, together with information relating to the patient, is embedded in the image using the technique of reversible watermarking. Thereby, the image, the descriptor of its contents, the BFILE locator, and the patient-related information become a single entity, so that even the administrator cannot access the patient's private data. |
1510.09193 | Leslie Ann Goldberg | Ivona Bezakova, Andreas Galanis, Leslie Ann Goldberg, Heng Guo, Daniel
Stefankovic | Approximation via Correlation Decay when Strong Spatial Mixing Fails | To appear in SICOMP | null | null | null | cs.CC cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approximate counting via correlation decay is the core algorithmic technique
used in the sharp delineation of the computational phase transition that arises
in the approximation of the partition function of anti-ferromagnetic two-spin
models.
Previous analyses of correlation-decay algorithms implicitly depended on the
occurrence of strong spatial mixing (SSM). This means that one uses worst-case
analysis of the recursive procedure that creates the sub-instances. We develop
a new analysis method that is more refined than the worst-case analysis. We
take the shape of instances in the computation tree into consideration and
amortise against certain "bad" instances that are created as the recursion
proceeds. This enables us to show correlation decay and to obtain an FPTAS even
when SSM fails.
We apply our technique to the problem of approximately counting independent
sets in hypergraphs with degree upper-bound Delta and with a lower bound k on
the arity of hyperedges. Liu and Lin gave an FPTAS for k>=2 and Delta<=5 (lack
of SSM was the obstacle preventing this algorithm from being generalised to
Delta=6). Our technique gives a tight result for Delta=6, showing that there is
an FPTAS for k>=3 and Delta<=6. The best previously-known approximation scheme
for Delta=6 is the Markov-chain simulation based FPRAS of Bordewich, Dyer and
Karpinski, which only works for k>=8.
Our technique also applies for larger values of k, giving an FPTAS for
k>=Delta. This bound is not substantially stronger than existing randomised
results in the literature. Nevertheless, it gives the first deterministic
approximation scheme in this regime. Moreover, unlike existing results, it
leads to an FPTAS for counting dominating sets in regular graphs with
sufficiently large degree.
We further demonstrate that approximately counting independent sets in
hypergraphs is NP-hard even within the uniqueness regime.
| [
{
"created": "Fri, 30 Oct 2015 18:38:48 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Nov 2015 14:45:46 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Feb 2016 20:52:34 GMT",
"version": "v3"
},
{
"created": "Fri, 8 Jul 2016 13:02:30 GMT",
"version": "v4"
},
{
"created": "Fri, 1 Feb 2019 14:47:46 GMT",
"version": "v5"
}
] | 2019-02-04 | [
[
"Bezakova",
"Ivona",
""
],
[
"Galanis",
"Andreas",
""
],
[
"Goldberg",
"Leslie Ann",
""
],
[
"Guo",
"Heng",
""
],
[
"Stefankovic",
"Daniel",
""
]
] | Approximate counting via correlation decay is the core algorithmic technique used in the sharp delineation of the computational phase transition that arises in the approximation of the partition function of anti-ferromagnetic two-spin models. Previous analyses of correlation-decay algorithms implicitly depended on the occurrence of strong spatial mixing (SSM). This means that one uses worst-case analysis of the recursive procedure that creates the sub-instances. We develop a new analysis method that is more refined than the worst-case analysis. We take the shape of instances in the computation tree into consideration and amortise against certain "bad" instances that are created as the recursion proceeds. This enables us to show correlation decay and to obtain an FPTAS even when SSM fails. We apply our technique to the problem of approximately counting independent sets in hypergraphs with degree upper-bound Delta and with a lower bound k on the arity of hyperedges. Liu and Lin gave an FPTAS for k>=2 and Delta<=5 (lack of SSM was the obstacle preventing this algorithm from being generalised to Delta=6). Our technique gives a tight result for Delta=6, showing that there is an FPTAS for k>=3 and Delta<=6. The best previously-known approximation scheme for Delta=6 is the Markov-chain simulation based FPRAS of Bordewich, Dyer and Karpinski, which only works for k>=8. Our technique also applies for larger values of k, giving an FPTAS for k>=Delta. This bound is not substantially stronger than existing randomised results in the literature. Nevertheless, it gives the first deterministic approximation scheme in this regime. Moreover, unlike existing results, it leads to an FPTAS for counting dominating sets in regular graphs with sufficiently large degree. We further demonstrate that approximately counting independent sets in hypergraphs is NP-hard even within the uniqueness regime. |
1906.00784 | Paul Wild | Paul Wild and Lutz Schr\"oder and Dirk Pattinson and Barbara K\"onig | A Modal Characterization Theorem for a Probabilistic Fuzzy Description
Logic | arXiv admin note: text overlap with arXiv:1810.04722 | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fuzzy modality `probably` is interpreted over probabilistic type spaces
by taking expected truth values. The arising probabilistic fuzzy description
logic is invariant under probabilistic bisimilarity; more informatively, it is
non-expansive wrt. a suitable notion of behavioural distance. In the present
paper, we provide a characterization of the expressive power of this logic
based on this observation: We prove a probabilistic analogue of the classical
van Benthem theorem, which states that modal logic is precisely the
bisimulation-invariant fragment of first-order logic. Specifically, we show
that every formula in probabilistic fuzzy first-order logic that is
non-expansive wrt. behavioural distance can be approximated by concepts of
bounded rank in probabilistic fuzzy description logic.
For a modal logic perspective on the same result, see arXiv:1810.04722.
| [
{
"created": "Fri, 31 May 2019 04:37:45 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jun 2019 12:57:53 GMT",
"version": "v2"
}
] | 2019-06-05 | [
[
"Wild",
"Paul",
""
],
[
"Schröder",
"Lutz",
""
],
[
"Pattinson",
"Dirk",
""
],
[
"König",
"Barbara",
""
]
] | The fuzzy modality `probably` is interpreted over probabilistic type spaces by taking expected truth values. The arising probabilistic fuzzy description logic is invariant under probabilistic bisimilarity; more informatively, it is non-expansive wrt. a suitable notion of behavioural distance. In the present paper, we provide a characterization of the expressive power of this logic based on this observation: We prove a probabilistic analogue of the classical van Benthem theorem, which states that modal logic is precisely the bisimulation-invariant fragment of first-order logic. Specifically, we show that every formula in probabilistic fuzzy first-order logic that is non-expansive wrt. behavioural distance can be approximated by concepts of bounded rank in probabilistic fuzzy description logic. For a modal logic perspective on the same result, see arXiv:1810.04722. |
1701.07666 | Bogdan Groza | Bogdan Groza | Traffic models with adversarial vehicle behaviour | 14 pages | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the impact of adversarial actions on vehicles in traffic. Current
advances in assisted/autonomous driving technologies are expected to reduce
the number of casualties, yet this goal is pursued despite the recently proven
insecurity of in-vehicle communication buses and components. Fortunately, to
some extent, while compromised cars have become a reality, the numerous attacks
reported so far on in-vehicle electronics are exclusively concerned with
impairing a single target. In this work we place adversarial behaviour in a
more complex scenario, where driving decisions deluded by corrupted electronics
can affect more than one vehicle. In particular, we focus our attention on
chain collisions involving multiple vehicles that can be amplified by simple
adversarial interventions, e.g., delaying taillights or falsifying speedometer
readings. We provide metrics for assessing adversarial impact and consider
safety margins against adversarial actions. Moreover, we discuss intelligent
adversarial behaviour by which the creation of rogue platoons is possible and
speed manipulations become stealthy to human drivers. We emphasize that our
work does not merely show that imprudent speeds and headways lead to chain
collisions, but points out that an adversary may favour such scenarios
(possibly keeping his actions stealthy to human drivers) and further calls for
quantifying the impact of adversarial activity and for asking whether existing
traffic regulations are prepared for such situations.
| [
{
"created": "Thu, 26 Jan 2017 11:57:48 GMT",
"version": "v1"
}
] | 2017-01-27 | [
[
"Groza",
"Bogdan",
""
]
] | We examine the impact of adversarial actions on vehicles in traffic. Current advances in assisted/autonomous driving technologies are expected to reduce the number of casualties, yet this goal is pursued despite the recently proven insecurity of in-vehicle communication buses and components. Fortunately, to some extent, while compromised cars have become a reality, the numerous attacks reported so far on in-vehicle electronics are exclusively concerned with impairing a single target. In this work we place adversarial behaviour in a more complex scenario, where driving decisions deluded by corrupted electronics can affect more than one vehicle. In particular, we focus our attention on chain collisions involving multiple vehicles that can be amplified by simple adversarial interventions, e.g., delaying taillights or falsifying speedometer readings. We provide metrics for assessing adversarial impact and consider safety margins against adversarial actions. Moreover, we discuss intelligent adversarial behaviour by which the creation of rogue platoons is possible and speed manipulations become stealthy to human drivers. We emphasize that our work does not merely show that imprudent speeds and headways lead to chain collisions, but points out that an adversary may favour such scenarios (possibly keeping his actions stealthy to human drivers) and further calls for quantifying the impact of adversarial activity and for asking whether existing traffic regulations are prepared for such situations. |
2108.00171 | Min Ren | Min Ren and Lingxiao He and Xingyu Liao and Wu Liu and Yunlong Wang
and Tieniu Tan | Learning Instance-level Spatial-Temporal Patterns for Person
Re-identification | Accepted by ICCV 2021 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification (Re-ID) aims to match pedestrians across disjoint
cameras. Most Re-ID methods formulate it as visual representation learning and
image search, and accuracy is consequently affected greatly by the search
space. Spatial-temporal information has been proven effective in filtering out
irrelevant negative samples and significantly improving Re-ID accuracy.
However, existing spatial-temporal person Re-ID methods are still coarse and do
not exploit spatial-temporal information sufficiently. In this paper, we
propose a novel Instance-level and Spatial-Temporal Disentangled Re-ID method
(InSTD) to improve Re-ID accuracy. In our proposed framework, personalized
information such as moving direction is explicitly considered to further
narrow down the search space. Besides, the spatial-temporal transferring
probability is disentangled from the joint distribution into marginal
distributions, so that outliers can also be well modeled. Extensive
experimental analyses are presented, which demonstrate the superiority of our
method and provide more insights. The proposed method achieves an mAP of 90.8%
on Market-1501 and 89.1% on DukeMTMC-reID, improving on the baselines of 82.2%
and 72.7%, respectively. Besides, in order to provide a better benchmark for
person re-identification, we release a cleaned data list of DukeMTMC-reID with
this paper: https://github.com/RenMin1991/cleaned-DukeMTMC-reID/
| [
{
"created": "Sat, 31 Jul 2021 07:44:47 GMT",
"version": "v1"
}
] | 2021-08-03 | [
[
"Ren",
"Min",
""
],
[
"He",
"Lingxiao",
""
],
[
"Liao",
"Xingyu",
""
],
[
"Liu",
"Wu",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Tan",
"Tieniu",
""
]
] | Person re-identification (Re-ID) aims to match pedestrians across disjoint cameras. Most Re-ID methods formulate it as visual representation learning and image search, and accuracy is consequently affected greatly by the search space. Spatial-temporal information has been proven effective in filtering out irrelevant negative samples and significantly improving Re-ID accuracy. However, existing spatial-temporal person Re-ID methods are still coarse and do not exploit spatial-temporal information sufficiently. In this paper, we propose a novel Instance-level and Spatial-Temporal Disentangled Re-ID method (InSTD) to improve Re-ID accuracy. In our proposed framework, personalized information such as moving direction is explicitly considered to further narrow down the search space. Besides, the spatial-temporal transferring probability is disentangled from the joint distribution into marginal distributions, so that outliers can also be well modeled. Extensive experimental analyses are presented, which demonstrate the superiority of our method and provide more insights. The proposed method achieves an mAP of 90.8% on Market-1501 and 89.1% on DukeMTMC-reID, improving on the baselines of 82.2% and 72.7%, respectively. Besides, in order to provide a better benchmark for person re-identification, we release a cleaned data list of DukeMTMC-reID with this paper: https://github.com/RenMin1991/cleaned-DukeMTMC-reID/ |
1002.3238 | Benjamin Piwowarski | Benjamin Piwowarski and Ingo Frommholz and Mounia Lalmas and Keith van
Rijsbergen | Exploring a Multidimensional Representation of Documents and Queries
(extended version) | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Information Retrieval (IR), whether implicitly or explicitly, queries and
documents are often represented as vectors. However, it may be more beneficial
to consider documents and/or queries as multidimensional objects. Our belief is
this would allow building "truly" interactive IR systems, i.e., where
interaction is fully incorporated in the IR framework.
The probabilistic formalism of quantum physics represents events and
densities as multidimensional objects. This paper presents our first step
towards building an interactive IR framework upon this formalism, by stating
how the first interaction of the retrieval process, when the user types a
query, can be formalised. Our framework depends on a number of parameters
affecting the final document ranking. In this paper we experimentally
investigate the effect of these parameters, showing that the proposed
representation of documents and queries as multidimensional objects can compete
with standard approaches, with the additional prospect to be applied to
interactive retrieval.
| [
{
"created": "Wed, 17 Feb 2010 10:25:57 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Feb 2010 13:45:17 GMT",
"version": "v2"
}
] | 2010-02-18 | [
[
"Piwowarski",
"Benjamin",
""
],
[
"Frommholz",
"Ingo",
""
],
[
"Lalmas",
"Mounia",
""
],
[
"van Rijsbergen",
"Keith",
""
]
] | In Information Retrieval (IR), whether implicitly or explicitly, queries and documents are often represented as vectors. However, it may be more beneficial to consider documents and/or queries as multidimensional objects. Our belief is this would allow building "truly" interactive IR systems, i.e., where interaction is fully incorporated in the IR framework. The probabilistic formalism of quantum physics represents events and densities as multidimensional objects. This paper presents our first step towards building an interactive IR framework upon this formalism, by stating how the first interaction of the retrieval process, when the user types a query, can be formalised. Our framework depends on a number of parameters affecting the final document ranking. In this paper we experimentally investigate the effect of these parameters, showing that the proposed representation of documents and queries as multidimensional objects can compete with standard approaches, with the additional prospect to be applied to interactive retrieval. |
2102.01241 | Bernard Mans | Dalia Popescu, Philippe Jacquet, Bernard Mans and Bart{\l}omiej
B{\l}aszczyszyn | Characterizing the Energy Trade-Offs of End-to-End Vehicular
Communications using an Hyperfractal Urban Modelling | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | We characterize trade-offs between the end-to-end communication delay and the
energy in urban vehicular communications with infrastructure assistance. Our
study exploits the self-similarity of the location of communication entities in
cities by modeling them with an innovative model called "hyperfractal". We show
that the hyperfractal model can be extended to incorporate road-side
infrastructure and provide stochastic geometry tools to allow a rigorous
analysis. We compute theoretical bounds for the end-to-end communication hop
count considering two different energy-minimizing goals: either total
accumulated energy or maximum energy per node. We prove that the hop count for
an end-to-end transmission is bounded by $O(n^{1-\alpha/(d_F-1)})$ where
$\alpha<1$ and $d_F>2$ is the fractal dimension of the mobile nodes process.
This proves that for both constraints the energy decreases as we allow choosing
routing paths of higher length. The asymptotic limit of the energy becomes
significantly small when the number of nodes becomes asymptotically large. A
lower bound on the network throughput capacity with constraints on path energy
is also given. We show that our model fits real deployments where open data
sets are available. The results are confirmed through simulations using
different fractal dimensions in a Matlab simulator.
| [
{
"created": "Tue, 2 Feb 2021 00:53:17 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Popescu",
"Dalia",
""
],
[
"Jacquet",
"Philippe",
""
],
[
"Mans",
"Bernard",
""
],
[
"Błaszczyszyn",
"Bartłomiej",
""
]
] | We characterize trade-offs between the end-to-end communication delay and the energy in urban vehicular communications with infrastructure assistance. Our study exploits the self-similarity of the location of communication entities in cities by modeling them with an innovative model called "hyperfractal". We show that the hyperfractal model can be extended to incorporate road-side infrastructure and provide stochastic geometry tools to allow a rigorous analysis. We compute theoretical bounds for the end-to-end communication hop count considering two different energy-minimizing goals: either total accumulated energy or maximum energy per node. We prove that the hop count for an end-to-end transmission is bounded by $O(n^{1-\alpha/(d_F-1)})$ where $\alpha<1$ and $d_F>2$ is the fractal dimension of the mobile nodes process. This proves that for both constraints the energy decreases as we allow choosing routing paths of higher length. The asymptotic limit of the energy becomes significantly small when the number of nodes becomes asymptotically large. A lower bound on the network throughput capacity with constraints on path energy is also given. We show that our model fits real deployments where open data sets are available. The results are confirmed through simulations using different fractal dimensions in a Matlab simulator. |
2307.15678 | Charles Assaad | Ali A\"it-Bachir, Charles K. Assaad, Christophe de Bignicourt, Emilie
Devijver, Simon Ferreira, Eric Gaussier, Hosein Mohanna, Lei Zan | Case Studies of Causal Discovery from IT Monitoring Time Series | Accepted to the UAI 2023 Workshop on The History and Development of
Search Methods for Causal Structure | null | null | null | cs.LG cs.AI stat.AP stat.ME | http://creativecommons.org/licenses/by/4.0/ | Information technology (IT) systems are vital for modern businesses, handling
data storage, communication, and process automation. Monitoring these systems
is crucial for their proper functioning and efficiency, as it allows collecting
extensive observational time series data for analysis. The interest in causal
discovery is growing in IT monitoring systems as knowing causal relations
between different components of the IT system helps in reducing downtime,
enhancing system performance and identifying root causes of anomalies and
incidents. It also allows proactive prediction of future issues through
historical data analysis. Despite its potential benefits, applying causal
discovery algorithms on IT monitoring data poses challenges, due to the
complexity of the data. For instance, IT monitoring data often contains
misaligned time series, sleeping time series, timestamp errors and missing
values. This paper presents case studies on applying causal discovery
algorithms to different IT monitoring datasets, highlighting benefits and
ongoing challenges.
| [
{
"created": "Fri, 28 Jul 2023 17:13:00 GMT",
"version": "v1"
}
] | 2023-07-31 | [
[
"Aït-Bachir",
"Ali",
""
],
[
"Assaad",
"Charles K.",
""
],
[
"de Bignicourt",
"Christophe",
""
],
[
"Devijver",
"Emilie",
""
],
[
"Ferreira",
"Simon",
""
],
[
"Gaussier",
"Eric",
""
],
[
"Mohanna",
"Hosein",
""
],
[
"Zan",
"Lei",
""
]
] | Information technology (IT) systems are vital for modern businesses, handling data storage, communication, and process automation. Monitoring these systems is crucial for their proper functioning and efficiency, as it allows collecting extensive observational time series data for analysis. The interest in causal discovery is growing in IT monitoring systems as knowing causal relations between different components of the IT system helps in reducing downtime, enhancing system performance and identifying root causes of anomalies and incidents. It also allows proactive prediction of future issues through historical data analysis. Despite its potential benefits, applying causal discovery algorithms on IT monitoring data poses challenges, due to the complexity of the data. For instance, IT monitoring data often contains misaligned time series, sleeping time series, timestamp errors and missing values. This paper presents case studies on applying causal discovery algorithms to different IT monitoring datasets, highlighting benefits and ongoing challenges. |
2303.08396 | Suyash Mahar | Suyash Mahar (UC San Diego), Hao Wang (NVIDIA), Wei Shu (Tenstorrent),
Abhishek Dhanotia (Meta Inc.) | Workload Behavior Driven Memory Subsystem Design for Hyperscale | null | null | null | null | cs.DC cs.AR | http://creativecommons.org/licenses/by/4.0/ | Hyperscalars run services across a large fleet of servers, serving billions
of users worldwide. These services, however, behave differently than commonly
available benchmark suites, resulting in server architectures that are not
optimized for cloud workloads. With datacenters becoming a primary server
processor market, optimizing server processors for cloud workloads by better
understanding their behavior has become crucial. To address this, in this
paper, we present MemProf, a memory profiler that profiles the three major
reasons for stalls in cloud workloads: code-fetch, memory bandwidth, and memory
latency. We use MemProf to understand the behavior of cloud workloads and
propose and evaluate micro-architectural and memory system design improvements
that help cloud workloads' performance.
MemProf's code analysis shows that cloud workloads execute the same code
across CPU cores. Using this, we propose shared micro-architectural
structures--a shared L2 I-TLB and a shared L2 cache. Next, to help with memory
bandwidth stalls, using workloads' memory bandwidth distribution, we find that
only a few pages contribute to most of the system bandwidth. We use this
finding to evaluate a new high-bandwidth, small-capacity memory tier and show
that it performs 1.46x better than the current baseline configuration. Finally,
we look into ways to improve memory latency for cloud workloads. Profiling
using MemProf reveals that L2 hardware prefetchers, a common solution to reduce
memory latency, have very low coverage and consume a significant amount of
memory bandwidth. To help improve hardware prefetcher performance, we built a
memory tracing tool to collect and validate production memory access traces.
| [
{
"created": "Wed, 15 Mar 2023 06:55:29 GMT",
"version": "v1"
},
{
"created": "Tue, 2 May 2023 13:47:23 GMT",
"version": "v2"
}
] | 2023-05-03 | [
[
"Mahar",
"Suyash",
"",
"UC San Diego"
],
[
"Wang",
"Hao",
"",
"NVIDIA"
],
[
"Shu",
"Wei",
"",
"Tenstorrent"
],
[
"Dhanotia",
"Abhishek",
"",
"Meta Inc."
]
] | Hyperscalers run services across a large fleet of servers, serving billions of users worldwide. These services, however, behave differently than commonly available benchmark suites, resulting in server architectures that are not optimized for cloud workloads. With datacenters becoming a primary server processor market, optimizing server processors for cloud workloads by better understanding their behavior has become crucial. To address this, in this paper, we present MemProf, a memory profiler that profiles the three major reasons for stalls in cloud workloads: code-fetch, memory bandwidth, and memory latency. We use MemProf to understand the behavior of cloud workloads and propose and evaluate micro-architectural and memory system design improvements that help cloud workloads' performance. MemProf's code analysis shows that cloud workloads execute the same code across CPU cores. Using this, we propose shared micro-architectural structures--a shared L2 I-TLB and a shared L2 cache. Next, to help with memory bandwidth stalls, using workloads' memory bandwidth distribution, we find that only a few pages contribute to most of the system bandwidth. We use this finding to evaluate a new high-bandwidth, small-capacity memory tier and show that it performs 1.46x better than the current baseline configuration. Finally, we look into ways to improve memory latency for cloud workloads. Profiling using MemProf reveals that L2 hardware prefetchers, a common solution to reduce memory latency, have very low coverage and consume a significant amount of memory bandwidth. To help improve hardware prefetcher performance, we built a memory tracing tool to collect and validate production memory access traces. |
2203.14572 | Xiaotong Cheng | Xiaotong Cheng and Setareh Maghsudi | Distributed Task Management in Fog Computing: A Socially Concave Bandit
Game | null | null | 10.1109/TGCN.2023.3276415 | null | cs.MA cs.GT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fog computing leverages the task offloading capabilities at the network's
edge to improve efficiency and enable swift responses to application demands.
However, the design of task allocation strategies in a fog computing network is
still challenging because of the heterogeneity of fog nodes and uncertainties
in system dynamics. We formulate the distributed task allocation problem as a
social-concave game with bandit feedback and show that the game has a unique
Nash equilibrium, which is implementable using no-regret learning strategies
(regret with sublinear growth). We then develop two no-regret online
decision-making strategies. One strategy, namely bandit gradient ascent with
momentum, is an online convex optimization algorithm with bandit feedback. The
other strategy, Lipschitz bandit with initialization, is an EXP3 multi-armed
bandit algorithm. We establish regret bounds for both strategies and analyze
their convergence characteristics. Moreover, we compare the proposed strategies
with an allocation strategy named learning with linear rewards. Theoretical
and numerical analyses show the superior performance of the proposed
strategies for efficient task allocation compared to state-of-the-art
methods.
| [
{
"created": "Mon, 28 Mar 2022 08:26:14 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jun 2023 15:15:14 GMT",
"version": "v2"
}
] | 2023-06-12 | [
[
"Cheng",
"Xiaotong",
""
],
[
"Maghsudi",
"Setareh",
""
]
] | Fog computing leverages the task offloading capabilities at the network's edge to improve efficiency and enable swift responses to application demands. However, the design of task allocation strategies in a fog computing network is still challenging because of the heterogeneity of fog nodes and uncertainties in system dynamics. We formulate the distributed task allocation problem as a social-concave game with bandit feedback and show that the game has a unique Nash equilibrium, which is implementable using no-regret learning strategies (regret with sublinear growth). We then develop two no-regret online decision-making strategies. One strategy, namely bandit gradient ascent with momentum, is an online convex optimization algorithm with bandit feedback. The other strategy, Lipschitz bandit with initialization, is an EXP3 multi-armed bandit algorithm. We establish regret bounds for both strategies and analyze their convergence characteristics. Moreover, we compare the proposed strategies with an allocation strategy named learning with linear rewards. Theoretical- and numerical analysis shows the superior performance of the proposed strategies for efficient task allocation compared to the state-of-the-art methods. |
1504.01207 | Usman Khan | Sam Safavi and Usman A. Khan | Localization in mobile networks via virtual convex hulls | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a \textit{distributed} algorithm to localize an
arbitrary number of agents moving in a bounded region of interest. We assume
that the network contains \textit{at least one} agent with known location
(hereinafter referred to as an anchor), and each agent measures a noisy version
of its motion and the distances to the nearby agents. We provide
a~\emph{geometric approach}, which allows each agent to: (i) continually update
the distances to the locations where it has exchanged information with the
other nodes in the past; and (ii) measure the distance between a neighbor and
any such locations. Based on this approach, we provide a \emph{linear update}
to find the locations of an arbitrary number of mobile agents when they follow
some convexity in their deployment and motion.
Since the agents are mobile, they may not be able to find nearby nodes
(agents and/or anchors) to implement a distributed algorithm. To address this
issue, we introduce the notion of a \emph{virtual convex hull} with the help of
the aforementioned geometric approach. In particular, each agent keeps track of
a virtual convex hull of other nodes, which may not physically exist, and
updates its location with respect to its neighbors in the virtual hull. We show
that the corresponding localization algorithm, in the absence of noise, can be
abstracted as a Linear Time-Varying (LTV) system, with non-deterministic system
matrices, which asymptotically tracks the true locations of the agents. We
provide simulations to verify the analytical results and evaluate the
performance of the algorithm in the presence of noise on the motion as well as
on the distance measurements.
| [
{
"created": "Mon, 6 Apr 2015 05:15:19 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Jan 2017 05:01:56 GMT",
"version": "v2"
}
] | 2017-01-24 | [
[
"Safavi",
"Sam",
""
],
[
"Khan",
"Usman A.",
""
]
] | In this paper, we develop a \textit{distributed} algorithm to localize an arbitrary number of agents moving in a bounded region of interest. We assume that the network contains \textit{at least one} agent with known location (hereinafter referred to as an anchor), and each agent measures a noisy version of its motion and the distances to the nearby agents. We provide a~\emph{geometric approach}, which allows each agent to: (i) continually update the distances to the locations where it has exchanged information with the other nodes in the past; and (ii) measure the distance between a neighbor and any such locations. Based on this approach, we provide a \emph{linear update} to find the locations of an arbitrary number of mobile agents when they follow some convexity in their deployment and motion. Since the agents are mobile, they may not be able to find nearby nodes (agents and/or anchors) to implement a distributed algorithm. To address this issue, we introduce the notion of a \emph{virtual convex hull} with the help of the aforementioned geometric approach. In particular, each agent keeps track of a virtual convex hull of other nodes, which may not physically exist, and updates its location with respect to its neighbors in the virtual hull. We show that the corresponding localization algorithm, in the absence of noise, can be abstracted as a Linear Time-Varying (LTV) system, with non-deterministic system matrices, which asymptotically tracks the true locations of the agents. We provide simulations to verify the analytical results and evaluate the performance of the algorithm in the presence of noise on the motion as well as on the distance measurements. |
2301.10233 | Andre Holzapfel | Andre Holzapfel | Introducing Political Ecology of Creative-Ai | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This chapter introduces the perspective of political ecology to the
application of artificial intelligence to artistic processes (Creative-Ai).
Hence, the environmental and social impacts of the development and employment
of Creative-Ai are the focus of this text, considered as part of an
economic system that transforms artistic creation into a commodity. I first
analyse specific Creative-Ai cases, and then conduct a speculation that takes
Jacques Attali's writing on the role of music in society as a vantage point,
and investigates the environmental and social consequences of an automatic
composition network controlled by a large music streaming platform. Whereas the
possibilities that emerge from Creative-Ai may be promising from an artistic
perspective, its entanglement with corporate interest raises severe concerns.
These concerns can only be addressed by a wide cross-sectoral alliance between
research and arts that develops a critical perspective on the future directions
of Creative-Ai.
| [
{
"created": "Wed, 21 Dec 2022 15:16:22 GMT",
"version": "v1"
}
] | 2023-01-25 | [
[
"Holzapfel",
"Andre",
""
]
] | This chapter introduces the perspective of political ecology to the application of artificial intelligence to artistic processes (Creative-Ai). Hence, the environmental and social impact of the development and employment of Creative-Ai are the focus of this text, when we consider them as part of an economic system that transforms artistic creation to a commodity. I first analyse specific Creative-Ai cases, and then conduct a speculation that takes Jacques Attali's writing on the role of music in society as a vantage point, and investigates the environmental and social consequences of an automatic composition network controlled by a large music streaming platform. Whereas the possibilities that emerge from Creative-Ai may be promising from an artistic perspective, its entanglement with corporate interest raises severe concerns. These concerns can only be addressed by a wide cross-sectoral alliance between research and arts that develops a critical perspective on the future directions of Creative-Ai. |
1609.06804 | Zhouyuan Huo | Zhouyuan Huo, Bin Gu, Heng Huang | Decoupled Asynchronous Proximal Stochastic Gradient Descent with
Variance Reduction | null | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of big data, optimizing large scale machine learning problems
becomes a challenging task and draws significant attention. Asynchronous
optimization algorithms have emerged as a promising solution. Recently,
decoupled asynchronous proximal stochastic gradient descent (DAP-SGD) was
proposed to minimize a composite function. It is claimed to off-load the
computation bottleneck from the server to the workers by allowing workers to
evaluate the proximal operators, so that the server only needs to perform
element-wise operations. However, it still suffers from a slow convergence
rate because the variance of the stochastic gradient is nonzero. In this
paper, we propose a faster method, decoupled asynchronous proximal stochastic
variance-reduced gradient descent (DAP-SVRG). We prove that our method
achieves linear convergence for strongly convex problems. Large-scale
experiments are also conducted in this paper, and the results corroborate our
theoretical analysis.
| [
{
"created": "Thu, 22 Sep 2016 02:50:09 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Sep 2016 01:54:25 GMT",
"version": "v2"
}
] | 2016-09-30 | [
[
"Huo",
"Zhouyuan",
""
],
[
"Gu",
"Bin",
""
],
[
"Huang",
"Heng",
""
]
] | In the era of big data, optimizing large-scale machine learning problems becomes a challenging task and draws significant attention. Asynchronous optimization algorithms have emerged as a promising solution. Recently, decoupled asynchronous proximal stochastic gradient descent (DAP-SGD) was proposed to minimize a composite function. It is claimed to off-load the computation bottleneck from the server to the workers by allowing workers to evaluate the proximal operators, so that the server only needs to perform element-wise operations. However, it still suffers from a slow convergence rate because the variance of the stochastic gradient is nonzero. In this paper, we propose a faster method, decoupled asynchronous proximal stochastic variance-reduced gradient descent (DAP-SVRG). We prove that our method achieves linear convergence for strongly convex problems. Large-scale experiments are also conducted in this paper, and the results corroborate our theoretical analysis. |
1907.04058 | Peter Fasogbon O. | Peter O. Fasogbon | Depth from Small Motion using Rank-1 Initialization | 8 pages, 6 figures | 14th International Conference on Computer Vision Theory and
Applications, February 2019 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Depth from Small Motion (DfSM) (Ha et al., 2016) is particularly interesting
for commercial handheld devices because it makes it possible to obtain depth
information with minimal user effort and cooperation. Due to speed and memory
issues on these devices, the self-calibration optimization of the method using
Bundle Adjustment (BA) must rely on as few as 10-15 images. Therefore, the
optimization tends to take many iterations to converge, or may not converge at
all in some cases. This work proposes a robust initialization for the bundle
adjustment using the rank-1 factorization method (Tomasi and Kanade, 1992;
Aguiar and Moura, 1999a). We create a constraint matrix that is rank-1 in a
noiseless situation, then use SVD to compute the inverse depth values and the
camera motion. We need only about a quarter of the bundle-adjustment
iterations to converge. We also propose a gridded feature-extraction technique
so that only a small set of important features is tracked across the image
frames. This also ensures a speedup in the full execution time on the mobile
device. For the experiments, we have documented the execution time with the
proposed Rank-1 initialization on two mobile device platforms using optimized
accelerations with CPU-GPU co-processing. The combination of Rank-1 BA
generates a more robust depth map and is significantly faster than using BA
alone.
| [
{
"created": "Tue, 9 Jul 2019 09:50:04 GMT",
"version": "v1"
}
] | 2019-07-10 | [
[
"Fasogbon",
"Peter O.",
""
]
] ] | Depth from Small Motion (DfSM) (Ha et al., 2016) is particularly interesting for commercial handheld devices because it makes it possible to obtain depth information with minimal user effort and cooperation. Due to speed and memory issues on these devices, the self-calibration optimization of the method using Bundle Adjustment (BA) must rely on as few as 10-15 images. Therefore, the optimization tends to take many iterations to converge, or may not converge at all in some cases. This work proposes a robust initialization for the bundle adjustment using the rank-1 factorization method (Tomasi and Kanade, 1992; Aguiar and Moura, 1999a). We create a constraint matrix that is rank-1 in a noiseless situation, then use SVD to compute the inverse depth values and the camera motion. We need only about a quarter of the bundle-adjustment iterations to converge. We also propose a gridded feature-extraction technique so that only a small set of important features is tracked across the image frames. This also ensures a speedup in the full execution time on the mobile device. For the experiments, we have documented the execution time with the proposed Rank-1 initialization on two mobile device platforms using optimized accelerations with CPU-GPU co-processing. The combination of Rank-1 BA generates a more robust depth map and is significantly faster than using BA alone. |
2104.04695 | Ou Deng | Ou Deng, Kiichi Tago, Qun Jin | An Extended Epidemic Model on Interconnected Networks for COVID-19 to
Explore the Epidemic Dynamics | 9 pages, 6 figures | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | COVID-19 has resulted in a global public health crisis. Pandemic control
necessitates epidemic models that capture the trends and impacts on infectious
individuals. Many existing models can do this, but they lack practical
interpretability. This study combines epidemiological and network theories and
proposes a framework with causal interpretability in response to this issue.
The framework consists of an extended epidemic model on interconnected
networks and a dynamic structure that captures major human mobility. The
networked causal analysis focuses on the stochastic processing mechanism. It
highlights social infectivity as the intervention estimator between the
observable effect (the number of daily new cases) and the unobservable causes
(the number of infectious persons). In an experiment on a dataset for the
Tokyo metropolitan area, the computational results reveal the propagation
features of symptomatic and asymptomatic infectious persons. These new
spatiotemporal findings can inform policy decision making.
| [
{
"created": "Sat, 10 Apr 2021 06:46:01 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Deng",
"Ou",
""
],
[
"Tago",
"Kiichi",
""
],
[
"Jin",
"Qun",
""
]
] ] | COVID-19 has resulted in a global public health crisis. Pandemic control necessitates epidemic models that capture the trends and impacts on infectious individuals. Many existing models can do this, but they lack practical interpretability. This study combines epidemiological and network theories and proposes a framework with causal interpretability in response to this issue. The framework consists of an extended epidemic model on interconnected networks and a dynamic structure that captures major human mobility. The networked causal analysis focuses on the stochastic processing mechanism. It highlights social infectivity as the intervention estimator between the observable effect (the number of daily new cases) and the unobservable causes (the number of infectious persons). In an experiment on a dataset for the Tokyo metropolitan area, the computational results reveal the propagation features of symptomatic and asymptomatic infectious persons. These new spatiotemporal findings can inform policy decision making. |
2010.02613 | Neha Das | Neha Das, Jonas Umlauft, Armin Lederer, Thomas Beckers, Sandra Hirche | Deep Learning based Uncertainty Decomposition for Real-time Control | Accepted at IFAC World Congress 2023 | null | null | null | cs.LG cs.AI cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data-driven control in unknown environments requires a clear understanding of
the involved uncertainties for ensuring safety and efficient exploration. While
aleatoric uncertainty that arises from measurement noise can often be
explicitly modeled given a parametric description, it can be harder to model
epistemic uncertainty, which describes the presence or absence of training
data. The latter can be particularly useful for implementing exploratory
control strategies when system dynamics are unknown. We propose a novel method
for detecting the absence of training data using deep learning, which gives a
continuous valued scalar output between $0$ (indicating low uncertainty) and
$1$ (indicating high uncertainty). We utilize this detector as a proxy for
epistemic uncertainty and show its advantages over existing approaches on
synthetic and real-world datasets. Our approach can be directly combined with
aleatoric uncertainty estimates and allows for uncertainty estimation in
real-time as the inference is sample-free unlike existing approaches for
uncertainty modeling. We further demonstrate the practicality of this
uncertainty estimate in deploying online data-efficient control on a simulated
quadcopter acted upon by an unknown disturbance model.
| [
{
"created": "Tue, 6 Oct 2020 10:46:27 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Nov 2020 18:12:06 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Jul 2023 10:00:51 GMT",
"version": "v3"
}
] | 2023-07-13 | [
[
"Das",
"Neha",
""
],
[
"Umlauft",
"Jonas",
""
],
[
"Lederer",
"Armin",
""
],
[
"Beckers",
"Thomas",
""
],
[
"Hirche",
"Sandra",
""
]
] | Data-driven control in unknown environments requires a clear understanding of the involved uncertainties for ensuring safety and efficient exploration. While aleatoric uncertainty that arises from measurement noise can often be explicitly modeled given a parametric description, it can be harder to model epistemic uncertainty, which describes the presence or absence of training data. The latter can be particularly useful for implementing exploratory control strategies when system dynamics are unknown. We propose a novel method for detecting the absence of training data using deep learning, which gives a continuous valued scalar output between $0$ (indicating low uncertainty) and $1$ (indicating high uncertainty). We utilize this detector as a proxy for epistemic uncertainty and show its advantages over existing approaches on synthetic and real-world datasets. Our approach can be directly combined with aleatoric uncertainty estimates and allows for uncertainty estimation in real-time as the inference is sample-free unlike existing approaches for uncertainty modeling. We further demonstrate the practicality of this uncertainty estimate in deploying online data-efficient control on a simulated quadcopter acted upon by an unknown disturbance model. |
2212.07818 | Bernhard Klein | Torben Krieger, Bernhard Klein, Holger Fr\"oning | Towards Hardware-Specific Automatic Compression of Neural Networks | To be published at the AAAI Conference on Artificial Intelligence
2023, at the 2nd International Workshop on Practical Deep Learning in the
Wild | null | null | null | cs.LG cs.PF stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compressing neural network architectures is important to allow the deployment
of models to embedded or mobile devices, and pruning and quantization are the
major approaches to compress neural networks nowadays. Both methods benefit
when compression parameters are selected specifically for each layer. Finding
good combinations of compression parameters, so-called compression policies, is
hard as the problem spans an exponentially large search space. Effective
compression policies consider the influence of the specific hardware
architecture on the used compression methods. We propose an algorithmic
framework called Galen to search such policies using reinforcement learning
utilizing pruning and quantization, thus providing automatic compression for
neural networks. Contrary to other approaches we use inference latency measured
on the target hardware device as an optimization goal. With that, the framework
supports the compression of models specific to a given hardware target. We
validate our approach using three different reinforcement learning agents for
pruning, quantization and joint pruning and quantization. Besides proving the
functionality of our approach we were able to compress a ResNet18 for CIFAR-10,
on an embedded ARM processor, to 20% of the original inference latency without
significant loss of accuracy. Moreover, we can demonstrate that a joint search
and compression using pruning and quantization is superior to an individual
search for policies using a single compression method.
| [
{
"created": "Thu, 15 Dec 2022 13:34:02 GMT",
"version": "v1"
}
] | 2022-12-16 | [
[
"Krieger",
"Torben",
""
],
[
"Klein",
"Bernhard",
""
],
[
"Fröning",
"Holger",
""
]
] | Compressing neural network architectures is important to allow the deployment of models to embedded or mobile devices, and pruning and quantization are the major approaches to compress neural networks nowadays. Both methods benefit when compression parameters are selected specifically for each layer. Finding good combinations of compression parameters, so-called compression policies, is hard as the problem spans an exponentially large search space. Effective compression policies consider the influence of the specific hardware architecture on the used compression methods. We propose an algorithmic framework called Galen to search such policies using reinforcement learning utilizing pruning and quantization, thus providing automatic compression for neural networks. Contrary to other approaches we use inference latency measured on the target hardware device as an optimization goal. With that, the framework supports the compression of models specific to a given hardware target. We validate our approach using three different reinforcement learning agents for pruning, quantization and joint pruning and quantization. Besides proving the functionality of our approach we were able to compress a ResNet18 for CIFAR-10, on an embedded ARM processor, to 20% of the original inference latency without significant loss of accuracy. Moreover, we can demonstrate that a joint search and compression using pruning and quantization is superior to an individual search for policies using a single compression method. |
2103.02143 | Hao Peng | Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith,
Lingpeng Kong | Random Feature Attention | ICLR 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Transformers are state-of-the-art models for a variety of sequence modeling
tasks. At their core is an attention function which models pairwise
interactions between the inputs at every timestep. While attention is powerful,
it does not scale efficiently to long sequences due to its quadratic time and
space complexity in the sequence length. We propose RFA, a linear time and
space attention that uses random feature methods to approximate the softmax
function, and explore its application in transformers. RFA can be used as a
drop-in replacement for conventional softmax attention and offers a
straightforward way of learning with recency bias through an optional gating
mechanism. Experiments on language modeling and machine translation demonstrate
that RFA achieves similar or better performance compared to strong transformer
baselines. In the machine translation experiment, RFA decodes twice as fast as
a vanilla transformer. Compared to existing efficient transformer variants, RFA
is competitive in terms of both accuracy and efficiency on three long text
classification datasets. Our analysis shows that RFA's efficiency gains are
especially notable on long sequences, suggesting that RFA will be particularly
useful in tasks that require working with large inputs, fast decoding speed, or
low memory footprints.
| [
{
"created": "Wed, 3 Mar 2021 02:48:56 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Mar 2021 21:24:06 GMT",
"version": "v2"
}
] | 2021-03-23 | [
[
"Peng",
"Hao",
""
],
[
"Pappas",
"Nikolaos",
""
],
[
"Yogatama",
"Dani",
""
],
[
"Schwartz",
"Roy",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Kong",
"Lingpeng",
""
]
] | Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. Our analysis shows that RFA's efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints. |
2208.09363 | Syver D{\o}ving Agdestein | Syver D{\o}ving Agdestein and Benjamin Sanderse | Learning filtered discretization operators: non-intrusive versus
intrusive approaches | Submitted to Eccomas 2022 proceedings. Contains 12 pages, 19
references, 4 figures, and 1 table | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulating multi-scale phenomena such as turbulent fluid flows is typically
computationally very expensive. Filtering the smaller scales allows for using
coarse discretizations; however, this requires closure models to account for
the effects of the unresolved on the resolved scales. The common approach is to
filter the continuous equations, but this gives rise to several commutator
errors due to nonlinear terms, non-uniform filters, or boundary conditions.
We propose a new approach to filtering, where the equations are discretized
first and then filtered. For a non-uniform filter applied to the linear
convection equation, we show that the discretely filtered convection operator
can be inferred using three methods: intrusive (`explicit reconstruction') or
non-intrusive operator inference, either via `derivative fitting' or
`trajectory fitting' (embedded learning). We show that explicit reconstruction
and derivative fitting identify a similar operator and produce small errors,
but that trajectory fitting requires significant effort to train to achieve
similar performance. However, the explicit reconstruction approach is more
prone to instabilities.
| [
{
"created": "Fri, 19 Aug 2022 14:17:57 GMT",
"version": "v1"
}
] | 2022-08-22 | [
[
"Agdestein",
"Syver Døving",
""
],
[
"Sanderse",
"Benjamin",
""
]
] | Simulating multi-scale phenomena such as turbulent fluid flows is typically computationally very expensive. Filtering the smaller scales allows for using coarse discretizations, however, this requires closure models to account for the effects of the unresolved on the resolved scales. The common approach is to filter the continuous equations, but this gives rise to several commutator errors due to nonlinear terms, non-uniform filters, or boundary conditions. We propose a new approach to filtering, where the equations are discretized first and then filtered. For a non-uniform filter applied to the linear convection equation, we show that the discretely filtered convection operator can be inferred using three methods: intrusive (`explicit reconstruction') or non-intrusive operator inference, either via `derivative fitting' or `trajectory fitting' (embedded learning). We show that explicit reconstruction and derivative fitting identify a similar operator and produce small errors, but that trajectory fitting requires significant effort to train to achieve similar performance. However, the explicit reconstruction approach is more prone to instabilities. |
2202.12438 | Tom\'a\v{s} Masa\v{r}\'ik | Pavel Dvo\v{r}\'ak, Monika Krawczyk, Tom\'a\v{s} Masa\v{r}\'ik, Jana
Novotn\'a, Pawe{\l} Rz\k{a}\.zewski, Aneta \.Zuk | List Locally Surjective Homomorphisms in Hereditary Graph Classes | 26 pages, 8 figures | Proceedings: International Symposium on Algorithms and
Computation, ISAAC 2022 | 10.4230/LIPIcs.ISAAC.2022.30 | null | cs.DS cs.CC | http://creativecommons.org/licenses/by/4.0/ | A locally surjective homomorphism from a graph $G$ to a graph $H$ is an
edge-preserving mapping from $V(G)$ to $V(H)$ that is surjective in the
neighborhood of each vertex in $G$. In the list locally surjective homomorphism
problem, denoted by LLSHom($H$), the graph $H$ is fixed and the instance
consists of a graph $G$ whose every vertex is equipped with a subset of $V(H)$,
called list. We ask for the existence of a locally surjective homomorphism from
$G$ to $H$, where every vertex of $G$ is mapped to a vertex from its list. In
this paper, we study the complexity of the LLSHom($H$) problem in $F$-free
graphs, i.e., graphs that exclude a fixed graph $F$ as an induced subgraph. We
aim to understand for which pairs $(H,F)$ the problem can be solved in
subexponential time.
We show that for all graphs $H$, for which the problem is NP-hard in general
graphs, it cannot be solved in subexponential time in $F$-free graphs unless
$F$ is a bounded-degree forest or the ETH fails. The initial study reveals that
a natural subfamily of bounded-degree forests $F$ that might lead to some
tractability results is the family $\mathcal S$ consisting of forests whose
every component has at most three leaves. In this case, we exhibit the
following dichotomy theorem: besides the cases that are polynomial-time
solvable in general graphs, the graphs $H \in \{P_3,C_4\}$ are the only
connected ones that allow for a subexponential-time algorithm in $F$-free
graphs for every $F \in \mathcal S$ (unless the ETH fails).
| [
{
"created": "Fri, 25 Feb 2022 00:17:08 GMT",
"version": "v1"
}
] | 2024-01-11 | [
[
"Dvořák",
"Pavel",
""
],
[
"Krawczyk",
"Monika",
""
],
[
"Masařík",
"Tomáš",
""
],
[
"Novotná",
"Jana",
""
],
[
"Rzążewski",
"Paweł",
""
],
[
"Żuk",
"Aneta",
""
]
] | A locally surjective homomorphism from a graph $G$ to a graph $H$ is an edge-preserving mapping from $V(G)$ to $V(H)$ that is surjective in the neighborhood of each vertex in $G$. In the list locally surjective homomorphism problem, denoted by LLSHom($H$), the graph $H$ is fixed and the instance consists of a graph $G$ whose every vertex is equipped with a subset of $V(H)$, called list. We ask for the existence of a locally surjective homomorphism from $G$ to $H$, where every vertex of $G$ is mapped to a vertex from its list. In this paper, we study the complexity of the LLSHom($H$) problem in $F$-free graphs, i.e., graphs that exclude a fixed graph $F$ as an induced subgraph. We aim to understand for which pairs $(H,F)$ the problem can be solved in subexponential time. We show that for all graphs $H$, for which the problem is NP-hard in general graphs, it cannot be solved in subexponential time in $F$-free graphs unless $F$ is a bounded-degree forest or the ETH fails. The initial study reveals that a natural subfamily of bounded-degree forests $F$ that might lead to some tractability results is the family $\mathcal S$ consisting of forests whose every component has at most three leaves. In this case, we exhibit the following dichotomy theorem: besides the cases that are polynomial-time solvable in general graphs, the graphs $H \in \{P_3,C_4\}$ are the only connected ones that allow for a subexponential-time algorithm in $F$-free graphs for every $F \in \mathcal S$ (unless the ETH fails). |
2402.11639 | Advait Parulekar | Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay
Shakkottai | In-Context Learning with Transformers: Softmax Attention Adapts to
Function Lipschitzness | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A striking property of transformers is their ability to perform in-context
learning (ICL), a machine learning framework in which the learner is presented
with a novel context during inference implicitly through some data, and tasked
with making a prediction in that context. As such, that learner must adapt to
the context without additional training. We explore the role of softmax
attention in an ICL setting where each context encodes a regression task. We
show that an attention unit learns a window that it uses to implement a
nearest-neighbors predictor adapted to the landscape of the pretraining tasks.
Specifically, we show that this window widens with decreasing Lipschitzness and
increasing label noise in the pretraining tasks. We also show that on low-rank,
linear problems, the attention unit learns to project onto the appropriate
subspace before inference. Further, we show that this adaptivity relies
crucially on the softmax activation and thus cannot be replicated by the linear
activation often studied in prior theoretical analyses.
| [
{
"created": "Sun, 18 Feb 2024 16:37:32 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 05:15:53 GMT",
"version": "v2"
}
] | 2024-05-29 | [
[
"Collins",
"Liam",
""
],
[
"Parulekar",
"Advait",
""
],
[
"Mokhtari",
"Aryan",
""
],
[
"Sanghavi",
"Sujay",
""
],
[
"Shakkottai",
"Sanjay",
""
]
] | A striking property of transformers is their ability to perform in-context learning (ICL), a machine learning framework in which the learner is presented with a novel context during inference implicitly through some data, and tasked with making a prediction in that context. As such, that learner must adapt to the context without additional training. We explore the role of softmax attention in an ICL setting where each context encodes a regression task. We show that an attention unit learns a window that it uses to implement a nearest-neighbors predictor adapted to the landscape of the pretraining tasks. Specifically, we show that this window widens with decreasing Lipschitzness and increasing label noise in the pretraining tasks. We also show that on low-rank, linear problems, the attention unit learns to project onto the appropriate subspace before inference. Further, we show that this adaptivity relies crucially on the softmax activation and thus cannot be replicated by the linear activation often studied in prior theoretical analyses. |
1309.0563 | James Lee | Siu On Chan and James R. Lee and Prasad Raghavendra and David Steurer | Approximate Constraint Satisfaction Requires Large LP Relaxations | 29 pages; significant revisions, new references, simpler proofs | null | null | null | cs.CC cs.DS math.CO math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove super-polynomial lower bounds on the size of linear programming
relaxations for approximation versions of constraint satisfaction problems. We
show that for these problems, polynomial-sized linear programs are exactly as
powerful as programs arising from a constant number of rounds of the
Sherali-Adams hierarchy.
In particular, any polynomial-sized linear program for Max Cut has an
integrality gap of 1/2 and any such linear program for Max 3-Sat has an
integrality gap of 7/8.
| [
{
"created": "Tue, 3 Sep 2013 00:30:03 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Feb 2015 14:56:25 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Feb 2016 08:11:10 GMT",
"version": "v3"
}
] | 2016-02-09 | [
[
"Chan",
"Siu On",
""
],
[
"Lee",
"James R.",
""
],
[
"Raghavendra",
"Prasad",
""
],
[
"Steurer",
"David",
""
]
] | We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. We show that for these problems, polynomial-sized linear programs are exactly as powerful as programs arising from a constant number of rounds of the Sherali-Adams hierarchy. In particular, any polynomial-sized linear program for Max Cut has an integrality gap of 1/2 and any such linear program for Max 3-Sat has an integrality gap of 7/8. |
1910.09578 | Zhe Dong | Zhe Dong, Deniz Oktay, Ben Poole, Alexander A. Alemi | On Predictive Information in RNNs | null | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Certain biological neurons demonstrate a remarkable capability to optimally
compress the history of sensory inputs while being maximally informative about
the future. In this work, we investigate if the same can be said of artificial
neurons in recurrent neural networks (RNNs) trained with maximum likelihood.
Empirically, we find that RNNs are suboptimal in the information plane. Instead
of optimally compressing past information, they extract additional information
that is not relevant for predicting the future. We show that constraining past
information by injecting noise into the hidden state can improve RNNs in
several ways: optimality in the predictive information plane, sample quality,
heldout likelihood, and downstream classification performance.
| [
{
"created": "Mon, 21 Oct 2019 18:12:43 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Feb 2020 20:53:42 GMT",
"version": "v2"
}
] | 2020-02-12 | [
[
"Dong",
"Zhe",
""
],
[
"Oktay",
"Deniz",
""
],
[
"Poole",
"Ben",
""
],
[
"Alemi",
"Alexander A.",
""
]
] | Certain biological neurons demonstrate a remarkable capability to optimally compress the history of sensory inputs while being maximally informative about the future. In this work, we investigate if the same can be said of artificial neurons in recurrent neural networks (RNNs) trained with maximum likelihood. Empirically, we find that RNNs are suboptimal in the information plane. Instead of optimally compressing past information, they extract additional information that is not relevant for predicting the future. We show that constraining past information by injecting noise into the hidden state can improve RNNs in several ways: optimality in the predictive information plane, sample quality, heldout likelihood, and downstream classification performance. |
2107.11526 | Uri Stemmer | Menachem Sadigurschi, Uri Stemmer | On the Sample Complexity of Privately Learning Axis-Aligned Rectangles | null | null | null | null | cs.LG cs.CR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the fundamental problem of learning Axis-Aligned-Rectangles over a
finite grid $X^d\subseteq{\mathbb{R}}^d$ with differential privacy. Existing
results show that the sample complexity of this problem is at most $\min\left\{
d{\cdot}\log|X| \;,\; d^{1.5}{\cdot}\left(\log^*|X| \right)^{1.5}\right\}$.
That is, existing constructions either require sample complexity that grows
linearly with $\log|X|$, or else it grows super linearly with the dimension
$d$. We present a novel algorithm that reduces the sample complexity to only
$\tilde{O}\left\{d{\cdot}\left(\log^*|X|\right)^{1.5}\right\}$, attaining a
dimensionality-optimal dependency without requiring the sample complexity to
grow with $\log|X|$. The technique used to attain this improvement
involves the deletion of "exposed" data-points on the go, in a fashion designed
to avoid the cost of the adaptive composition theorems. The core of this
technique may be of individual interest, introducing a new method for
constructing statistically-efficient private algorithms.
| [
{
"created": "Sat, 24 Jul 2021 04:06:11 GMT",
"version": "v1"
}
] | 2021-07-27 | [
[
"Sadigurschi",
"Menachem",
""
],
[
"Stemmer",
"Uri",
""
]
] | We revisit the fundamental problem of learning Axis-Aligned-Rectangles over a finite grid $X^d\subseteq{\mathbb{R}}^d$ with differential privacy. Existing results show that the sample complexity of this problem is at most $\min\left\{ d{\cdot}\log|X| \;,\; d^{1.5}{\cdot}\left(\log^*|X| \right)^{1.5}\right\}$. That is, existing constructions either require sample complexity that grows linearly with $\log|X|$, or else it grows super linearly with the dimension $d$. We present a novel algorithm that reduces the sample complexity to only $\tilde{O}\left\{d{\cdot}\left(\log^*|X|\right)^{1.5}\right\}$, attaining a dimensionality-optimal dependency without requiring the sample complexity to grow with $\log|X|$. The technique used to attain this improvement involves the deletion of "exposed" data-points on the go, in a fashion designed to avoid the cost of the adaptive composition theorems. The core of this technique may be of individual interest, introducing a new method for constructing statistically-efficient private algorithms. |
2002.10670 | Eric Hulburd | Eric Hulburd | Exploring BERT Parameter Efficiency on the Stanford Question Answering
Dataset v2.0 | 11 pages, 5 figures, 3 tables | null | null | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we explore the parameter efficiency of BERT arXiv:1810.04805 on
version 2.0 of the Stanford Question Answering dataset (SQuAD2.0). We evaluate
the parameter efficiency of BERT while freezing a varying number of final
transformer layers as well as including the adapter layers proposed in
arXiv:1902.00751. Additionally, we experiment with the use of context-aware
convolutional (CACNN) filters, as described in arXiv:1709.08294v3, as a final
augmentation layer for the SQuAD2.0 tasks.
This exploration is motivated in part by arXiv:1907.10597, which made a
compelling case for broadening the evaluation criteria of artificial
intelligence models to include various measures of resource efficiency. While
we do not evaluate these models based on their floating point operation
efficiency as proposed in arXiv:1907.10597, we examine efficiency with respect
to training time, inference time, and total number of model parameters. Our
results largely corroborate those of arXiv:1902.00751 for adapter modules,
while also demonstrating that gains in F1 score from adding context-aware
convolutional filters are not practical due to the increase in training and
inference time.
| [
{
"created": "Tue, 25 Feb 2020 05:09:48 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Mar 2020 05:16:37 GMT",
"version": "v2"
}
] | 2020-03-04 | [
[
"Hulburd",
"Eric",
""
]
] | In this paper we explore the parameter efficiency of BERT arXiv:1810.04805 on version 2.0 of the Stanford Question Answering dataset (SQuAD2.0). We evaluate the parameter efficiency of BERT while freezing a varying number of final transformer layers as well as including the adapter layers proposed in arXiv:1902.00751. Additionally, we experiment with the use of context-aware convolutional (CACNN) filters, as described in arXiv:1709.08294v3, as a final augmentation layer for the SQuAD2.0 tasks. This exploration is motivated in part by arXiv:1907.10597, which made a compelling case for broadening the evaluation criteria of artificial intelligence models to include various measures of resource efficiency. While we do not evaluate these models based on their floating point operation efficiency as proposed in arXiv:1907.10597, we examine efficiency with respect to training time, inference time, and total number of model parameters. Our results largely corroborate those of arXiv:1902.00751 for adapter modules, while also demonstrating that gains in F1 score from adding context-aware convolutional filters are not practical due to the increase in training and inference time. |
2012.15533 | David Lorenz | Ofir T. Erlich and David H. Lorenz | Optimal Software Architecture From Initial Requirements: An End-to-End
Approach | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | A software architect turns system requirements into a suitable software
architecture through an architecture optimization process. However, how should
the architect decide which quality improvement to prioritize, e.g., security or
reliability? In a software product line, should a small improvement in multiple
products be preferred over a large improvement in a single product? Existing
architecture optimization methods handle various steps in the process, but none
of them systematically guides the architect in generating an optimal
architecture from the initial requirements. In this work we present an
end-to-end approach for generating an optimal software architecture for a
single software product and an optimal family of architectures for a family of
products. We report on a case-study of applying our approach to optimize five
industry-grade products in a real-life product line architecture, where 359
possible combinations of ten different quality efforts were prioritized.
| [
{
"created": "Thu, 31 Dec 2020 10:35:43 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Erlich",
"Ofir T.",
""
],
[
"Lorenz",
"David H.",
""
]
] | A software architect turns system requirements into a suitable software architecture through an architecture optimization process. However, how should the architect decide which quality improvement to prioritize, e.g., security or reliability? In a software product line, should a small improvement in multiple products be preferred over a large improvement in a single product? Existing architecture optimization methods handle various steps in the process, but none of them systematically guides the architect in generating an optimal architecture from the initial requirements. In this work we present an end-to-end approach for generating an optimal software architecture for a single software product and an optimal family of architectures for a family of products. We report on a case-study of applying our approach to optimize five industry-grade products in a real-life product line architecture, where 359 possible combinations of ten different quality efforts were prioritized. |
2210.10191 | Changhan Wang | Changhan Wang, Hirofumi Inaguma, Peng-Jen Chen, Ilia Kulikov, Yun
Tang, Wei-Ning Hsu, Michael Auli, Juan Pino | Simple and Effective Unsupervised Speech Translation | null | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The amount of labeled data to train models for speech tasks is limited for
most languages; however, the data scarcity is exacerbated for speech
translation which requires labeled data covering two different languages. To
address this issue, we study a simple and effective approach to build speech
translation systems without labeled data by leveraging recent advances in
unsupervised speech recognition, machine translation and speech synthesis,
either in a pipeline approach, or to generate pseudo-labels for training
end-to-end speech translation models. Furthermore, we present an unsupervised
domain adaptation technique for pre-trained speech models which improves the
performance of downstream unsupervised speech recognition, especially for
low-resource settings. Experiments show that unsupervised speech-to-text
translation outperforms the previous unsupervised state of the art by 3.2 BLEU
on the Libri-Trans benchmark. On CoVoST 2, our best systems outperform the best
supervised end-to-end models (without pre-training) from only two years ago by
an average of 5.0 BLEU over five X-En directions. We also report competitive
results on MuST-C and CVSS benchmarks.
| [
{
"created": "Tue, 18 Oct 2022 22:26:13 GMT",
"version": "v1"
}
] | 2022-10-20 | [
[
"Wang",
"Changhan",
""
],
[
"Inaguma",
"Hirofumi",
""
],
[
"Chen",
"Peng-Jen",
""
],
[
"Kulikov",
"Ilia",
""
],
[
"Tang",
"Yun",
""
],
[
"Hsu",
"Wei-Ning",
""
],
[
"Auli",
"Michael",
""
],
[
"Pino",
"Juan",
""
]
] | The amount of labeled data to train models for speech tasks is limited for most languages; however, the data scarcity is exacerbated for speech translation which requires labeled data covering two different languages. To address this issue, we study a simple and effective approach to build speech translation systems without labeled data by leveraging recent advances in unsupervised speech recognition, machine translation and speech synthesis, either in a pipeline approach, or to generate pseudo-labels for training end-to-end speech translation models. Furthermore, we present an unsupervised domain adaptation technique for pre-trained speech models which improves the performance of downstream unsupervised speech recognition, especially for low-resource settings. Experiments show that unsupervised speech-to-text translation outperforms the previous unsupervised state of the art by 3.2 BLEU on the Libri-Trans benchmark. On CoVoST 2, our best systems outperform the best supervised end-to-end models (without pre-training) from only two years ago by an average of 5.0 BLEU over five X-En directions. We also report competitive results on MuST-C and CVSS benchmarks. |
2202.13041 | Wensheng Gan | Wensheng Gan, Guoting Chen, Hongzhi Yin, Philippe Fournier-Viger,
Chien-Ming Chen, and Philip S. Yu | Towards Revenue Maximization with Popular and Profitable Products | ACM/IMS Transactions on Data Science. 4 figures, 5 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Economic-wise, a common goal for companies conducting marketing is to
maximize the return revenue/profit by utilizing the various effective marketing
strategies. Consumer behavior is crucially important in economy and targeted
marketing, in which behavioral economics can provide valuable insights to
identify the biases and profit from customers. Finding credible and reliable
information on products' profitability is, however, quite difficult since most
products tend to peak at certain times w.r.t. the seasonal sales cycle in a year.
On-Shelf Availability (OSA) is a key factor for performance evaluation.
Besides, staying ahead of hot product trends means we can increase marketing
efforts without selling out the inventory. To fill this gap, in this paper,
we first propose a general profit-oriented framework to address the problem of
revenue maximization based on economic behavior, and compute the On-shelf
Popular and most Profitable Products (OPPPs) for the targeted marketing. To
tackle the revenue maximization problem, we model the k-satisfiable product
concept and propose an algorithmic framework for searching OPPP and its
variants. Extensive experiments are conducted on several real-world datasets to
evaluate the effectiveness and efficiency of the proposed algorithm.
| [
{
"created": "Sat, 26 Feb 2022 02:07:25 GMT",
"version": "v1"
}
] | 2022-03-01 | [
[
"Gan",
"Wensheng",
""
],
[
"Chen",
"Guoting",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Fournier-Viger",
"Philippe",
""
],
[
"Chen",
"Chien-Ming",
""
],
[
"Yu",
"Philip S.",
""
]
] | Economic-wise, a common goal for companies conducting marketing is to maximize the return revenue/profit by utilizing the various effective marketing strategies. Consumer behavior is crucially important in economy and targeted marketing, in which behavioral economics can provide valuable insights to identify the biases and profit from customers. Finding credible and reliable information on products' profitability is, however, quite difficult since most products tend to peak at certain times w.r.t. the seasonal sales cycle in a year. On-Shelf Availability (OSA) is a key factor for performance evaluation. Besides, staying ahead of hot product trends means we can increase marketing efforts without selling out the inventory. To fill this gap, in this paper, we first propose a general profit-oriented framework to address the problem of revenue maximization based on economic behavior, and compute the On-shelf Popular and most Profitable Products (OPPPs) for the targeted marketing. To tackle the revenue maximization problem, we model the k-satisfiable product concept and propose an algorithmic framework for searching OPPP and its variants. Extensive experiments are conducted on several real-world datasets to evaluate the effectiveness and efficiency of the proposed algorithm. |
1705.04544 | Karl D\"aubel | Aaron Bernstein, Karl D\"aubel, Yann Disser, Max Klimm, Torsten
M\"utze, Frieder Smolny | Distance-preserving graph contractions | An extended abstract of this work has appeared in the Proceedings of
the 9th Innovations in Theoretical Computer Science Conference (ITCS) 2018 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compression and sparsification algorithms are frequently applied in a
preprocessing step before analyzing or optimizing large networks/graphs. In
this paper we propose and study a new framework contracting edges of a graph
(merging vertices into super-vertices) with the goal of preserving pairwise
distances as accurately as possible. Formally, given an edge-weighted graph,
the contraction should guarantee that for any two vertices at distance $d$, the
corresponding super-vertices remain at distance at least $\varphi(d)$ in the
contracted graph, where $\varphi$ is a tolerance function bounding the
permitted distance distortion. We present a comprehensive picture of the
algorithmic complexity of the contraction problem for affine tolerance
functions $\varphi(x)=x/\alpha-\beta$, where $\alpha\geq 1$ and $\beta\geq 0$
are arbitrary real-valued parameters. Specifically, we present polynomial-time
algorithms for trees as well as hardness and inapproximability results for
different graph classes, precisely separating easy and hard cases. Further we
analyze the asymptotic behavior of contractions, and find efficient algorithms
to compute (non-optimal) contractions despite our hardness results.
| [
{
"created": "Fri, 12 May 2017 12:52:49 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2017 17:00:39 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Feb 2018 16:37:42 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Feb 2019 13:51:48 GMT",
"version": "v4"
}
] | 2019-02-14 | [
[
"Bernstein",
"Aaron",
""
],
[
"Däubel",
"Karl",
""
],
[
"Disser",
"Yann",
""
],
[
"Klimm",
"Max",
""
],
[
"Mütze",
"Torsten",
""
],
[
"Smolny",
"Frieder",
""
]
] | Compression and sparsification algorithms are frequently applied in a preprocessing step before analyzing or optimizing large networks/graphs. In this paper we propose and study a new framework contracting edges of a graph (merging vertices into super-vertices) with the goal of preserving pairwise distances as accurately as possible. Formally, given an edge-weighted graph, the contraction should guarantee that for any two vertices at distance $d$, the corresponding super-vertices remain at distance at least $\varphi(d)$ in the contracted graph, where $\varphi$ is a tolerance function bounding the permitted distance distortion. We present a comprehensive picture of the algorithmic complexity of the contraction problem for affine tolerance functions $\varphi(x)=x/\alpha-\beta$, where $\alpha\geq 1$ and $\beta\geq 0$ are arbitrary real-valued parameters. Specifically, we present polynomial-time algorithms for trees as well as hardness and inapproximability results for different graph classes, precisely separating easy and hard cases. Further we analyze the asymptotic behavior of contractions, and find efficient algorithms to compute (non-optimal) contractions despite our hardness results. |
2303.16109 | Sajjad Mozaffari | Sajjad Mozaffari, Mreza Alipour Sormoli, Konstantinos Koufos, and
Mehrdad Dianati | Multimodal Manoeuvre and Trajectory Prediction for Automated Driving on
Highways Using Transformer Networks | 8 pages, 3 figures, submitted to IEEE RAL | null | null | null | cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | Predicting the behaviour (i.e., manoeuvre/trajectory) of other road users,
including vehicles, is critical for the safe and efficient operation of
autonomous vehicles (AVs), a.k.a., automated driving systems (ADSs). Due to the
uncertain future behaviour of vehicles, multiple future behaviour modes are
often plausible for a vehicle in a given driving scene. Therefore, multimodal
prediction can provide richer information than single-mode prediction, enabling
AVs to perform a better risk assessment. To this end, we propose a novel
multimodal prediction framework that can predict multiple plausible behaviour
modes and their likelihoods. The proposed framework includes a bespoke problem
formulation for manoeuvre prediction, a novel transformer-based prediction
model, and a tailored training method for multimodal manoeuvre and trajectory
prediction. The performance of the framework is evaluated using three public
highway driving datasets, namely NGSIM, highD, and exiD. The results show that
our framework outperforms the state-of-the-art multimodal methods in terms of
prediction error and is capable of predicting plausible manoeuvre and
trajectory modes.
| [
{
"created": "Tue, 28 Mar 2023 16:25:16 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2023 16:58:06 GMT",
"version": "v2"
}
] | 2023-07-27 | [
[
"Mozaffari",
"Sajjad",
""
],
[
"Sormoli",
"Mreza Alipour",
""
],
[
"Koufos",
"Konstantinos",
""
],
[
"Dianati",
"Mehrdad",
""
]
] | Predicting the behaviour (i.e., manoeuvre/trajectory) of other road users, including vehicles, is critical for the safe and efficient operation of autonomous vehicles (AVs), a.k.a., automated driving systems (ADSs). Due to the uncertain future behaviour of vehicles, multiple future behaviour modes are often plausible for a vehicle in a given driving scene. Therefore, multimodal prediction can provide richer information than single-mode prediction, enabling AVs to perform a better risk assessment. To this end, we propose a novel multimodal prediction framework that can predict multiple plausible behaviour modes and their likelihoods. The proposed framework includes a bespoke problem formulation for manoeuvre prediction, a novel transformer-based prediction model, and a tailored training method for multimodal manoeuvre and trajectory prediction. The performance of the framework is evaluated using three public highway driving datasets, namely NGSIM, highD, and exiD. The results show that our framework outperforms the state-of-the-art multimodal methods in terms of prediction error and is capable of predicting plausible manoeuvre and trajectory modes. |
1601.00543 | Xiangming Meng | Xiangming Meng and Sheng Wu and Linling Kuang and Defeng (David) Huang
and Jianhua Lu | Approximate Message Passing with Nearest Neighbor Sparsity Pattern
Learning | 5 pages, 4 figures | null | null | null | cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of recovering clustered sparse signals with no prior
knowledge of the sparsity pattern. Beyond simple sparsity, signals of interest
often exhibit an underlying sparsity pattern which, if leveraged, can improve
the reconstruction performance. However, the sparsity pattern is usually
unknown a priori. Inspired by the idea of the k-nearest neighbor (k-NN) algorithm,
we propose an efficient algorithm termed approximate message passing with
nearest neighbor sparsity pattern learning (AMP-NNSPL), which learns the
sparsity pattern adaptively. AMP-NNSPL specifies a flexible spike and slab
prior on the unknown signal and, after each AMP iteration, sets the sparse
ratios as the average of the nearest neighbor estimates via expectation
maximization (EM). Experimental results on both synthetic and real data
demonstrate the superiority of our proposed algorithm both in terms of
reconstruction performance and computational complexity.
| [
{
"created": "Mon, 4 Jan 2016 15:43:49 GMT",
"version": "v1"
}
] | 2016-01-05 | [
[
"Meng",
"Xiangming",
"",
"David"
],
[
"Wu",
"Sheng",
"",
"David"
],
[
"Kuang",
"Linling",
"",
"David"
],
[
"Defeng",
"",
"",
"David"
],
[
"Huang",
"",
""
],
[
"Lu",
"Jianhua",
""
]
] | We consider the problem of recovering clustered sparse signals with no prior knowledge of the sparsity pattern. Beyond simple sparsity, signals of interest often exhibit an underlying sparsity pattern which, if leveraged, can improve the reconstruction performance. However, the sparsity pattern is usually unknown a priori. Inspired by the idea of the k-nearest neighbor (k-NN) algorithm, we propose an efficient algorithm termed approximate message passing with nearest neighbor sparsity pattern learning (AMP-NNSPL), which learns the sparsity pattern adaptively. AMP-NNSPL specifies a flexible spike and slab prior on the unknown signal and, after each AMP iteration, sets the sparse ratios as the average of the nearest neighbor estimates via expectation maximization (EM). Experimental results on both synthetic and real data demonstrate the superiority of our proposed algorithm both in terms of reconstruction performance and computational complexity. |
2303.10145 | Sudarshan Ambasamudram Rajagopalan | Kitty Varghese, Sudarshan Rajagopalan, Mohit Lamba, Kaushik Mitra | Spectrum-inspired Low-light Image Translation for Saliency Detection | Presented at The Indian Conference on Computer Vision, Graphics and
Image Processing (ICVGIP) 2022 | null | 10.1145/3571600.3571634 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Saliency detection methods are central to several real-world applications
such as robot navigation and satellite imagery. However, the performance of
existing methods deteriorates under low-light conditions because training
datasets mostly comprise well-lit images. One possible solution is to
collect a new dataset for low-light conditions. This involves pixel-level
annotations, which is not only tedious and time-consuming but also infeasible
if a huge training corpus is required. We propose a technique that performs
classical band-pass filtering in the Fourier space to transform well-lit images
to low-light images and use them as a proxy for real low-light images. Unlike
popular deep learning approaches which require learning thousands of parameters
and enormous amounts of training data, the proposed transformation is fast and
simple and easy to extend to other tasks such as low-light depth estimation.
Our experiments show that the state-of-the-art saliency detection and depth
estimation networks trained on our proxy low-light images perform significantly
better on real low-light images than networks trained using existing
strategies.
| [
{
"created": "Fri, 17 Mar 2023 17:30:42 GMT",
"version": "v1"
}
] | 2023-03-20 | [
[
"Varghese",
"Kitty",
""
],
[
"Rajagopalan",
"Sudarshan",
""
],
[
"Lamba",
"Mohit",
""
],
[
"Mitra",
"Kaushik",
""
]
] | Saliency detection methods are central to several real-world applications such as robot navigation and satellite imagery. However, the performance of existing methods deteriorates under low-light conditions because training datasets mostly comprise well-lit images. One possible solution is to collect a new dataset for low-light conditions. This involves pixel-level annotations, which is not only tedious and time-consuming but also infeasible if a huge training corpus is required. We propose a technique that performs classical band-pass filtering in the Fourier space to transform well-lit images to low-light images and use them as a proxy for real low-light images. Unlike popular deep learning approaches which require learning thousands of parameters and enormous amounts of training data, the proposed transformation is fast and simple and easy to extend to other tasks such as low-light depth estimation. Our experiments show that the state-of-the-art saliency detection and depth estimation networks trained on our proxy low-light images perform significantly better on real low-light images than networks trained using existing strategies. |
2111.02354 | Sapana Chaudhary | Sapana Chaudhary, Balaraman Ravindran | Smooth Imitation Learning via Smooth Costs and Smooth Policies | To appear in the Proceedings of the Fifth Joint International
Conference on Data Science and Management of Data (CoDS-COMAD 2022). Research
Track. ACM DL | null | 10.1145/3493700.3493716 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Imitation learning (IL) is a popular approach in the continuous control
setting as among other reasons it circumvents the problems of reward
mis-specification and exploration in reinforcement learning (RL). In IL from
demonstrations, an important challenge is to obtain agent policies that are
smooth with respect to the inputs. Learning through imitation a policy that is
smooth as a function of a large state-action ($s$-$a$) space (typical of high
dimensional continuous control environments) can be challenging. We take a
first step towards tackling this issue by using smoothness inducing
regularizers on \textit{both} the policy and the cost models of adversarial
imitation learning. Our regularizers work by ensuring that the cost function
changes in a controlled manner as a function of $s$-$a$ space; and the agent
policy is well behaved with respect to the state space. We call our new smooth
IL algorithm \textit{Smooth Policy and Cost Imitation Learning} (SPaCIL,
pronounced 'Special'). We introduce a novel metric to quantify the smoothness
of the learned policies. We demonstrate SPaCIL's superior performance on
continuous control tasks from MuJoCo. The algorithm not only outperforms the
state-of-the-art IL algorithm on our proposed smoothness metric, but also enjoys
the added benefits of faster learning and substantially higher average return.
| [
{
"created": "Wed, 3 Nov 2021 17:12:47 GMT",
"version": "v1"
}
] | 2021-11-04 | [
[
"Chaudhary",
"Sapana",
""
],
[
"Ravindran",
"Balaraman",
""
]
] | Imitation learning (IL) is a popular approach in the continuous control setting as among other reasons it circumvents the problems of reward mis-specification and exploration in reinforcement learning (RL). In IL from demonstrations, an important challenge is to obtain agent policies that are smooth with respect to the inputs. Learning through imitation a policy that is smooth as a function of a large state-action ($s$-$a$) space (typical of high dimensional continuous control environments) can be challenging. We take a first step towards tackling this issue by using smoothness inducing regularizers on \textit{both} the policy and the cost models of adversarial imitation learning. Our regularizers work by ensuring that the cost function changes in a controlled manner as a function of $s$-$a$ space; and the agent policy is well behaved with respect to the state space. We call our new smooth IL algorithm \textit{Smooth Policy and Cost Imitation Learning} (SPaCIL, pronounced 'Special'). We introduce a novel metric to quantify the smoothness of the learned policies. We demonstrate SPaCIL's superior performance on continuous control tasks from MuJoCo. The algorithm not just outperforms the state-of-the-art IL algorithm on our proposed smoothness metric, but, enjoys added benefits of faster learning and substantially higher average return. |
2405.10457 | Kristin Denlinger | Kristie Denlinger, Stephen Wechsler and Kyle Mahowald | Participle-Prepended Nominals Have Lower Entropy Than Nominals Appended
After the Participle | Accepted to CogSci 2024, 6 pages, 2 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | English allows for both compounds (e.g., London-made) and phrasal paraphrases
(e.g., made in London). While these constructions have roughly the same
truth-conditional meaning, we hypothesize that the compound allows less freedom
to express the nature of the semantic relationship between the participle and
the pre-participle nominal. We thus predict that the pre-participle slot is
more constrained than the equivalent position in the phrasal construction. We
test this prediction in a large corpus by measuring the entropy of
corresponding nominal slots, conditional on the participle used. That is, we
compare the entropy of $\alpha$ in compound construction slots like
$\alpha$-[V]ed to the entropy of $\alpha$ in phrasal constructions like [V]ed
by $\alpha$ for a given verb V. As predicted, there is significantly lower
entropy in the compound construction than in the phrasal construction. We
consider how these predictions follow from more general grammatical properties
and processing factors.
| [
{
"created": "Thu, 16 May 2024 22:05:45 GMT",
"version": "v1"
}
] | 2024-05-20 | [
[
"Denlinger",
"Kristie",
""
],
[
"Wechsler",
"Stephen",
""
],
[
"Mahowald",
"Kyle",
""
]
] | English allows for both compounds (e.g., London-made) and phrasal paraphrases (e.g., made in London). While these constructions have roughly the same truth-conditional meaning, we hypothesize that the compound allows less freedom to express the nature of the semantic relationship between the participle and the pre-participle nominal. We thus predict that the pre-participle slot is more constrained than the equivalent position in the phrasal construction. We test this prediction in a large corpus by measuring the entropy of corresponding nominal slots, conditional on the participle used. That is, we compare the entropy of $\alpha$ in compound construction slots like $\alpha$-[V]ed to the entropy of $\alpha$ in phrasal constructions like [V]ed by $\alpha$ for a given verb V. As predicted, there is significantly lower entropy in the compound construction than in the phrasal construction. We consider how these predictions follow from more general grammatical properties and processing factors. |
1006.2682 | Eswar Karthikeyan | S. S. Sonavane, B. P. Patil, V. Kumar | Experimentation for Packet Loss on MSP430 and nRF24L01 Based Wireless
Sensor Network | 5 Pages IJANA | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new design of wireless sensor network (WSN) node is discussed
which is based on components with ultra low power. We have developed a low-cost
and low-power WSN node using MSP430 and nRF24L01. The architectural circuit
details are presented. This architecture fulfils the requirements of low cost,
low power, compact size and self organization. Various tests are carried out
to test the performance of the nRF24L01 module. The packet loss, free space
loss (FSL) and battery lifetime calculations are described. These test results
will help researchers to build new applications using the above node and to
work efficiently with nRF24L01.
| [
{
"created": "Mon, 14 Jun 2010 11:59:38 GMT",
"version": "v1"
}
] | 2010-06-15 | [
[
"Sonavane",
"S. S.",
""
],
[
"Patil",
"B. P.",
""
],
[
"Kumar",
"V.",
""
]
] | In this paper, a new design of wireless sensor network (WSN) node is discussed which is based on components with ultra low power. We have developed a low-cost and low-power WSN node using MSP430 and nRF24L01. The architectural circuit details are presented. This architecture fulfils the requirements of low cost, low power, compact size and self organization. Various tests are carried out to test the performance of the nRF24L01 module. The packet loss, free space loss (FSL) and battery lifetime calculations are described. These test results will help researchers to build new applications using the above node and to work efficiently with nRF24L01. |
1901.06915 | Xiangliang Kong | Xiangliang Kong, Jingxue Ma and Gennian Ge | New Bounds on the Field Size for Maximally Recoverable Codes
Instantiating Grid-like Topologies | 18 pages. arXiv admin note: text overlap with arXiv:1605.05412 by
other authors | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the rapidly increasing amounts of data created and processed
through the internet have resulted in distributed storage systems employing erasure
coding based schemes. Aiming to balance the tradeoff between data recovery for
correlated failures and efficient encoding and decoding, distributed storage
systems employing maximally recoverable codes have emerged. Unifying a number of
topologies considered both in theory and practice, Gopalan et al.
\cite{Gopalan2017} initiated the study of maximally recoverable codes for
grid-like topologies.
In this paper, we focus on the maximally recoverable codes that instantiate
grid-like topologies $T_{m\times n}(1,b,0)$. To characterize the property of
codes for these topologies, we introduce the notion of \emph{pseudo-parity
check matrix}. Then, using the Combinatorial Nullstellensatz, we establish the
first polynomial upper bound on the field size needed for achieving the maximal
recoverability in topologies $T_{m\times n}(1,b,0)$. Using a hypergraph
independent set approach, we further improve this general upper bound for
topologies $T_{4\times n}(1,2,0)$ and $T_{3\times n}(1,3,0)$. By relating the
problem to generalized \emph{Sidon sets} in $\mathbb{F}_q$, we also obtain
non-trivial lower bounds on the field size for maximally recoverable codes that
instantiate topologies $T_{4\times n}(1,2,0)$ and $T_{3\times n}(1,3,0)$.
| [
{
"created": "Mon, 21 Jan 2019 13:02:16 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Feb 2019 12:13:07 GMT",
"version": "v2"
}
] | 2019-02-21 | [
[
"Kong",
"Xiangliang",
""
],
[
"Ma",
"Jingxue",
""
],
[
"Ge",
"Gennian",
""
]
] | In recent years, the rapidly increasing amounts of data created and processed through the internet resulted in distributed storage systems employing erasure coding based schemes. Aiming to balance the tradeoff between data recovery for correlated failures and efficient encoding and decoding, distributed storage systems employing maximally recoverable codes came up. Unifying a number of topologies considered both in theory and practice, Gopalan et al. \cite{Gopalan2017} initiated the study of maximally recoverable codes for grid-like topologies. In this paper, we focus on the maximally recoverable codes that instantiate grid-like topologies $T_{m\times n}(1,b,0)$. To characterize the property of codes for these topologies, we introduce the notion of \emph{pseudo-parity check matrix}. Then, using the Combinatorial Nullstellensatz, we establish the first polynomial upper bound on the field size needed for achieving the maximal recoverability in topologies $T_{m\times n}(1,b,0)$. And using hypergraph independent set approach, we further improve this general upper bound for topologies $T_{4\times n}(1,2,0)$ and $T_{3\times n}(1,3,0)$. By relating the problem to generalized \emph{Sidon sets} in $\mathbb{F}_q$, we also obtain non-trivial lower bounds on the field size for maximally recoverable codes that instantiate topologies $T_{4\times n}(1,2,0)$ and $T_{3\times n}(1,3,0)$. |
2203.12677 | Kyle Hsu | Kyle Hsu, Moo Jin Kim, Rafael Rafailov, Jiajun Wu, Chelsea Finn | Vision-Based Manipulators Need to Also See from Their Hands | First two authors contributed equally. ICLR 2022 (oral) camera-ready.
30 pages, 20 figures. Project website:
https://sites.google.com/view/seeing-from-hands | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study how the choice of visual perspective affects learning and
generalization in the context of physical manipulation from raw sensor
observations. Compared with the more commonly used global third-person
perspective, a hand-centric (eye-in-hand) perspective affords reduced
observability, but we find that it consistently improves training efficiency
and out-of-distribution generalization. These benefits hold across a variety of
learning algorithms, experimental settings, and distribution shifts, and for
both simulated and real robot apparatuses. However, this is only the case when
hand-centric observability is sufficient; otherwise, including a third-person
perspective is necessary for learning, but also harms out-of-distribution
generalization. To mitigate this, we propose to regularize the third-person
information stream via a variational information bottleneck. On six
representative manipulation tasks with varying hand-centric observability
adapted from the Meta-World benchmark, this results in a state-of-the-art
reinforcement learning agent operating from both perspectives improving its
out-of-distribution generalization on every task. While some practitioners have
long put cameras in the hands of robots, our work systematically analyzes the
benefits of doing so and provides simple and broadly applicable insights for
improving end-to-end learned vision-based robotic manipulation.
| [
{
"created": "Tue, 15 Mar 2022 18:46:18 GMT",
"version": "v1"
}
] | 2022-03-25 | [
[
"Hsu",
"Kyle",
""
],
[
"Kim",
"Moo Jin",
""
],
[
"Rafailov",
"Rafael",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Finn",
"Chelsea",
""
]
] | We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation. |
1304.1836 | arXiv Admin | Tairen Sun | A Simulation and Modeling of Access Points with Definition Language | Withdrawn by arXiv admins | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This submission has been withdrawn by arXiv administrators because it
contains fictitious content and was submitted under a pseudonym, which is
against arXiv policy.
| [
{
"created": "Sat, 6 Apr 2013 00:18:59 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2013 19:06:01 GMT",
"version": "v2"
}
] | 2015-01-22 | [
[
"Sun",
"Tairen",
""
]
] | This submission has been withdrawn by arXiv administrators because it contains fictitious content and was submitted under a pseudonym, which is against arXiv policy. |
2402.08267 | Kei Iino | Kei Iino, Shunsuke Akamatsu, Hiroshi Watanabe, Shohei Enomoto, Akira
Sakamoto, Takeharu Eda | Improving Image Coding for Machines through Optimizing Encoder via
Auxiliary Loss | This version has been removed by arXiv administrators as the
submitter did not have the right to agree to the license at the time of
submission | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Image coding for machines (ICM) aims to compress images for machine analysis
using recognition models rather than human vision. Hence, in ICM, it is
important for the encoder to recognize and compress the information necessary
for the machine recognition task. There are two main approaches in learned ICM:
optimization of the compression model based on task loss, and Region of
Interest (ROI) based bit allocation. These approaches provide the encoder with
the recognition capability. However, optimization with task loss becomes
difficult when the recognition model is deep, and ROI-based methods often
involve extra overhead during evaluation. In this study, we propose a novel
training method for learned ICM models that applies auxiliary loss to the
encoder to improve its recognition capability and rate-distortion performance.
Our method achieves Bjontegaard Delta rate improvements of 27.7% and 20.3% in
object detection and semantic segmentation tasks, compared to the conventional
training method.
| [
{
"created": "Tue, 13 Feb 2024 07:45:25 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Iino",
"Kei",
""
],
[
"Akamatsu",
"Shunsuke",
""
],
[
"Watanabe",
"Hiroshi",
""
],
[
"Enomoto",
"Shohei",
""
],
[
"Sakamoto",
"Akira",
""
],
[
"Eda",
"Takeharu",
""
]
] | Image coding for machines (ICM) aims to compress images for machine analysis using recognition models rather than human vision. Hence, in ICM, it is important for the encoder to recognize and compress the information necessary for the machine recognition task. There are two main approaches in learned ICM; optimization of the compression model based on task loss, and Region of Interest (ROI) based bit allocation. These approaches provide the encoder with the recognition capability. However, optimization with task loss becomes difficult when the recognition model is deep, and ROI-based methods often involve extra overhead during evaluation. In this study, we propose a novel training method for learned ICM models that applies auxiliary loss to the encoder to improve its recognition capability and rate-distortion performance. Our method achieves Bjontegaard Delta rate improvements of 27.7% and 20.3% in object detection and semantic segmentation tasks, compared to the conventional training method. |
1907.07818 | Manash Pratim Barman | Manash Pratim Barman, Amit Awekar, Sambhav Kothari | Decoding the Style and Bias of Song Lyrics | Accepted for ACM SIGIR 2019 | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The central idea of this paper is to gain a deeper understanding of song
lyrics computationally. We focus on two aspects: style and biases of song
lyrics. All prior works to understand these two aspects are limited to manual
analysis of a small corpus of song lyrics. In contrast, we analyzed more than
half a million songs spread over five decades. We characterize the lyrics style
in terms of vocabulary, length, repetitiveness, speed, and readability. We have
observed that the style of popular songs significantly differs from other
songs. We have used distributed representation methods and the WEAT test to measure
various gender and racial biases in the song lyrics. We have observed that
biases in song lyrics correlate with prior results on human subjects. This
correlation indicates that song lyrics reflect the biases that exist in
society. Increasing consumption of music and the effect of lyrics on human
emotions makes this analysis important.
| [
{
"created": "Wed, 17 Jul 2019 23:57:46 GMT",
"version": "v1"
}
] | 2019-07-19 | [
[
"Barman",
"Manash Pratim",
""
],
[
"Awekar",
"Amit",
""
],
[
"Kothari",
"Sambhav",
""
]
] | The central idea of this paper is to gain a deeper understanding of song lyrics computationally. We focus on two aspects: style and biases of song lyrics. All prior works to understand these two aspects are limited to manual analysis of a small corpus of song lyrics. In contrast, we analyzed more than half a million songs spread over five decades. We characterize the lyrics style in terms of vocabulary, length, repetitiveness, speed, and readability. We have observed that the style of popular songs significantly differs from other songs. We have used distributed representation methods and WEAT test to measure various gender and racial biases in the song lyrics. We have observed that biases in song lyrics correlate with prior results on human subjects. This correlation indicates that song lyrics reflect the biases that exist in society. Increasing consumption of music and the effect of lyrics on human emotions makes this analysis important. |
2402.05605 | Andrew Fuchs | Andrew Fuchs, Andrea Passarella, and Marco Conti | Optimizing Delegation in Collaborative Human-AI Hybrid Teams | null | null | null | null | cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | When humans and autonomous systems operate together as what we refer to as a
hybrid team, we of course wish to ensure the team operates successfully and
effectively. We refer to team members as agents. In our proposed framework, we
address the case of hybrid teams in which, at any time, only one team member
(the control agent) is authorized to act as control for the team. To determine
the best selection of a control agent, we propose the addition of an AI manager
(via Reinforcement Learning) which learns as an outside observer of the team.
The manager learns a model of behavior linking observations of agent
performance and the environment/world the team is operating in, and from these
observations makes the most desirable selection of a control agent. We restrict
the manager task by introducing a set of constraints. The manager constraints
indicate acceptable team operation, so a violation occurs if the team enters a
condition which is unacceptable and requires manager intervention. To ensure
minimal added complexity or potential inefficiency for the team, the manager
should attempt to minimize the number of times the team reaches a constraint
violation and requires subsequent manager intervention. Therefore our manager
is optimizing its selection of authorized agents to boost overall team
performance while minimizing the frequency of manager intervention. We
demonstrate our manager performance in a simulated driving scenario
representing the case of a hybrid team of agents composed of a human driver and
autonomous driving system. We perform experiments for our driving scenario with
interfering vehicles, indicating the need for collision avoidance and proper
speed control. Our results indicate a positive impact of our manager, with some
cases resulting in increased team performance up to ~187% that of the best solo
agent performance.
| [
{
"created": "Thu, 8 Feb 2024 12:04:43 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Fuchs",
"Andrew",
""
],
[
"Passarella",
"Andrea",
""
],
[
"Conti",
"Marco",
""
]
] | When humans and autonomous systems operate together as what we refer to as a hybrid team, we of course wish to ensure the team operates successfully and effectively. We refer to team members as agents. In our proposed framework, we address the case of hybrid teams in which, at any time, only one team member (the control agent) is authorized to act as control for the team. To determine the best selection of a control agent, we propose the addition of an AI manager (via Reinforcement Learning) which learns as an outside observer of the team. The manager learns a model of behavior linking observations of agent performance and the environment/world the team is operating in, and from these observations makes the most desirable selection of a control agent. We restrict the manager task by introducing a set of constraints. The manager constraints indicate acceptable team operation, so a violation occurs if the team enters a condition which is unacceptable and requires manager intervention. To ensure minimal added complexity or potential inefficiency for the team, the manager should attempt to minimize the number of times the team reaches a constraint violation and requires subsequent manager intervention. Therefore our manager is optimizing its selection of authorized agents to boost overall team performance while minimizing the frequency of manager intervention. We demonstrate our manager performance in a simulated driving scenario representing the case of a hybrid team of agents composed of a human driver and autonomous driving system. We perform experiments for our driving scenario with interfering vehicles, indicating the need for collision avoidance and proper speed control. Our results indicate a positive impact of our manager, with some cases resulting in increased team performance up to ~187% that of the best solo agent performance. |
1312.7354 | Md. Selim Al Mamun | Md. Selim Al Mamun, Syed Monowar Hossain | Design of Reversible Random Access Memory | null | International Journal of Computer Applications,Volume 56 - Number
15 , Year of Publication: 2012 | 10.5120/8967-3182 | null | cs.ET cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reversible logic has become an immensely popular research area and its
applications have spread in various technologies for their low power
consumption. In this paper we proposed an efficient design of random access
memory using reversible logic. In the course of designing the reversible random
access memory we proposed a reversible decoder and a write enable reversible
master slave D flip-flop. All the reversible designs are superior in terms of
quantum cost, delay and garbage outputs compared to the designs existing in
literature.
| [
{
"created": "Sun, 22 Dec 2013 15:13:12 GMT",
"version": "v1"
}
] | 2013-12-31 | [
[
"Mamun",
"Md. Selim Al",
""
],
[
"Hossain",
"Syed Monowar",
""
]
] | Reversible logic has become an immensely popular research area and its applications have spread in various technologies for their low power consumption. In this paper we proposed an efficient design of random access memory using reversible logic. In the course of designing the reversible random access memory we proposed a reversible decoder and a write enable reversible master slave D flip-flop. All the reversible designs are superior in terms of quantum cost, delay and garbage outputs compared to the designs existing in literature. |
2211.11186 | Zhiyi Xue | Yiting Wu, Zhaodi Zhang, Zhiyi Xue, Si Liu, Min Zhang | DualApp: Tight Over-Approximation for Neural Network Robustness
Verification via Under-Approximation | 13 pages, 9 fugures, 3 tables | null | null | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The robustness of neural networks is fundamental to the hosting system's
reliability and security. Formal verification has been proven to be effective
in providing provable robustness guarantees. To improve the verification
scalability, over-approximating the non-linear activation functions in neural
networks by linear constraints is widely adopted, which transforms the
verification problem into an efficiently solvable linear programming problem.
As over-approximations inevitably introduce overestimation, many efforts have
been dedicated to defining the tightest possible approximations. Recent studies
have, however, shown that the existing so-called tightest approximations are not
consistently superior to one another. In this paper we identify and report a crucial factor
in defining tight approximations, namely the approximation domains of
activation functions. We observe that existing approaches only rely on
overestimated domains, while the corresponding tight approximation may not
necessarily be tight on its actual domain. We propose a novel
under-approximation-guided approach, called dual-approximation, to define tight
over-approximations and two complementary under-approximation algorithms based
on sampling and gradient descent. The overestimated domain guarantees the
soundness while the underestimated one guides the tightness. We implement our
approach into a tool called DualApp and extensively evaluate it on a
comprehensive benchmark of 84 collected and trained neural networks with
different architectures. The experimental results show that DualApp outperforms
the state-of-the-art approximation-based approaches, with up to 71.22%
improvement to the verification result.
| [
{
"created": "Mon, 21 Nov 2022 05:09:34 GMT",
"version": "v1"
}
] | 2022-11-22 | [
[
"Wu",
"Yiting",
""
],
[
"Zhang",
"Zhaodi",
""
],
[
"Xue",
"Zhiyi",
""
],
[
"Liu",
"Si",
""
],
[
"Zhang",
"Min",
""
]
] | The robustness of neural networks is fundamental to the hosting system's reliability and security. Formal verification has been proven to be effective in providing provable robustness guarantees. To improve the verification scalability, over-approximating the non-linear activation functions in neural networks by linear constraints is widely adopted, which transforms the verification problem into an efficiently solvable linear programming problem. As over-approximations inevitably introduce overestimation, many efforts have been dedicated to defining the tightest possible approximations. Recent studies have, however, shown that the existing so-called tightest approximations are not consistently superior to one another. In this paper we identify and report a crucial factor in defining tight approximations, namely the approximation domains of activation functions. We observe that existing approaches only rely on overestimated domains, while the corresponding tight approximation may not necessarily be tight on its actual domain. We propose a novel under-approximation-guided approach, called dual-approximation, to define tight over-approximations and two complementary under-approximation algorithms based on sampling and gradient descent. The overestimated domain guarantees the soundness while the underestimated one guides the tightness. We implement our approach into a tool called DualApp and extensively evaluate it on a comprehensive benchmark of 84 collected and trained neural networks with different architectures. The experimental results show that DualApp outperforms the state-of-the-art approximation-based approaches, with up to 71.22% improvement to the verification result. |
2203.07990 | Abhishek Dhankar | Abhishek Dhankar, Osmar R. Za\"iane and Francois Bolduc | UofA-Truth at Factify 2022 : Transformer And Transfer Learning Based
Multi-Modal Fact-Checking | null | null | null | null | cs.MM cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Identifying fake news is a very difficult task, especially when considering
the multiple modes of conveying information through text, image, video and/or
audio. We attempted to tackle the problem of automated
misinformation/disinformation detection in multi-modal news sources (including
text and images) through our simple, yet effective, approach in the FACTIFY
shared task at De-Factify@AAAI2022. Our model produced an F1-weighted score of
74.807%, which was the fourth best out of all the submissions. In this paper we
will explain our approach to undertake the shared task.
| [
{
"created": "Fri, 28 Jan 2022 18:13:03 GMT",
"version": "v1"
}
] | 2022-03-16 | [
[
"Dhankar",
"Abhishek",
""
],
[
"Zaïane",
"Osmar R.",
""
],
[
"Bolduc",
"Francois",
""
]
] | Identifying fake news is a very difficult task, especially when considering the multiple modes of conveying information through text, image, video and/or audio. We attempted to tackle the problem of automated misinformation/disinformation detection in multi-modal news sources (including text and images) through our simple, yet effective, approach in the FACTIFY shared task at De-Factify@AAAI2022. Our model produced an F1-weighted score of 74.807%, which was the fourth best out of all the submissions. In this paper we will explain our approach to undertake the shared task. |
2403.00434 | Zhouxiang Zhao | Zhouxiang Zhao, Zhaohui Yang, Ye Hu, Qianqian Yang, Wei Xu, Zhaoyang
Zhang | Probabilistic Semantic Communication over Wireless Networks with Rate
Splitting | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the problem of joint transmission and computation resource
allocation for probabilistic semantic communication (PSC) system with rate
splitting multiple access (RSMA) is investigated. In the considered model, the
base station (BS) needs to transmit a large amount of data to multiple users
with RSMA. Due to limited communication resources, the BS is required to
utilize semantic communication techniques to compress the large-sized data. The
semantic communication is enabled by shared probability graphs between the BS
and the users. The probability graph can be used to further compress the
transmission data at the BS, while the received compressed semantic information
can be recovered through using the same shared probability graph at each user
side. The semantic information compression process consumes additional
computation power at the BS, which inevitably decreases the transmission power
due to limited total power budget. Considering both the effect of semantic
compression ratio and computation power, the semantic rate expression for RSMA
is first obtained. Then, based on the obtained rate expression, an optimization
problem is formulated with the aim of maximizing the sum of semantic rates of
all users under total power, semantic compression ratio, and rate allocation
constraints. To tackle this problem, an iterative algorithm is proposed, where
the rate allocation and transmit beamforming design subproblem is solved using
a successive convex approximation method, and the semantic compression ratio
subproblem is addressed using a greedy algorithm. Numerical results validate
the effectiveness of the proposed scheme.
| [
{
"created": "Fri, 1 Mar 2024 10:36:04 GMT",
"version": "v1"
}
] | 2024-03-04 | [
[
"Zhao",
"Zhouxiang",
""
],
[
"Yang",
"Zhaohui",
""
],
[
"Hu",
"Ye",
""
],
[
"Yang",
"Qianqian",
""
],
[
"Xu",
"Wei",
""
],
[
"Zhang",
"Zhaoyang",
""
]
] | In this paper, the problem of joint transmission and computation resource allocation for probabilistic semantic communication (PSC) system with rate splitting multiple access (RSMA) is investigated. In the considered model, the base station (BS) needs to transmit a large amount of data to multiple users with RSMA. Due to limited communication resources, the BS is required to utilize semantic communication techniques to compress the large-sized data. The semantic communication is enabled by shared probability graphs between the BS and the users. The probability graph can be used to further compress the transmission data at the BS, while the received compressed semantic information can be recovered through using the same shared probability graph at each user side. The semantic information compression process consumes additional computation power at the BS, which inevitably decreases the transmission power due to limited total power budget. Considering both the effect of semantic compression ratio and computation power, the semantic rate expression for RSMA is first obtained. Then, based on the obtained rate expression, an optimization problem is formulated with the aim of maximizing the sum of semantic rates of all users under total power, semantic compression ratio, and rate allocation constraints. To tackle this problem, an iterative algorithm is proposed, where the rate allocation and transmit beamforming design subproblem is solved using a successive convex approximation method, and the semantic compression ratio subproblem is addressed using a greedy algorithm. Numerical results validate the effectiveness of the proposed scheme. |
2201.04913 | Laurent Mertens | Laurent Mertens, Joost Vennekens | Compressing Word Embeddings Using Syllables | 19 pages 3 figures 11 tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This work examines the possibility of using syllable embeddings, instead of
the often used $n$-gram embeddings, as subword embeddings. We investigate this
for two languages: English and Dutch. To this end, we also translated two
standard English word embedding evaluation datasets, WordSim353 and
SemEval-2017, to Dutch. Furthermore, we provide the research community with
data sets of syllabic decompositions for both languages. We compare our
approach to full word and $n$-gram embeddings. Compared to full word
embeddings, we obtain English models that are 20 to 30 times smaller while
retaining 80% of the performance. For Dutch, models are 15 times smaller for
70% performance retention. Although less accurate than the $n$-gram baseline we
used, our models can be trained in a matter of minutes, as opposed to hours for
the $n$-gram approach. We identify a path toward upgrading performance in
future work. All code is made publicly available, as well as our collected
English and Dutch syllabic decompositions and Dutch evaluation set
translations.
| [
{
"created": "Thu, 13 Jan 2022 12:09:44 GMT",
"version": "v1"
}
] | 2022-01-14 | [
[
"Mertens",
"Laurent",
""
],
[
"Vennekens",
"Joost",
""
]
] | This work examines the possibility of using syllable embeddings, instead of the often used $n$-gram embeddings, as subword embeddings. We investigate this for two languages: English and Dutch. To this end, we also translated two standard English word embedding evaluation datasets, WordSim353 and SemEval-2017, to Dutch. Furthermore, we provide the research community with data sets of syllabic decompositions for both languages. We compare our approach to full word and $n$-gram embeddings. Compared to full word embeddings, we obtain English models that are 20 to 30 times smaller while retaining 80% of the performance. For Dutch, models are 15 times smaller for 70% performance retention. Although less accurate than the $n$-gram baseline we used, our models can be trained in a matter of minutes, as opposed to hours for the $n$-gram approach. We identify a path toward upgrading performance in future work. All code is made publicly available, as well as our collected English and Dutch syllabic decompositions and Dutch evaluation set translations. |
2406.04470 | Haokun Zhou | Haokun Zhou, Yipeng Hong | DiffuSyn Bench: Evaluating Vision-Language Models on Real-World
Complexities with Diffusion-Generated Synthetic Benchmarks | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study assesses the ability of Large Vision-Language Models (LVLMs) to
differentiate between AI-generated and human-generated images. It introduces a
new automated benchmark construction method for this evaluation. The experiment
compared common LVLMs with human participants using a mixed dataset of AI and
human-created images. Results showed that LVLMs could distinguish between the
image types to some extent but exhibited a rightward bias, and perform
significantly worse compared to humans. To build on these findings, we
developed an automated benchmark construction process using AI. This process
involved topic retrieval, narrative script generation, error embedding, and
image generation, creating a diverse set of text-image pairs with intentional
errors. We validated our method through constructing two comparable benchmarks.
This study highlights the strengths and weaknesses of LVLMs in real-world
understanding and advances benchmark construction techniques, providing a
scalable and automatic approach for AI model evaluation.
| [
{
"created": "Thu, 6 Jun 2024 19:50:33 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2024 16:46:22 GMT",
"version": "v2"
}
] | 2024-06-14 | [
[
"Zhou",
"Haokun",
""
],
[
"Hong",
"Yipeng",
""
]
] | This study assesses the ability of Large Vision-Language Models (LVLMs) to differentiate between AI-generated and human-generated images. It introduces a new automated benchmark construction method for this evaluation. The experiment compared common LVLMs with human participants using a mixed dataset of AI and human-created images. Results showed that LVLMs could distinguish between the image types to some extent but exhibited a rightward bias, and perform significantly worse compared to humans. To build on these findings, we developed an automated benchmark construction process using AI. This process involved topic retrieval, narrative script generation, error embedding, and image generation, creating a diverse set of text-image pairs with intentional errors. We validated our method through constructing two comparable benchmarks. This study highlights the strengths and weaknesses of LVLMs in real-world understanding and advances benchmark construction techniques, providing a scalable and automatic approach for AI model evaluation. |
cs/0508053 | Peter Turney | Peter D. Turney (National Research Council of Canada) | Measuring Semantic Similarity by Latent Relational Analysis | 6 pages, related work available at http://purl.org/peter.turney/ | Proceedings of the Nineteenth International Joint Conference on
Artificial Intelligence (IJCAI-05), (2005), Edinburgh, Scotland, 1136-1141 | null | NRC-48255 | cs.LG cs.CL cs.IR | null | This paper introduces Latent Relational Analysis (LRA), a method for
measuring semantic similarity. LRA measures similarity in the semantic
relations between two pairs of words. When two pairs have a high degree of
relational similarity, they are analogous. For example, the pair cat:meow is
analogous to the pair dog:bark. There is evidence from cognitive science that
relational similarity is fundamental to many cognitive and linguistic tasks
(e.g., analogical reasoning). In the Vector Space Model (VSM) approach to
measuring relational similarity, the similarity between two pairs is calculated
by the cosine of the angle between the vectors that represent the two pairs.
The elements in the vectors are based on the frequencies of manually
constructed patterns in a large corpus. LRA extends the VSM approach in three
ways: (1) patterns are derived automatically from the corpus, (2) Singular
Value Decomposition is used to smooth the frequency data, and (3) synonyms are
used to reformulate word pairs. This paper describes the LRA algorithm and
experimentally compares LRA to VSM on two tasks, answering college-level
multiple-choice word analogy questions and classifying semantic relations in
noun-modifier expressions. LRA achieves state-of-the-art results, reaching
human-level performance on the analogy questions and significantly exceeding
VSM performance on both tasks.
| [
{
"created": "Wed, 10 Aug 2005 19:35:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Turney",
"Peter D.",
"",
"National Research Council of Canada"
]
] | This paper introduces Latent Relational Analysis (LRA), a method for measuring semantic similarity. LRA measures similarity in the semantic relations between two pairs of words. When two pairs have a high degree of relational similarity, they are analogous. For example, the pair cat:meow is analogous to the pair dog:bark. There is evidence from cognitive science that relational similarity is fundamental to many cognitive and linguistic tasks (e.g., analogical reasoning). In the Vector Space Model (VSM) approach to measuring relational similarity, the similarity between two pairs is calculated by the cosine of the angle between the vectors that represent the two pairs. The elements in the vectors are based on the frequencies of manually constructed patterns in a large corpus. LRA extends the VSM approach in three ways: (1) patterns are derived automatically from the corpus, (2) Singular Value Decomposition is used to smooth the frequency data, and (3) synonyms are used to reformulate word pairs. This paper describes the LRA algorithm and experimentally compares LRA to VSM on two tasks, answering college-level multiple-choice word analogy questions and classifying semantic relations in noun-modifier expressions. LRA achieves state-of-the-art results, reaching human-level performance on the analogy questions and significantly exceeding VSM performance on both tasks. |
2305.03624 | Yitong Ji | Yitong Ji, Aixin Sun, Jie Zhang | Retraining A Graph-based Recommender with Interests Disentanglement | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a practical recommender system, new interactions are continuously
observed. Some interactions are expected, because they largely follow users'
long-term preferences. Some other interactions are indications of recent trends
in user preference changes or marketing positions of new items. Accordingly,
the recommender needs to be periodically retrained or updated to capture the
new trends, and yet not to forget the long-term preferences. In this paper, we
propose a novel and generic retraining framework called Disentangled
Incremental Learning (DIL) for graph-based recommenders. We assume that
long-term preferences are well captured in the existing model, in the form of
model parameters learned from past interactions. New preferences can be learned
from the user-item bipartite graph constructed using the newly observed
interactions. In DIL, we design an Information Extraction Module to extract
historical preferences from the existing model. Then we blend the historical
and new preferences in the form of node embeddings in the new graph, through a
Disentanglement Module. The essence of the disentanglement module is to
decorrelate the historical and new preferences so that both can be well
captured, via carefully designed losses. Through experiments on three benchmark
datasets, we show the effectiveness of DIL in capturing dynamics of user-item
interactions. We also demonstrate the robustness of DIL by attaching it to two
base models - LightGCN and NGCF.
| [
{
"created": "Fri, 5 May 2023 15:36:33 GMT",
"version": "v1"
}
] | 2023-05-08 | [
[
"Ji",
"Yitong",
""
],
[
"Sun",
"Aixin",
""
],
[
"Zhang",
"Jie",
""
]
] | In a practical recommender system, new interactions are continuously observed. Some interactions are expected, because they largely follow users' long-term preferences. Some other interactions are indications of recent trends in user preference changes or marketing positions of new items. Accordingly, the recommender needs to be periodically retrained or updated to capture the new trends, and yet not to forget the long-term preferences. In this paper, we propose a novel and generic retraining framework called Disentangled Incremental Learning (DIL) for graph-based recommenders. We assume that long-term preferences are well captured in the existing model, in the form of model parameters learned from past interactions. New preferences can be learned from the user-item bipartite graph constructed using the newly observed interactions. In DIL, we design an Information Extraction Module to extract historical preferences from the existing model. Then we blend the historical and new preferences in the form of node embeddings in the new graph, through a Disentanglement Module. The essence of the disentanglement module is to decorrelate the historical and new preferences so that both can be well captured, via carefully designed losses. Through experiments on three benchmark datasets, we show the effectiveness of DIL in capturing dynamics of user-item interactions. We also demonstrate the robustness of DIL by attaching it to two base models - LightGCN and NGCF. |
2304.05483 | Lasse Peters | Lasse Peters, Andrea Bajcsy, Chih-Yuan Chiu, David Fridovich-Keil,
Forrest Laine, Laura Ferranti, Javier Alonso-Mora | Contingency Games for Multi-Agent Interaction | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contingency planning, wherein an agent generates a set of possible plans
conditioned on the outcome of an uncertain event, is an increasingly popular
way for robots to act under uncertainty. In this work we take a game-theoretic
perspective on contingency planning, tailored to multi-agent scenarios in which
a robot's actions impact the decisions of other agents and vice versa. The
resulting contingency game allows the robot to efficiently interact with other
agents by generating strategic motion plans conditioned on multiple possible
intents for other actors in the scene. Contingency games are parameterized via
a scalar variable which represents a future time when intent uncertainty will
be resolved. By estimating this parameter online, we construct a game-theoretic
motion planner that adapts to changing beliefs while anticipating future
certainty. We show that existing variants of game-theoretic planning under
uncertainty are readily obtained as special cases of contingency games. Through
a series of simulated autonomous driving scenarios, we demonstrate that
contingency games close the gap between certainty-equivalent games that commit
to a single hypothesis and non-contingent multi-hypothesis games that do not
account for future uncertainty reduction.
| [
{
"created": "Tue, 11 Apr 2023 20:30:17 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Apr 2023 10:01:38 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Aug 2023 13:58:36 GMT",
"version": "v3"
},
{
"created": "Thu, 21 Dec 2023 18:52:18 GMT",
"version": "v4"
}
] | 2023-12-22 | [
[
"Peters",
"Lasse",
""
],
[
"Bajcsy",
"Andrea",
""
],
[
"Chiu",
"Chih-Yuan",
""
],
[
"Fridovich-Keil",
"David",
""
],
[
"Laine",
"Forrest",
""
],
[
"Ferranti",
"Laura",
""
],
[
"Alonso-Mora",
"Javier",
""
]
] | Contingency planning, wherein an agent generates a set of possible plans conditioned on the outcome of an uncertain event, is an increasingly popular way for robots to act under uncertainty. In this work we take a game-theoretic perspective on contingency planning, tailored to multi-agent scenarios in which a robot's actions impact the decisions of other agents and vice versa. The resulting contingency game allows the robot to efficiently interact with other agents by generating strategic motion plans conditioned on multiple possible intents for other actors in the scene. Contingency games are parameterized via a scalar variable which represents a future time when intent uncertainty will be resolved. By estimating this parameter online, we construct a game-theoretic motion planner that adapts to changing beliefs while anticipating future certainty. We show that existing variants of game-theoretic planning under uncertainty are readily obtained as special cases of contingency games. Through a series of simulated autonomous driving scenarios, we demonstrate that contingency games close the gap between certainty-equivalent games that commit to a single hypothesis and non-contingent multi-hypothesis games that do not account for future uncertainty reduction. |
2305.16750 | Anna Wr\'oblewska | Anna Wr\'oblewska, Bartosz Pieli\'nski, Karolina Seweryn, Sylwia
Sysko-Roma\'nczuk, Karol Saputa, Aleksandra Wichrowska, Hanna Schreiber | Automating the Analysis of Institutional Design in International
Agreements | 11 pages, 8 figures, accepted to ICCS 2023. arXiv admin note:
substantial text overlap with arXiv:2209.00944 | null | 10.1007/978-3-031-36024-4_5 | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the automatic knowledge extraction of formal
institutional design - norms, rules, and actors - from international
agreements. The focus was to analyze the relationship between the visibility
and centrality of actors in the formal institutional design in regulating
critical aspects of cultural heritage relations. The developed tool utilizes
techniques such as collecting legal documents, annotating them with
Institutional Grammar, and using graph analysis to explore the formal
institutional design. The system was tested against the 2003 UNESCO Convention
for the Safeguarding of the Intangible Cultural Heritage.
| [
{
"created": "Fri, 26 May 2023 08:57:11 GMT",
"version": "v1"
}
] | 2023-09-08 | [
[
"Wróblewska",
"Anna",
""
],
[
"Pieliński",
"Bartosz",
""
],
[
"Seweryn",
"Karolina",
""
],
[
"Sysko-Romańczuk",
"Sylwia",
""
],
[
"Saputa",
"Karol",
""
],
[
"Wichrowska",
"Aleksandra",
""
],
[
"Schreiber",
"Hanna",
""
]
] | This paper explores the automatic knowledge extraction of formal institutional design - norms, rules, and actors - from international agreements. The focus was to analyze the relationship between the visibility and centrality of actors in the formal institutional design in regulating critical aspects of cultural heritage relations. The developed tool utilizes techniques such as collecting legal documents, annotating them with Institutional Grammar, and using graph analysis to explore the formal institutional design. The system was tested against the 2003 UNESCO Convention for the Safeguarding of the Intangible Cultural Heritage. |
0709.0259 | Chien-Hwa Hwang | Chien-Hwa Hwang and Shih-Chang Chen | Spectrum Sensing in Wideband OFDM Cognitive Radios | 30 pages, 7 figures, submitted to IEEE Transactions on Signal
Processing, Aug. 2007 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, detection of the primary user (PU) signal in an orthogonal
frequency division multiplexing (OFDM) based cognitive radio (CR) system is
addressed. According to the prior knowledge of the PU signal known to the
detector, three detection algorithms based on the Neyman-Pearson philosophy are
proposed. In the first case, a Gaussian PU signal with completely known
probability density function (PDF) except for its received power is considered.
The frequency band that the PU signal resides is also assumed known. Detection
is performed individually at each OFDM sub-carrier possibly interfered by the
PU signal, and the results are then combined to form a final decision. In the
second case, the sub-carriers that the PU signal resides are known.
Observations from all possibly interfered sub-carriers are considered jointly
to exploit the fact that the presence of a PU signal interferes with all of them
simultaneously. In the last case, it is assumed no PU signal prior knowledge is
available. The detection is involved with a search of the interfered band. The
proposed detector is able to detect an abrupt power change when tracing along
the frequency axis.
| [
{
"created": "Mon, 3 Sep 2007 15:16:20 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Oct 2008 05:04:22 GMT",
"version": "v2"
}
] | 2008-10-06 | [
[
"Hwang",
"Chien-Hwa",
""
],
[
"Chen",
"Shih-Chang",
""
]
] | In this paper, detection of the primary user (PU) signal in an orthogonal frequency division multiplexing (OFDM) based cognitive radio (CR) system is addressed. According to the prior knowledge of the PU signal known to the detector, three detection algorithms based on the Neyman-Pearson philosophy are proposed. In the first case, a Gaussian PU signal with completely known probability density function (PDF) except for its received power is considered. The frequency band that the PU signal resides is also assumed known. Detection is performed individually at each OFDM sub-carrier possibly interfered by the PU signal, and the results are then combined to form a final decision. In the second case, the sub-carriers that the PU signal resides are known. Observations from all possibly interfered sub-carriers are considered jointly to exploit the fact that the presence of a PU signal interferes with all of them simultaneously. In the last case, it is assumed no PU signal prior knowledge is available. The detection is involved with a search of the interfered band. The proposed detector is able to detect an abrupt power change when tracing along the frequency axis. |
2310.11344 | Murilo Varges da Silva | Lucca de Freitas Santos, Murilo Varges da Silva | The effect of stemming and lemmatization on Portuguese fake news text
classification | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the popularization of the internet, smartphones and social media,
information is being spread quickly and easily, which implies bigger
traffic of information in the world, but there is a problem that is harming
society with the dissemination of fake news. With a bigger flow of information,
some people are trying to disseminate deceptive information and fake news. The
automatic detection of fake news is a challenging task because to obtain a good
result is necessary to deal with linguistics problems, especially when we are
dealing with languages that have not been comprehensively studied yet, besides
that, some techniques can help to reach a good result when we are dealing with
text data, although, the motivation of detecting this deceptive information it
is in the fact that the people need to know which information is true and
trustful and which one is not. In this work, we present the effect the
pre-processing methods such as lemmatization and stemming have on fake news
classification, for that we designed some classifier models applying different
pre-processing techniques. The results show that the pre-processing step is
important to obtain betters results, the stemming and lemmatization techniques
are interesting methods and need to be more studied to develop techniques
focused on the Portuguese language so we can reach better results.
| [
{
"created": "Tue, 17 Oct 2023 15:26:40 GMT",
"version": "v1"
}
] | 2023-10-18 | [
[
"Santos",
"Lucca de Freitas",
""
],
[
"da Silva",
"Murilo Varges",
""
]
] | With the popularization of the internet, smartphones and social media, information is being spread quickly and easily, which implies bigger traffic of information in the world, but there is a problem that is harming society with the dissemination of fake news. With a bigger flow of information, some people are trying to disseminate deceptive information and fake news. The automatic detection of fake news is a challenging task because to obtain a good result is necessary to deal with linguistics problems, especially when we are dealing with languages that have not been comprehensively studied yet, besides that, some techniques can help to reach a good result when we are dealing with text data, although, the motivation of detecting this deceptive information it is in the fact that the people need to know which information is true and trustful and which one is not. In this work, we present the effect the pre-processing methods such as lemmatization and stemming have on fake news classification, for that we designed some classifier models applying different pre-processing techniques. The results show that the pre-processing step is important to obtain better results, the stemming and lemmatization techniques are interesting methods and need to be more studied to develop techniques focused on the Portuguese language so we can reach better results. |
2308.08873 | Mahyar Jahaninasab | Mahyar Jahaninasab, Mohamad Ali Bijarchi | Enhancing Convergence Speed with Feature-Enforcing Physics-Informed
Neural Networks: Utilizing Boundary Conditions as Prior Knowledge for Faster
Convergence | 26 pages, 10 figures, 6 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study introduces an accelerated training method for Vanilla
Physics-Informed-Neural-Networks (PINN) addressing three factors that imbalance
the loss function: initial weight state of a neural network, domain to boundary
points ratio, and loss weighting factor. We propose a novel two-stage training
method. During the initial stage, we create a unique loss function using a
subset of boundary conditions and partial differential equation terms.
Furthermore, we introduce preprocessing procedures that aim to decrease the
variance during initialization and choose domain points according to the
initial weight state of various neural networks. The second phase resembles
Vanilla-PINN training, but a portion of the random weights are substituted with
weights from the first phase. This implies that the neural network's structure
is designed to prioritize the boundary conditions, subsequently affecting the
overall convergence. Three benchmarks are utilized: two-dimensional flow over a
cylinder, an inverse problem of inlet velocity determination, and the Burger
equation. It is found that incorporating weights generated in the first
training phase into the structure of a neural network neutralizes the effects
of imbalance factors. For instance, in the first benchmark, as a result of our
process, the second phase of training is balanced across a wide range of ratios
and is not affected by the initial state of weights, while the Vanilla-PINN
failed to converge in most cases. Lastly, the initial training process not only
eliminates the need for hyperparameter tuning to balance the loss function, but
it also outperforms the Vanilla-PINN in terms of speed.
| [
{
"created": "Thu, 17 Aug 2023 09:10:07 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Sep 2023 17:13:59 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Sep 2023 07:43:29 GMT",
"version": "v3"
},
{
"created": "Sat, 6 Apr 2024 12:30:26 GMT",
"version": "v4"
}
] | 2024-04-09 | [
[
"Jahaninasab",
"Mahyar",
""
],
[
"Bijarchi",
"Mohamad Ali",
""
]
] | This study introduces an accelerated training method for Vanilla Physics-Informed-Neural-Networks (PINN) addressing three factors that imbalance the loss function: initial weight state of a neural network, domain to boundary points ratio, and loss weighting factor. We propose a novel two-stage training method. During the initial stage, we create a unique loss function using a subset of boundary conditions and partial differential equation terms. Furthermore, we introduce preprocessing procedures that aim to decrease the variance during initialization and choose domain points according to the initial weight state of various neural networks. The second phase resembles Vanilla-PINN training, but a portion of the random weights are substituted with weights from the first phase. This implies that the neural network's structure is designed to prioritize the boundary conditions, subsequently affecting the overall convergence. Three benchmarks are utilized: two-dimensional flow over a cylinder, an inverse problem of inlet velocity determination, and the Burger equation. It is found that incorporating weights generated in the first training phase into the structure of a neural network neutralizes the effects of imbalance factors. For instance, in the first benchmark, as a result of our process, the second phase of training is balanced across a wide range of ratios and is not affected by the initial state of weights, while the Vanilla-PINN failed to converge in most cases. Lastly, the initial training process not only eliminates the need for hyperparameter tuning to balance the loss function, but it also outperforms the Vanilla-PINN in terms of speed. |
2209.05603 | Zhenishbek Zhakypov | Zhenishbek Zhakypov, Yimeng Qin, and Allison Okamura | Hoxels: Fully 3-D Printed Soft Multi-Modal & Multi-Contact Haptic Voxel
Displays for Enriched Tactile Information Transfer | The extended abstract paper was presented in the LEVERAGING
ADVANCEMENTS IN SMART MATERIALS SCIENCE: SOFT ROBOTS GAINING NEW ABILITIES
THROUGH SMART AND FUNCTIONAL MATERIALS workshop at the 2022 IEEE
International Conference on Robotics and Automation | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Wrist-worn haptic interfaces can deliver a wide range of tactile cues for
communication of information and interaction with virtual objects. Unlike
fingertips, the wrist and forearm provide a considerably large area of skin
that allows the placement of multiple haptic actuators as a display for
enriching tactile information transfer with minimal encumbrance. Existing
multi-degree-of-freedom (DoF) wrist-worn devices employ traditional rigid
robotic mechanisms and electric motors that limit their versatility,
miniaturization, distribution, and assembly. Alternative solutions based on
soft elastomeric actuator arrays constitute only 1-DoF haptic pixels.
Higher-DoF prototypes produce a single interaction point and require complex
manual assembly processes, such as molding and gluing several parts. These
approaches limit the construction of high-DoF compact haptic displays,
repeatability, and customizability. Here we present a novel, fully 3D-printed,
soft, wearable haptic display for increasing tactile information transfer on
the wrist and forearm with 3-DoF haptic voxels, called hoxels. Our initial
prototype comprises two hoxels that provide skin shear, pressure, twist,
stretch, squeeze, and other arbitrary stimuli. Each hoxel generates force up to
1.6 N in the x and y-axes and up to 20 N in the z-axis. Our method enables the
rapid fabrication of versatile and forceful haptic displays.
| [
{
"created": "Mon, 12 Sep 2022 20:38:03 GMT",
"version": "v1"
}
] | 2022-09-14 | [
[
"Zhakypov",
"Zhenishbek",
""
],
[
"Qin",
"Yimeng",
""
],
[
"Okamura",
"Allison",
""
]
] | Wrist-worn haptic interfaces can deliver a wide range of tactile cues for communication of information and interaction with virtual objects. Unlike fingertips, the wrist and forearm provide a considerably large area of skin that allows the placement of multiple haptic actuators as a display for enriching tactile information transfer with minimal encumbrance. Existing multi-degree-of-freedom (DoF) wrist-worn devices employ traditional rigid robotic mechanisms and electric motors that limit their versatility, miniaturization, distribution, and assembly. Alternative solutions based on soft elastomeric actuator arrays constitute only 1-DoF haptic pixels. Higher-DoF prototypes produce a single interaction point and require complex manual assembly processes, such as molding and gluing several parts. These approaches limit the construction of high-DoF compact haptic displays, repeatability, and customizability. Here we present a novel, fully 3D-printed, soft, wearable haptic display for increasing tactile information transfer on the wrist and forearm with 3-DoF haptic voxels, called hoxels. Our initial prototype comprises two hoxels that provide skin shear, pressure, twist, stretch, squeeze, and other arbitrary stimuli. Each hoxel generates force up to 1.6 N in the x and y-axes and up to 20 N in the z-axis. Our method enables the rapid fabrication of versatile and forceful haptic displays. |
2305.00646 | Ziwei Yu | Ziwei Yu, Chen Li, Linlin Yang, Xiaoxu Zheng, Michael Bi Mi, Gim Hee
Lee, Angela Yao | Overcoming the Trade-off Between Accuracy and Plausibility in 3D Hand
Shape Reconstruction | CVPR 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Direct mesh fitting for 3D hand shape reconstruction is highly accurate.
However, the reconstructed meshes are prone to artifacts and do not appear as
plausible hand shapes. Conversely, parametric models like MANO ensure plausible
hand shapes but are not as accurate as the non-parametric methods. In this
work, we introduce a novel weakly-supervised hand shape estimation framework
that integrates non-parametric mesh fitting with the MANO model in an end-to-end
fashion. Our joint model overcomes the tradeoff in accuracy and plausibility to
yield well-aligned and high-quality 3D meshes, especially in challenging
two-hand and hand-object interaction scenarios.
| [
{
"created": "Mon, 1 May 2023 03:38:01 GMT",
"version": "v1"
}
] | 2023-05-02 | [
[
"Yu",
"Ziwei",
""
],
[
"Li",
"Chen",
""
],
[
"Yang",
"Linlin",
""
],
[
"Zheng",
"Xiaoxu",
""
],
[
"Mi",
"Michael Bi",
""
],
[
"Lee",
"Gim Hee",
""
],
[
"Yao",
"Angela",
""
]
] | Direct mesh fitting for 3D hand shape reconstruction is highly accurate. However, the reconstructed meshes are prone to artifacts and do not appear as plausible hand shapes. Conversely, parametric models like MANO ensure plausible hand shapes but are not as accurate as the non-parametric methods. In this work, we introduce a novel weakly-supervised hand shape estimation framework that integrates non-parametric mesh fitting with the MANO model in an end-to-end fashion. Our joint model overcomes the tradeoff in accuracy and plausibility to yield well-aligned and high-quality 3D meshes, especially in challenging two-hand and hand-object interaction scenarios. |
1807.01745 | Tianze Shi | Carlos G\'omez-Rodr\'iguez and Tianze Shi and Lillian Lee | Global Transition-based Non-projective Dependency Parsing | Proceedings of ACL 2018. 13 pages | Proceedings of ACL 2018 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shi, Huang, and Lee (2017) obtained state-of-the-art results for English and
Chinese dependency parsing by combining dynamic-programming implementations of
transition-based dependency parsers with a minimal set of bidirectional LSTM
features. However, their results were limited to projective parsing. In this
paper, we extend their approach to support non-projectivity by providing the
first practical implementation of the MH_4 algorithm, an $O(n^4)$ mildly
nonprojective dynamic-programming parser with very high coverage on
non-projective treebanks. To make MH_4 compatible with minimal transition-based
feature sets, we introduce a transition-based interpretation of it in which
parser items are mapped to sequences of transitions. We thus obtain the first
implementation of global decoding for non-projective transition-based parsing,
and demonstrate empirically that it is more effective than its projective
counterpart in parsing a number of highly non-projective languages.
| [
{
"created": "Wed, 4 Jul 2018 19:09:40 GMT",
"version": "v1"
}
] | 2018-07-06 | [
[
"Gómez-Rodríguez",
"Carlos",
""
],
[
"Shi",
"Tianze",
""
],
[
"Lee",
"Lillian",
""
]
] | Shi, Huang, and Lee (2017) obtained state-of-the-art results for English and Chinese dependency parsing by combining dynamic-programming implementations of transition-based dependency parsers with a minimal set of bidirectional LSTM features. However, their results were limited to projective parsing. In this paper, we extend their approach to support non-projectivity by providing the first practical implementation of the MH_4 algorithm, an $O(n^4)$ mildly nonprojective dynamic-programming parser with very high coverage on non-projective treebanks. To make MH_4 compatible with minimal transition-based feature sets, we introduce a transition-based interpretation of it in which parser items are mapped to sequences of transitions. We thus obtain the first implementation of global decoding for non-projective transition-based parsing, and demonstrate empirically that it is more effective than its projective counterpart in parsing a number of highly non-projective languages. |
2305.09974 | Kai Wang | Kai Wang and Siqiang Luo and Dan Lin | River of No Return: Graph Percolation Embeddings for Efficient Knowledge
Graph Reasoning | 9 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study Graph Neural Networks (GNNs)-based embedding techniques for
knowledge graph (KG) reasoning. For the first time, we link the path redundancy
issue in the state-of-the-art KG reasoning models based on path encoding and
message passing to the transformation error in model training, which brings us
new theoretical insights into KG reasoning, as well as high efficacy in
practice. On the theoretical side, we analyze the entropy of transformation
error in KG paths and point out query-specific redundant paths causing entropy
increases. These findings guide us to maintain the shortest paths and remove
redundant paths for minimized-entropy message passing. To achieve this goal, on
the practical side, we propose an efficient Graph Percolation Process motivated
by the percolation model in Fluid Mechanics, and design a lightweight GNN-based
KG reasoning framework called Graph Percolation Embeddings (GraPE). GraPE
outperforms previous state-of-the-art methods in both transductive and
inductive reasoning tasks while requiring fewer training parameters and less
inference time.
| [
{
"created": "Wed, 17 May 2023 06:13:28 GMT",
"version": "v1"
}
] | 2023-05-18 | [
[
"Wang",
"Kai",
""
],
[
"Luo",
"Siqiang",
""
],
[
"Lin",
"Dan",
""
]
] | We study Graph Neural Networks (GNNs)-based embedding techniques for knowledge graph (KG) reasoning. For the first time, we link the path redundancy issue in the state-of-the-art KG reasoning models based on path encoding and message passing to the transformation error in model training, which brings us new theoretical insights into KG reasoning, as well as high efficacy in practice. On the theoretical side, we analyze the entropy of transformation error in KG paths and point out query-specific redundant paths causing entropy increases. These findings guide us to maintain the shortest paths and remove redundant paths for minimized-entropy message passing. To achieve this goal, on the practical side, we propose an efficient Graph Percolation Process motivated by the percolation model in Fluid Mechanics, and design a lightweight GNN-based KG reasoning framework called Graph Percolation Embeddings (GraPE). GraPE outperforms previous state-of-the-art methods in both transductive and inductive reasoning tasks while requiring fewer training parameters and less inference time. |
2205.03525 | Yuhan Xie | Yuhan Xie, Kexin Jiang, Zhiyong Zhang, Shaolong Chen, Xiaodong Zhang
and Changzhen Qiu | Automatic segmentation of meniscus based on MAE self-supervision and
point-line weak supervision paradigm | 8 pages,10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image segmentation based on deep learning is often faced with the
problems of insufficient datasets and long, time-consuming labeling. In this
paper, we introduce the self-supervised method MAE (Masked Autoencoders) into
knee joint images to provide a good initial weight for the segmentation model
and improve the adaptability of the model to small datasets. Secondly, we
propose a weakly supervised paradigm for meniscus segmentation based on the
combination of point and line to reduce the time of labeling. Based on the weak
label, we design a region growing algorithm to generate pseudo-labels. Finally,
we train the segmentation network based on pseudo-labels with weight transfer
from self-supervision. Sufficient experimental results show that our proposed
method combining self-supervision and weak supervision can almost approach the
performance of purely fully supervised models while greatly reducing the
required labeling time and dataset size.
| [
{
"created": "Sat, 7 May 2022 02:57:50 GMT",
"version": "v1"
}
] | 2022-05-10 | [
[
"Xie",
"Yuhan",
""
],
[
"Jiang",
"Kexin",
""
],
[
"Zhang",
"Zhiyong",
""
],
[
"Chen",
"Shaolong",
""
],
[
"Zhang",
"Xiaodong",
""
],
[
"Qiu",
"Changzhen",
""
]
] | Medical image segmentation based on deep learning is often faced with the problems of insufficient datasets and long, time-consuming labeling. In this paper, we introduce the self-supervised method MAE (Masked Autoencoders) into knee joint images to provide a good initial weight for the segmentation model and improve the adaptability of the model to small datasets. Secondly, we propose a weakly supervised paradigm for meniscus segmentation based on the combination of point and line to reduce the time of labeling. Based on the weak label, we design a region growing algorithm to generate pseudo-labels. Finally, we train the segmentation network based on pseudo-labels with weight transfer from self-supervision. Sufficient experimental results show that our proposed method combining self-supervision and weak supervision can almost approach the performance of purely fully supervised models while greatly reducing the required labeling time and dataset size. |
cs/0205067 | Ted Pedersen | Ted Pedersen | Evaluating the Effectiveness of Ensembles of Decision Trees in
Disambiguating Senseval Lexical Samples | Appears in the Proceedings of the ACL-02 Workshop on Word Sense
Disambiguation: Recent Successes and Future Directions, July 11, 2002,
Philadelphia, PA | null | null | null | cs.CL | null | This paper presents an evaluation of an ensemble-based system that
participated in the English and Spanish lexical sample tasks of Senseval-2. The
system combines decision trees of unigrams, bigrams, and co-occurrences into a
single classifier. The analysis is extended to include the Senseval-1 data.
| [
{
"created": "Mon, 27 May 2002 18:42:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Pedersen",
"Ted",
""
]
] | This paper presents an evaluation of an ensemble-based system that participated in the English and Spanish lexical sample tasks of Senseval-2. The system combines decision trees of unigrams, bigrams, and co-occurrences into a single classifier. The analysis is extended to include the Senseval-1 data. |
1810.01223 | Max A. Deppert | Max A. Deppert, Klaus Jansen | Near-Linear Approximation Algorithms for Scheduling Problems with Batch
Setup Times | null | null | 10.1145/3323165.3323200 | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the scheduling of $n$ jobs divided into $c$ classes on $m$
identical parallel machines. For every class there is a setup time which is
required whenever a machine switches from the processing of one class to
another class. The objective is to find a schedule that minimizes the makespan.
We give near-linear approximation algorithms for the following problem
variants: the non-preemptive context where jobs may not be preempted, the
preemptive context where jobs may be preempted but not parallelized, as well as
the splittable context where jobs may be preempted and parallelized.
We present the first algorithm improving the previously best approximation
ratio of $2$ to a better ratio of $3/2$ in the preemptive case. In more detail,
for all three flavors we present an approximation ratio $2$ with running time
$\mathcal{O}(n)$, ratio $3/2+\varepsilon$ in time $\mathcal{O}(n\log
1/\varepsilon)$ as well as a ratio of $3/2$. The $(3/2)$-approximate algorithms
have different running times. In the non-preemptive case we get time
$\mathcal{O}(n\log (n+\Delta))$ where $\Delta$ is the largest value of the
input. The splittable approximation runs in time $\mathcal{O}(n+c\log(c+m))$
whereas the preemptive algorithm has a running time $\mathcal{O}(n \log (c+m))
\leq \mathcal{O}(n \log n)$. So far, no PTAS is known for the preemptive
problem without restrictions, so we make progress towards that question.
Recently Jansen et al. found an EPTAS for the splittable and non-preemptive
case but with impractical running times exponential in $1/\varepsilon$.
| [
{
"created": "Tue, 2 Oct 2018 13:14:46 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2019 17:01:08 GMT",
"version": "v2"
}
] | 2019-05-02 | [
[
"Deppert",
"Max A.",
""
],
[
"Jansen",
"Klaus",
""
]
] | We investigate the scheduling of $n$ jobs divided into $c$ classes on $m$ identical parallel machines. For every class there is a setup time which is required whenever a machine switches from the processing of one class to another class. The objective is to find a schedule that minimizes the makespan. We give near-linear approximation algorithms for the following problem variants: the non-preemptive context where jobs may not be preempted, the preemptive context where jobs may be preempted but not parallelized, as well as the splittable context where jobs may be preempted and parallelized. We present the first algorithm improving the previously best approximation ratio of $2$ to a better ratio of $3/2$ in the preemptive case. In more detail, for all three flavors we present an approximation ratio $2$ with running time $\mathcal{O}(n)$, ratio $3/2+\varepsilon$ in time $\mathcal{O}(n\log 1/\varepsilon)$ as well as a ratio of $3/2$. The $(3/2)$-approximate algorithms have different running times. In the non-preemptive case we get time $\mathcal{O}(n\log (n+\Delta))$ where $\Delta$ is the largest value of the input. The splittable approximation runs in time $\mathcal{O}(n+c\log(c+m))$ whereas the preemptive algorithm has a running time $\mathcal{O}(n \log (c+m)) \leq \mathcal{O}(n \log n)$. So far, no PTAS is known for the preemptive problem without restrictions, so we make progress towards that question. Recently Jansen et al. found an EPTAS for the splittable and non-preemptive case but with impractical running times exponential in $1/\varepsilon$. |
1906.09384 | Mayank Agarwal | Sohini Upadhyay, Mayank Agarwal, Djallel Bounneffouf, Yasaman Khazaeni | A Bandit Approach to Posterior Dialog Orchestration Under a Budget | 2nd Conversational AI Workshop, NeurIPS 2018 | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building multi-domain AI agents is a challenging task and an open problem in
the area of AI. Within the domain of dialog, the ability to orchestrate
multiple independently trained dialog agents, or skills, to create a unified
system is of particular significance. In this work, we study the task of online
posterior dialog orchestration, where we define posterior orchestration as the
task of selecting a subset of skills which most appropriately answer a user
input using features extracted from both the user input and the individual
skills. To account for the various costs associated with extracting skill
features, we consider online posterior orchestration under a skill execution
budget. We formalize this setting as Context Attentive Bandit with Observations
(CABO), a variant of context attentive bandits, and evaluate it on simulated
non-conversational and proprietary conversational datasets.
| [
{
"created": "Sat, 22 Jun 2019 04:02:26 GMT",
"version": "v1"
}
] | 2019-06-25 | [
[
"Upadhyay",
"Sohini",
""
],
[
"Agarwal",
"Mayank",
""
],
[
"Bounneffouf",
"Djallel",
""
],
[
"Khazaeni",
"Yasaman",
""
]
] | Building multi-domain AI agents is a challenging task and an open problem in the area of AI. Within the domain of dialog, the ability to orchestrate multiple independently trained dialog agents, or skills, to create a unified system is of particular significance. In this work, we study the task of online posterior dialog orchestration, where we define posterior orchestration as the task of selecting a subset of skills which most appropriately answer a user input using features extracted from both the user input and the individual skills. To account for the various costs associated with extracting skill features, we consider online posterior orchestration under a skill execution budget. We formalize this setting as Context Attentive Bandit with Observations (CABO), a variant of context attentive bandits, and evaluate it on simulated non-conversational and proprietary conversational datasets. |
2303.17328 | Vanessa Wirth | Vanessa Wirth, Vanessa Wirth | Author-Unification: Name-, Institution-, and Career-Sharing Co-authors | SIGBOVIK 2023 | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we investigate the phenomenon of Author-UnificAtion (AUA),
which describes the high structural similarity of two co-authoring engineers
that share the same forename, surname, institution, and academic career without
being related by blood. So far, prior work has only explored similar surnames
and institutions. On top of that, we elaborate on the additional author
similarity of sharing the same academic career as a Ph.D. candidate with the
same starting day and month included in the university contract. We show that
our work outperforms previous state-of-the-art investigations, among others by
providing a higher Structural Similarity Index Measure (SSIM) of the letters in
our names and in our institution. Lastly, we prove the duality of our
identities through a qualitative evaluation.
| [
{
"created": "Tue, 28 Mar 2023 11:55:45 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Apr 2023 09:08:34 GMT",
"version": "v2"
}
] | 2023-04-04 | [
[
"Wirth",
"Vanessa",
""
],
[
"Wirth",
"Vanessa",
""
]
] | In this work, we investigate the phenomenon of Author-UnificAtion (AUA), which describes the high structural similarity of two co-authoring engineers that share the same forename, surname, institution, and academic career without being related by blood. So far, prior work has only explored similar surnames and institutions. On top of that, we elaborate on the additional author similarity of sharing the same academic career as a Ph.D. candidate with the same starting day and month included in the university contract. We show that our work outperforms previous state-of-the-art investigations, among others by providing a higher Structural Similarity Index Measure (SSIM) of the letters in our names and in our institution. Lastly, we prove the duality of our identities through a qualitative evaluation. |
2310.14065 | Durgakant Pushp | Durgakant Pushp, Zheng Chen, Chaomin Luo, Jason M. Gregory and Lantao
Liu | POVNav: A Pareto-Optimal Mapless Visual Navigator | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Mapless navigation has emerged as a promising approach for enabling
autonomous robots to navigate in environments where pre-existing maps may be
inaccurate, outdated, or unavailable. In this work, we propose an image-based
local representation of the environment immediately around a robot to parse
navigability. We further develop a local planning and control framework, a
Pareto-optimal mapless visual navigator (POVNav), to use this representation
and enable autonomous navigation in various challenging and real-world
environments. In POVNav, we choose a Pareto-optimal sub-goal in the image by
evaluating all the navigable pixels, finding a safe visual path, and generating
actions to follow the path using visual servo control. In addition to providing
collision-free motion, our approach enables selective navigation behavior, such
as restricting navigation to select terrain types, by only changing the
navigability definition in the local representation. The ability of POVNav to
navigate a robot to the goal using only a monocular camera without relying on a
map makes it computationally light and easy to implement on various robotic
platforms. Real-world experiments in diverse challenging environments, ranging
from structured indoor environments to unstructured outdoor environments such
as forest trails and roads after a heavy snowfall, using various image
segmentation techniques demonstrate the remarkable efficacy of our proposed
framework.
| [
{
"created": "Sat, 21 Oct 2023 16:56:02 GMT",
"version": "v1"
}
] | 2023-10-24 | [
[
"Pushp",
"Durgakant",
""
],
[
"Chen",
"Zheng",
""
],
[
"Luo",
"Chaomin",
""
],
[
"Gregory",
"Jason M.",
""
],
[
"Liu",
"Lantao",
""
]
] | Mapless navigation has emerged as a promising approach for enabling autonomous robots to navigate in environments where pre-existing maps may be inaccurate, outdated, or unavailable. In this work, we propose an image-based local representation of the environment immediately around a robot to parse navigability. We further develop a local planning and control framework, a Pareto-optimal mapless visual navigator (POVNav), to use this representation and enable autonomous navigation in various challenging and real-world environments. In POVNav, we choose a Pareto-optimal sub-goal in the image by evaluating all the navigable pixels, finding a safe visual path, and generating actions to follow the path using visual servo control. In addition to providing collision-free motion, our approach enables selective navigation behavior, such as restricting navigation to select terrain types, by only changing the navigability definition in the local representation. The ability of POVNav to navigate a robot to the goal using only a monocular camera without relying on a map makes it computationally light and easy to implement on various robotic platforms. Real-world experiments in diverse challenging environments, ranging from structured indoor environments to unstructured outdoor environments such as forest trails and roads after a heavy snowfall, using various image segmentation techniques demonstrate the remarkable efficacy of our proposed framework. |
1612.05474 | Michael Holzhauser | Michael Holzhauser and Sven O. Krumke | A Generalized Approximation Framework for Fractional Network Flow and
Packing Problems | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We generalize the fractional packing framework of Garg and Koenemann to the
case of linear fractional packing problems over polyhedral cones. More
precisely, we provide approximation algorithms for problems of the form
$\max\{c^T x : Ax \leq b, x \in C \}$, where the matrix $A$ contains no
negative entries and $C$ is a cone that is generated by a finite set $S$ of
non-negative vectors. While the cone is allowed to require an exponential-sized
representation, we assume that we can access it via one of three types of
oracles. For each of these oracles, we present positive results for the
approximability of the packing problem. In contrast to other frameworks, the
presented one allows the use of arbitrary linear objective functions and can be
applied to a large class of packing problems without much effort. In
particular, our framework instantly allows us to derive fast and simple fully
polynomial-time approximation algorithms (FPTASs) for a large set of network
flow problems, such as budget-constrained versions of traditional network
flows, multicommodity flows, or generalized flows. Some of these FPTASs
represent the first ones of their kind, while others match existing results but
offer a much simpler proof.
| [
{
"created": "Fri, 16 Dec 2016 14:12:07 GMT",
"version": "v1"
}
] | 2016-12-19 | [
[
"Holzhauser",
"Michael",
""
],
[
"Krumke",
"Sven O.",
""
]
] | We generalize the fractional packing framework of Garg and Koenemann to the case of linear fractional packing problems over polyhedral cones. More precisely, we provide approximation algorithms for problems of the form $\max\{c^T x : Ax \leq b, x \in C \}$, where the matrix $A$ contains no negative entries and $C$ is a cone that is generated by a finite set $S$ of non-negative vectors. While the cone is allowed to require an exponential-sized representation, we assume that we can access it via one of three types of oracles. For each of these oracles, we present positive results for the approximability of the packing problem. In contrast to other frameworks, the presented one allows the use of arbitrary linear objective functions and can be applied to a large class of packing problems without much effort. In particular, our framework instantly allows us to derive fast and simple fully polynomial-time approximation algorithms (FPTASs) for a large set of network flow problems, such as budget-constrained versions of traditional network flows, multicommodity flows, or generalized flows. Some of these FPTASs represent the first ones of their kind, while others match existing results but offer a much simpler proof. |
1904.03597 | Jiangliu Wang | Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Yunhui Liu and
Wei Liu | Self-supervised Spatio-temporal Representation Learning for Videos by
Predicting Motion and Appearance Statistics | CVPR 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of video representation learning without
human-annotated labels. While previous efforts address the problem by designing
novel self-supervised tasks using video data, the learned features are merely
on a frame-by-frame basis, which are not applicable to many video analytic
tasks where spatio-temporal features are prevailing. In this paper we propose a
novel self-supervised approach to learn spatio-temporal features for video
representation. Inspired by the success of two-stream approaches in video
classification, we propose to learn visual features by regressing both motion
and appearance statistics along spatial and temporal dimensions, given only the
input video data. Specifically, we extract statistical concepts (fast-motion
region and the corresponding dominant direction, spatio-temporal color
diversity, dominant color, etc.) from simple patterns in both spatial and
temporal domains. Unlike prior puzzles that are hard even for humans to solve,
the proposed approach is consistent with human inherent visual habits and
therefore easy to answer. We conduct extensive experiments with C3D to validate
the effectiveness of our proposed approach. The experiments show that our
approach can significantly improve the performance of C3D when applied to video
classification tasks. Code is available at
https://github.com/laura-wang/video_repres_mas.
| [
{
"created": "Sun, 7 Apr 2019 07:27:37 GMT",
"version": "v1"
}
] | 2019-04-09 | [
[
"Wang",
"Jiangliu",
""
],
[
"Jiao",
"Jianbo",
""
],
[
"Bao",
"Linchao",
""
],
[
"He",
"Shengfeng",
""
],
[
"Liu",
"Yunhui",
""
],
[
"Liu",
"Wei",
""
]
] | We address the problem of video representation learning without human-annotated labels. While previous efforts address the problem by designing novel self-supervised tasks using video data, the learned features are merely on a frame-by-frame basis, which are not applicable to many video analytic tasks where spatio-temporal features are prevailing. In this paper we propose a novel self-supervised approach to learn spatio-temporal features for video representation. Inspired by the success of two-stream approaches in video classification, we propose to learn visual features by regressing both motion and appearance statistics along spatial and temporal dimensions, given only the input video data. Specifically, we extract statistical concepts (fast-motion region and the corresponding dominant direction, spatio-temporal color diversity, dominant color, etc.) from simple patterns in both spatial and temporal domains. Unlike prior puzzles that are hard even for humans to solve, the proposed approach is consistent with human inherent visual habits and therefore easy to answer. We conduct extensive experiments with C3D to validate the effectiveness of our proposed approach. The experiments show that our approach can significantly improve the performance of C3D when applied to video classification tasks. Code is available at https://github.com/laura-wang/video_repres_mas. |
2403.13918 | Henri Casanova | Jesse McDonald, Maximilian Horzela, Fr\'ed\'eric Suter, Henri Casanova | Automated Calibration of Parallel and Distributed Computing Simulators:
A Case Study | In Proc. of the 25th IEEE International Workshop on Parallel and
Distributed Scientific and Engineering Computing (PDSEC 2024) | null | null | null | cs.DC cs.PF | http://creativecommons.org/licenses/by/4.0/ | Many parallel and distributed computing research results are obtained in
simulation, using simulators that mimic real-world executions on some target
system. Each such simulator is configured by picking values for parameters that
define the behavior of the underlying simulation models it implements. The main
concern for a simulator is accuracy: simulated behaviors should be as close as
possible to those observed in the real-world target system. This requires that
values for each of the simulator's parameters be carefully picked, or
"calibrated," based on ground-truth real-world executions. Examining the
current state of the art shows that simulator calibration, at least in the
field of parallel and distributed computing, is often undocumented (and thus
perhaps often not performed) and, when documented, is described as a
labor-intensive, manual process. In this work we evaluate the benefit of
automating simulation calibration using simple algorithms. Specifically, we use
a real-world case study from the field of High Energy Physics and compare
automated calibration to calibration performed by a domain scientist. Our main
finding is that automated calibration is on par with or significantly
outperforms the calibration performed by the domain scientist. Furthermore,
automated calibration makes it straightforward to operate desirable trade-offs
between simulation accuracy and simulation speed.
| [
{
"created": "Wed, 20 Mar 2024 18:39:47 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jul 2024 20:34:35 GMT",
"version": "v2"
}
] | 2024-07-03 | [
[
"McDonald",
"Jesse",
""
],
[
"Horzela",
"Maximilian",
""
],
[
"Suter",
"Frédéric",
""
],
[
"Casanova",
"Henri",
""
]
] | Many parallel and distributed computing research results are obtained in simulation, using simulators that mimic real-world executions on some target system. Each such simulator is configured by picking values for parameters that define the behavior of the underlying simulation models it implements. The main concern for a simulator is accuracy: simulated behaviors should be as close as possible to those observed in the real-world target system. This requires that values for each of the simulator's parameters be carefully picked, or "calibrated," based on ground-truth real-world executions. Examining the current state of the art shows that simulator calibration, at least in the field of parallel and distributed computing, is often undocumented (and thus perhaps often not performed) and, when documented, is described as a labor-intensive, manual process. In this work we evaluate the benefit of automating simulation calibration using simple algorithms. Specifically, we use a real-world case study from the field of High Energy Physics and compare automated calibration to calibration performed by a domain scientist. Our main finding is that automated calibration is on par with or significantly outperforms the calibration performed by the domain scientist. Furthermore, automated calibration makes it straightforward to operate desirable trade-offs between simulation accuracy and simulation speed. |
2112.05415 | Soheil Behnezhad | Soheil Behnezhad, Avrim Blum, Mahsa Derakhshan | Stochastic Vertex Cover with Few Queries | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the minimum vertex cover problem in the following stochastic
setting. Let $G$ be an arbitrary given graph, $p \in (0, 1]$ a parameter of the
problem, and let $G_p$ be a random subgraph that includes each edge of $G$
independently with probability $p$. We are unaware of the realization $G_p$,
but can learn if an edge $e$ exists in $G_p$ by querying it. The goal is to
find an approximate minimum vertex cover (MVC) of $G_p$ by querying few edges
of $G$ non-adaptively.
This stochastic setting has been studied extensively for various problems
such as minimum spanning trees, matroids, shortest paths, and matchings. To our
knowledge, however, no non-trivial bound was known for MVC prior to our work.
In this work, we present a:
* $(2+\epsilon)$-approximation for general graphs which queries
$O(\frac{1}{\epsilon^3 p})$ edges per vertex, and a
* $1.367$-approximation for bipartite graphs which queries $poly(1/p)$ edges
per vertex.
Additionally, we show that at the expense of a triple-exponential dependence
on $p^{-1}$ in the number of queries, the approximation ratio can be improved
down to $(1+\epsilon)$ for bipartite graphs.
Our techniques also lead to improved bounds for bipartite stochastic
matching. We obtain a $0.731$-approximation with nearly-linear in $1/p$
per-vertex queries. This is the first result to break the prevalent $(2/3 \sim
0.66)$-approximation barrier in the $poly(1/p)$ query regime, improving
algorithms of [Behnezhad et al; SODA'19] and [Assadi and Bernstein; SOSA'19].
| [
{
"created": "Fri, 10 Dec 2021 09:46:12 GMT",
"version": "v1"
}
] | 2021-12-13 | [
[
"Behnezhad",
"Soheil",
""
],
[
"Blum",
"Avrim",
""
],
[
"Derakhshan",
"Mahsa",
""
]
] | We study the minimum vertex cover problem in the following stochastic setting. Let $G$ be an arbitrary given graph, $p \in (0, 1]$ a parameter of the problem, and let $G_p$ be a random subgraph that includes each edge of $G$ independently with probability $p$. We are unaware of the realization $G_p$, but can learn if an edge $e$ exists in $G_p$ by querying it. The goal is to find an approximate minimum vertex cover (MVC) of $G_p$ by querying few edges of $G$ non-adaptively. This stochastic setting has been studied extensively for various problems such as minimum spanning trees, matroids, shortest paths, and matchings. To our knowledge, however, no non-trivial bound was known for MVC prior to our work. In this work, we present a: * $(2+\epsilon)$-approximation for general graphs which queries $O(\frac{1}{\epsilon^3 p})$ edges per vertex, and a * $1.367$-approximation for bipartite graphs which queries $poly(1/p)$ edges per vertex. Additionally, we show that at the expense of a triple-exponential dependence on $p^{-1}$ in the number of queries, the approximation ratio can be improved down to $(1+\epsilon)$ for bipartite graphs. Our techniques also lead to improved bounds for bipartite stochastic matching. We obtain a $0.731$-approximation with nearly-linear in $1/p$ per-vertex queries. This is the first result to break the prevalent $(2/3 \sim 0.66)$-approximation barrier in the $poly(1/p)$ query regime, improving algorithms of [Behnezhad et al; SODA'19] and [Assadi and Bernstein; SOSA'19]. |
1509.02470 | Jianguo Li | Jianwei Luo and Jianguo Li and Jun Wang and Zhiguo Jiang and Yurong
Chen | Deep Attributes from Context-Aware Regional Neural Codes | 10 pages, 8 figures | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, many researches employ middle-layer output of convolutional neural
network models (CNN) as features for different visual recognition tasks.
Although promising results have been achieved in some empirical studies, such
type of representations still suffer from the well-known issue of semantic gap.
This paper proposes so-called deep attribute framework to alleviate this issue
from three aspects. First, we introduce object region proposals as intermedia
to represent target images, and extract features from region proposals. Second,
we study aggregating features from different CNN layers for all region
proposals. The aggregation yields a holistic yet compact representation of
input images. Results show that cross-region max-pooling of soft-max layer
output outperform all other layers. As soft-max layer directly corresponds to
semantic concepts, this representation is named "deep attributes". Third, we
observe that only a small portion of generated regions by object proposals
algorithm are correlated to classification target. Therefore, we introduce
context-aware region refining algorithm to pick out contextual regions and
build context-aware classifiers.
We apply the proposed deep attributes framework for various vision tasks.
Extensive experiments are conducted on standard benchmarks for three visual
recognition tasks, i.e., image classification, fine-grained recognition and
visual instance retrieval. Results show that deep attribute approaches achieve
state-of-the-art results, and outperforms existing peer methods with a
significant margin, even though some benchmarks have little overlap of concepts
with the pre-trained CNN models.
| [
{
"created": "Tue, 8 Sep 2015 17:53:54 GMT",
"version": "v1"
}
] | 2015-09-09 | [
[
"Luo",
"Jianwei",
""
],
[
"Li",
"Jianguo",
""
],
[
"Wang",
"Jun",
""
],
[
"Jiang",
"Zhiguo",
""
],
[
"Chen",
"Yurong",
""
]
] | Recently, many researches employ middle-layer output of convolutional neural network models (CNN) as features for different visual recognition tasks. Although promising results have been achieved in some empirical studies, such type of representations still suffer from the well-known issue of semantic gap. This paper proposes so-called deep attribute framework to alleviate this issue from three aspects. First, we introduce object region proposals as intermedia to represent target images, and extract features from region proposals. Second, we study aggregating features from different CNN layers for all region proposals. The aggregation yields a holistic yet compact representation of input images. Results show that cross-region max-pooling of soft-max layer output outperform all other layers. As soft-max layer directly corresponds to semantic concepts, this representation is named "deep attributes". Third, we observe that only a small portion of generated regions by object proposals algorithm are correlated to classification target. Therefore, we introduce context-aware region refining algorithm to pick out contextual regions and build context-aware classifiers. We apply the proposed deep attributes framework for various vision tasks. Extensive experiments are conducted on standard benchmarks for three visual recognition tasks, i.e., image classification, fine-grained recognition and visual instance retrieval. Results show that deep attribute approaches achieve state-of-the-art results, and outperforms existing peer methods with a significant margin, even though some benchmarks have little overlap of concepts with the pre-trained CNN models. |
0901.3580 | Changho Suh | Changho Suh, David Tse | Feedback Capacity of the Gaussian Interference Channel to Within 1.7075
Bits: the Symmetric Case | submitted to the International Symposium and Information Theory | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We characterize the symmetric capacity to within 1.7075 bits/s/Hz for the
two-user Gaussian interference channel with feedback. The result makes use of a
deterministic model to provide insights into the Gaussian channel. We derive a
new outer bound to show that a proposed achievable scheme can achieve the
symmetric capacity to within 1.7075 bits for all channel parameters. From this
result, we show that feedback provides unbounded gain, i.e., the gain becomes
arbitrarily large for certain channel parameters. It is a surprising result
because feedback has been so far known to provide only power gain (bounded
gain) in the context of multiple access channels and broadcast channels.
| [
{
"created": "Fri, 23 Jan 2009 00:37:49 GMT",
"version": "v1"
}
] | 2009-01-26 | [
[
"Suh",
"Changho",
""
],
[
"Tse",
"David",
""
]
] | We characterize the symmetric capacity to within 1.7075 bits/s/Hz for the two-user Gaussian interference channel with feedback. The result makes use of a deterministic model to provide insights into the Gaussian channel. We derive a new outer bound to show that a proposed achievable scheme can achieve the symmetric capacity to within 1.7075 bits for all channel parameters. From this result, we show that feedback provides unbounded gain, i.e., the gain becomes arbitrarily large for certain channel parameters. It is a surprising result because feedback has been so far known to provide only power gain (bounded gain) in the context of multiple access channels and broadcast channels. |
2209.08740 | Christopher Meiklejohn | Christopher S. Meiklejohn, Rohan Padhye, Heather Miller | Distributed Execution Indexing | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | This work-in-progress report presents both the design and partial evaluation
of distributed execution indexing, a technique for microservice applications
that precisely identifies dynamic instances of inter-service remote procedure
calls (RPCs). Such an indexing scheme is critical for request-level fault
injection techniques, which aim to automatically find failure-handling bugs in
microservice applications.Distributed execution indexes enable granular
specification of request-level faults, while also establishing a correspondence
between inter-service RPCs across multiple executions, as is required to
perform a systematic search of the fault space.In this paper, we formally
define the general concept of a distributed execution index, which can be
parameterized on different ways of identifying an RPC in a single service. We
identify an instantiation that maintains precision in the presence of a variety
of program structure complexities such as loops, function indirection, and
concurrency with scheduling nondeterminism. We demonstrate that this particular
instantiation addresses gaps in the state-of-the-art in request-level fault
injection and show that they are all special cases of distributed execution
indexing. We discuss the implementation challenges and provide an
implementation of distributed execution indexing as an extension of
\Filibuster{}, a resilience testing tool for microservice applications for the
Java programming language, which supports fault injection for gRPC and HTTP.
| [
{
"created": "Mon, 19 Sep 2022 03:24:46 GMT",
"version": "v1"
}
] | 2022-09-20 | [
[
"Meiklejohn",
"Christopher S.",
""
],
[
"Padhye",
"Rohan",
""
],
[
"Miller",
"Heather",
""
]
] | This work-in-progress report presents both the design and partial evaluation of distributed execution indexing, a technique for microservice applications that precisely identifies dynamic instances of inter-service remote procedure calls (RPCs). Such an indexing scheme is critical for request-level fault injection techniques, which aim to automatically find failure-handling bugs in microservice applications.Distributed execution indexes enable granular specification of request-level faults, while also establishing a correspondence between inter-service RPCs across multiple executions, as is required to perform a systematic search of the fault space.In this paper, we formally define the general concept of a distributed execution index, which can be parameterized on different ways of identifying an RPC in a single service. We identify an instantiation that maintains precision in the presence of a variety of program structure complexities such as loops, function indirection, and concurrency with scheduling nondeterminism. We demonstrate that this particular instantiation addresses gaps in the state-of-the-art in request-level fault injection and show that they are all special cases of distributed execution indexing. We discuss the implementation challenges and provide an implementation of distributed execution indexing as an extension of \Filibuster{}, a resilience testing tool for microservice applications for the Java programming language, which supports fault injection for gRPC and HTTP. |
2306.14251 | Kai Gao | Andy Xu, Kai Gao, Si Wei Feng, Jingjin Yu | Optimal and Stable Multi-Layer Object Rearrangement on a Tabletop | Accepted by 2023 IROS - IEEE/RSJ International Conference on
Intelligent Robots | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object rearrangement is a fundamental sub-task in accomplishing a great many
physical tasks. As such, effectively executing rearrangement is an important
skill for intelligent robots to master. In this study, we conduct the first
algorithmic study on optimally solving the problem of Multi-layer Object
Rearrangement on a Tabletop (MORT), in which one object may be relocated at a
time, and an object can only be moved if other objects do not block its top
surface. In addition, any intermediate structure during the reconfiguration
process must be physically stable, i.e., it should stand without external
support. To tackle the dual challenges of untangling the dependencies between
objects and ensuring structural stability, we develop an algorithm that
interleaves the computation of the optimal rearrangement plan and structural
stability checking. Using a carefully constructed integer linear programming
(ILP) model, our algorithm, Stability-aware Integer Programming-based Planner
(SIPP), readily scales to optimally solve complex rearrangement problems of 3D
structures with over 60 building blocks, with solution quality significantly
outperforming natural greedy best-first approaches.
Upon the publication of the manuscript, source code and data will be
available at https://github.com/arc-l/mort/
| [
{
"created": "Sun, 25 Jun 2023 13:52:01 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jun 2023 04:18:28 GMT",
"version": "v2"
}
] | 2023-07-03 | [
[
"Xu",
"Andy",
""
],
[
"Gao",
"Kai",
""
],
[
"Feng",
"Si Wei",
""
],
[
"Yu",
"Jingjin",
""
]
] | Object rearrangement is a fundamental sub-task in accomplishing a great many physical tasks. As such, effectively executing rearrangement is an important skill for intelligent robots to master. In this study, we conduct the first algorithmic study on optimally solving the problem of Multi-layer Object Rearrangement on a Tabletop (MORT), in which one object may be relocated at a time, and an object can only be moved if other objects do not block its top surface. In addition, any intermediate structure during the reconfiguration process must be physically stable, i.e., it should stand without external support. To tackle the dual challenges of untangling the dependencies between objects and ensuring structural stability, we develop an algorithm that interleaves the computation of the optimal rearrangement plan and structural stability checking. Using a carefully constructed integer linear programming (ILP) model, our algorithm, Stability-aware Integer Programming-based Planner (SIPP), readily scales to optimally solve complex rearrangement problems of 3D structures with over 60 building blocks, with solution quality significantly outperforming natural greedy best-first approaches. Upon the publication of the manuscript, source code and data will be available at https://github.com/arc-l/mort/ |
1502.05886 | Emilio Ferrara | Lei Le, Emilio Ferrara, Alessandro Flammini | On predictability of rare events leveraging social media: a machine
learning perspective | 10 pages, 10 tables, 8 figures | Proceedings of the 2015 ACM on Conference on Online Social
Networks (pp. 3-13). ACM. 2015 | 10.1145/2817946.2817949 | null | cs.SI cs.LG physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information extracted from social media streams has been leveraged to
forecast the outcome of a large number of real-world events, from political
elections to stock market fluctuations. An increasing amount of studies
demonstrates how the analysis of social media conversations provides cheap
access to the wisdom of the crowd. However, extents and contexts in which such
forecasting power can be effectively leveraged are still unverified at least in
a systematic way. It is also unclear how social-media-based predictions compare
to those based on alternative information sources. To address these issues,
here we develop a machine learning framework that leverages social media
streams to automatically identify and predict the outcomes of soccer matches.
We focus in particular on matches in which at least one of the possible
outcomes is deemed as highly unlikely by professional bookmakers. We argue that
sport events offer a systematic approach for testing the predictive power of
social media, and allow to compare such power against the rigorous baselines
set by external sources. Despite such strict baselines, our framework yields
above 8% marginal profit when used to inform simple betting strategies. The
system is based on real-time sentiment analysis and exploits data collected
immediately before the games, allowing for informed bets. We discuss the
rationale behind our approach, describe the learning framework, its prediction
performance and the return it provides as compared to a set of betting
strategies. To test our framework we use both historical Twitter data from the
2014 FIFA World Cup games, and real-time Twitter data collected by monitoring
the conversations about all soccer matches of four major European tournaments
(FA Premier League, Serie A, La Liga, and Bundesliga), and the 2014 UEFA
Champions League, during the period between Oct. 25th 2014 and Nov. 26th 2014.
| [
{
"created": "Fri, 20 Feb 2015 14:42:26 GMT",
"version": "v1"
}
] | 2017-03-07 | [
[
"Le",
"Lei",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Flammini",
"Alessandro",
""
]
] | Information extracted from social media streams has been leveraged to forecast the outcome of a large number of real-world events, from political elections to stock market fluctuations. An increasing amount of studies demonstrates how the analysis of social media conversations provides cheap access to the wisdom of the crowd. However, extents and contexts in which such forecasting power can be effectively leveraged are still unverified at least in a systematic way. It is also unclear how social-media-based predictions compare to those based on alternative information sources. To address these issues, here we develop a machine learning framework that leverages social media streams to automatically identify and predict the outcomes of soccer matches. We focus in particular on matches in which at least one of the possible outcomes is deemed as highly unlikely by professional bookmakers. We argue that sport events offer a systematic approach for testing the predictive power of social media, and allow to compare such power against the rigorous baselines set by external sources. Despite such strict baselines, our framework yields above 8% marginal profit when used to inform simple betting strategies. The system is based on real-time sentiment analysis and exploits data collected immediately before the games, allowing for informed bets. We discuss the rationale behind our approach, describe the learning framework, its prediction performance and the return it provides as compared to a set of betting strategies. To test our framework we use both historical Twitter data from the 2014 FIFA World Cup games, and real-time Twitter data collected by monitoring the conversations about all soccer matches of four major European tournaments (FA Premier League, Serie A, La Liga, and Bundesliga), and the 2014 UEFA Champions League, during the period between Oct. 25th 2014 and Nov. 26th 2014. |
1909.12919 | Madhura Ingalhalikar | Sumeet Shinde, Tanay Chougule, Jitender Saini and Madhura Ingalhalikar | HR-CAM: Precise Localization of Pathology Using Multi-level Learning in
CNNs | Medical Image Computing and Computer Assisted Intervention, 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a CNN based technique that aggregates feature maps from its
multiple layers that can localize abnormalities with greater details as well as
predict pathology under consideration. Existing class activation mapping (CAM)
techniques extract feature maps from either the final layer or a single
intermediate layer to create the discriminative maps and then interpolate to
upsample to the original image resolution. In this case, the subject specific
localization is coarse and is unable to capture subtle abnormalities. To
mitigate this, our method builds a novel CNN based discriminative localization
model that we call high resolution CAM (HR-CAM), which accounts for layers from
each resolution, therefore facilitating a comprehensive map that can delineate
the pathology for each subject by combining low-level, intermediate as well as
high-level features from the CNN. Moreover, our model directly provides the
discriminative map in the resolution of the original image facilitating finer
delineation of abnormalities. We demonstrate the working of our model on a
simulated abnormalities data where we illustrate how the model captures finer
details in the final discriminative maps as compared to current techniques. We
then apply this technique: (1) to classify ependymomas from grade IV
glioblastoma on T1-weighted contrast enhanced (T1-CE) MRI and (2) to predict
Parkinson's disease from neuromelanin sensitive MRI. In all these cases we
demonstrate that our model not only predicts pathologies with high accuracies,
but also creates clinically interpretable subject specific high resolution
discriminative localizations. Overall, the technique can be generalized to any
CNN and carries high relevance in a clinical setting.
| [
{
"created": "Mon, 23 Sep 2019 13:47:12 GMT",
"version": "v1"
}
] | 2019-10-01 | [
[
"Shinde",
"Sumeet",
""
],
[
"Chougule",
"Tanay",
""
],
[
"Saini",
"Jitender",
""
],
[
"Ingalhalikar",
"Madhura",
""
]
] | We propose a CNN based technique that aggregates feature maps from its multiple layers that can localize abnormalities with greater details as well as predict pathology under consideration. Existing class activation mapping (CAM) techniques extract feature maps from either the final layer or a single intermediate layer to create the discriminative maps and then interpolate to upsample to the original image resolution. In this case, the subject specific localization is coarse and is unable to capture subtle abnormalities. To mitigate this, our method builds a novel CNN based discriminative localization model that we call high resolution CAM (HR-CAM), which accounts for layers from each resolution, therefore facilitating a comprehensive map that can delineate the pathology for each subject by combining low-level, intermediate as well as high-level features from the CNN. Moreover, our model directly provides the discriminative map in the resolution of the original image facilitating finer delineation of abnormalities. We demonstrate the working of our model on a simulated abnormalities data where we illustrate how the model captures finer details in the final discriminative maps as compared to current techniques. We then apply this technique: (1) to classify ependymomas from grade IV glioblastoma on T1-weighted contrast enhanced (T1-CE) MRI and (2) to predict Parkinson's disease from neuromelanin sensitive MRI. In all these cases we demonstrate that our model not only predicts pathologies with high accuracies, but also creates clinically interpretable subject specific high resolution discriminative localizations. Overall, the technique can be generalized to any CNN and carries high relevance in a clinical setting. |
1603.09660 | Benjamin Marussig | Benjamin Marussig, J\"urgen Zechner, Gernot Beer and Thomas-Peter
Fries | Stable Isogeometric Analysis of Trimmed Geometries | null | null | 10.1016/j.cma.2016.07.040 | null | cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore extended B-splines as a stable basis for isogeometric analysis
with trimmed parameter spaces. The stabilization is accomplished by an
appropriate substitution of B-splines that may lead to ill-conditioned system
matrices. The construction for non-uniform knot vectors is presented. The
properties of extended B-splines are examined in the context of interpolation,
potential, and linear elasticity problems and excellent results are attained.
The analysis is performed by an isogeometric boundary element formulation using
collocation. It is argued that extended B-splines provide a flexible and simple
stabilization scheme which ideally suits the isogeometric paradigm.
| [
{
"created": "Thu, 31 Mar 2016 16:19:05 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jul 2016 16:43:01 GMT",
"version": "v2"
}
] | 2017-04-05 | [
[
"Marussig",
"Benjamin",
""
],
[
"Zechner",
"Jürgen",
""
],
[
"Beer",
"Gernot",
""
],
[
"Fries",
"Thomas-Peter",
""
]
] | We explore extended B-splines as a stable basis for isogeometric analysis with trimmed parameter spaces. The stabilization is accomplished by an appropriate substitution of B-splines that may lead to ill-conditioned system matrices. The construction for non-uniform knot vectors is presented. The properties of extended B-splines are examined in the context of interpolation, potential, and linear elasticity problems and excellent results are attained. The analysis is performed by an isogeometric boundary element formulation using collocation. It is argued that extended B-splines provide a flexible and simple stabilization scheme which ideally suits the isogeometric paradigm. |
2406.11737 | Clinton Wang | Clinton Wang, Peter Hedman, Polina Golland, Jonathan T. Barron, Daniel
Duckworth | InterNeRF: Scaling Radiance Fields via Parameter Interpolation | Presented at CVPR 2024 Neural Rendering Intelligence Workshop | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | Neural Radiance Fields (NeRFs) have unmatched fidelity on large, real-world
scenes. A common approach for scaling NeRFs is to partition the scene into
regions, each of which is assigned its own parameters. When implemented
naively, such an approach is limited by poor test-time scaling and inconsistent
appearance and geometry. We instead propose InterNeRF, a novel architecture for
rendering a target view using a subset of the model's parameters. Our approach
enables out-of-core training and rendering, increasing total model capacity
with only a modest increase to training time. We demonstrate significant
improvements in multi-room scenes while remaining competitive on standard
benchmarks.
| [
{
"created": "Mon, 17 Jun 2024 16:55:22 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Wang",
"Clinton",
""
],
[
"Hedman",
"Peter",
""
],
[
"Golland",
"Polina",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Duckworth",
"Daniel",
""
]
] | Neural Radiance Fields (NeRFs) have unmatched fidelity on large, real-world scenes. A common approach for scaling NeRFs is to partition the scene into regions, each of which is assigned its own parameters. When implemented naively, such an approach is limited by poor test-time scaling and inconsistent appearance and geometry. We instead propose InterNeRF, a novel architecture for rendering a target view using a subset of the model's parameters. Our approach enables out-of-core training and rendering, increasing total model capacity with only a modest increase to training time. We demonstrate significant improvements in multi-room scenes while remaining competitive on standard benchmarks. |
1706.00339 | Raffaella Carloni | Ludo C. Visser, Stefano Stramigioli, and Raffaella Carloni | Bipedal locomotion using variable stiffness actuation | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust and energy-efficient bipedal locomotion in robotics is still a
challenging topic. In order to address issues in this field, we can take
inspiration from nature, by studying human locomotion. The Spring-Loaded
Inverted Pendulum (SLIP) model has shown to be a good model for this purpose.
However, the human musculoskeletal system enables us to actively modulate leg
stiffness, for example when walking in rough terrain with irregular and
unexpected height variations of the walking surface. This ability of varying
leg stiffness is not considered in conventional SLIP-based models, and
therefore this paper explores the potential role of active leg stiffness
variation in bipedal locomotion. It is shown that the conceptual SLIP model can
be iteratively extended to more closely resemble a realistic (i.e., non-ideal)
walker, and that feedback control strategies can be designed that reproduce the
SLIP behavior in these extended models. We show that these extended models
realize a cost of transport comparable to human walking, which indicates that
active leg stiffness variation plays an important role in human locomotion that
was previously not captured by the SLIP model. The results of this study show
that active leg stiffness adaptation is a promising approach for realizing more
energy-efficient and robust bipedal walking robots.
| [
{
"created": "Thu, 1 Jun 2017 15:18:07 GMT",
"version": "v1"
}
] | 2017-06-02 | [
[
"Visser",
"Ludo C.",
""
],
[
"Stramigioli",
"Stefano",
""
],
[
"Carloni",
"Raffaella",
""
]
] | Robust and energy-efficient bipedal locomotion in robotics is still a challenging topic. In order to address issues in this field, we can take inspiration from nature, by studying human locomotion. The Spring-Loaded Inverted Pendulum (SLIP) model has shown to be a good model for this purpose. However, the human musculoskeletal system enables us to actively modulate leg stiffness, for example when walking in rough terrain with irregular and unexpected height variations of the walking surface. This ability of varying leg stiffness is not considered in conventional SLIP-based models, and therefore this paper explores the potential role of active leg stiffness variation in bipedal locomotion. It is shown that the conceptual SLIP model can be iteratively extended to more closely resemble a realistic (i.e., non-ideal) walker, and that feedback control strategies can be designed that reproduce the SLIP behavior in these extended models. We show that these extended models realize a cost of transport comparable to human walking, which indicates that active leg stiffness variation plays an important role in human locomotion that was previously not captured by the SLIP model. The results of this study show that active leg stiffness adaptation is a promising approach for realizing more energy-efficient and robust bipedal walking robots. |
1807.07711 | Minseok Choi Dr. | Minseok Choi, Daejung Yoon, and Joongheon Kim | Blind Signal Classification for Non-Orthogonal Multiple Access in
Vehicular Networks | 13 pages, 15 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For downlink multiple-user (MU) transmission based on non-orthogonal multiple
access (NOMA), the advanced receiver strategy is required to cancel the
inter-user interference, e.g., successive interference cancellation (SIC). The
SIC process can be applicable only when information about the co-scheduled
signal is known at the user terminal (UT) side. In particular, the UT should
know whether the received signal is OMA or NOMA, whether SIC is required or
not, and which modulation orders and power ratios have been used for the
superposed UTs, before decoding the signal. An efficient network, e.g.,
vehicular network, requires that the UTs blindly classify the received signal
and apply a matching receiver strategy to reduce the high-layer signaling
overhead which is essential for high-mobility vehicular networks. In this
paper, we first analyze the performance impact of errors in NOMA signal
classification and address ensuing receiver challenges in practical MU usage
cases. In order to reduce the blind signal classification error rate, we
propose transmission schemes that rotate data symbols or pilots to a specific
phase according to the transmitted signal format. In the case of pilot
rotation, a new signal classification algorithm is also proposed. The
performance improvements by the proposed methods are verified by intensive
simulation results.
| [
{
"created": "Fri, 20 Jul 2018 06:25:47 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Mar 2019 03:29:37 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Jan 2020 22:37:55 GMT",
"version": "v3"
}
] | 2020-01-28 | [
[
"Choi",
"Minseok",
""
],
[
"Yoon",
"Daejung",
""
],
[
"Kim",
"Joongheon",
""
]
] | For downlink multiple-user (MU) transmission based on non-orthogonal multiple access (NOMA), the advanced receiver strategy is required to cancel the inter-user interference, e.g., successive interference cancellation (SIC). The SIC process can be applicable only when information about the co-scheduled signal is known at the user terminal (UT) side. In particular, the UT should know whether the received signal is OMA or NOMA, whether SIC is required or not, and which modulation orders and power ratios have been used for the superposed UTs, before decoding the signal. An efficient network, e.g., vehicular network, requires that the UTs blindly classify the received signal and apply a matching receiver strategy to reduce the high-layer signaling overhead which is essential for high-mobility vehicular networks. In this paper, we first analyze the performance impact of errors in NOMA signal classification and address ensuing receiver challenges in practical MU usage cases. In order to reduce the blind signal classification error rate, we propose transmission schemes that rotate data symbols or pilots to a specific phase according to the transmitted signal format. In the case of pilot rotation, a new signal classification algorithm is also proposed. The performance improvements by the proposed methods are verified by intensive simulation results. |
2006.16180 | Wenye Li | Wenye Li, Shuzhong Zhang | Binary Random Projections with Controllable Sparsity Patterns | 19 pages, 15 figures | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random projection is often used to project higher-dimensional vectors onto a
lower-dimensional space, while approximately preserving their pairwise
distances. It has emerged as a powerful tool in various data processing tasks
and has attracted considerable research interest. Partly motivated by the
recent discoveries in neuroscience, in this paper we study the problem of
random projection using binary matrices with controllable sparsity patterns.
Specifically, we proposed two sparse binary projection models that work on
general data vectors. Compared with the conventional random projection models
with dense projection matrices, our proposed models enjoy significant
computational advantages due to their sparsity structure, as well as improved
accuracies in empirical evaluations.
| [
{
"created": "Mon, 29 Jun 2020 16:45:26 GMT",
"version": "v1"
}
] | 2020-06-30 | [
[
"Li",
"Wenye",
""
],
[
"Zhang",
"Shuzhong",
""
]
] | Random projection is often used to project higher-dimensional vectors onto a lower-dimensional space, while approximately preserving their pairwise distances. It has emerged as a powerful tool in various data processing tasks and has attracted considerable research interest. Partly motivated by the recent discoveries in neuroscience, in this paper we study the problem of random projection using binary matrices with controllable sparsity patterns. Specifically, we proposed two sparse binary projection models that work on general data vectors. Compared with the conventional random projection models with dense projection matrices, our proposed models enjoy significant computational advantages due to their sparsity structure, as well as improved accuracies in empirical evaluations. |
2402.13153 | Simon D. Fink | Simon D. Fink, Matthias Pfretzschner, Ignaz Rutter, Marie Diana Sieper | Clustered Planarity Variants for Level Graphs | null | null | null | null | cs.CG cs.DS | http://creativecommons.org/licenses/by/4.0/ | We consider variants of the clustered planarity problem for level-planar
drawings. So far, only convex clusters have been studied in this setting. We
introduce two new variants that both insist on a level-planar drawing of the
input graph but relax the requirements on the shape of the clusters. In
unrestricted Clustered Level Planarity (uCLP) we only require that they are
bounded by simple closed curves that enclose exactly the vertices of the
cluster and cross each edge of the graph at most once. The problem y-monotone
Clustered Level Planarity (y-CLP) requires that additionally it must be
possible to augment each cluster with edges that do not cross the cluster
boundaries so that it becomes connected while the graph remains level-planar,
thereby mimicking a classic characterization of clustered planarity in the
level-planar setting.
We give a polynomial-time algorithm for uCLP if the input graph is
biconnected and has a single source. By contrast, we show that y-CLP is hard
under the same restrictions and it remains NP-hard even if the number of levels
is bounded by a constant and there is only a single non-trivial cluster.
| [
{
"created": "Tue, 20 Feb 2024 17:10:42 GMT",
"version": "v1"
}
] | 2024-02-21 | [
[
"Fink",
"Simon D.",
""
],
[
"Pfretzschner",
"Matthias",
""
],
[
"Rutter",
"Ignaz",
""
],
[
"Sieper",
"Marie Diana",
""
]
] | We consider variants of the clustered planarity problem for level-planar drawings. So far, only convex clusters have been studied in this setting. We introduce two new variants that both insist on a level-planar drawing of the input graph but relax the requirements on the shape of the clusters. In unrestricted Clustered Level Planarity (uCLP) we only require that they are bounded by simple closed curves that enclose exactly the vertices of the cluster and cross each edge of the graph at most once. The problem y-monotone Clustered Level Planarity (y-CLP) requires that additionally it must be possible to augment each cluster with edges that do not cross the cluster boundaries so that it becomes connected while the graph remains level-planar, thereby mimicking a classic characterization of clustered planarity in the level-planar setting. We give a polynomial-time algorithm for uCLP if the input graph is biconnected and has a single source. By contrast, we show that y-CLP is hard under the same restrictions and it remains NP-hard even if the number of levels is bounded by a constant and there is only a single non-trivial cluster. |
1901.07497 | Jiaxiao Zheng | Jiaxiao Zheng, Gustavo de Veciana | Elastic Multi-resource Network Slicing: Can Protection Lead to Improved
Performance? | null | null | null | null | cs.NI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to meet the performance/privacy requirements of future
data-intensive mobile applications, e.g., self-driving cars, mobile data
analytics, and AR/VR, service providers are expected to draw on shared
storage/computation/connectivity resources at the network "edge". To be
cost-effective, a key functional requirement for such infrastructure is
enabling the sharing of heterogeneous resources amongst tenants/service
providers supporting spatially varying and dynamic user demands. This paper
proposes a resource allocation criterion, namely, Share Constrained Slicing
(SCS), for slices allocated predefined shares of the network's resources, which
extends the traditional alpha-fairness criterion, by striking a balance among
inter- and intra-slice fairness vs. overall efficiency. We show that SCS has
several desirable properties including slice-level protection, envyfreeness,
and load driven elasticity. In practice, mobile users' dynamics could make the
cost of implementing SCS high, so we discuss the feasibility of using a simpler
(dynamically) weighted max-min as a surrogate resource allocation scheme. For a
setting with stochastic loads and elastic user requirements, we establish a
sufficient condition for the stability of the associated coupled network
system. Finally, and perhaps surprisingly, we show via extensive simulations
that while SCS (and/or the surrogate weighted max-min allocation) provides
inter-slice protection, they can achieve improved job delay and/or perceived
throughput, as compared to other weighted max-min based allocation schemes
whose intra-slice weight allocation is not share-constrained, e.g., traditional
max-min or discriminatory processor sharing.
| [
{
"created": "Tue, 22 Jan 2019 18:16:25 GMT",
"version": "v1"
}
] | 2019-01-23 | [
[
"Zheng",
"Jiaxiao",
""
],
[
"de Veciana",
"Gustavo",
""
]
] | In order to meet the performance/privacy requirements of future data-intensive mobile applications, e.g., self-driving cars, mobile data analytics, and AR/VR, service providers are expected to draw on shared storage/computation/connectivity resources at the network "edge". To be cost-effective, a key functional requirement for such infrastructure is enabling the sharing of heterogeneous resources amongst tenants/service providers supporting spatially varying and dynamic user demands. This paper proposes a resource allocation criterion, namely, Share Constrained Slicing (SCS), for slices allocated predefined shares of the network's resources, which extends the traditional alpha-fairness criterion, by striking a balance among inter- and intra-slice fairness vs. overall efficiency. We show that SCS has several desirable properties including slice-level protection, envyfreeness, and load driven elasticity. In practice, mobile users' dynamics could make the cost of implementing SCS high, so we discuss the feasibility of using a simpler (dynamically) weighted max-min as a surrogate resource allocation scheme. For a setting with stochastic loads and elastic user requirements, we establish a sufficient condition for the stability of the associated coupled network system. Finally, and perhaps surprisingly, we show via extensive simulations that while SCS (and/or the surrogate weighted max-min allocation) provides inter-slice protection, they can achieve improved job delay and/or perceived throughput, as compared to other weighted max-min based allocation schemes whose intra-slice weight allocation is not share-constrained, e.g., traditional max-min or discriminatory processor sharing. |
2406.01581 | Denny Wu | Jason D. Lee and Kazusato Oko and Taiji Suzuki and Denny Wu | Neural network learns low-dimensional polynomials with SGD near the
information-theoretic limit | 34 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of gradient descent learning of a single-index target
function $f_*(\boldsymbol{x}) =
\textstyle\sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$
under isotropic Gaussian data in $\mathbb{R}^d$, where the link function
$\sigma_*:\mathbb{R}\to\mathbb{R}$ is an unknown degree $q$ polynomial with
information exponent $p$ (defined as the lowest degree in the Hermite
expansion). Prior works showed that gradient-based training of neural networks
can learn this target with $n\gtrsim d^{\Theta(p)}$ samples, and such
statistical complexity is predicted to be necessary by the correlational
statistical query lower bound. Surprisingly, we prove that a two-layer neural
network optimized by an SGD-based algorithm learns $f_*$ of arbitrary
polynomial link function with a sample and runtime complexity of $n \asymp T
\asymp C(q) \cdot d\mathrm{polylog} d$, where constant $C(q)$ only depends on
the degree of $\sigma_*$, regardless of information exponent; this dimension
dependence matches the information theoretic limit up to polylogarithmic
factors. Core to our analysis is the reuse of minibatch in the gradient
computation, which gives rise to higher-order information beyond correlational
queries.
| [
{
"created": "Mon, 3 Jun 2024 17:56:58 GMT",
"version": "v1"
}
] | 2024-06-04 | [
[
"Lee",
"Jason D.",
""
],
[
"Oko",
"Kazusato",
""
],
[
"Suzuki",
"Taiji",
""
],
[
"Wu",
"Denny",
""
]
] | We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \textstyle\sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$ under isotropic Gaussian data in $\mathbb{R}^d$, where the link function $\sigma_*:\mathbb{R}\to\mathbb{R}$ is an unknown degree $q$ polynomial with information exponent $p$ (defined as the lowest degree in the Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with $n\gtrsim d^{\Theta(p)}$ samples, and such statistical complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary polynomial link function with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d\mathrm{polylog} d$, where constant $C(q)$ only depends on the degree of $\sigma_*$, regardless of information exponent; this dimension dependence matches the information theoretic limit up to polylogarithmic factors. Core to our analysis is the reuse of minibatch in the gradient computation, which gives rise to higher-order information beyond correlational queries. |
2205.10138 | J\'an Koloda | J\'an Koloda, J\"urgen Seiler and Andr\'e Kaup | Reliability-based Mesh-to-Grid Image Reconstruction | null | 2016 IEEE 18th International Workshop on Multimedia Signal
Processing (MMSP), 2016, pp. 1-5 | 10.1109/MMSP.2016.7813344 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel method for the reconstruction of images from
samples located at non-integer positions, called mesh. This is a common
scenario for many image processing applications, such as super-resolution,
warping or virtual view generation in multi-camera systems. The proposed method
relies on a set of initial estimates that are later refined by a new
reliability-based content-adaptive framework that employs denoising in order to
reduce the reconstruction error. The reliability of the initial estimate is
computed so stronger denoising is applied to less reliable estimates. The
proposed technique can improve the reconstruction quality by more than 2 dB (in
terms of PSNR) with respect to the initial estimate and it outperforms the
state-of-the-art denoising-based refinement by up to 0.7 dB.
| [
{
"created": "Fri, 20 May 2022 12:32:52 GMT",
"version": "v1"
}
] | 2022-05-23 | [
[
"Koloda",
"Ján",
""
],
[
"Seiler",
"Jürgen",
""
],
[
"Kaup",
"André",
""
]
] | This paper presents a novel method for the reconstruction of images from samples located at non-integer positions, called mesh. This is a common scenario for many image processing applications, such as super-resolution, warping or virtual view generation in multi-camera systems. The proposed method relies on a set of initial estimates that are later refined by a new reliability-based content-adaptive framework that employs denoising in order to reduce the reconstruction error. The reliability of the initial estimate is computed so stronger denoising is applied to less reliable estimates. The proposed technique can improve the reconstruction quality by more than 2 dB (in terms of PSNR) with respect to the initial estimate and it outperforms the state-of-the-art denoising-based refinement by up to 0.7 dB. |
1802.08275 | Hang Su | Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos
Kalogerakis, Ming-Hsuan Yang, and Jan Kautz | SPLATNet: Sparse Lattice Networks for Point Cloud Processing | Camera-ready, accepted to CVPR 2018 (oral); project website:
http://vis-www.cs.umass.edu/splatnet/ | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a network architecture for processing point clouds that directly
operates on a collection of points represented as a sparse set of samples in a
high-dimensional lattice. Naively applying convolutions on this lattice scales
poorly, both in terms of memory and computational cost, as the size of the
lattice increases. Instead, our network uses sparse bilateral convolutional
layers as building blocks. These layers maintain efficiency by using indexing
structures to apply convolutions only on occupied parts of the lattice, and
allow flexible specifications of the lattice structure enabling hierarchical
and spatially-aware feature learning, as well as joint 2D-3D reasoning. Both
point-based and image-based representations can be easily incorporated in a
network with such layers and the resulting model can be trained in an
end-to-end manner. We present results on 3D segmentation tasks where our
approach outperforms existing state-of-the-art techniques.
| [
{
"created": "Thu, 22 Feb 2018 19:30:09 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2018 22:16:31 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Mar 2018 18:36:16 GMT",
"version": "v3"
},
{
"created": "Wed, 9 May 2018 14:22:41 GMT",
"version": "v4"
}
] | 2018-05-10 | [
[
"Su",
"Hang",
""
],
[
"Jampani",
"Varun",
""
],
[
"Sun",
"Deqing",
""
],
[
"Maji",
"Subhransu",
""
],
[
"Kalogerakis",
"Evangelos",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Kautz",
"Jan",
""
]
] | We present a network architecture for processing point clouds that directly operates on a collection of points represented as a sparse set of samples in a high-dimensional lattice. Naively applying convolutions on this lattice scales poorly, both in terms of memory and computational cost, as the size of the lattice increases. Instead, our network uses sparse bilateral convolutional layers as building blocks. These layers maintain efficiency by using indexing structures to apply convolutions only on occupied parts of the lattice, and allow flexible specifications of the lattice structure enabling hierarchical and spatially-aware feature learning, as well as joint 2D-3D reasoning. Both point-based and image-based representations can be easily incorporated in a network with such layers and the resulting model can be trained in an end-to-end manner. We present results on 3D segmentation tasks where our approach outperforms existing state-of-the-art techniques. |
1902.10224 | Richa Tripathi | Richa Tripathi, Amit Reza and Dinesh Garg | Prediction of the disease controllability in a complex network using
machine learning algorithms | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The application of machine learning (ML) techniques span a vast spectrum
ranging from speech, face and character recognition, medical diagnosis, anomaly
detection in data to the general classification, prediction, and regression
problems. In the present work, we solve the problem of predicting R_0 for
disease spreading on complex networks using the regression-based state-of-art
ML techniques. R_0 is a metric that determines whether the disease-free
epidemic or an endemic state is asymptotically stable and hence indicates the
controllability of the disease spread. We predict R_0 , based on training the
ML models with structural properties of complex networks, irrespective of the
network type. The prediction is possible because: (a) The structure of complex
networks plays an essential role in the spreading processes on networks (b) The
regression techniques such as Support Vector Regression and Artificial Neural
Network Model can be very efficiently used for prediction problems, even for
non-linear data. We obtained good accuracy in the prediction of R_0 for the
simulated networks as well as real-world networks using these techniques.
Moreover, the ML model training is a one-time investment cost in terms of
training time and memory, and the trained model can be used for predicting R_0
on unseen/new examples of networks.
| [
{
"created": "Tue, 26 Feb 2019 21:12:50 GMT",
"version": "v1"
},
{
"created": "Fri, 1 May 2020 19:58:31 GMT",
"version": "v2"
}
] | 2020-05-05 | [
[
"Tripathi",
"Richa",
""
],
[
"Reza",
"Amit",
""
],
[
"Garg",
"Dinesh",
""
]
] | The application of machine learning (ML) techniques span a vast spectrum ranging from speech, face and character recognition, medical diagnosis, anomaly detection in data to the general classification, prediction, and regression problems. In the present work, we solve the problem of predicting R_0 for disease spreading on complex networks using the regression-based state-of-art ML techniques. R_0 is a metric that determines whether the disease-free epidemic or an endemic state is asymptotically stable and hence indicates the controllability of the disease spread. We predict R_0 , based on training the ML models with structural properties of complex networks, irrespective of the network type. The prediction is possible because: (a) The structure of complex networks plays an essential role in the spreading processes on networks (b) The regression techniques such as Support Vector Regression and Artificial Neural Network Model can be very efficiently used for prediction problems, even for non-linear data. We obtained good accuracy in the prediction of R_0 for the simulated networks as well as real-world networks using these techniques. Moreover, the ML model training is a one-time investment cost in terms of training time and memory, and the trained model can be used for predicting R_0 on unseen/new examples of networks. |
2010.12305 | Lukas Lange | Lukas Lange, Heike Adel, Jannik Str\"otgen, Dietrich Klakow | FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input
Representations | Accepted at EMNLP 2021 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Combining several embeddings typically improves performance in downstream
tasks as different embeddings encode different information. It has been shown
that even models using embeddings from transformers still benefit from the
inclusion of standard word embeddings. However, the combination of embeddings
of different types and dimensions is challenging. As an alternative to
attention-based meta-embeddings, we propose feature-based adversarial
meta-embeddings (FAME) with an attention function that is guided by features
reflecting word-specific properties, such as shape and frequency, and show that
this is beneficial to handle subword-based embeddings. In addition, FAME uses
adversarial training to optimize the mappings of differently-sized embeddings
to the same space. We demonstrate that FAME works effectively across languages
and domains for sequence labeling and sentence classification, in particular in
low-resource settings. FAME sets the new state of the art for POS tagging in 27
languages, various NER settings and question classification in different
domains.
| [
{
"created": "Fri, 23 Oct 2020 11:16:53 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Oct 2021 14:05:41 GMT",
"version": "v2"
}
] | 2021-11-01 | [
[
"Lange",
"Lukas",
""
],
[
"Adel",
"Heike",
""
],
[
"Strötgen",
"Jannik",
""
],
[
"Klakow",
"Dietrich",
""
]
] | Combining several embeddings typically improves performance in downstream tasks as different embeddings encode different information. It has been shown that even models using embeddings from transformers still benefit from the inclusion of standard word embeddings. However, the combination of embeddings of different types and dimensions is challenging. As an alternative to attention-based meta-embeddings, we propose feature-based adversarial meta-embeddings (FAME) with an attention function that is guided by features reflecting word-specific properties, such as shape and frequency, and show that this is beneficial to handle subword-based embeddings. In addition, FAME uses adversarial training to optimize the mappings of differently-sized embeddings to the same space. We demonstrate that FAME works effectively across languages and domains for sequence labeling and sentence classification, in particular in low-resource settings. FAME sets the new state of the art for POS tagging in 27 languages, various NER settings and question classification in different domains. |
2101.12169 | Navneet Garg | Navneet Garg, Junkai Zhang, Tharmalingam Ratnarajah | Rate-Energy Balanced Precoding Design for SWIPT based Two-Way Relay
Systems | arXiv admin note: text overlap with arXiv:2101.12161 | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Simultaneous wireless information and power transfer (SWIPT) technique is a
popular strategy to convey both information and RF energy for harvesting at
receivers. In this regard, we consider a two-way relay system with multiple
users and a multi-antenna relay employing SWIPT strategy, where splitting the
received signal leads to a rate-energy trade-off. In literature, the works on
transceiver design have been studied using computationally intensive and
suboptimal convex relaxation based schemes. In this paper, we study the
balanced precoder design using chordal distance (CD) decomposition, which
incurs much lower complexity, and is flexible to dynamic energy requirements.
It is analyzed that given a non-negative value of CD, the achieved harvested
energy for the proposed balanced precoder is higher than that for the perfect
interference alignment (IA) precoder. The corresponding loss in sum rates is
also analyzed via an upper bound. Simulation results add that the IA schemes
based on mean-squared error are better suited for the SWIPT maximization than
the subspace alignment-based methods.
| [
{
"created": "Thu, 28 Jan 2021 18:23:47 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Jun 2021 02:27:47 GMT",
"version": "v2"
}
] | 2021-06-08 | [
[
"Garg",
"Navneet",
""
],
[
"Zhang",
"Junkai",
""
],
[
"Ratnarajah",
"Tharmalingam",
""
]
] | Simultaneous wireless information and power transfer (SWIPT) technique is a popular strategy to convey both information and RF energy for harvesting at receivers. In this regard, we consider a two-way relay system with multiple users and a multi-antenna relay employing SWIPT strategy, where splitting the received signal leads to a rate-energy trade-off. In literature, the works on transceiver design have been studied using computationally intensive and suboptimal convex relaxation based schemes. In this paper, we study the balanced precoder design using chordal distance (CD) decomposition, which incurs much lower complexity, and is flexible to dynamic energy requirements. It is analyzed that given a non-negative value of CD, the achieved harvested energy for the proposed balanced precoder is higher than that for the perfect interference alignment (IA) precoder. The corresponding loss in sum rates is also analyzed via an upper bound. Simulation results add that the IA schemes based on mean-squared error are better suited for the SWIPT maximization than the subspace alignment-based methods. |
1301.3457 | Marcelo Cicconet | Marcelo Cicconet, Italo Lima, Davi Geiger, Kris Gunsalus | A Geometric Descriptor for Cell-Division Detection | This paper has been withdrawn by the author since the review process
for the conference to which it was applied ended | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a method for cell-division detection based on a geometric-driven
descriptor that can be represented as a 5-layers processing network, based
mainly on wavelet filtering and a test for mirror symmetry between pairs of
pixels. After the centroids of the descriptors are computed for a sequence of
frames, the two-steps piecewise constant function that best fits the sequence
of centroids determines the frame where the division occurs.
| [
{
"created": "Tue, 15 Jan 2013 19:18:52 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2013 18:32:07 GMT",
"version": "v2"
}
] | 2013-04-11 | [
[
"Cicconet",
"Marcelo",
""
],
[
"Lima",
"Italo",
""
],
[
"Geiger",
"Davi",
""
],
[
"Gunsalus",
"Kris",
""
]
] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the two-steps piecewise constant function that best fits the sequence of centroids determines the frame where the division occurs. |
2405.01857 | Seonyeong Heo | Byungchul Chae and Jiae Kim and Seonyeong Heo | TinySeg: Model Optimizing Framework for Image Segmentation on Tiny
Embedded Systems | LCTES 2024 | null | 10.1145/3652032.3657576 | null | cs.NE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image segmentation is one of the major computer vision tasks, which is
applicable in a variety of domains, such as autonomous navigation of an
unmanned aerial vehicle. However, image segmentation cannot easily materialize
on tiny embedded systems because image segmentation models generally have high
peak memory usage due to their architectural characteristics. This work finds
that image segmentation models unnecessarily require large memory space with an
existing tiny machine learning framework. That is, the existing framework
cannot effectively manage the memory space for the image segmentation models.
This work proposes TinySeg, a new model optimizing framework that enables
memory-efficient image segmentation for tiny embedded systems. TinySeg analyzes
the lifetimes of tensors in the target model and identifies long-living
tensors. Then, TinySeg optimizes the memory usage of the target model mainly
with two methods: (i) tensor spilling into local or remote storage and (ii)
fused fetching of spilled tensors. This work implements TinySeg on top of the
existing tiny machine learning framework and demonstrates that TinySeg can
reduce the peak memory usage of an image segmentation model by 39.3% for tiny
embedded systems.
| [
{
"created": "Fri, 3 May 2024 05:18:35 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Chae",
"Byungchul",
""
],
[
"Kim",
"Jiae",
""
],
[
"Heo",
"Seonyeong",
""
]
] | Image segmentation is one of the major computer vision tasks, which is applicable in a variety of domains, such as autonomous navigation of an unmanned aerial vehicle. However, image segmentation cannot easily materialize on tiny embedded systems because image segmentation models generally have high peak memory usage due to their architectural characteristics. This work finds that image segmentation models unnecessarily require large memory space with an existing tiny machine learning framework. That is, the existing framework cannot effectively manage the memory space for the image segmentation models. This work proposes TinySeg, a new model optimizing framework that enables memory-efficient image segmentation for tiny embedded systems. TinySeg analyzes the lifetimes of tensors in the target model and identifies long-living tensors. Then, TinySeg optimizes the memory usage of the target model mainly with two methods: (i) tensor spilling into local or remote storage and (ii) fused fetching of spilled tensors. This work implements TinySeg on top of the existing tiny machine learning framework and demonstrates that TinySeg can reduce the peak memory usage of an image segmentation model by 39.3% for tiny embedded systems. |
2209.09022 | Sean Wilkinson | Sean R. Wilkinson, Greg Eisenhauer, Anuj J. Kapadia, Kathryn Knight,
Jeremy Logan, Patrick Widener, and Matthew Wolf | F*** workflows: when parts of FAIR are missing | 6 pages, 0 figures, accepted to ERROR 2022 workshop (see
https://error-workshop.org/ for more information), to be published in
proceedings of IEEE eScience 2022 | null | 10.1109/eScience55777.2022.00090 | null | cs.DL cs.SE | http://creativecommons.org/licenses/by/4.0/ | The FAIR principles for scientific data (Findable, Accessible, Interoperable,
Reusable) are also relevant to other digital objects such as research software
and scientific workflows that operate on scientific data. The FAIR principles
can be applied to the data being handled by a scientific workflow as well as
the processes, software, and other infrastructure which are necessary to
specify and execute a workflow. The FAIR principles were designed as
guidelines, rather than rules, that would allow for differences in standards
for different communities and for different degrees of compliance. There are
many practical considerations which impact the level of FAIR-ness that can
actually be achieved, including policies, traditions, and technologies. Because
of these considerations, obstacles are often encountered during the workflow
lifecycle that trace directly to shortcomings in the implementation of the FAIR
principles. Here, we detail some cases, without naming names, in which data and
workflows were Findable but otherwise lacking in areas commonly needed and
expected by modern FAIR methods, tools, and users. We describe how some of
these problems, all of which were overcome successfully, have motivated us to
push on systems and approaches for fully FAIR workflows.
| [
{
"created": "Mon, 19 Sep 2022 13:58:51 GMT",
"version": "v1"
}
] | 2022-12-16 | [
[
"Wilkinson",
"Sean R.",
""
],
[
"Eisenhauer",
"Greg",
""
],
[
"Kapadia",
"Anuj J.",
""
],
[
"Knight",
"Kathryn",
""
],
[
"Logan",
"Jeremy",
""
],
[
"Widener",
"Patrick",
""
],
[
"Wolf",
"Matthew",
""
]
] | The FAIR principles for scientific data (Findable, Accessible, Interoperable, Reusable) are also relevant to other digital objects such as research software and scientific workflows that operate on scientific data. The FAIR principles can be applied to the data being handled by a scientific workflow as well as the processes, software, and other infrastructure which are necessary to specify and execute a workflow. The FAIR principles were designed as guidelines, rather than rules, that would allow for differences in standards for different communities and for different degrees of compliance. There are many practical considerations which impact the level of FAIR-ness that can actually be achieved, including policies, traditions, and technologies. Because of these considerations, obstacles are often encountered during the workflow lifecycle that trace directly to shortcomings in the implementation of the FAIR principles. Here, we detail some cases, without naming names, in which data and workflows were Findable but otherwise lacking in areas commonly needed and expected by modern FAIR methods, tools, and users. We describe how some of these problems, all of which were overcome successfully, have motivated us to push on systems and approaches for fully FAIR workflows. |
2212.03340 | Geoffrey Ramseyer | Mohak Goyal and Geoffrey Ramseyer and Ashish Goel and David Mazi\`eres | Finding the Right Curve: Optimal Design of Constant Function Market
Makers | 31 pages | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constant Function Market Makers (CFMMs) are a tool for creating exchange
markets, have been deployed effectively in prediction markets, and are now
especially prominent in the Decentralized Finance ecosystem. We show that for
any set of beliefs about future asset prices, an optimal CFMM trading function
exists that maximizes the fraction of trades that a CFMM can settle. We
formulate a convex program to compute this optimal trading function. This
program, therefore, gives a tractable framework for market-makers to compile
their belief function on the future prices of the underlying assets into the
trading function of a maximally capital-efficient CFMM.
Our convex optimization framework further extends to capture the tradeoffs
between fee revenue, arbitrage loss, and opportunity costs of liquidity
providers. Analyzing the program shows how the consideration of profit and loss
leads to a qualitatively different optimal trading function. Our model
additionally explains the diversity of CFMM designs that appear in practice. We
show that careful analysis of our convex program enables inference of a
market-maker's beliefs about future asset prices, and show that these beliefs
mirror the folklore intuition for several widely used CFMMs. Developing the
program requires a new notion of the liquidity of a CFMM, and the core
technical challenge is in the analysis of the KKT conditions of an optimization
over an infinite-dimensional Banach space.
| [
{
"created": "Tue, 6 Dec 2022 21:43:06 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Mar 2023 23:17:02 GMT",
"version": "v2"
}
] | 2023-03-06 | [
[
"Goyal",
"Mohak",
""
],
[
"Ramseyer",
"Geoffrey",
""
],
[
"Goel",
"Ashish",
""
],
[
"Mazières",
"David",
""
]
] | Constant Function Market Makers (CFMMs) are a tool for creating exchange markets, have been deployed effectively in prediction markets, and are now especially prominent in the Decentralized Finance ecosystem. We show that for any set of beliefs about future asset prices, an optimal CFMM trading function exists that maximizes the fraction of trades that a CFMM can settle. We formulate a convex program to compute this optimal trading function. This program, therefore, gives a tractable framework for market-makers to compile their belief function on the future prices of the underlying assets into the trading function of a maximally capital-efficient CFMM. Our convex optimization framework further extends to capture the tradeoffs between fee revenue, arbitrage loss, and opportunity costs of liquidity providers. Analyzing the program shows how the consideration of profit and loss leads to a qualitatively different optimal trading function. Our model additionally explains the diversity of CFMM designs that appear in practice. We show that careful analysis of our convex program enables inference of a market-maker's beliefs about future asset prices, and show that these beliefs mirror the folklore intuition for several widely used CFMMs. Developing the program requires a new notion of the liquidity of a CFMM, and the core technical challenge is in the analysis of the KKT conditions of an optimization over an infinite-dimensional Banach space. |
2210.09428 | Thomas Effland | Thomas Effland and Michael Collins | Improving Low-Resource Cross-lingual Parsing with Expected Statistic
Regularization | Accepted in TACL 2022, pre-MIT Press publication version | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present Expected Statistic Regularization (ESR), a novel regularization
technique that utilizes low-order multi-task structural statistics to shape
model distributions for semi-supervised learning on low-resource datasets. We
study ESR in the context of cross-lingual transfer for syntactic analysis (POS
tagging and labeled dependency parsing) and present several classes of
low-order statistic functions that bear on model behavior. Experimentally, we
evaluate the proposed statistics with ESR for unsupervised transfer on 5
diverse target languages and show that all statistics, when estimated
accurately, yield improvements to both POS and LAS, with the best statistic
improving POS by +7.0 and LAS by +8.5 on average. We also present
semi-supervised transfer and learning curve experiments that show ESR provides
significant gains over strong cross-lingual-transfer-plus-fine-tuning baselines
for modest amounts of label data. These results indicate that ESR is a
promising and complementary approach to model-transfer approaches for
cross-lingual parsing.
| [
{
"created": "Mon, 17 Oct 2022 20:44:57 GMT",
"version": "v1"
}
] | 2022-10-19 | [
[
"Effland",
"Thomas",
""
],
[
"Collins",
"Michael",
""
]
] | We present Expected Statistic Regularization (ESR), a novel regularization technique that utilizes low-order multi-task structural statistics to shape model distributions for semi-supervised learning on low-resource datasets. We study ESR in the context of cross-lingual transfer for syntactic analysis (POS tagging and labeled dependency parsing) and present several classes of low-order statistic functions that bear on model behavior. Experimentally, we evaluate the proposed statistics with ESR for unsupervised transfer on 5 diverse target languages and show that all statistics, when estimated accurately, yield improvements to both POS and LAS, with the best statistic improving POS by +7.0 and LAS by +8.5 on average. We also present semi-supervised transfer and learning curve experiments that show ESR provides significant gains over strong cross-lingual-transfer-plus-fine-tuning baselines for modest amounts of label data. These results indicate that ESR is a promising and complementary approach to model-transfer approaches for cross-lingual parsing. |
1301.6176 | Thijs Laarhoven | Thijs Laarhoven and Michele Mosca and Joop van de Pol | Solving the Shortest Vector Problem in Lattices Faster Using Quantum
Search | 19 pages | 5th International Workshop on Post-Quantum Cryptography
(PQCrypto), pp. 83-101, 2013 | 10.1007/978-3-642-38616-9_6 | null | cs.CR quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By applying Grover's quantum search algorithm to the lattice algorithms of
Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and
Stehl\'{e}, we obtain improved asymptotic quantum results for solving the
shortest vector problem. With quantum computers we can provably find a shortest
vector in time $2^{1.799n + o(n)}$, improving upon the classical time
complexity of $2^{2.465n + o(n)}$ of Pujol and Stehl\'{e} and the $2^{2n +
o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a
shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time
complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities
will be an important guide for the selection of parameters for post-quantum
cryptosystems based on the hardness of the shortest vector problem.
| [
{
"created": "Fri, 25 Jan 2013 21:15:55 GMT",
"version": "v1"
}
] | 2013-06-12 | [
[
"Laarhoven",
"Thijs",
""
],
[
"Mosca",
"Michele",
""
],
[
"van de Pol",
"Joop",
""
]
] | By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehl\'{e}, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time $2^{1.799n + o(n)}$, improving upon the classical time complexity of $2^{2.465n + o(n)}$ of Pujol and Stehl\'{e} and the $2^{2n + o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. |
2009.00146 | Ioannis Kordonis | Ioannis Kordonis, Athanasios-Rafail Lagos, George P. Papavassilopoulos | Nash Social Distancing Games with Equity Constraints: How Inequality
Aversion Affects the Spread of Epidemics | null | null | null | null | cs.GT cs.SY eess.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a game-theoretic model describing voluntary social
distancing during the spread of an epidemic. The payoffs of the agents depend
on the social distancing they practice and on the probability of getting
infected. We consider two types of agents, the non-vulnerable agents who have a
small cost if they get infected, and the vulnerable agents who have a higher
cost. For the modeling of the epidemic outbreak, we consider a variant of the
SIR (Susceptible-Infected-Removed) model involving populations of susceptible,
infected, and removed persons of vulnerable and non-vulnerable types. The Nash
equilibria of this social distancing game are studied. The main contribution of
this work is the analysis of the case where the players, desiring to achieve a
low social inequality, pose a bound on the variance of the payoffs. In this
case, we introduce and characterize a notion of Generalized Nash Equilibrium
(GNE) for games with a continuum of players. Through numerical studies, we show
that inequality constraints result in a slower spread of the epidemic and an
improved cost for the vulnerable players. Furthermore, it is possible that
inequality constraints are beneficial for non-vulnerable players as well.
| [
{
"created": "Mon, 31 Aug 2020 23:28:52 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Apr 2021 17:42:18 GMT",
"version": "v2"
}
] | 2021-04-06 | [
[
"Kordonis",
"Ioannis",
""
],
[
"Lagos",
"Athanasios-Rafail",
""
],
[
"Papavassilopoulos",
"George P.",
""
]
] | In this paper, we present a game-theoretic model describing voluntary social distancing during the spread of an epidemic. The payoffs of the agents depend on the social distancing they practice and on the probability of getting infected. We consider two types of agents, the non-vulnerable agents who have a small cost if they get infected, and the vulnerable agents who have a higher cost. For the modeling of the epidemic outbreak, we consider a variant of the SIR (Susceptible-Infected-Removed) model involving populations of susceptible, infected, and removed persons of vulnerable and non-vulnerable types. The Nash equilibria of this social distancing game are studied. The main contribution of this work is the analysis of the case where the players, desiring to achieve a low social inequality, pose a bound on the variance of the payoffs. In this case, we introduce and characterize a notion of Generalized Nash Equilibrium (GNE) for games with a continuum of players. Through numerical studies, we show that inequality constraints result in a slower spread of the epidemic and an improved cost for the vulnerable players. Furthermore, it is possible that inequality constraints are beneficial for non-vulnerable players as well. |
2101.10897 | Yunxiang Zhao | Yunxiang Zhao, Qiuhong Ke, Flip Korn, Jianzhong Qi, Rui Zhang | HexCNN: A Framework for Native Hexagonal Convolutional Neural Networks | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Hexagonal CNN models have shown superior performance in applications such as
IACT data analysis and aerial scene classification due to their better rotation
symmetry and reduced anisotropy. In order to realize hexagonal processing,
existing studies mainly use the ZeroOut method to imitate hexagonal processing,
which causes substantial memory and computation overheads. We address this
deficiency with a novel native hexagonal CNN framework named HexCNN. HexCNN
takes hexagon-shaped input and performs forward and backward propagation on the
original form of the input based on hexagon-shaped filters, hence avoiding
computation and memory overheads caused by imitation. For applications that have
rectangle-shaped input but require hexagonal processing, HexCNN can be applied
by padding the input into a hexagonal shape as preprocessing. In this case, we show
that the time and space efficiency of HexCNN still outperforms existing
hexagonal CNN methods substantially. Experimental results show that compared
with the state-of-the-art models, which imitate hexagonal processing but using
rectangle-shaped filters, HexCNN reduces the training time by up to 42.2%.
Meanwhile, HexCNN saves the memory space cost by up to 25% and 41.7% for
loading the input and performing convolution, respectively.
| [
{
"created": "Mon, 25 Jan 2021 08:23:39 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Zhao",
"Yunxiang",
""
],
[
"Ke",
"Qiuhong",
""
],
[
"Korn",
"Flip",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Zhang",
"Rui",
""
]
] | Hexagonal CNN models have shown superior performance in applications such as IACT data analysis and aerial scene classification due to their better rotation symmetry and reduced anisotropy. In order to realize hexagonal processing, existing studies mainly use the ZeroOut method to imitate hexagonal processing, which causes substantial memory and computation overheads. We address this deficiency with a novel native hexagonal CNN framework named HexCNN. HexCNN takes hexagon-shaped input and performs forward and backward propagation on the original form of the input based on hexagon-shaped filters, hence avoiding computation and memory overheads caused by imitation. For applications with rectangle-shaped input but require hexagonal processing, HexCNN can be applied by padding the input into hexagon-shape as preprocessing. In this case, we show that the time and space efficiency of HexCNN still outperforms existing hexagonal CNN methods substantially. Experimental results show that compared with the state-of-the-art models, which imitate hexagonal processing but using rectangle-shaped filters, HexCNN reduces the training time by up to 42.2%. Meanwhile, HexCNN saves the memory space cost by up to 25% and 41.7% for loading the input and performing convolution, respectively. |
2109.01386 | Rodothea Myrsini Tsoupidi | Rodothea Myrsini Tsoupidi and Musard Balliu and Benoit Baudry | Vivienne: Relational Verification of Cryptographic Implementations in
WebAssembly | null | null | null | null | cs.CR cs.PL cs.SC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper explores the use of relational symbolic execution to counter
timing side channels in WebAssembly programs. We design and implement Vivienne,
an open-source tool to automatically analyze WebAssembly cryptographic
libraries for constant-time violations. Our approach features various
optimizations that leverage the structure of WebAssembly and automated theorem
provers, including support for loops via relational invariants. We evaluate
Vivienne on 57 real-world cryptographic implementations, including a previously
unverified implementation of the HACL* library in WebAssembly. The results
indicate that Vivienne is a practical solution for constant-time analysis of
cryptographic libraries in WebAssembly.
| [
{
"created": "Fri, 3 Sep 2021 09:11:08 GMT",
"version": "v1"
}
] | 2021-09-06 | [
[
"Tsoupidi",
"Rodothea Myrsini",
""
],
[
"Balliu",
"Musard",
""
],
[
"Baudry",
"Benoit",
""
]
] | This paper explores the use of relational symbolic execution to counter timing side channels in WebAssembly programs. We design and implement Vivienne, an open-source tool to automatically analyze WebAssembly cryptographic libraries for constant-time violations. Our approach features various optimizations that leverage the structure of WebAssembly and automated theorem provers, including support for loops via relational invariants. We evaluate Vivienne on 57 real-world cryptographic implementations, including a previously unverified implementation of the HACL* library in WebAssembly. The results indicate that Vivienne is a practical solution for constant-time analysis of cryptographic libraries in WebAssembly. |
2101.01043 | Estefania Recayte | Estefan\'ia Recayte and Andrea Munari | Caching at the Edge: Outage Probability | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Caching at the edge of wireless networks is a key technology to reduce traffic
in the backhaul link. However, a concentrated amount of requests during
peak periods may cause the outage of the system, meaning that the network is not
able to serve the whole set of demands. The outage probability is a fundamental
metric to take into account during the network design. In this paper, we derive
the analytical expression of the outage probability as a function of the total
amount of user requests, library size, request distribution, cache size and
capacity constraints on the backhaul resources. In particular, we focus on a
scenario where end-users have no direct connection to the master node which
holds the complete library of content that can be requested. A general
formulation of the outage is derived and studied for two relevant caching
schemes, i.e. the random caching scheme and the most popular caching scheme.
The exact closed form expressions presented in this paper provide useful
insights on how requests, memory and resources can be balanced when the
parameters of a cache-enabled network have to be designed.
| [
{
"created": "Mon, 4 Jan 2021 16:07:53 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Recayte",
"Estefanía",
""
],
[
"Munari",
"Andrea",
""
]
  ] | Caching at the edge of wireless networks is a key technology to reduce traffic in the backhaul link. However, a concentrated amount of requests during peak periods may cause the outage of the system, meaning that the network is not able to serve the whole set of demands. The outage probability is a fundamental metric to take into account during the network design. In this paper, we derive the analytical expression of the outage probability as a function of the total amount of user requests, library size, request distribution, cache size and capacity constraints on the backhaul resources. In particular, we focus on a scenario where end-users have no direct connection to the master node which holds the complete library of content that can be requested. A general formulation of the outage is derived and studied for two relevant caching schemes, i.e. the random caching scheme and the most popular caching scheme. The exact closed form expressions presented in this paper provide useful insights on how requests, memory and resources can be balanced when the parameters of a cache-enabled network have to be designed. |
2105.05130 | Li Wang | Li Wang | Towards a Model for LSH | arXiv admin note: text overlap with arXiv:2103.01888 | null | null | null | cs.DB cs.DS cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | As data volumes continue to grow, clustering and outlier detection algorithms
are becoming increasingly time-consuming. Classical index structures for
neighbor search are no longer sustainable due to the "curse of dimensionality".
Instead, approximated index structures offer a good opportunity to
significantly accelerate the neighbor search for clustering and outlier
detection and to have the lowest possible error rate in the results of the
algorithms. Locality-sensitive hashing is one of those. We indicate directions
to model the properties of LSH.
| [
{
"created": "Tue, 11 May 2021 15:39:55 GMT",
"version": "v1"
}
] | 2021-05-12 | [
[
"Wang",
"Li",
""
]
] | As data volumes continue to grow, clustering and outlier detection algorithms are becoming increasingly time-consuming. Classical index structures for neighbor search are no longer sustainable due to the "curse of dimensionality". Instead, approximated index structures offer a good opportunity to significantly accelerate the neighbor search for clustering and outlier detection and to have the lowest possible error rate in the results of the algorithms. Locality-sensitive hashing is one of those. We indicate directions to model the properties of LSH. |
2403.10884 | Nibaran Das | Soumyajyoti Dey, Sukanta Chakraborty, Utso Guha Roy, Nibaran Das | Fuzzy Rank-based Late Fusion Technique for Cytology image Segmentation | Accept at International Conference on Data, Electronics and Computing
(ICDEC-2023) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cytology image segmentation is quite challenging due to its complex cellular
structure and multiple overlapping regions. On the other hand, for supervised
machine learning techniques, we need a large amount of annotated data, which is
costly. In recent years, late fusion techniques have given some promising
performances in the field of image classification. In this paper, we have
explored a fuzzy-based late fusion technique for cytology image segmentation.
This fusion rule integrates three traditional semantic segmentation models
UNet, SegNet, and PSPNet. The technique is applied on two cytology image
datasets, i.e., cervical cytology(HErlev) and breast cytology(JUCYT-v1) image
datasets. We have achieved maximum MeanIoU score 84.27% and 83.79% on the
HErlev dataset and JUCYT-v1 dataset after the proposed late fusion technique,
respectively, which are better than that of the traditional fusion rules such as
average probability, geometric mean, Borda Count, etc. The codes of the
proposed model are available on GitHub.
| [
{
"created": "Sat, 16 Mar 2024 10:33:02 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Dey",
"Soumyajyoti",
""
],
[
"Chakraborty",
"Sukanta",
""
],
[
"Roy",
"Utso Guha",
""
],
[
"Das",
"Nibaran",
""
]
  ] | Cytology image segmentation is quite challenging due to its complex cellular structure and multiple overlapping regions. On the other hand, for supervised machine learning techniques, we need a large amount of annotated data, which is costly. In recent years, late fusion techniques have given some promising performances in the field of image classification. In this paper, we have explored a fuzzy-based late fusion technique for cytology image segmentation. This fusion rule integrates three traditional semantic segmentation models UNet, SegNet, and PSPNet. The technique is applied on two cytology image datasets, i.e., cervical cytology(HErlev) and breast cytology(JUCYT-v1) image datasets. We have achieved maximum MeanIoU score 84.27% and 83.79% on the HErlev dataset and JUCYT-v1 dataset after the proposed late fusion technique, respectively, which are better than that of the traditional fusion rules such as average probability, geometric mean, Borda Count, etc. The codes of the proposed model are available on GitHub. |