id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2202.11940 | Soumyabrata Pal | Arya Mazumdar, Soumyabrata Pal | Support Recovery in Mixture Models with Sparse Parameters | 55 pages, Shorter version titled "On Learning Mixture Models with
Sparse Parameters" accepted at AISTATS 2022 | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixture models are widely used to fit complex and multimodal datasets. In
this paper we study mixtures with high dimensional sparse latent parameter
vectors and consider the problem of support recovery of those vectors. While
parameter learning in mixture models is well-studied, the sparsity constraint
remains relatively unexplored. Sparsity of parameter vectors is a natural
constraint in a variety of settings, and support recovery is a major step towards
parameter estimation. We provide efficient algorithms for support recovery that
have a logarithmic sample complexity dependence on the dimensionality of the
latent space. Our algorithms are quite general, namely they are applicable to
1) mixtures of many different canonical distributions including Uniform,
Poisson, Laplace, Gaussians, etc. 2) mixtures of linear regressions and linear
classifiers with Gaussian covariates under different assumptions on the unknown
parameters. In most of these settings, our results are the first guarantees on
the problem while in the rest, our results provide improvements on existing
works.
| [
{
"created": "Thu, 24 Feb 2022 07:44:23 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Sep 2022 10:24:47 GMT",
"version": "v2"
}
] | 2022-09-13 | [
[
"Mazumdar",
"Arya",
""
],
[
"Pal",
"Soumyabrata",
""
]
] | Mixture models are widely used to fit complex and multimodal datasets. In this paper we study mixtures with high dimensional sparse latent parameter vectors and consider the problem of support recovery of those vectors. While parameter learning in mixture models is well-studied, the sparsity constraint remains relatively unexplored. Sparsity of parameter vectors is a natural constraint in a variety of settings, and support recovery is a major step towards parameter estimation. We provide efficient algorithms for support recovery that have a logarithmic sample complexity dependence on the dimensionality of the latent space. Our algorithms are quite general, namely they are applicable to 1) mixtures of many different canonical distributions including Uniform, Poisson, Laplace, Gaussians, etc. 2) mixtures of linear regressions and linear classifiers with Gaussian covariates under different assumptions on the unknown parameters. In most of these settings, our results are the first guarantees on the problem while in the rest, our results provide improvements on existing works. |
1612.09394 | Kwonsoo Chae | Kwonsoo Chae and Hakjoo Oh and Kihong Heo and Hongseok Yang | Automatically generating features for learning program analysis
heuristics | null | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a technique for automatically generating features for data-driven
program analyses. Recently, data-driven approaches for building a program
analysis have been proposed, which mine existing codebases and automatically
learn heuristics for finding a cost-effective abstraction for a given analysis
task. Such approaches reduce the burden of the analysis designers, but they do
not remove it completely; they still leave the highly nontrivial task of
designing so-called features in the hands of the designers. Our technique
automates this feature design process. The idea is to use programs as features
after reducing and abstracting them. Our technique goes through selected
program-query pairs in codebases, and it reduces and abstracts the program in
each pair to a few lines of code, while ensuring that the analysis behaves
similarly for the original and the new programs with respect to the query. Each
reduced program serves as a boolean feature for program-query pairs. This
feature evaluates to true for a given program-query pair when (as a program) it
is included in the program part of the pair. We have implemented our approach
for three real-world program analyses. Our experimental evaluation shows that
these analyses with automatically-generated features perform comparably to
those with manually crafted features.
| [
{
"created": "Fri, 30 Dec 2016 05:55:56 GMT",
"version": "v1"
}
] | 2017-01-02 | [
[
"Chae",
"Kwonsoo",
""
],
[
"Oh",
"Hakjoo",
""
],
[
"Heo",
"Kihong",
""
],
[
"Yang",
"Hongseok",
""
]
] | We present a technique for automatically generating features for data-driven program analyses. Recently, data-driven approaches for building a program analysis have been proposed, which mine existing codebases and automatically learn heuristics for finding a cost-effective abstraction for a given analysis task. Such approaches reduce the burden of the analysis designers, but they do not remove it completely; they still leave the highly nontrivial task of designing so-called features in the hands of the designers. Our technique automates this feature design process. The idea is to use programs as features after reducing and abstracting them. Our technique goes through selected program-query pairs in codebases, and it reduces and abstracts the program in each pair to a few lines of code, while ensuring that the analysis behaves similarly for the original and the new programs with respect to the query. Each reduced program serves as a boolean feature for program-query pairs. This feature evaluates to true for a given program-query pair when (as a program) it is included in the program part of the pair. We have implemented our approach for three real-world program analyses. Our experimental evaluation shows that these analyses with automatically-generated features perform comparably to those with manually crafted features. |
2009.10047 | Congcong Wang | Congcong Wang and David Lillis | UCD-CS at W-NUT 2020 Shared Task-3: A Text to Text Approach for COVID-19
Event Extraction on Social Media | 8 pages, 2 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we describe our approach in the shared task: COVID-19 event
extraction from Twitter. The objective of this task is to extract answers from
COVID-related tweets to a set of predefined slot-filling questions. Our
approach treats the event extraction task as a question answering task by
leveraging the transformer-based T5 text-to-text model.
According to the official evaluation scores returned, namely F1, our
submitted run achieves competitive performance compared to other participating
runs (Top 3). However, we argue that this evaluation may underestimate the
actual performance of runs based on text-generation. Although some such runs
may answer the slot questions well, they may not be an exact string match for
the gold standard answers. To measure the extent of this underestimation, we
adopt a simple exact-answer transformation method aiming at converting the
well-answered predictions to exactly-matched predictions. The results show that
after this transformation our run overall reaches the same level of performance
as the best participating run and state-of-the-art F1 scores in three of five
COVID-related events. Our code is publicly available to aid reproducibility.
| [
{
"created": "Mon, 21 Sep 2020 17:39:00 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Oct 2020 16:18:53 GMT",
"version": "v2"
}
] | 2021-02-19 | [
[
"Wang",
"Congcong",
""
],
[
"Lillis",
"David",
""
]
] | In this paper, we describe our approach in the shared task: COVID-19 event extraction from Twitter. The objective of this task is to extract answers from COVID-related tweets to a set of predefined slot-filling questions. Our approach treats the event extraction task as a question answering task by leveraging the transformer-based T5 text-to-text model. According to the official evaluation scores returned, namely F1, our submitted run achieves competitive performance compared to other participating runs (Top 3). However, we argue that this evaluation may underestimate the actual performance of runs based on text-generation. Although some such runs may answer the slot questions well, they may not be an exact string match for the gold standard answers. To measure the extent of this underestimation, we adopt a simple exact-answer transformation method aiming at converting the well-answered predictions to exactly-matched predictions. The results show that after this transformation our run overall reaches the same level of performance as the best participating run and state-of-the-art F1 scores in three of five COVID-related events. Our code is publicly available to aid reproducibility. |
1702.08300 | Merim Dzaferagic | Merim Dzaferagic, Nicholas Kaminski, Irene Macaluso, Nicola Marchetti | How Functional Complexity affects the Scalability-Energy Efficiency
Trade-Off of HCC WSN Clustering | arXiv admin note: substantial text overlap with arXiv:1610.05970 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Even though clustering algorithms in Wireless Sensor Networks (WSN) are a
well-investigated subject, the increasing interest in the Internet of Things
(IoT) and 5G technologies has precipitated the need for new ways to comprehend
and overcome a new set of challenges. While studies mainly propose new
algorithms and compare these algorithms based on a set of properties (e.g.
energy efficiency, scalability), none of them focuses on the underlying
mechanisms and organizational patterns that lead to these properties. We
address this lack of understanding by applying a complex systems science
approach to investigate the properties of WSNs arising from the communication
patterns of the network nodes. We represent different implementations of
clustering in WSNs with a functional topology graph. Moreover, we employ a
complexity metric - functional complexity (CF) - to explain how local
interactions give rise to the global behavior of the network. Our analysis
shows that higher values of CF indicate higher scalability and lower energy
efficiency.
| [
{
"created": "Mon, 27 Feb 2017 14:28:26 GMT",
"version": "v1"
}
] | 2017-02-28 | [
[
"Dzaferagic",
"Merim",
""
],
[
"Kaminski",
"Nicholas",
""
],
[
"Macaluso",
"Irene",
""
],
[
"Marchetti",
"Nicola",
""
]
] | Even though clustering algorithms in Wireless Sensor Networks (WSN) are a well-investigated subject, the increasing interest in the Internet of Things (IoT) and 5G technologies has precipitated the need for new ways to comprehend and overcome a new set of challenges. While studies mainly propose new algorithms and compare these algorithms based on a set of properties (e.g. energy efficiency, scalability), none of them focuses on the underlying mechanisms and organizational patterns that lead to these properties. We address this lack of understanding by applying a complex systems science approach to investigate the properties of WSNs arising from the communication patterns of the network nodes. We represent different implementations of clustering in WSNs with a functional topology graph. Moreover, we employ a complexity metric - functional complexity (CF) - to explain how local interactions give rise to the global behavior of the network. Our analysis shows that higher values of CF indicate higher scalability and lower energy efficiency. |
1802.03064 | Dirk Pfl\"uger | Markus K\"oppel and Fabian Franzelin and Ilja Kr\"oker and Sergey
Oladyshkin and Gabriele Santin and Dominik Wittwar and Andrea Barth and
Bernard Haasdonk and Wolfgang Nowak and Dirk Pfl\"uger and Christian Rohde | Comparison of data-driven uncertainty quantification methods for a
carbon dioxide storage benchmark scenario | null | null | 10.1007/s10596-018-9785-x | null | cs.CE cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of methods is available to quantify uncertainties arising with\-in
the modeling of flow and transport in carbon dioxide storage, but there is a
lack of thorough comparisons. Usually, raw data from such storage sites can
hardly be described by theoretical statistical distributions since only very
limited data is available. Hence, exact information on distribution shapes for
all uncertain parameters is very rare in realistic applications. We discuss and
compare four different methods tested for data-driven uncertainty
quantification based on a benchmark scenario of carbon dioxide storage. In the
benchmark, for which we provide data and code, carbon dioxide is injected into
a saline aquifer modeled by the nonlinear capillarity-free fractional flow
formulation for two incompressible fluid phases, namely carbon dioxide and
brine. To cover different aspects of uncertainty quantification, we incorporate
various sources of uncertainty such as uncertainty of boundary conditions, of
conceptual model definitions and of material properties. We consider recent
versions of the following non-intrusive and intrusive uncertainty
quantification methods: arbitrary polynomial chaos, spatially adaptive sparse
grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The
performance of each approach is demonstrated assessing expectation value and
standard deviation of the carbon dioxide saturation against a reference
statistic based on Monte Carlo sampling. We compare the convergence of all
methods reporting on accuracy with respect to the number of model runs and
resolution. Finally we offer suggestions about the methods' advantages and
disadvantages that can guide the modeler for uncertainty quantification in
carbon dioxide storage and beyond.
| [
{
"created": "Thu, 8 Feb 2018 22:27:38 GMT",
"version": "v1"
}
] | 2018-11-13 | [
[
"Köppel",
"Markus",
""
],
[
"Franzelin",
"Fabian",
""
],
[
"Kröker",
"Ilja",
""
],
[
"Oladyshkin",
"Sergey",
""
],
[
"Santin",
"Gabriele",
""
],
[
"Wittwar",
"Dominik",
""
],
[
"Barth",
"Andrea",
""
],
[
"Haasdonk",
"Bernard",
""
],
[
"Nowak",
"Wolfgang",
""
],
[
"Pflüger",
"Dirk",
""
],
[
"Rohde",
"Christian",
""
]
] | A variety of methods is available to quantify uncertainties arising within the modeling of flow and transport in carbon dioxide storage, but there is a lack of thorough comparisons. Usually, raw data from such storage sites can hardly be described by theoretical statistical distributions since only very limited data is available. Hence, exact information on distribution shapes for all uncertain parameters is very rare in realistic applications. We discuss and compare four different methods tested for data-driven uncertainty quantification based on a benchmark scenario of carbon dioxide storage. In the benchmark, for which we provide data and code, carbon dioxide is injected into a saline aquifer modeled by the nonlinear capillarity-free fractional flow formulation for two incompressible fluid phases, namely carbon dioxide and brine. To cover different aspects of uncertainty quantification, we incorporate various sources of uncertainty such as uncertainty of boundary conditions, of conceptual model definitions and of material properties. We consider recent versions of the following non-intrusive and intrusive uncertainty quantification methods: arbitrary polynomial chaos, spatially adaptive sparse grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The performance of each approach is demonstrated assessing expectation value and standard deviation of the carbon dioxide saturation against a reference statistic based on Monte Carlo sampling. We compare the convergence of all methods reporting on accuracy with respect to the number of model runs and resolution. Finally we offer suggestions about the methods' advantages and disadvantages that can guide the modeler for uncertainty quantification in carbon dioxide storage and beyond. |
1904.03122 | Stefan Larson | Stefan Larson, Anish Mahendran, Andrew Lee, Jonathan K. Kummerfeld,
Parker Hill, Michael A. Laurenzano, Johann Hauswald, Lingjia Tang, Jason Mars | Outlier Detection for Improved Data Quality and Diversity in Dialog
Systems | Accepted as long paper to NAACL 2019 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a corpus of data, outliers are either errors: mistakes in the data that
are counterproductive, or are unique: informative samples that improve model
robustness. Identifying outliers can lead to better datasets by (1) removing
noise in datasets and (2) guiding collection of additional data to fill gaps.
However, the problem of detecting both outlier types has received relatively
little attention in NLP, particularly for dialog systems. We introduce a simple
and effective technique for detecting both erroneous and unique samples in a
corpus of short texts using neural sentence embeddings combined with
distance-based outlier detection. We also present a novel data collection
pipeline built atop our detection technique to automatically and iteratively
mine unique data samples while discarding erroneous samples. Experiments show
that our outlier detection technique is effective at finding errors while our
data collection pipeline yields highly diverse corpora that in turn produce
more robust intent classification and slot-filling models.
| [
{
"created": "Fri, 5 Apr 2019 15:31:28 GMT",
"version": "v1"
}
] | 2019-04-08 | [
[
"Larson",
"Stefan",
""
],
[
"Mahendran",
"Anish",
""
],
[
"Lee",
"Andrew",
""
],
[
"Kummerfeld",
"Jonathan K.",
""
],
[
"Hill",
"Parker",
""
],
[
"Laurenzano",
"Michael A.",
""
],
[
"Hauswald",
"Johann",
""
],
[
"Tang",
"Lingjia",
""
],
[
"Mars",
"Jason",
""
]
] | In a corpus of data, outliers are either errors: mistakes in the data that are counterproductive, or are unique: informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models. |
0912.3429 | Pietro Sala Mr. | A. Montanari, G. Puppis, P. Sala, G. Sciavicco | Decidability of the interval temporal logic ABBar over the natural
numbers | null | null | null | null | cs.LO | http://creativecommons.org/licenses/by/3.0/ | In this paper, we focus our attention on the interval temporal logic of
Allen's relations "meets", "begins", and "begun by" (ABBar for short),
interpreted over natural numbers. We first introduce the logic and we show that
it is expressive enough to model distinctive interval properties, such as
accomplishment conditions, to capture basic modalities of point-based temporal
logic, such as the until operator, and to encode relevant metric constraints.
Then, we prove that the satisfiability problem for ABBar over natural numbers
is decidable by providing a small model theorem based on an original
contraction method. Finally, we prove the EXPSPACE-completeness of the problem.
| [
{
"created": "Thu, 17 Dec 2009 15:22:45 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Feb 2010 13:40:37 GMT",
"version": "v2"
}
] | 2010-02-03 | [
[
"Montanari",
"A.",
""
],
[
"Puppis",
"G.",
""
],
[
"Sala",
"P.",
""
],
[
"Sciavicco",
"G.",
""
]
] | In this paper, we focus our attention on the interval temporal logic of Allen's relations "meets", "begins", and "begun by" (ABBar for short), interpreted over natural numbers. We first introduce the logic and we show that it is expressive enough to model distinctive interval properties, such as accomplishment conditions, to capture basic modalities of point-based temporal logic, such as the until operator, and to encode relevant metric constraints. Then, we prove that the satisfiability problem for ABBar over natural numbers is decidable by providing a small model theorem based on an original contraction method. Finally, we prove the EXPSPACE-completeness of the problem. |
2404.00801 | Ye Liu | Ye Liu, Jixuan He, Wanhua Li, Junsik Kim, Donglai Wei, Hanspeter
Pfister, Chang Wen Chen | $R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video
Temporal Grounding | ECCV 2024 Camera Ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video temporal grounding (VTG) is a fine-grained video understanding problem
that aims to ground relevant clips in untrimmed videos given natural language
queries. Most existing VTG models are built upon frame-wise final-layer CLIP
features, aided by additional temporal backbones (e.g., SlowFast) with
sophisticated temporal reasoning mechanisms. In this work, we claim that CLIP
itself already shows great potential for fine-grained spatial-temporal
modeling, as each layer offers distinct yet useful information under different
granularity levels. Motivated by this, we propose Reversed Recurrent Tuning
($R^2$-Tuning), a parameter- and memory-efficient transfer learning framework
for video temporal grounding. Our method learns a lightweight $R^2$ Block
containing only 1.5% of the total parameters to perform progressive
spatial-temporal modeling. Starting from the last layer of CLIP, $R^2$ Block
recurrently aggregates spatial features from earlier layers, then refines
temporal correlation conditioning on the given query, resulting in a
coarse-to-fine scheme. $R^2$-Tuning achieves state-of-the-art performance
across three VTG tasks (i.e., moment retrieval, highlight detection, and video
summarization) on six public benchmarks (i.e., QVHighlights, Charades-STA,
Ego4D-NLQ, TACoS, YouTube Highlights, and TVSum) even without the additional
backbone, demonstrating the significance and effectiveness of the proposed
scheme. Our code is available at https://github.com/yeliudev/R2-Tuning.
| [
{
"created": "Sun, 31 Mar 2024 21:17:48 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jul 2024 16:17:07 GMT",
"version": "v2"
}
] | 2024-07-23 | [
[
"Liu",
"Ye",
""
],
[
"He",
"Jixuan",
""
],
[
"Li",
"Wanhua",
""
],
[
"Kim",
"Junsik",
""
],
[
"Wei",
"Donglai",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Chen",
"Chang Wen",
""
]
] | Video temporal grounding (VTG) is a fine-grained video understanding problem that aims to ground relevant clips in untrimmed videos given natural language queries. Most existing VTG models are built upon frame-wise final-layer CLIP features, aided by additional temporal backbones (e.g., SlowFast) with sophisticated temporal reasoning mechanisms. In this work, we claim that CLIP itself already shows great potential for fine-grained spatial-temporal modeling, as each layer offers distinct yet useful information under different granularity levels. Motivated by this, we propose Reversed Recurrent Tuning ($R^2$-Tuning), a parameter- and memory-efficient transfer learning framework for video temporal grounding. Our method learns a lightweight $R^2$ Block containing only 1.5% of the total parameters to perform progressive spatial-temporal modeling. Starting from the last layer of CLIP, $R^2$ Block recurrently aggregates spatial features from earlier layers, then refines temporal correlation conditioning on the given query, resulting in a coarse-to-fine scheme. $R^2$-Tuning achieves state-of-the-art performance across three VTG tasks (i.e., moment retrieval, highlight detection, and video summarization) on six public benchmarks (i.e., QVHighlights, Charades-STA, Ego4D-NLQ, TACoS, YouTube Highlights, and TVSum) even without the additional backbone, demonstrating the significance and effectiveness of the proposed scheme. Our code is available at https://github.com/yeliudev/R2-Tuning. |
1509.07513 | Marco Antonio Valenzuela Esc\'arcega | Marco A. Valenzuela-Esc\'arcega, Gus Hahn-Powell, Mihai Surdeanu | Description of the Odin Event Extraction Framework and Rule Language | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document describes the Odin framework, which is a domain-independent
platform for developing rule-based event extraction models. Odin aims to be
powerful (the rule language allows the modeling of complex syntactic
structures) and robust (to recover from syntactic parsing errors, syntactic
patterns can be freely mixed with surface, token-based patterns), while
remaining simple (some domain grammars can be up and running in minutes), and
fast (Odin processes over 100 sentences/second in a real-world domain with over
200 rules). Here we include a thorough definition of the Odin rule language,
together with a description of the Odin API in the Scala language, which allows
one to apply these rules to arbitrary texts.
| [
{
"created": "Thu, 24 Sep 2015 20:10:27 GMT",
"version": "v1"
}
] | 2015-09-28 | [
[
"Valenzuela-Escárcega",
"Marco A.",
""
],
[
"Hahn-Powell",
"Gus",
""
],
[
"Surdeanu",
"Mihai",
""
]
] | This document describes the Odin framework, which is a domain-independent platform for developing rule-based event extraction models. Odin aims to be powerful (the rule language allows the modeling of complex syntactic structures) and robust (to recover from syntactic parsing errors, syntactic patterns can be freely mixed with surface, token-based patterns), while remaining simple (some domain grammars can be up and running in minutes), and fast (Odin processes over 100 sentences/second in a real-world domain with over 200 rules). Here we include a thorough definition of the Odin rule language, together with a description of the Odin API in the Scala language, which allows one to apply these rules to arbitrary texts. |
2110.14124 | Wang Chen | Wang Chen, Jian Chen, Weitian Wu, Xinmin Yang, Hui Li | A novel multiobjective evolutionary algorithm based on decomposition and
multi-reference points strategy | null | null | null | null | cs.NE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world optimization problems such as engineering design can be
eventually modeled as the corresponding multiobjective optimization problems
(MOPs) which must be solved to obtain approximate Pareto optimal fronts.
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been
regarded as a significantly promising approach for solving MOPs. Recent studies
have shown that MOEA/D with uniform weight vectors is well-suited to MOPs with
regular Pareto optimal fronts, but its performance in terms of diversity
usually deteriorates when solving MOPs with irregular Pareto optimal fronts. As
a result, the solution set obtained by the algorithm cannot provide more
reasonable choices for decision makers. In order to efficiently overcome this
drawback, we propose an improved MOEA/D algorithm by virtue of the well-known
Pascoletti-Serafini scalarization method and a new strategy of multi-reference
points. Specifically, this strategy consists of the setting and adaptation of
reference points generated by the techniques of equidistant partition and
projection. For performance assessment, the proposed algorithm is compared with
four existing state-of-the-art multiobjective evolutionary algorithms on
benchmark test problems with various types of Pareto optimal fronts. According
to the experimental results, the proposed algorithm exhibits better diversity
performance than that of the other compared algorithms. Finally, our algorithm
is applied to two real-world MOPs in engineering optimization successfully.
| [
{
"created": "Wed, 27 Oct 2021 02:07:08 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Nov 2021 13:31:17 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Nov 2021 07:01:08 GMT",
"version": "v3"
},
{
"created": "Wed, 3 Nov 2021 11:03:40 GMT",
"version": "v4"
},
{
"created": "Mon, 8 Nov 2021 16:07:28 GMT",
"version": "v5"
},
{
"created": "Thu, 11 Nov 2021 08:21:35 GMT",
"version": "v6"
}
] | 2021-11-12 | [
[
"Chen",
"Wang",
""
],
[
"Chen",
"Jian",
""
],
[
"Wu",
"Weitian",
""
],
[
"Yang",
"Xinmin",
""
],
[
"Li",
"Hui",
""
]
] | Many real-world optimization problems such as engineering design can be eventually modeled as the corresponding multiobjective optimization problems (MOPs) which must be solved to obtain approximate Pareto optimal fronts. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a significantly promising approach for solving MOPs. Recent studies have shown that MOEA/D with uniform weight vectors is well-suited to MOPs with regular Pareto optimal fronts, but its performance in terms of diversity usually deteriorates when solving MOPs with irregular Pareto optimal fronts. As a result, the solution set obtained by the algorithm cannot provide more reasonable choices for decision makers. In order to efficiently overcome this drawback, we propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points. Specifically, this strategy consists of the setting and adaptation of reference points generated by the techniques of equidistant partition and projection. For performance assessment, the proposed algorithm is compared with four existing state-of-the-art multiobjective evolutionary algorithms on benchmark test problems with various types of Pareto optimal fronts. According to the experimental results, the proposed algorithm exhibits better diversity performance than that of the other compared algorithms. Finally, our algorithm is applied to two real-world MOPs in engineering optimization successfully. |
2202.11460 | Pavel Hrab\'ak | Hana Najmanov\'a and Veronika Pe\v{s}kov\'a and Luk\'a\v{s} Kukl\'ik
and Marek Buk\'a\v{c}ek and Pavel Hrab\'ak and Daniel Va\v{s}ata | Evacuation trials from a double-deck electric train unit: Experimental
data and sensitivity analysis | null | Safety Science, Volume 146, 2022, 105523, ISSN 0925-7535 | 10.1016/j.ssci.2021.105523 | null | cs.MA physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Passenger trains represent a challenging environment in emergencies, with
specific evacuation conditions resulting from the typical layout and interior
design inherent to public transportation vehicles. This paper describes a
dataset obtained in a full-scale controlled experiment emulating the emergency
evacuation of a double-deck electric unit railcar carried out in Prague in
2018. Fifteen evacuation trials involving 91 participants were conducted under
various evacuation scenarios considering different compositions of passenger
crowd, exit widths, and exit types (e.g. egress to a high platform, to an open
rail line using stairs, and a 750 mm jump without any supporting equipment).
The study's main goals were to collect experimental data on the movement
conditions in the railcar and to study the impact of various boundary
conditions on evacuation process and total evacuation time. Movement
characteristics (exit flows, speeds) and human behaviour (pre-movement
activities, exiting behaviours) were also analysed.
The data obtained was used to validate and adjust a Pathfinder model to
capture important aspects of evacuation from the railcar. Furthermore, a series
of simulations using this model was performed to provide sensitivity analysis
of the influence of crowd composition, exit width, and exit type on total
evacuation time. As a key finding, we can conclude that for the case of a
standard exit path (platform or stairs) the width of the main exit had the
greatest impact on total evacuation time, however, crowd composition played the
prevailing role in evacuation scenarios involving a jump.
| [
{
"created": "Wed, 23 Feb 2022 12:25:06 GMT",
"version": "v1"
}
] | 2022-02-24 | [
[
"Najmanová",
"Hana",
""
],
[
"Pešková",
"Veronika",
""
],
[
"Kuklík",
"Lukáš",
""
],
[
"Bukáček",
"Marek",
""
],
[
"Hrabák",
"Pavel",
""
],
[
"Vašata",
"Daniel",
""
]
] | Passenger trains represent a challenging environment in emergencies, with specific evacuation conditions resulting from the typical layout and interior design inherent to public transportation vehicles. This paper describes a dataset obtained in a full-scale controlled experiment emulating the emergency evacuation of a double-deck electric unit railcar carried out in Prague in 2018. 15 evacuation trials involving 91 participants were conducted under various evacuation scenarios considering different compositions of passenger crowd, exit widths, and exit types (e.g. egress to a high platform, to an open rail line using stairs, and a 750 mm jump without any supporting equipment). The study's main goals were to collect experimental data on the movement conditions in the railcar and to study the impact of various boundary conditions on evacuation process and total evacuation time. Movement characteristics (exit flows, speeds) and human behaviour (pre-movement activities, exiting behaviours) were also analysed. The data obtained was used to validate and adjust a Pathfinder model to capture important aspects of evacuation from the railcar. Furthermore, a series of simulations using this model was performed to provide sensitivity analysis of the influence of crowd composition, exit width, and exit type on total evacuation time. As a key finding, we can conclude that for the case of a standard exit path (platform or stairs) the width of the main exit had the greatest impact on total evacuation time, however, crowd composition played the prevailing role in evacuation scenarios involving a jump. |
2401.11697 | Sundar Narayanan | Sundaraparipurnan Narayanan, Mark Potkewitz | A risk-based approach to assessing liability risk for AI-driven harms
considering EU liability directive | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence can cause inconvenience, harm, or other unintended
consequences in various ways, including those that arise from defects or
malfunctions in the AI system itself or those caused by its use or misuse.
Responsibility for AI harms or unintended consequences must be addressed to
hold accountable the people who caused such harms and ensure that victims
receive compensation for any damages or losses they may have sustained.
Historical instances of harm caused by AI have led to the European Union
establishing an AI Liability Directive. The directive aims to lay down a
uniform set of rules for access to information, delineate the duty and level of
care required for AI development and use, and clarify the burden of proof for
damages or harms caused by AI systems, establishing broader protection for
victims. The future ability of a provider to contest a product liability claim
will depend on good practices adopted in designing, developing, and maintaining
AI systems in the market. This paper provides a risk-based approach to
examining liability for AI-driven injuries. It also provides an overview of
existing liability approaches, insights into limitations and complexities in
these approaches, and a detailed self-assessment questionnaire to assess the
risk associated with liability for a specific AI system from a provider's
perspective.
| [
{
"created": "Mon, 18 Dec 2023 15:52:43 GMT",
"version": "v1"
}
] | 2024-01-23 | [
[
"Narayanan",
"Sundaraparipurnan",
""
],
[
"Potkewitz",
"Mark",
""
]
] | Artificial intelligence can cause inconvenience, harm, or other unintended consequences in various ways, including those that arise from defects or malfunctions in the AI system itself or those caused by its use or misuse. Responsibility for AI harms or unintended consequences must be addressed to hold accountable the people who caused such harms and ensure that victims receive compensation for any damages or losses they may have sustained. Historical instances of harm caused by AI have led to European Union establishing an AI Liability Directive. The directive aims to lay down a uniform set of rules for access to information, delineate the duty and level of care required for AI development and use, and clarify the burden of proof for damages or harms caused by AI systems, establishing broader protection for victims. The future ability of provider to contest a product liability claim will depend on good practices adopted in designing, developing, and maintaining AI systems in the market. This paper provides a risk-based approach to examining liability for AI-driven injuries. It also provides an overview of existing liability approaches, insights into limitations and complexities in these approaches, and a detailed self-assessment questionnaire to assess the risk associated with liability for a specific AI system from a provider's perspective. |
2405.00181 | Binzhu Xie | Hang Du, Sicheng Zhang, Binzhu Xie, Guoshun Nan, Jiayang Zhang, Junrui
Xu, Hangyu Liu, Sicong Leng, Jiangming Liu, Hehe Fan, Dajiu Huang, Jing Feng,
Linli Chen, Can Zhang, Xuhuan Li, Hao Zhang, Jianhang Chen, Qimei Cui,
Xiaofeng Tao | Uncovering What, Why and How: A Comprehensive Benchmark for Causation
Understanding of Video Anomaly | Accepted in CVPR2024, Codebase: https://github.com/fesvhtr/CUVA | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video anomaly understanding (VAU) aims to automatically comprehend unusual
occurrences in videos, thereby enabling various applications such as traffic
surveillance and industrial manufacturing. While existing VAU benchmarks
primarily concentrate on anomaly detection and localization, our focus is on
more practicality, prompting us to raise the following crucial questions: "what
anomaly occurred?", "why did it happen?", and "how severe is this abnormal
event?". In pursuit of these answers, we present a comprehensive benchmark for
Causation Understanding of Video Anomaly (CUVA). Specifically, each instance of
the proposed benchmark involves three sets of human annotations to indicate the
"what", "why" and "how" of an anomaly, including 1) anomaly type, start and end
times, and event descriptions, 2) natural language explanations for the cause
of an anomaly, and 3) free text reflecting the effect of the abnormality. In
addition, we also introduce MMEval, a novel evaluation metric designed to
better align with human preferences for CUVA, facilitating the measurement of
existing LLMs in comprehending the underlying cause and corresponding effect of
video anomalies. Finally, we propose a novel prompt-based method that can serve
as a baseline approach for the challenging CUVA. We conduct extensive
experiments to show the superiority of our evaluation metric and the
prompt-based approach. Our code and dataset are available at
https://github.com/fesvhtr/CUVA.
| [
{
"created": "Tue, 30 Apr 2024 20:11:49 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 14:57:50 GMT",
"version": "v2"
}
] | 2024-05-07 | [
[
"Du",
"Hang",
""
],
[
"Zhang",
"Sicheng",
""
],
[
"Xie",
"Binzhu",
""
],
[
"Nan",
"Guoshun",
""
],
[
"Zhang",
"Jiayang",
""
],
[
"Xu",
"Junrui",
""
],
[
"Liu",
"Hangyu",
""
],
[
"Leng",
"Sicong",
""
],
[
"Liu",
"Jiangming",
""
],
[
"Fan",
"Hehe",
""
],
[
"Huang",
"Dajiu",
""
],
[
"Feng",
"Jing",
""
],
[
"Chen",
"Linli",
""
],
[
"Zhang",
"Can",
""
],
[
"Li",
"Xuhuan",
""
],
[
"Zhang",
"Hao",
""
],
[
"Chen",
"Jianhang",
""
],
[
"Cui",
"Qimei",
""
],
[
"Tao",
"Xiaofeng",
""
]
] | Video anomaly understanding (VAU) aims to automatically comprehend unusual occurrences in videos, thereby enabling various applications such as traffic surveillance and industrial manufacturing. While existing VAU benchmarks primarily concentrate on anomaly detection and localization, our focus is on more practicality, prompting us to raise the following crucial questions: "what anomaly occurred?", "why did it happen?", and "how severe is this abnormal event?". In pursuit of these answers, we present a comprehensive benchmark for Causation Understanding of Video Anomaly (CUVA). Specifically, each instance of the proposed benchmark involves three sets of human annotations to indicate the "what", "why" and "how" of an anomaly, including 1) anomaly type, start and end times, and event descriptions, 2) natural language explanations for the cause of an anomaly, and 3) free text reflecting the effect of the abnormality. In addition, we also introduce MMEval, a novel evaluation metric designed to better align with human preferences for CUVA, facilitating the measurement of existing LLMs in comprehending the underlying cause and corresponding effect of video anomalies. Finally, we propose a novel prompt-based method that can serve as a baseline approach for the challenging CUVA. We conduct extensive experiments to show the superiority of our evaluation metric and the prompt-based approach. Our code and dataset are available at https://github.com/fesvhtr/CUVA. |
2008.02146 | Santosh Vempala | He Jia, Aditi Laddha, Yin Tat Lee, Santosh S. Vempala | Reducing Isotropy and Volume to KLS: An $O(n^3\psi^2)$ Volume Algorithm | 23 pages, 1 figure; updated with current KLS bound and resulting
complexity | null | null | null | cs.DS cs.CC math.FA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the volume of a convex body in ${\bf R}^{n}$ in the general
membership oracle model can be computed to within relative error $\varepsilon$
using $\widetilde{O}(n^{3}\psi^{2} + n^{3}/\varepsilon^{2})$ oracle queries,
where $\psi$ is the KLS constant. With the current bound of
$\psi=\widetilde{O}(1)$, this gives an $\widetilde{O}(n^{3}/\varepsilon^{2})$
algorithm, improving on the Lov\'{a}sz-Vempala
$\widetilde{O}(n^{4}/\varepsilon^{2})$ algorithm from 2003. The main new
ingredient is an $\widetilde{O}(n^{3}\psi^{2})$ algorithm for isotropic
transformation, following which we can apply the
$\widetilde{O}(n^{3}/\varepsilon^{2})$ volume algorithm of Cousins and Vempala
for well-rounded convex bodies. We also give an efficient implementation of the
new algorithm for convex polytopes defined by $m$ inequalities in ${\bf
R}^{n}$: polytope volume can be estimated in time
$\widetilde{O}(mn^{c}/\varepsilon^{2})$ where $c<3.2$ depends on the current
matrix multiplication exponent; this improves known bounds.
| [
{
"created": "Wed, 5 Aug 2020 14:08:16 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Sep 2022 11:18:21 GMT",
"version": "v2"
}
] | 2022-09-07 | [
[
"Jia",
"He",
""
],
[
"Laddha",
"Aditi",
""
],
[
"Lee",
"Yin Tat",
""
],
[
"Vempala",
"Santosh S.",
""
]
] | We show that the volume of a convex body in ${\bf R}^{n}$ in the general membership oracle model can be computed to within relative error $\varepsilon$ using $\widetilde{O}(n^{3}\psi^{2} + n^{3}/\varepsilon^{2})$ oracle queries, where $\psi$ is the KLS constant. With the current bound of $\psi=\widetilde{O}(1)$, this gives an $\widetilde{O}(n^{3}/\varepsilon^{2})$ algorithm, improving on the Lov\'{a}sz-Vempala $\widetilde{O}(n^{4}/\varepsilon^{2})$ algorithm from 2003. The main new ingredient is an $\widetilde{O}(n^{3}\psi^{2})$ algorithm for isotropic transformation, following which we can apply the $\widetilde{O}(n^{3}/\varepsilon^{2})$ volume algorithm of Cousins and Vempala for well-rounded convex bodies. We also give an efficient implementation of the new algorithm for convex polytopes defined by $m$ inequalities in ${\bf R}^{n}$: polytope volume can be estimated in time $\widetilde{O}(mn^{c}/\varepsilon^{2})$ where $c<3.2$ depends on the current matrix multiplication exponent; this improves known bounds. |
2406.18310 | Jie Liu | Wenting Chen, Jie Liu, Tommy W.S. Chow, Yixuan Yuan | Spatial-temporal Hierarchical Reinforcement Learning for Interpretable
Pathology Image Super-Resolution | Accepted to IEEE TRANSACTIONS ON MEDICAL IMAGING (TMI) | null | null | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Pathology images are essential for accurately interpreting lesion cells in
cytopathology screening, but acquiring high-resolution digital slides requires
specialized equipment and long scanning times. Though super-resolution (SR)
techniques can alleviate this problem, existing deep learning models recover
pathology image in a black-box manner, which can lead to untruthful biological
details and misdiagnosis. Additionally, current methods allocate the same
computational resources to recover each pixel of pathology image, leading to
the sub-optimal recovery issue due to the large variation of pathology image.
In this paper, we propose the first hierarchical reinforcement learning
framework named Spatial-Temporal hierARchical Reinforcement Learning (STAR-RL),
mainly for addressing the aforementioned issues in pathology image
super-resolution problem. We reformulate the SR problem as a Markov decision
process of interpretable operations and adopt the hierarchical recovery
mechanism in patch level, to avoid sub-optimal recovery. Specifically, the
higher-level spatial manager is proposed to pick out the most corrupted patch
for the lower-level patch worker. Moreover, the higher-level temporal manager
is advanced to evaluate the selected patch and determine whether the
optimization should be stopped earlier, thereby avoiding the over-processed
problem. Under the guidance of spatial-temporal managers, the lower-level patch
worker processes the selected patch with pixel-wise interpretable actions at
each time step. Experimental results on medical images degraded by different
kernels show the effectiveness of STAR-RL. Furthermore, STAR-RL validates the
promotion in tumor diagnosis with a large margin and shows generalizability
under various degradations. The source code is available at
https://github.com/CUHK-AIM-Group/STAR-RL.
| [
{
"created": "Wed, 26 Jun 2024 12:50:10 GMT",
"version": "v1"
}
] | 2024-06-27 | [
[
"Chen",
"Wenting",
""
],
[
"Liu",
"Jie",
""
],
[
"Chow",
"Tommy W. S.",
""
],
[
"Yuan",
"Yixuan",
""
]
] | Pathology image are essential for accurately interpreting lesion cells in cytopathology screening, but acquiring high-resolution digital slides requires specialized equipment and long scanning times. Though super-resolution (SR) techniques can alleviate this problem, existing deep learning models recover pathology image in a black-box manner, which can lead to untruthful biological details and misdiagnosis. Additionally, current methods allocate the same computational resources to recover each pixel of pathology image, leading to the sub-optimal recovery issue due to the large variation of pathology image. In this paper, we propose the first hierarchical reinforcement learning framework named Spatial-Temporal hierARchical Reinforcement Learning (STAR-RL), mainly for addressing the aforementioned issues in pathology image super-resolution problem. We reformulate the SR problem as a Markov decision process of interpretable operations and adopt the hierarchical recovery mechanism in patch level, to avoid sub-optimal recovery. Specifically, the higher-level spatial manager is proposed to pick out the most corrupted patch for the lower-level patch worker. Moreover, the higher-level temporal manager is advanced to evaluate the selected patch and determine whether the optimization should be stopped earlier, thereby avoiding the over-processed problem. Under the guidance of spatial-temporal managers, the lower-level patch worker processes the selected patch with pixel-wise interpretable actions at each time step. Experimental results on medical images degraded by different kernels show the effectiveness of STAR-RL. Furthermore, STAR-RL validates the promotion in tumor diagnosis with a large margin and shows generalizability under various degradations. The source code is available at https://github.com/CUHK-AIM-Group/STAR-RL. |
2404.08472 | Emadeldeen Eldele | Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Xiaoli Li | TSLANet: Rethinking Transformers for Time Series Representation Learning | Accepted in ICML 2024 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Time series data, characterized by its intrinsic long and short-range
dependencies, poses a unique challenge across analytical applications. While
Transformer-based models excel at capturing long-range dependencies, they face
limitations in noise sensitivity, computational efficiency, and overfitting
with smaller datasets. In response, we introduce a novel Time Series
Lightweight Adaptive Network (TSLANet), as a universal convolutional model for
diverse time series tasks. Specifically, we propose an Adaptive Spectral Block,
harnessing Fourier analysis to enhance feature representation and to capture
both long-term and short-term interactions while mitigating noise via adaptive
thresholding. Additionally, we introduce an Interactive Convolution Block and
leverage self-supervised learning to refine the capacity of TSLANet for
decoding complex temporal patterns and improve its robustness on different
datasets. Our comprehensive experiments demonstrate that TSLANet outperforms
state-of-the-art models in various tasks spanning classification, forecasting,
and anomaly detection, showcasing its resilience and adaptability across a
spectrum of noise levels and data sizes. The code is available at
https://github.com/emadeldeen24/TSLANet.
| [
{
"created": "Fri, 12 Apr 2024 13:41:29 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 04:00:17 GMT",
"version": "v2"
}
] | 2024-05-07 | [
[
"Eldele",
"Emadeldeen",
""
],
[
"Ragab",
"Mohamed",
""
],
[
"Chen",
"Zhenghua",
""
],
[
"Wu",
"Min",
""
],
[
"Li",
"Xiaoli",
""
]
] | Time series data, characterized by its intrinsic long and short-range dependencies, poses a unique challenge across analytical applications. While Transformer-based models excel at capturing long-range dependencies, they face limitations in noise sensitivity, computational efficiency, and overfitting with smaller datasets. In response, we introduce a novel Time Series Lightweight Adaptive Network (TSLANet), as a universal convolutional model for diverse time series tasks. Specifically, we propose an Adaptive Spectral Block, harnessing Fourier analysis to enhance feature representation and to capture both long-term and short-term interactions while mitigating noise via adaptive thresholding. Additionally, we introduce an Interactive Convolution Block and leverage self-supervised learning to refine the capacity of TSLANet for decoding complex temporal patterns and improve its robustness on different datasets. Our comprehensive experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection, showcasing its resilience and adaptability across a spectrum of noise levels and data sizes. The code is available at https://github.com/emadeldeen24/TSLANet. |
1207.4166 | Trey Smith | Trey Smith, Reid Simmons | Heuristic Search Value Iteration for POMDPs | Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004) | null | null | UAI-P-2004-PG-520-527 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel POMDP planning algorithm called heuristic search value
iteration (HSVI). HSVI is an anytime algorithm that returns a policy and a
provable bound on its regret with respect to the optimal policy. HSVI gets its
power by combining two well-known techniques: attention-focusing search
heuristics and piecewise linear convex representations of the value function.
HSVI's soundness and convergence have been proven. On some benchmark problems
from the literature, HSVI displays speedups of greater than 100 with respect to
other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to
a new rover exploration problem 10 times larger than most POMDP problems in the
literature.
| [
{
"created": "Wed, 11 Jul 2012 15:04:47 GMT",
"version": "v1"
}
] | 2012-07-19 | [
[
"Smith",
"Trey",
""
],
[
"Simmons",
"Reid",
""
]
] | We present a novel POMDP planning algorithm called heuristic search value iteration (HSVI).HSVI is an anytime algorithm that returns a policy and a provable bound on its regret with respect to the optimal policy. HSVI gets its power by combining two well-known techniques: attention-focusing search heuristics and piecewise linear convex representations of the value function. HSVI's soundness and convergence have been proven. On some benchmark problems from the literature, HSVI displays speedups of greater than 100 with respect to other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to a new rover exploration problem 10 times larger than most POMDP problems in the literature. |
2211.07738 | Amanda Calatrava Arroyo | Amanda Calatrava, Hern\'an Asorey, Jan Astalos, Alberto Azevedo,
Francesco Benincasa, Ignacio Blanquer, Martin Bobak, Francisco Brasileiro,
Laia Cod\'o, Laura del Cano, Borja Esteban, Meritxell Ferret, Josef Handl,
Tobias Kerzenmacher, Valentin Kozlov, Ale\v{s} K\v{r}enek, Ricardo Martins,
Manuel Pavesio, Antonio Juan Rubio-Montero, Juan S\'anchez-Ferrero | A survey of the European Open Science Cloud services for expanding the
capacity and capabilities of multidisciplinary scientific applications | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Open Science is a paradigm in which scientific data, procedures, tools and
results are shared transparently and reused by society as a whole. The
initiative known as the European Open Science Cloud (EOSC) is an effort in
Europe to provide an open, trusted, virtual and federated computing environment
to execute scientific applications, and to store, share and re-use research
data across borders and scientific disciplines. Additionally, scientific
services are becoming increasingly data-intensive, not only in terms of
computationally intensive tasks but also in terms of storage resources.
Computing paradigms such as High Performance Computing (HPC) and Cloud
Computing are applied to e-science applications to meet these demands. However,
adapting applications and services to these paradigms is not a trivial task,
commonly requiring a deep knowledge of the underlying technologies, which often
constitutes a barrier for its uptake by scientists in general. In this context,
EOSC-SYNERGY, a collaborative project involving more than 20 institutions from
eight European countries pooling their knowledge and experience to enhance
EOSC's capabilities and capacities, aims to bring EOSC closer to the scientific
communities. This article provides a summary analysis of the adaptations made
in the ten thematic services of EOSC-SYNERGY to embrace this paradigm. These
services are grouped into four categories: Earth Observation, Environment,
Biomedicine, and Astrophysics. The analysis will lead to the identification of
commonalities, best practices and common requirements, regardless of the
thematic area of the service. Experience gained from the thematic services
could be transferred to new services for the adoption of the EOSC ecosystem
framework.
| [
{
"created": "Mon, 14 Nov 2022 20:33:27 GMT",
"version": "v1"
}
] | 2022-11-16 | [
[
"Calatrava",
"Amanda",
""
],
[
"Asorey",
"Hernán",
""
],
[
"Astalos",
"Jan",
""
],
[
"Azevedo",
"Alberto",
""
],
[
"Benincasa",
"Francesco",
""
],
[
"Blanquer",
"Ignacio",
""
],
[
"Bobak",
"Martin",
""
],
[
"Brasileiro",
"Francisco",
""
],
[
"Codó",
"Laia",
""
],
[
"del Cano",
"Laura",
""
],
[
"Esteban",
"Borja",
""
],
[
"Ferret",
"Meritxell",
""
],
[
"Handl",
"Josef",
""
],
[
"Kerzenmacher",
"Tobias",
""
],
[
"Kozlov",
"Valentin",
""
],
[
"Křenek",
"Aleš",
""
],
[
"Martins",
"Ricardo",
""
],
[
"Pavesio",
"Manuel",
""
],
[
"Rubio-Montero",
"Antonio Juan",
""
],
[
"Sánchez-Ferrero",
"Juan",
""
]
] | Open Science is a paradigm in which scientific data, procedures, tools and results are shared transparently and reused by society as a whole. The initiative known as the European Open Science Cloud (EOSC) is an effort in Europe to provide an open, trusted, virtual and federated computing environment to execute scientific applications, and to store, share and re-use research data across borders and scientific disciplines. Additionally, scientific services are becoming increasingly data-intensive, not only in terms of computationally intensive tasks but also in terms of storage resources. Computing paradigms such as High Performance Computing (HPC) and Cloud Computing are applied to e-science applications to meet these demands. However, adapting applications and services to these paradigms is not a trivial task, commonly requiring a deep knowledge of the underlying technologies, which often constitutes a barrier for its uptake by scientists in general. In this context, EOSC-SYNERGY, a collaborative project involving more than 20 institutions from eight European countries pooling their knowledge and experience to enhance EOSC's capabilities and capacities, aims to bring EOSC closer to the scientific communities. This article provides a summary analysis of the adaptations made in the ten thematic services of EOSC-SYNERGY to embrace this paradigm. These services are grouped into four categories: Earth Observation, Environment, Biomedicine, and Astrophysics. The analysis will lead to the identification of commonalities, best practices and common requirements, regardless of the thematic area of the service. Experience gained from the thematic services could be transferred to new services for the adoption of the EOSC ecosystem framework. |
2402.00086 | Wenguan Wang | Xu Zhang and Yiming Mo and Wenguan Wang and Yi Yang | Retrosynthesis prediction enhanced by in-silico reaction data
augmentation | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in machine learning (ML) have expedited retrosynthesis
research by assisting chemists to design experiments more efficiently. However,
all ML-based methods consume substantial amounts of paired training data (i.e.,
chemical reaction: product-reactant(s) pair), which is costly to obtain.
Moreover, companies view reaction data as a valuable asset and restrict the
accessibility to researchers. These issues prevent the creation of more
powerful retrosynthesis models due to their data-driven nature. As a response,
we exploit easy-to-access unpaired data (i.e., one component of
product-reactant(s) pair) for generating in-silico paired data to facilitate
model training. Specifically, we present RetroWISE, a self-boosting framework
that employs a base model inferred from real paired data to perform in-silico
reaction generation and augmentation using unpaired data, ultimately leading to
a superior model. On three benchmark datasets, RetroWISE achieves the best
overall performance against state-of-the-art models (e.g., +8.6% top-1 accuracy
on the USPTO-50K test dataset). Moreover, it consistently improves the
prediction accuracy of rare transformations. These results show that
RetroWISE overcomes the training bottleneck by in-silico reactions, thereby paving
the way toward more effective ML-based retrosynthesis models.
| [
{
"created": "Wed, 31 Jan 2024 07:40:37 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Zhang",
"Xu",
""
],
[
"Mo",
"Yiming",
""
],
[
"Wang",
"Wenguan",
""
],
[
"Yang",
"Yi",
""
]
] | Recent advances in machine learning (ML) have expedited retrosynthesis research by assisting chemists to design experiments more efficiently. However, all ML-based methods consume substantial amounts of paired training data (i.e., chemical reaction: product-reactant(s) pair), which is costly to obtain. Moreover, companies view reaction data as a valuable asset and restrict the accessibility to researchers. These issues prevent the creation of more powerful retrosynthesis models due to their data-driven nature. As a response, we exploit easy-to-access unpaired data (i.e., one component of product-reactant(s) pair) for generating in-silico paired data to facilitate model training. Specifically, we present RetroWISE, a self-boosting framework that employs a base model inferred from real paired data to perform in-silico reaction generation and augmentation using unpaired data, ultimately leading to a superior model. On three benchmark datasets, RetroWISE achieves the best overall performance against state-of-the-art models (e.g., +8.6% top-1 accuracy on the USPTO-50K test dataset). Moreover, it consistently improves the prediction accuracy of rare transformations. These results show that Retro- WISE overcomes the training bottleneck by in-silico reactions, thereby paving the way toward more effective ML-based retrosynthesis models. |
1811.12065 | Lile Cai | Lile Cai, Anne-Maelle Barneche, Arthur Herbout, Chuan Sheng Foo, Jie
Lin, Vijay Ramaseshan Chandrasekhar and Mohamed M. Sabry | TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural
Networks | Accepted by ISLPED2019 | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embedded deep learning platforms have witnessed two simultaneous
improvements. First, the accuracy of convolutional neural networks (CNNs) has
been significantly improved through the use of automated neural-architecture
search (NAS) algorithms to determine CNN structure. Second, there has been
increasing interest in developing hardware accelerators for CNNs that provide
improved inference performance and energy consumption compared to GPUs. Such
embedded deep learning platforms differ in the amount of compute resources and
memory-access bandwidth, which would affect performance and energy consumption
of CNNs. It is therefore critical to consider the available hardware resources
in the network architecture search. To this end, we introduce TEA-DNN, a NAS
algorithm targeting multi-objective optimization of execution time, energy
consumption, and classification accuracy of CNN workloads on embedded
architectures. TEA-DNN leverages energy and execution time measurements on
embedded hardware when exploring the Pareto-optimal curves across accuracy,
execution time, and energy consumption and does not require additional effort
to model the underlying hardware. We apply TEA-DNN for image classification on
actual embedded platforms (NVIDIA Jetson TX2 and Intel Movidius Neural Compute
Stick). We highlight the Pareto-optimal operating points that emphasize the
necessity to explicitly consider hardware characteristics in the search
process. To the best of our knowledge, this is the most comprehensive study of
Pareto-optimal models across a range of hardware platforms using actual
measurements on hardware to obtain objective values.
| [
{
"created": "Thu, 29 Nov 2018 11:05:28 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Oct 2019 07:39:19 GMT",
"version": "v2"
}
] | 2019-10-22 | [
[
"Cai",
"Lile",
""
],
[
"Barneche",
"Anne-Maelle",
""
],
[
"Herbout",
"Arthur",
""
],
[
"Foo",
"Chuan Sheng",
""
],
[
"Lin",
"Jie",
""
],
[
"Chandrasekhar",
"Vijay Ramaseshan",
""
],
[
"Sabry",
"Mohamed M.",
""
]
] | Embedded deep learning platforms have witnessed two simultaneous improvements. First, the accuracy of convolutional neural networks (CNNs) has been significantly improved through the use of automated neural-architecture search (NAS) algorithms to determine CNN structure. Second, there has been increasing interest in developing hardware accelerators for CNNs that provide improved inference performance and energy consumption compared to GPUs. Such embedded deep learning platforms differ in the amount of compute resources and memory-access bandwidth, which would affect performance and energy consumption of CNNs. It is therefore critical to consider the available hardware resources in the network architecture search. To this end, we introduce TEA-DNN, a NAS algorithm targeting multi-objective optimization of execution time, energy consumption, and classification accuracy of CNN workloads on embedded architectures. TEA-DNN leverages energy and execution time measurements on embedded hardware when exploring the Pareto-optimal curves across accuracy, execution time, and energy consumption and does not require additional effort to model the underlying hardware. We apply TEA-DNN for image classification on actual embedded platforms (NVIDIA Jetson TX2 and Intel Movidius Neural Compute Stick). We highlight the Pareto-optimal operating points that emphasize the necessity to explicitly consider hardware characteristics in the search process. To the best of our knowledge, this is the most comprehensive study of Pareto-optimal models across a range of hardware platforms using actual measurements on hardware to obtain objective values. |
1212.3540 | Dima Kagan | Yehonatan Bitton, Michael Fire, Dima Kagan, Bracha Shapira, Lior
Rokach, Judit Bar-Ilan | Social Network Based Search for Experts | Participated in HCIR 2012 | null | null | null | cs.SI cs.HC cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our system illustrates how information retrieved from social networks can be
used for suggesting experts for specific tasks. The system is designed to
facilitate the task of finding the appropriate person(s) for a job, as a
conference committee member, an advisor, etc. This short description will
demonstrate how the system works in the context of the HCIR2012 published
tasks.
| [
{
"created": "Fri, 14 Dec 2012 17:35:31 GMT",
"version": "v1"
}
] | 2012-12-17 | [
[
"Bitton",
"Yehonatan",
""
],
[
"Fire",
"Michael",
""
],
[
"Kagan",
"Dima",
""
],
[
"Shapira",
"Bracha",
""
],
[
"Rokach",
"Lior",
""
],
[
"Bar-Ilan",
"Judit",
""
]
] | Our system illustrates how information retrieved from social networks can be used for suggesting experts for specific tasks. The system is designed to facilitate the task of finding the appropriate person(s) for a job, as a conference committee member, an advisor, etc. This short description will demonstrate how the system works in the context of the HCIR2012 published tasks. |
2304.01472 | Xinru Zhang | Xinru Zhang, Ni Ou, Chenghao Liu, Zhizheng Zhuo, Yaou Liu, and Chuyang
Ye | Unsupervised Brain Tumor Segmentation with Image-based Prompts | Currently under review (from November 14th, 2022 until now) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated brain tumor segmentation based on deep learning (DL) has achieved
promising performance. However, it generally relies on annotated images for
model training, which is not always feasible in clinical settings. Therefore,
the development of unsupervised DL-based brain tumor segmentation approaches
without expert annotations is desired. Motivated by the success of prompt
learning (PL) in natural language processing, we propose an approach to
unsupervised brain tumor segmentation by designing image-based prompts that
allow indication of brain tumors, and this approach is dubbed as PL-based Brain
Tumor Segmentation (PL-BTS). Specifically, instead of directly training a model
for brain tumor segmentation with a large amount of annotated data, we seek to
train a model that can answer the question: is a voxel in the input image
associated with tumor-like hyper-/hypo-intensity? Such a model can be trained
by artificially generating tumor-like hyper-/hypo-intensity on images without
tumors with hand-crafted designs. Since the hand-crafted designs may be too
simplistic to represent all kinds of real tumors, the trained model may overfit
the simplistic hand-crafted task rather than actually answer the question of
abnormality. To address this problem, we propose the use of a validation task,
where we generate a different hand-crafted task to monitor overfitting. In
addition, we propose PL-BTS+ that further improves PL-BTS by exploiting
unannotated images with brain tumors. Compared with competing unsupervised
methods, the proposed method has achieved marked improvements on both public
and in-house datasets, and we have also demonstrated its possible extension to
other brain lesion segmentation tasks.
| [
{
"created": "Tue, 4 Apr 2023 02:28:25 GMT",
"version": "v1"
}
] | 2023-04-05 | [
[
"Zhang",
"Xinru",
""
],
[
"Ou",
"Ni",
""
],
[
"Liu",
"Chenghao",
""
],
[
"Zhuo",
"Zhizheng",
""
],
[
"Liu",
"Yaou",
""
],
[
"Ye",
"Chuyang",
""
]
] | Automated brain tumor segmentation based on deep learning (DL) has achieved promising performance. However, it generally relies on annotated images for model training, which is not always feasible in clinical settings. Therefore, the development of unsupervised DL-based brain tumor segmentation approaches without expert annotations is desired. Motivated by the success of prompt learning (PL) in natural language processing, we propose an approach to unsupervised brain tumor segmentation by designing image-based prompts that allow indication of brain tumors, and this approach is dubbed as PL-based Brain Tumor Segmentation (PL-BTS). Specifically, instead of directly training a model for brain tumor segmentation with a large amount of annotated data, we seek to train a model that can answer the question: is a voxel in the input image associated with tumor-like hyper-/hypo-intensity? Such a model can be trained by artificially generating tumor-like hyper-/hypo-intensity on images without tumors with hand-crafted designs. Since the hand-crafted designs may be too simplistic to represent all kinds of real tumors, the trained model may overfit the simplistic hand-crafted task rather than actually answer the question of abnormality. To address this problem, we propose the use of a validation task, where we generate a different hand-crafted task to monitor overfitting. In addition, we propose PL-BTS+ that further improves PL-BTS by exploiting unannotated images with brain tumors. Compared with competing unsupervised methods, the proposed method has achieved marked improvements on both public and in-house datasets, and we have also demonstrated its possible extension to other brain lesion segmentation tasks. |
1306.4755 | Chen Gong | Shuying Li, Chen Gong, Xiaodong Wang | Hybrid Group Decoding for Scalable Video over MIMO-OFDM Downlink Systems | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a scalable video broadcasting scheme over MIMO-OFDM systems. The
scalable video source layers are channel encoded and modulated into independent
signal streams, which are then transmitted from the allocated antennas in
certain time-frequency blocks. Each receiver employs the successive group
decoder to decode the signal streams of interest by treating other signal
streams as interference. The transmitter performs adaptive coding and
modulation, and transmission antenna and subcarrier allocation, based on the
rate feedback from the receivers. We also propose a hybrid receiver that
switches between the successive group decoder and the MMSE decoder depending on
the rate. Extensive simulations are provided to demonstrate the performance
gain of the proposed group-decoding-based scalable video broadcasting scheme
over the one based on the conventional MMSE decoding.
| [
{
"created": "Thu, 20 Jun 2013 05:12:41 GMT",
"version": "v1"
}
] | 2013-06-21 | [
[
"Li",
"Shuying",
""
],
[
"Gong",
"Chen",
""
],
[
"Wang",
"Xiaodong",
""
]
] | We propose a scalable video broadcasting scheme over MIMO-OFDM systems. The scalable video source layers are channel encoded and modulated into independent signal streams, which are then transmitted from the allocated antennas in certain time-frequency blocks. Each receiver employs the successive group decoder to decode the signal streams of interest by treating other signal streams as interference. The transmitter performs adaptive coding and modulation, and transmission antenna and subcarrier allocation, based on the rate feedback from the receivers. We also propose a hybrid receiver that switches between the successive group decoder and the MMSE decoder depending on the rate. Extensive simulations are provided to demonstrate the performance gain of the proposed group-decoding-based scalable video broadcasting scheme over the one based on the conventional MMSE decoding. |
1712.01126 | Huimiao Chen | Yinghao Jia, Yide Zhao, Ziyang Guo, Yu Xin, Huimiao Chen | Optimizing Electric Taxi Charging System: A Data-Driven Approach from
Transport Energy Supply Chain Perspective | null | null | null | null | cs.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last decade, the development of electric taxis has motivated rapidly
growing research interest in efficiently allocating electric charging stations
in the academic literature. To address the driving pattern of electric taxis,
we introduce the perspective of transport energy supply chain to capture the
charging demand and to transform the charging station allocation problem into a
location problem. Based on the P-median and the Min-max models, we develop a
data-driven method to evaluate the system efficiency and service quality. We
also conduct a case study using GPS trajectory data in Beijing, where various
location strategies are evaluated from perspectives of system efficiency and
service quality. Also, situations with and without congestion are comparatively
evaluated.
| [
{
"created": "Mon, 4 Dec 2017 14:56:32 GMT",
"version": "v1"
}
] | 2017-12-05 | [
[
"Jia",
"Yinghao",
""
],
[
"Zhao",
"Yide",
""
],
[
"Guo",
"Ziyang",
""
],
[
"Xin",
"Yu",
""
],
[
"Chen",
"Huimiao",
""
]
] | In the last decade, the development of electric taxis has motivated rapidly growing research interest in efficiently allocating electric charging stations in the academic literature. To address the driving pattern of electric taxis, we introduce the perspective of transport energy supply chain to capture the charging demand and to transform the charging station allocation problem into a location problem. Based on the P-median and the Min-max models, we develop a data-driven method to evaluate the system efficiency and service quality. We also conduct a case study using GPS trajectory data in Beijing, where various location strategies are evaluated from perspectives of system efficiency and service quality. Also, situations with and without congestion are comparatively evaluated.
1507.07815 | Svebor Karaman | Giuseppe Lisanti and Svebor Karaman and Daniele Pezzatini and Alberto
Del Bimbo | A Multi-Camera Image Processing and Visualization System for Train
Safety Assessment | 11 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a machine vision system to efficiently monitor,
analyze and present visual data acquired with a railway overhead gantry
equipped with multiple cameras. This solution aims to improve the safety of
daily life railway transportation in a twofold manner: (1) by providing
automatic algorithms that can process large imagery of trains; (2) by helping
train operators to keep attention on any possible malfunction. The system is
designed with the latest cutting-edge, high-rate visible and thermal cameras
that observe a train passing under a railway overhead gantry. The machine
vision system is composed of three principal modules: (1) an automatic wagon
identification system, recognizing the wagon ID according to the UIC
classification of railway coaches; (2) a temperature monitoring system; (3) a
system for the detection, localization and visualization of the pantograph of
the train. These three machine vision modules process batched train sequences,
and the resulting analyses are presented to an operator using a multitouch
user interface. We detail all technical aspects of our multi-camera portal: the
hardware requirements, the software developed to deal with the high-frame-rate
cameras and ensure reliable acquisition, the algorithms proposed to solve each
computer vision task, and the multitouch interaction and visualization
interface. We evaluate each component of our system on a dataset recorded in an
ad-hoc railway test-bed, showing the potential of our proposed portal for train
safety assessment.
| [
{
"created": "Tue, 28 Jul 2015 15:36:24 GMT",
"version": "v1"
}
] | 2015-07-29 | [
[
"Lisanti",
"Giuseppe",
""
],
[
"Karaman",
"Svebor",
""
],
[
"Pezzatini",
"Daniele",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] | In this paper we present a machine vision system to efficiently monitor, analyze and present visual data acquired with a railway overhead gantry equipped with multiple cameras. This solution aims to improve the safety of daily life railway transportation in a twofold manner: (1) by providing automatic algorithms that can process large imagery of trains; (2) by helping train operators to keep attention on any possible malfunction. The system is designed with the latest cutting-edge, high-rate visible and thermal cameras that observe a train passing under a railway overhead gantry. The machine vision system is composed of three principal modules: (1) an automatic wagon identification system, recognizing the wagon ID according to the UIC classification of railway coaches; (2) a temperature monitoring system; (3) a system for the detection, localization and visualization of the pantograph of the train. These three machine vision modules process batched train sequences, and the resulting analyses are presented to an operator using a multitouch user interface. We detail all technical aspects of our multi-camera portal: the hardware requirements, the software developed to deal with the high-frame-rate cameras and ensure reliable acquisition, the algorithms proposed to solve each computer vision task, and the multitouch interaction and visualization interface. We evaluate each component of our system on a dataset recorded in an ad-hoc railway test-bed, showing the potential of our proposed portal for train safety assessment.
2201.05297 | Hanting Li | Hanting Li, Mingzhe Sui, Zhaoqing Zhu, Feng Zhao | MMNet: Muscle motion-guided network for micro-expression recognition | 8 pages, 4 figures | Proc. 31st Int'l Joint Conf. Artificial Intelligence (IJCAI), 2022 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial micro-expressions (MEs) are involuntary facial motions revealing
people's real feelings and play an important role in the early intervention of
mental illness, national security, and many human-computer interaction
systems. However, existing micro-expression datasets are limited and usually
pose some challenges for training good classifiers. To model the subtle facial
muscle motions, we propose a robust micro-expression recognition (MER)
framework, namely muscle motion-guided network (MMNet). Specifically, a
continuous attention (CA) block is introduced to focus on modeling local subtle
muscle motion patterns with little identity information, which is different
from most previous methods that directly extract features from complete video
frames with much identity information. Besides, we design a position
calibration (PC) module based on the vision transformer. By adding the position
embeddings of the face generated by the PC module at the end of the two branches,
the PC module can help to add position information to facial muscle motion
pattern features for the MER. Extensive experiments on three public
micro-expression datasets demonstrate that our approach outperforms
state-of-the-art methods by a large margin.
| [
{
"created": "Fri, 14 Jan 2022 04:05:49 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Aug 2022 11:24:19 GMT",
"version": "v2"
}
] | 2022-08-22 | [
[
"Li",
"Hanting",
""
],
[
"Sui",
"Mingzhe",
""
],
[
"Zhu",
"Zhaoqing",
""
],
[
"Zhao",
"Feng",
""
]
] | Facial micro-expressions (MEs) are involuntary facial motions revealing people's real feelings and play an important role in the early intervention of mental illness, national security, and many human-computer interaction systems. However, existing micro-expression datasets are limited and usually pose some challenges for training good classifiers. To model the subtle facial muscle motions, we propose a robust micro-expression recognition (MER) framework, namely muscle motion-guided network (MMNet). Specifically, a continuous attention (CA) block is introduced to focus on modeling local subtle muscle motion patterns with little identity information, which is different from most previous methods that directly extract features from complete video frames with much identity information. Besides, we design a position calibration (PC) module based on the vision transformer. By adding the position embeddings of the face generated by the PC module at the end of the two branches, the PC module can help to add position information to facial muscle motion pattern features for the MER. Extensive experiments on three public micro-expression datasets demonstrate that our approach outperforms state-of-the-art methods by a large margin.
2211.10202 | Zhengmao He | Zhengmao He, Bin Zhao | Some problems about co-consonance of topological spaces | null | null | null | null | cs.LO math.GN | http://creativecommons.org/licenses/by/4.0/ | In this paper, we first prove that the retract of a consonant space (or
co-consonant space) is consonant (co-consonant). Using this result, some
related results are obtained. Simultaneously, we prove that (1) the
co-consonance of the Smyth powerspace implies the co-consonance of a
topological space under a necessary condition; (2) the co-consonance of a
topological space implies the co-consonance of the Smyth powerspace under some
conditions; (3) if the lower powerspace is co-consonant, then the topological
space is co-consonant; (4) the co-consonance of a topological space implies the
co-consonance of the lower powerspace under some sufficient conditions.
| [
{
"created": "Fri, 18 Nov 2022 12:47:18 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Nov 2022 09:19:22 GMT",
"version": "v2"
}
] | 2022-12-01 | [
[
"He",
"Zhengmao",
""
],
[
"Zhao",
"Bin",
""
]
] | In this paper, we first prove that the retract of a consonant space (or co-consonant space) is consonant (co-consonant). Using this result, some related results are obtained. Simultaneously, we prove that (1) the co-consonance of the Smyth powerspace implies the co-consonance of a topological space under a necessary condition; (2) the co-consonance of a topological space implies the co-consonance of the Smyth powerspace under some conditions; (3) if the lower powerspace is co-consonant, then the topological space is co-consonant; (4) the co-consonance of a topological space implies the co-consonance of the lower powerspace under some sufficient conditions.
2201.00434 | Hanyuan Wang | Hanyuan Wang, Dima Damen, Majid Mirmehdi and Toby Perrett | TVNet: Temporal Voting Network for Action Localization | 9 pages, 7 figures, 11 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Temporal Voting Network (TVNet) for action localization in
untrimmed videos. This incorporates a novel Voting Evidence Module to locate
temporal boundaries more accurately, where temporal contextual evidence is
accumulated to predict frame-level probabilities of start and end action
boundaries. Our action-independent evidence module is incorporated within a
pipeline to calculate confidence scores and action classes. We achieve an
average mAP of 34.6% on ActivityNet-1.3, particularly outperforming previous
methods with the highest IoU of 0.95. TVNet also achieves mAP of 56.0% when
combined with PGCN and 59.1% with MUSES at 0.5 IoU on THUMOS14 and outperforms
prior work at all thresholds. Our code is available at
https://github.com/hanielwang/TVNet.
| [
{
"created": "Sun, 2 Jan 2022 23:46:18 GMT",
"version": "v1"
}
] | 2022-01-04 | [
[
"Wang",
"Hanyuan",
""
],
[
"Damen",
"Dima",
""
],
[
"Mirmehdi",
"Majid",
""
],
[
"Perrett",
"Toby",
""
]
] | We propose a Temporal Voting Network (TVNet) for action localization in untrimmed videos. This incorporates a novel Voting Evidence Module to locate temporal boundaries more accurately, where temporal contextual evidence is accumulated to predict frame-level probabilities of start and end action boundaries. Our action-independent evidence module is incorporated within a pipeline to calculate confidence scores and action classes. We achieve an average mAP of 34.6% on ActivityNet-1.3, particularly outperforming previous methods with the highest IoU of 0.95. TVNet also achieves mAP of 56.0% when combined with PGCN and 59.1% with MUSES at 0.5 IoU on THUMOS14 and outperforms prior work at all thresholds. Our code is available at https://github.com/hanielwang/TVNet.
2005.05743 | Ertan Kaz{\i}kl{\i} | Ertan Kaz{\i}kl{\i}, Sinan Gezici and Serdar Y\"uksel | Quadratic Privacy-Signaling Games and the MMSE Information Bottleneck
Problem for Gaussian Sources | 16 pages, 6 figures | null | null | null | cs.IT math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate a privacy-signaling game problem in which a sender with
privacy concerns observes a pair of correlated random vectors which are modeled
as jointly Gaussian. The sender aims to hide one of these random vectors and
convey the other one whereas the objective of the receiver is to accurately
estimate both of the random vectors. We analyze these conflicting objectives in
a game theoretic framework with quadratic costs where depending on the
commitment conditions (of the sender), we consider Nash or Stackelberg
(Bayesian persuasion) equilibria. We show that a payoff dominant Nash
equilibrium among all admissible policies is attained by a set of explicitly
characterized linear policies. We also show that a payoff dominant Nash
equilibrium coincides with a Stackelberg equilibrium. We formulate the
information bottleneck problem within our Stackelberg framework under the mean
squared error distortion criterion where the information bottleneck setup has a
further restriction that only one of the random variables is observed at the
sender. We show that this MMSE Gaussian Information Bottleneck Problem admits a
linear solution which is explicitly characterized in the paper. We provide
explicit conditions on when the optimal solutions, or equilibrium solutions in
the Nash setup, are informative or noninformative.
| [
{
"created": "Tue, 12 May 2020 13:16:05 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jul 2020 20:44:16 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Mar 2022 22:14:52 GMT",
"version": "v3"
}
] | 2022-03-08 | [
[
"Kazıklı",
"Ertan",
""
],
[
"Gezici",
"Sinan",
""
],
[
"Yüksel",
"Serdar",
""
]
] | We investigate a privacy-signaling game problem in which a sender with privacy concerns observes a pair of correlated random vectors which are modeled as jointly Gaussian. The sender aims to hide one of these random vectors and convey the other one whereas the objective of the receiver is to accurately estimate both of the random vectors. We analyze these conflicting objectives in a game theoretic framework with quadratic costs where depending on the commitment conditions (of the sender), we consider Nash or Stackelberg (Bayesian persuasion) equilibria. We show that a payoff dominant Nash equilibrium among all admissible policies is attained by a set of explicitly characterized linear policies. We also show that a payoff dominant Nash equilibrium coincides with a Stackelberg equilibrium. We formulate the information bottleneck problem within our Stackelberg framework under the mean squared error distortion criterion where the information bottleneck setup has a further restriction that only one of the random variables is observed at the sender. We show that this MMSE Gaussian Information Bottleneck Problem admits a linear solution which is explicitly characterized in the paper. We provide explicit conditions on when the optimal solutions, or equilibrium solutions in the Nash setup, are informative or noninformative. |
2206.06117 | Steve Mathew | Steve Mathew D A | Optimizing musical chord inversions using the cartesian coordinate
system | 9 pages, 5 tables | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | In classical music and in any genre of contemporary music, the tonal elements
or notes used for playing are the same. The numerous possibilities of chords
for a given instance in a piece make the playing, in general, very intricate,
and advanced. The theory sounds quite trivial, yet the application has vast
options, each leading to inarguably different outcomes, characterized by
scientific and musical principles. Chords and their importance are
self-explanatory. A chord is a bunch of notes played together. As far as
scientists are concerned, it is a set of tonal frequencies ringing together
resulting in a consonant/dissonant sound. It is well-known that the notes of a
chord can be rearranged to come up with various voicings (1) of the same chord
which enables a composer/player to choose the most optimal one to convey the
emotion they wish to convey. Though there are numerous possibilities, it is
scientific to think that there is just one appropriate voicing for a particular
situation of tonal movements. In this study, we attempt to find the optimal
voicings by considering chords to be points in a 3-dimensional Cartesian
coordinate system and further the fundamental understanding of mathematics in
music theory.
| [
{
"created": "Fri, 10 Jun 2022 14:48:30 GMT",
"version": "v1"
}
] | 2022-06-14 | [
[
"A",
"Steve Mathew D",
""
]
] | In classical music and in any genre of contemporary music, the tonal elements or notes used for playing are the same. The numerous possibilities of chords for a given instance in a piece make the playing, in general, very intricate, and advanced. The theory sounds quite trivial, yet the application has vast options, each leading to inarguably different outcomes, characterized by scientific and musical principles. Chords and their importance are self-explanatory. A chord is a bunch of notes played together. As far as scientists are concerned, it is a set of tonal frequencies ringing together resulting in a consonant/dissonant sound. It is well-known that the notes of a chord can be rearranged to come up with various voicings (1) of the same chord which enables a composer/player to choose the most optimal one to convey the emotion they wish to convey. Though there are numerous possibilities, it is scientific to think that there is just one appropriate voicing for a particular situation of tonal movements. In this study, we attempt to find the optimal voicings by considering chords to be points in a 3-dimensional Cartesian coordinate system and further the fundamental understanding of mathematics in music theory.
0810.0558 | Ashish Goel | Ashish Goel, Sanjeev Khanna, Brad Null | The Ratio Index for Budgeted Learning, with Applications | This paper has a substantial bug that we are trying to fix. Many
thanks to Joe Halpern for pointing this bug out. Please do not cite in the
meantime. Please let us know if you would like to understand and/or try to
fix the bug | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the budgeted learning problem, we are allowed to experiment on a set of
alternatives (given a fixed experimentation budget) with the goal of picking a
single alternative with the largest possible expected payoff. Approximation
algorithms for this problem were developed by Guha and Munagala by rounding a
linear program that couples the various alternatives together. In this paper we
present an index for this problem, which we call the ratio index, which also
guarantees a constant factor approximation. Index-based policies have the
advantage that a single number (i.e. the index) can be computed for each
alternative irrespective of all other alternatives, and the alternative with
the highest index is experimented upon. This is analogous to the famous Gittins
index for the discounted multi-armed bandit problem.
The ratio index has several interesting structural properties. First, we show
that it can be computed in strongly polynomial time. Second, we show that with
the appropriate discount factor, the Gittins index and our ratio index are
constant factor approximations of each other, and hence the Gittins index also
gives a constant factor approximation to the budgeted learning problem.
Finally, we show that the ratio index can be used to create an index-based
policy that achieves an O(1)-approximation for the finite horizon version of
the multi-armed bandit problem. Moreover, the policy does not require any
knowledge of the horizon (whereas we compare its performance against an optimal
strategy that is aware of the horizon). This yields the following surprising
result: there is an index-based policy that achieves an O(1)-approximation for
the multi-armed bandit problem, oblivious to the underlying discount factor.
| [
{
"created": "Fri, 3 Oct 2008 01:37:45 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Apr 2016 18:47:16 GMT",
"version": "v2"
}
] | 2016-04-12 | [
[
"Goel",
"Ashish",
""
],
[
"Khanna",
"Sanjeev",
""
],
[
"Null",
"Brad",
""
]
] | In the budgeted learning problem, we are allowed to experiment on a set of alternatives (given a fixed experimentation budget) with the goal of picking a single alternative with the largest possible expected payoff. Approximation algorithms for this problem were developed by Guha and Munagala by rounding a linear program that couples the various alternatives together. In this paper we present an index for this problem, which we call the ratio index, which also guarantees a constant factor approximation. Index-based policies have the advantage that a single number (i.e. the index) can be computed for each alternative irrespective of all other alternatives, and the alternative with the highest index is experimented upon. This is analogous to the famous Gittins index for the discounted multi-armed bandit problem. The ratio index has several interesting structural properties. First, we show that it can be computed in strongly polynomial time. Second, we show that with the appropriate discount factor, the Gittins index and our ratio index are constant factor approximations of each other, and hence the Gittins index also gives a constant factor approximation to the budgeted learning problem. Finally, we show that the ratio index can be used to create an index-based policy that achieves an O(1)-approximation for the finite horizon version of the multi-armed bandit problem. Moreover, the policy does not require any knowledge of the horizon (whereas we compare its performance against an optimal strategy that is aware of the horizon). This yields the following surprising result: there is an index-based policy that achieves an O(1)-approximation for the multi-armed bandit problem, oblivious to the underlying discount factor. |
1103.1001 | Xinhua Wang | Xinhua Wang, Hai Lin | Two-step differentiator for delayed signal | 12 pages, 10 figures | null | null | null | cs.SY math.DS math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a high-order differentiator for delayed measurement
signal. The proposed differentiator not only can correct the delay in the signal,
but also can estimate the undelayed derivatives. The differentiator consists of
two-step algorithms with the delayed time instant. Conditions are given
ensuring convergence of the estimation error for the given delay in the
signals. The merits of the method include its simple implementation and interesting
application. Numerical simulations illustrate the effectiveness of the proposed
differentiator.
| [
{
"created": "Sat, 5 Mar 2011 03:19:52 GMT",
"version": "v1"
}
] | 2011-03-08 | [
[
"Wang",
"Xinhua",
""
],
[
"Lin",
"Hai",
""
]
] | This paper presents a high-order differentiator for delayed measurement signal. The proposed differentiator not only can correct the delay in the signal, but also can estimate the undelayed derivatives. The differentiator consists of two-step algorithms with the delayed time instant. Conditions are given ensuring convergence of the estimation error for the given delay in the signals. The merits of the method include its simple implementation and interesting application. Numerical simulations illustrate the effectiveness of the proposed differentiator.
1408.2293 | Hugh Kennedy Dr. | Hugh L. Kennedy | Direct Digital Design of Loop-Shaping Filters for Sampled Control
Systems | In addition to the brief journal paper (see v3 comments), this paper
was split into 2 conference papers: "Numerical Derivation of Fading-Memory
Polynomial and Sinusoidal Filters for Discrete-Time Control Systems" and
"Application of Fading-Memory Polynomial Filters to the Control of an
Electric Motor", to appear in Proc. 2015 IEEE Multi-Conference on Systems and
Control | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A controller design technique for shaping the frequency response of a process
is described. A general linear model (GLM) is used to define the form of a lag
or lead compensator in discrete time using a prescribed set of basis functions.
The model is then transformed via the complex z-domain into a difference
equation for a recursive digital filter with an infinite impulse response
(IIR). A polynomial basis set is better for shaping the frequency response in
the near-zero region; whereas a sinusoidal basis set is better for defining the
response at arbitrary frequencies. The proposed compensator design method is
more flexible than existing low-order approaches and more suitable than other
general-purpose high-order methods. Performance of the resulting controller is
compared with digital proportional-integral-differential (PID) and
linear-state-space (LSS) algorithms in a real motor-control application.
| [
{
"created": "Mon, 11 Aug 2014 01:45:41 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Nov 2014 02:38:26 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Jan 2015 22:42:53 GMT",
"version": "v3"
},
{
"created": "Sat, 18 Jul 2015 01:22:01 GMT",
"version": "v4"
}
] | 2015-07-21 | [
[
"Kennedy",
"Hugh L.",
""
]
] | A controller design technique for shaping the frequency response of a process is described. A general linear model (GLM) is used to define the form of a lag or lead compensator in discrete time using a prescribed set of basis functions. The model is then transformed via the complex z-domain into a difference equation for a recursive digital filter with an infinite impulse response (IIR). A polynomial basis set is better for shaping the frequency response in the near-zero region; whereas a sinusoidal basis set is better for defining the response at arbitrary frequencies. The proposed compensator design method is more flexible than existing low-order approaches and more suitable than other general-purpose high-order methods. Performance of the resulting controller is compared with digital proportional-integral-differential (PID) and linear-state-space (LSS) algorithms in a real motor-control application. |
2307.14199 | Masoume Kazemi | Masoume Kazemi, Davood Moradkhani, Alireza Abbas Alipour | Application of Random Forest and Support Vector Machine for
Investigation of Pressure Filtration Performance, a Zinc Plant Filter Cake
Modeling | null | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | The hydrometallurgical method of zinc production involves leaching zinc from
ore and then separating the solid residue from the liquid solution by pressure
filtration. This separation process is very important since the solid residue
contains some moisture that can reduce the amount of zinc recovered. This study
modeled the pressure filtration process through Random Forest (RF) and Support
Vector Machine (SVM). The models take continuous variables (extracted features)
from the lab samples as inputs. Thus, regression models, namely Random Forest
Regression (RFR) and Support Vector Regression (SVR), were chosen. A total
dataset was obtained during the pressure filtration process in two conditions:
1) Polypropylene (S1) and 2) Polyester fabrics (S2). To predict the cake
moisture, solids concentration (0.2 and 0.38), temperature (35 and 65
centigrade), pH (2, 3.5, and 5), pressure, cake thickness (14, 20, 26, and 34
mm), air-blow time (2, 10 and 15 min) and filtration time were applied as input
variables. The models' predictive accuracy was evaluated by the coefficient of
determination (R2) parameter. The results revealed that the RFR model is
superior to the SVR model for cake moisture prediction.
| [
{
"created": "Wed, 26 Jul 2023 13:52:53 GMT",
"version": "v1"
}
] | 2023-07-27 | [
[
"Kazemi",
"Masoume",
""
],
[
"Moradkhani",
"Davood",
""
],
[
"Alipour",
"Alireza Abbas",
""
]
] | The hydrometallurgical method of zinc production involves leaching zinc from ore and then separating the solid residue from the liquid solution by pressure filtration. This separation process is very important since the solid residue contains some moisture that can reduce the amount of zinc recovered. This study modeled the pressure filtration process through Random Forest (RF) and Support Vector Machine (SVM). The models take continuous variables (extracted features) from the lab samples as inputs. Thus, regression models namely Random Forest Regression (RFR) and Support Vector Regression (SVR) were chosen. A total dataset was obtained during the pressure filtration process in two conditions: 1) Polypropylene (S1) and 2) Polyester fabrics (S2). To predict the cake moisture, solids concentration (0.2 and 0.38), temperature (35 and 65 centigrade), pH (2, 3.5, and 5), pressure, cake thickness (14, 20, 26, and 34 mm), air-blow time (2, 10 and 15 min) and filtration time were applied as input variables. The models' predictive accuracy was evaluated by the coefficient of determination (R2) parameter. The results revealed that the RFR model is superior to the SVR model for cake moisture prediction. |
2310.02583 | Seyed Mirvakili | Seyed Mo Mirvakili, Ehsan Haghighat, Douglas Sim | Machine Learning-Enabled Precision Position Control and Thermal
Regulation in Advanced Thermal Actuators | null | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With their unique combination of characteristics - an energy density almost
100 times that of human muscle, and a power density of 5.3 kW/kg, similar to a
jet engine's output - Nylon artificial muscles stand out as particularly apt
for robotics applications. However, the necessity of integrating sensors and
controllers poses a limitation to their practical usage. Here we report a
constant power open-loop controller based on machine learning. We show that we
can control the position of a nylon artificial muscle without external sensors.
To this end, we construct a mapping from a desired displacement trajectory to a
required power using an ensemble encoder-style feed-forward neural network. The
neural controller is carefully trained on a physics-based denoised dataset and
can be fine-tuned to accommodate various types of thermal artificial muscles,
irrespective of the presence or absence of hysteresis.
| [
{
"created": "Wed, 4 Oct 2023 05:01:47 GMT",
"version": "v1"
}
] | 2023-10-05 | [
[
"Mirvakili",
"Seyed Mo",
""
],
[
"Haghighat",
"Ehsan",
""
],
[
"Sim",
"Douglas",
""
]
] | With their unique combination of characteristics - an energy density almost 100 times that of human muscle, and a power density of 5.3 kW/kg, similar to a jet engine's output - Nylon artificial muscles stand out as particularly apt for robotics applications. However, the necessity of integrating sensors and controllers poses a limitation to their practical usage. Here we report a constant power open-loop controller based on machine learning. We show that we can control the position of a nylon artificial muscle without external sensors. To this end, we construct a mapping from a desired displacement trajectory to a required power using an ensemble encoder-style feed-forward neural network. The neural controller is carefully trained on a physics-based denoised dataset and can be fine-tuned to accommodate various types of thermal artificial muscles, irrespective of the presence or absence of hysteresis. |
2002.05538 | Jakkepalli Pavan Kumar | Jakkepalli Pavan Kumar and P. Venkata Subba Reddy | Algorithmic Complexity of Isolate Secure Domination in Graphs | arXiv admin note: substantial text overlap with arXiv:2002.00002;
text overlap with arXiv:2001.11250 | null | null | null | cs.DM cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A dominating set $S$ is an Isolate Dominating Set (IDS) if the induced
subgraph $G[S]$ has at least one isolated vertex. In this paper, we initiate
the study of a new domination parameter called isolate secure domination. An
isolate dominating set $S\subseteq V$ is an isolate secure dominating set
(ISDS), if for each vertex $u \in V \setminus S$, there exists a neighboring
vertex $v$ of $u$ in $S$ such that $(S \setminus \{v\}) \cup \{u\}$ is an IDS
of $G$. The minimum cardinality of an ISDS of $G$ is called the isolate
secure domination number and is denoted by $\gamma_{0s}(G)$. Given a graph $
G=(V,E)$ and a positive integer $ k,$ the ISDM problem is to check whether $ G
$ has an isolate secure dominating set of size at most $ k.$ We prove that ISDM
is NP-complete even when restricted to bipartite graphs and split graphs. We
also show that ISDM can be solved in linear time for graphs of bounded
tree-width.
| [
{
"created": "Wed, 12 Feb 2020 07:42:51 GMT",
"version": "v1"
}
] | 2020-02-14 | [
[
"Kumar",
"Jakkepalli Pavan",
""
],
[
"Reddy",
"P. Venkata Subba",
""
]
] | A dominating set $S$ is an Isolate Dominating Set (IDS) if the induced subgraph $G[S]$ has at least one isolated vertex. In this paper, we initiate the study of a new domination parameter called isolate secure domination. An isolate dominating set $S\subseteq V$ is an isolate secure dominating set (ISDS), if for each vertex $u \in V \setminus S$, there exists a neighboring vertex $v$ of $u$ in $S$ such that $(S \setminus \{v\}) \cup \{u\}$ is an IDS of $G$. The minimum cardinality of an ISDS of $G$ is called the isolate secure domination number and is denoted by $\gamma_{0s}(G)$. Given a graph $G=(V,E)$ and a positive integer $k$, the ISDM problem is to check whether $G$ has an isolate secure dominating set of size at most $k$. We prove that ISDM is NP-complete even when restricted to bipartite graphs and split graphs. We also show that ISDM can be solved in linear time for graphs of bounded tree-width. |
2206.12343 | Robin Haunschild | Lutz Bornmann and Robin Haunschild | Identification of young talented individuals in the natural and life
sciences using bibliometric data | 7 pages, 3 tables, to be presented at STI 2022 | null | 10.1016/j.joi.2023.101394 | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identification of young talented individuals is not an easy task.
Citation-based data usually take too long to accrue. In this study, we proposed
a method based on bibliometric data for the identification of young talented
individuals. Three different indicators and their combinations were used. An
older cohort with their first publication between 1999 and 2003 was used to
find the most suitable indicator combination. For the validation step, citation
impact on the level of individual papers was used. The best performing
indicator combination was applied to the time period 2007-2011 for identifying
young talented individuals who published their first paper within this time
period. We produced a set of 46,200 potential talented individuals.
| [
{
"created": "Fri, 24 Jun 2022 15:28:03 GMT",
"version": "v1"
}
] | 2024-08-12 | [
[
"Bornmann",
"Lutz",
""
],
[
"Haunschild",
"Robin",
""
]
] | Identification of young talented individuals is not an easy task. Citation-based data usually take too long to accrue. In this study, we proposed a method based on bibliometric data for the identification of young talented individuals. Three different indicators and their combinations were used. An older cohort with their first publication between 1999 and 2003 was used to find the most suitable indicator combination. For the validation step, citation impact on the level of individual papers was used. The best performing indicator combination was applied to the time period 2007-2011 for identifying young talented individuals who published their first paper within this time period. We produced a set of 46,200 potential talented individuals. |
1911.08339 | Jonathan Ullman | Alexander Edmonds and Aleksandar Nikolov and Jonathan Ullman | The Power of Factorization Mechanisms in Local and Central Differential
Privacy | null | null | null | null | cs.DS cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give new characterizations of the sample complexity of answering linear
queries (statistical queries) in the local and central models of differential
privacy:
*In the non-interactive local model, we give the first approximate
characterization of the sample complexity. Informally our bounds are tight to
within polylogarithmic factors in the number of queries and desired accuracy.
Our characterization extends to agnostic learning in the local model.
*In the central model, we give a characterization of the sample complexity in
the high-accuracy regime that is analogous to that of Nikolov, Talwar, and
Zhang (STOC 2013), but is both quantitatively tighter and has a dramatically
simpler proof.
Our lower bounds apply equally to the empirical and population estimation
problems. In both cases, our characterizations show that a particular
factorization mechanism is approximately optimal, and the optimal sample
complexity is bounded from above and below by well-studied factorization norms
of a matrix associated with the queries.
| [
{
"created": "Tue, 19 Nov 2019 15:17:18 GMT",
"version": "v1"
}
] | 2019-11-20 | [
[
"Edmonds",
"Alexander",
""
],
[
"Nikolov",
"Aleksandar",
""
],
[
"Ullman",
"Jonathan",
""
]
] | We give new characterizations of the sample complexity of answering linear queries (statistical queries) in the local and central models of differential privacy: *In the non-interactive local model, we give the first approximate characterization of the sample complexity. Informally our bounds are tight to within polylogarithmic factors in the number of queries and desired accuracy. Our characterization extends to agnostic learning in the local model. *In the central model, we give a characterization of the sample complexity in the high-accuracy regime that is analogous to that of Nikolov, Talwar, and Zhang (STOC 2013), but is both quantitatively tighter and has a dramatically simpler proof. Our lower bounds apply equally to the empirical and population estimation problems. In both cases, our characterizations show that a particular factorization mechanism is approximately optimal, and the optimal sample complexity is bounded from above and below by well studied factorization norms of a matrix associated with the queries. |
1705.06086 | Sebastian Werner | Sebastian Werner, Zdravko Velinov, Wenzel Jakob, Matthias B. Hullin | Scratch iridescence: Wave-optical rendering of diffractive surface
structure | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The surface of metal, glass and plastic objects is often characterized by
microscopic scratches caused by manufacturing and/or wear. A closer look at
such scratches reveals iridescent colors with a complex dependency on viewing
and lighting conditions. The physics behind this phenomenon is well understood;
it is caused by diffraction of the incident light by surface features on the
order of the optical wavelength. Existing analytic models are able to reproduce
spatially unresolved microstructure such as the iridescent appearance of
compact disks and similar materials. Spatially resolved scratches, on the other
hand, have proven elusive due to the highly complex wave-optical light
transport simulations needed to account for their appearance. In this paper, we
propose a wave-optical shading model based on non-paraxial scalar diffraction
theory to render this class of effects. Our model expresses surface roughness
as a collection of line segments. To shade a point on the surface, the
individual diffraction patterns for contributing scratch segments are computed
analytically and superimposed coherently. This provides natural transitions
from localized glint-like iridescence to smooth BRDFs representing the
superposition of many reflections at large viewing distances. We demonstrate
that our model is capable of recreating the overall appearance as well as
characteristic detail effects observed on real-world examples.
| [
{
"created": "Wed, 17 May 2017 10:59:29 GMT",
"version": "v1"
}
] | 2017-05-18 | [
[
"Werner",
"Sebastian",
""
],
[
"Velinov",
"Zdravko",
""
],
[
"Jakob",
"Wenzel",
""
],
[
"Hullin",
"Matthias B.",
""
]
] | The surface of metal, glass and plastic objects is often characterized by microscopic scratches caused by manufacturing and/or wear. A closer look onto such scratches reveals iridescent colors with a complex dependency on viewing and lighting conditions. The physics behind this phenomenon is well understood; it is caused by diffraction of the incident light by surface features on the order of the optical wavelength. Existing analytic models are able to reproduce spatially unresolved microstructure such as the iridescent appearance of compact disks and similar materials. Spatially resolved scratches, on the other hand, have proven elusive due to the highly complex wave-optical light transport simulations needed to account for their appearance. In this paper, we propose a wave-optical shading model based on non-paraxial scalar diffraction theory to render this class of effects. Our model expresses surface roughness as a collection of line segments. To shade a point on the surface, the individual diffraction patterns for contributing scratch segments are computed analytically and superimposed coherently. This provides natural transitions from localized glint-like iridescence to smooth BRDFs representing the superposition of many reflections at large viewing distances. We demonstrate that our model is capable of recreating the overall appearance as well as characteristic detail effects observed on real-world examples. |
1204.2101 | Sumit Katiyar | Sumit Katiyar, R. K. Jain, N. K. Agrawal | R.F. Pollution Reduction in Cellular Communication | 6 pages, 7 figures, international journal, International Journal of
Scientific & Engineering Research, Volume 3, Issue 3, March -2012 | null | null | null | cs.CY cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | R. F. pollution has been recognized as a health hazard in India in the
prevailing circumstances. There is a lot of hue and cry against cellular
towers installed in residential areas. Recently, the high court in India
issued an order not to install towers in residential areas. Given the
exponential demand for cellular communication in India, this will be a
setback for future growth. An appropriate solution has to be developed to
meet demand as well as the RF pollution concerns of society. This paper deals
with the installation of low-power base stations in residential areas instead
of high-power macro cell base stations. Macro stations are proposed for fast
traffic, low-power micro cells for slow/pedestrian traffic, and pico/femto
cells for indoor use. These cells will be arranged in a hierarchical
structure along with adaptive frequency allocation techniques and an A-SDMA
approach.
| [
{
"created": "Tue, 10 Apr 2012 10:45:12 GMT",
"version": "v1"
}
] | 2012-04-11 | [
[
"Katiyar",
"Sumit",
""
],
[
"Jain",
"R. K.",
""
],
[
"Agrawal",
"N. K.",
""
]
] | R. F. pollution has been recognized as a health hazard in India in the prevailing circumstances. There is a lot of hue and cry against cellular towers installed in residential areas. Recently, the high court in India issued an order not to install towers in residential areas. Given the exponential demand for cellular communication in India, this will be a setback for future growth. An appropriate solution has to be developed to meet demand as well as the RF pollution concerns of society. This paper deals with the installation of low-power base stations in residential areas instead of high-power macro cell base stations. Macro stations are proposed for fast traffic, low-power micro cells for slow/pedestrian traffic, and pico/femto cells for indoor use. These cells will be arranged in a hierarchical structure along with adaptive frequency allocation techniques and an A-SDMA approach. |
1211.6918 | Mathis Seidl | Mathis Seidl, Andreas Schenk, Clemens Stierstorfer, and Johannes B.
Huber | Aspects of Polar-Coded Modulation | Accepted for presentation at International ITG Conference on Systems,
Communications and Coding, Munich, Germany, January 2013 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the joint design of polar coding and higher-order modulation
schemes for ever-increasing spectral efficiency. The close connection between
the polar code construction and the multi-level coding approach is described in
detail. Relations between different modulation schemes such as bit-interleaved
coded modulation (BICM) and multi-level coding (MLC) in case of polar-coded
modulation as well as the influence of the applied labeling rule and the
selection of frozen channels are demonstrated.
| [
{
"created": "Thu, 29 Nov 2012 14:01:23 GMT",
"version": "v1"
}
] | 2012-11-30 | [
[
"Seidl",
"Mathis",
""
],
[
"Schenk",
"Andreas",
""
],
[
"Stierstorfer",
"Clemens",
""
],
[
"Huber",
"Johannes B.",
""
]
] | We consider the joint design of polar coding and higher-order modulation schemes for ever-increasing spectral efficiency. The close connection between the polar code construction and the multi-level coding approach is described in detail. Relations between different modulation schemes such as bit-interleaved coded modulation (BICM) and multi-level coding (MLC) in case of polar-coded modulation as well as the influence of the applied labeling rule and the selection of frozen channels are demonstrated. |
2010.04554 | Huiting Hong | Yucheng Lin, Huiting Hong, Xiaoqing Yang, Xiaodi Yang, Pinghua Gong,
Jieping Ye | Meta Graph Attention on Heterogeneous Graph with Node-Edge Co-evolution | 11pages, 4figures | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks have become an important tool for modeling structured
data. In many real-world systems, intricate hidden information may exist, e.g.,
heterogeneity in nodes/edges, static node/edge attributes, and spatiotemporal
node/edge features. However, most existing methods only take part of the
information into consideration. In this paper, we present the Co-evolved Meta
Graph Neural Network (CoMGNN), which applies meta graph attention to
heterogeneous graphs with co-evolution of node and edge states. We further
propose a spatiotemporal adaptation of CoMGNN (ST-CoMGNN) for modeling
spatiotemporal patterns on nodes and edges. We conduct experiments on two
large-scale real-world datasets. Experimental results show that our models
significantly outperform the state-of-the-art methods, demonstrating the
effectiveness of encoding diverse information from different aspects.
| [
{
"created": "Fri, 9 Oct 2020 13:19:39 GMT",
"version": "v1"
}
] | 2020-10-12 | [
[
"Lin",
"Yucheng",
""
],
[
"Hong",
"Huiting",
""
],
[
"Yang",
"Xiaoqing",
""
],
[
"Yang",
"Xiaodi",
""
],
[
"Gong",
"Pinghua",
""
],
[
"Ye",
"Jieping",
""
]
] | Graph neural networks have become an important tool for modeling structured data. In many real-world systems, intricate hidden information may exist, e.g., heterogeneity in nodes/edges, static node/edge attributes, and spatiotemporal node/edge features. However, most existing methods only take part of the information into consideration. In this paper, we present the Co-evolved Meta Graph Neural Network (CoMGNN), which applies meta graph attention to heterogeneous graphs with co-evolution of node and edge states. We further propose a spatiotemporal adaption of CoMGNN (ST-CoMGNN) for modeling spatiotemporal patterns on nodes and edges. We conduct experiments on two large-scale real-world datasets. Experimental results show that our models significantly outperform the state-of-the-art methods, demonstrating the effectiveness of encoding diverse information from different aspects. |
1802.07647 | Christian Konrad | Christian Konrad | MIS in the Congested Clique Model in $O(\log \log \Delta)$ Rounds | null | null | null | null | cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a maximal independent set (MIS) algorithm that runs in $O(\log \log
\Delta)$ rounds in the congested clique model, where $\Delta$ is the maximum
degree of the input graph. This improves upon the $O(\frac{\log(\Delta) \cdot
\log \log \Delta}{\sqrt{\log n}} + \log \log \Delta )$ rounds algorithm of
[Ghaffari, PODC '17], where $n$ is the number of vertices of the input graph.
In the first stage of our algorithm, we simulate the first
$O(\frac{n}{\text{poly} \log n})$ iterations of the sequential random order
Greedy algorithm for MIS in the congested clique model in $O(\log \log \Delta)$
rounds. This thins out the input graph relatively quickly: After this stage,
the maximum degree of the residual graph is poly-logarithmic. In the second
stage, we run the MIS algorithm of [Ghaffari, PODC '17] on the residual graph,
which completes in $O(\log \log \Delta)$ rounds on graphs of poly-logarithmic
degree.
| [
{
"created": "Wed, 21 Feb 2018 16:21:34 GMT",
"version": "v1"
}
] | 2018-02-22 | [
[
"Konrad",
"Christian",
""
]
] | We give a maximal independent set (MIS) algorithm that runs in $O(\log \log \Delta)$ rounds in the congested clique model, where $\Delta$ is the maximum degree of the input graph. This improves upon the $O(\frac{\log(\Delta) \cdot \log \log \Delta}{\sqrt{\log n}} + \log \log \Delta )$ rounds algorithm of [Ghaffari, PODC '17], where $n$ is the number of vertices of the input graph. In the first stage of our algorithm, we simulate the first $O(\frac{n}{\text{poly} \log n})$ iterations of the sequential random order Greedy algorithm for MIS in the congested clique model in $O(\log \log \Delta)$ rounds. This thins out the input graph relatively quickly: After this stage, the maximum degree of the residual graph is poly-logarithmic. In the second stage, we run the MIS algorithm of [Ghaffari, PODC '17] on the residual graph, which completes in $O(\log \log \Delta)$ rounds on graphs of poly-logarithmic degree. |
0910.2632 | Laurent Romary | Laurent Romary (INRIA Saclay - Ile de France, IDSL) | Communication scientifique : Pour le meilleur et pour le PEER | null | Hermes (2009) | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper provides an overview (in French) of the European PEER project,
focusing on its origins, the actual objectives and the technical deployment.
| [
{
"created": "Wed, 14 Oct 2009 14:28:28 GMT",
"version": "v1"
}
] | 2009-10-15 | [
[
"Romary",
"Laurent",
"",
"INRIA Saclay - Ile de France, IDSL"
]
] | This paper provides an overview (in French) of the European PEER project, focusing on its origins, the actual objectives and the technical deployment. |
2003.05420 | Peng Jiang Dr. | Guangnan Wu and Zhiyi Pan and Peng Jiang and Changhe Tu | Bi-Directional Attention for Joint Instance and Semantic Segmentation in
Point Clouds | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Instance segmentation in point clouds is one of the most fine-grained ways to
understand the 3D scene. Due to its close relationship to semantic
segmentation, many works approach these two tasks simultaneously and leverage
the benefits of multi-task learning. However, most of them only considered
simple strategies such as element-wise feature fusion, which may not lead to
mutual promotion. In this work, we build a Bi-Directional Attention module on
backbone neural networks for 3D point cloud perception, which uses similarity
matrix measured from features for one task to help aggregate non-local
information for the other task, avoiding the potential feature exclusion and
task conflict. From comprehensive experiments and ablation studies on the S3DIS
dataset and the PartNet dataset, the superiority of our method is verified.
Moreover, the mechanism of how bi-directional attention module helps joint
instance and semantic segmentation is also analyzed.
| [
{
"created": "Wed, 11 Mar 2020 17:16:07 GMT",
"version": "v1"
}
] | 2020-03-12 | [
[
"Wu",
"Guangnan",
""
],
[
"Pan",
"Zhiyi",
""
],
[
"Jiang",
"Peng",
""
],
[
"Tu",
"Changhe",
""
]
] | Instance segmentation in point clouds is one of the most fine-grained ways to understand the 3D scene. Due to its close relationship to semantic segmentation, many works approach these two tasks simultaneously and leverage the benefits of multi-task learning. However, most of them only considered simple strategies such as element-wise feature fusion, which may not lead to mutual promotion. In this work, we build a Bi-Directional Attention module on backbone neural networks for 3D point cloud perception, which uses similarity matrix measured from features for one task to help aggregate non-local information for the other task, avoiding the potential feature exclusion and task conflict. From comprehensive experiments and ablation studies on the S3DIS dataset and the PartNet dataset, the superiority of our method is verified. Moreover, the mechanism of how bi-directional attention module helps joint instance and semantic segmentation is also analyzed. |
2211.13557 | Fernando Alonso-Fernandez | Hartwig Fronthaler, Klaus Kollreider, Josef Bigun, Julian Fierrez,
Fernando Alonso-Fernandez, Javier Ortega-Garcia, Joaquin Gonzalez-Rodriguez | Fingerprint Image-Quality Estimation and its Application to
Multialgorithm Verification | Published at IEEE Transactions on Information Forensics and Security | null | 10.1109/TIFS.2008.920725 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Signal-quality awareness has been found to increase recognition rates and to
support decisions in multisensor environments significantly. Nevertheless,
automatic quality assessment is still an open issue. Here, we study the
orientation tensor of fingerprint images to quantify signal impairments, such
as noise, lack of structure, and blur, with the help of symmetry descriptors. A
strongly reduced reference is especially favorable in biometrics, but less
information is not sufficient for the approach. This is also supported by
numerous experiments involving a simpler quality estimator, a trained method
(NFIQ), as well as the human perception of fingerprint quality on several
public databases. Furthermore, quality measurements are extensively reused to
adapt fusion parameters in a monomodal multialgorithm fingerprint recognition
environment. In this study, several trained and nontrained score-level fusion
schemes are investigated. A Bayes-based strategy for incorporating experts'
past performance and current quality conditions is presented, along with a
novel cascaded scheme for computational efficiency, besides simple fusion
rules. The
quantitative results favor quality awareness under all aspects, boosting
recognition rates and fusing differently skilled experts efficiently as well as
effectively (by training).
| [
{
"created": "Thu, 24 Nov 2022 12:17:49 GMT",
"version": "v1"
}
] | 2022-11-28 | [
[
"Fronthaler",
"Hartwig",
""
],
[
"Kollreider",
"Klaus",
""
],
[
"Bigun",
"Josef",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Ortega-Garcia",
"Javier",
""
],
[
"Gonzalez-Rodriguez",
"Joaquin",
""
]
] | Signal-quality awareness has been found to increase recognition rates and to support decisions in multisensor environments significantly. Nevertheless, automatic quality assessment is still an open issue. Here, we study the orientation tensor of fingerprint images to quantify signal impairments, such as noise, lack of structure, and blur, with the help of symmetry descriptors. A strongly reduced reference is especially favorable in biometrics, but less information is not sufficient for the approach. This is also supported by numerous experiments involving a simpler quality estimator, a trained method (NFIQ), as well as the human perception of fingerprint quality on several public databases. Furthermore, quality measurements are extensively reused to adapt fusion parameters in a monomodal multialgorithm fingerprint recognition environment. In this study, several trained and nontrained score-level fusion schemes are investigated. A Bayes-based strategy for incorporating experts' past performance and current quality conditions is presented, along with a novel cascaded scheme for computational efficiency, besides simple fusion rules. The quantitative results favor quality awareness under all aspects, boosting recognition rates and fusing differently skilled experts efficiently as well as effectively (by training). |
2108.09402 | Bonaventure Molokwu Ph.D. | Bonaventure Chidube Molokwu, Shaon Bhatta Shuvo, Ziad Kobti, Anne
Snowdon | A Multi-Task Learning Framework for COVID-19 Monitoring and Prediction
of PPE Demand in Community Health Centres | 6-page article/manuscript | null | null | null | cs.LG cs.AI cs.SI | http://creativecommons.org/licenses/by/4.0/ | Currently, the world seeks to find appropriate mitigation techniques to
control and prevent the spread of the new SARS-CoV-2. In our paper herein, we
present a peculiar Multi-Task Learning framework that jointly predicts the
effect of SARS-CoV-2 as well as Personal-Protective-Equipment consumption in
Community Health Centres for a given populace. Predicting the effect of the
virus (SARS-CoV-2), via studies and analyses, enables us to understand the
nature of SARS-CoV-2 with reference to factors that promote its growth and
spread. Therefore, these foster widespread awareness, and the populace can
become more proactive and cautious so as to mitigate the spread of Corona Virus
Disease 2019 (COVID-19). Furthermore, understanding and predicting the demand
for Personal Protective Equipment promotes the efficiency and safety of
healthcare workers in Community Health Centres. Owing to the novel nature and
strains of SARS-CoV-2, relatively little literature and research exists in
this regard. The existing literature has attempted to solve the problem
statement(s) using either Agent-based Models, Machine Learning Models, or
Mathematical Models. In view of this, our work herein adds to existing
literature via modeling our problem statements as Multi-Task Learning
problems. Results from our research indicate that government actions and human
factors are the most significant determinants that influence the spread of
SARS-CoV-2.
| [
{
"created": "Fri, 20 Aug 2021 23:32:41 GMT",
"version": "v1"
}
] | 2021-08-24 | [
[
"Molokwu",
"Bonaventure Chidube",
""
],
[
"Shuvo",
"Shaon Bhatta",
""
],
[
"Kobti",
"Ziad",
""
],
[
"Snowdon",
"Anne",
""
]
] | Currently, the world seeks to find appropriate mitigation techniques to control and prevent the spread of the new SARS-CoV-2. In our paper herein, we present a peculiar Multi-Task Learning framework that jointly predicts the effect of SARS-CoV-2 as well as Personal-Protective-Equipment consumption in Community Health Centres for a given populace. Predicting the effect of the virus (SARS-CoV-2), via studies and analyses, enables us to understand the nature of SARS-CoV-2 with reference to factors that promote its growth and spread. Therefore, these foster widespread awareness, and the populace can become more proactive and cautious so as to mitigate the spread of Corona Virus Disease 2019 (COVID-19). Furthermore, understanding and predicting the demand for Personal Protective Equipment promotes the efficiency and safety of healthcare workers in Community Health Centres. Owing to the novel nature and strains of SARS-CoV-2, relatively little literature and research exists in this regard. The existing literature has attempted to solve the problem statement(s) using either Agent-based Models, Machine Learning Models, or Mathematical Models. In view of this, our work herein adds to existing literature via modeling our problem statements as Multi-Task Learning problems. Results from our research indicate that government actions and human factors are the most significant determinants that influence the spread of SARS-CoV-2. |
2101.11744 | Matthew Smart | Matthew Smart, Anton Zilman | On the mapping between Hopfield networks and Restricted Boltzmann
Machines | ICLR 2021 oral paper | The 9th International Conference on Learning Representations (ICLR
2021) | null | null | cs.LG cond-mat.dis-nn cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two
important models at the interface of statistical physics, machine learning, and
neuroscience. Recently, there has been interest in the relationship between HNs
and RBMs, due to their similarity under the statistical mechanics formalism. An
exact mapping between HNs and RBMs has been previously noted for the special
case of orthogonal (uncorrelated) encoded patterns. We present here an exact
mapping in the case of correlated pattern HNs, which are more broadly
applicable to existing datasets. Specifically, we show that any HN with $N$
binary variables and $p<N$ arbitrary binary patterns can be transformed into an
RBM with $N$ binary visible variables and $p$ Gaussian hidden variables. We
outline the conditions under which the reverse mapping exists, and conduct
experiments on the MNIST dataset which suggest the mapping provides a useful
initialization to the RBM weights. We discuss extensions, the potential
importance of this correspondence for the training of RBMs, and for
understanding the performance of deep architectures which utilize RBMs.
| [
{
"created": "Wed, 27 Jan 2021 23:49:48 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Mar 2021 02:08:12 GMT",
"version": "v2"
}
] | 2021-03-09 | [
[
"Smart",
"Matthew",
""
],
[
"Zilman",
"Anton",
""
]
] | Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two important models at the interface of statistical physics, machine learning, and neuroscience. Recently, there has been interest in the relationship between HNs and RBMs, due to their similarity under the statistical mechanics formalism. An exact mapping between HNs and RBMs has been previously noted for the special case of orthogonal (uncorrelated) encoded patterns. We present here an exact mapping in the case of correlated pattern HNs, which are more broadly applicable to existing datasets. Specifically, we show that any HN with $N$ binary variables and $p<N$ arbitrary binary patterns can be transformed into an RBM with $N$ binary visible variables and $p$ Gaussian hidden variables. We outline the conditions under which the reverse mapping exists, and conduct experiments on the MNIST dataset which suggest the mapping provides a useful initialization to the RBM weights. We discuss extensions, the potential importance of this correspondence for the training of RBMs, and for understanding the performance of deep architectures which utilize RBMs. |
1506.07257 | Jingyu Gao | Jingyu Gao, Jinfu Yang, Guanghui Wang and Mingai Li | A Novel Feature Extraction Method for Scene Recognition Based on
Centered Convolutional Restricted Boltzmann Machines | 22 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene recognition is an important research topic in computer vision, while
feature extraction is a key step of object recognition. Although classical
Restricted Boltzmann machines (RBM) can efficiently represent complicated data,
it is hard to handle large images due to its complexity in computation. In this
paper, a novel feature extraction method, named Centered Convolutional
Restricted Boltzmann Machines (CCRBM), is proposed for scene recognition. The
proposed model improves the Convolutional Restricted Boltzmann Machine (CRBM)
by introducing centered factors in its learning strategy to reduce the
source of instabilities. First, the visible units of the network are redefined
using centered factors. Then, the hidden units are learned with a modified
energy function by utilizing a distribution function, and the visible units are
reconstructed using the learned hidden units. In order to achieve better
generative ability, the Centered Convolutional Deep Belief Networks (CCDBN) is
trained in a greedy layer-wise way. Finally, a softmax regression is
incorporated for scene recognition. Extensive experimental evaluations using
natural scenes, MIT-indoor scenes, and Caltech 101 datasets show that the
proposed approach performs better than other counterparts in terms of
stability, generalization, and discrimination. The CCDBN model is more suitable
for natural scene image recognition by virtue of its convolutional property.
| [
{
"created": "Wed, 24 Jun 2015 06:42:42 GMT",
"version": "v1"
}
] | 2015-06-25 | [
[
"Gao",
"Jingyu",
""
],
[
"Yang",
"Jinfu",
""
],
[
"Wang",
"Guanghui",
""
],
[
"Li",
"Mingai",
""
]
] | Scene recognition is an important research topic in computer vision, while feature extraction is a key step of object recognition. Although classical Restricted Boltzmann machines (RBM) can efficiently represent complicated data, it is hard to handle large images due to its complexity in computation. In this paper, a novel feature extraction method, named Centered Convolutional Restricted Boltzmann Machines (CCRBM), is proposed for scene recognition. The proposed model improves the Convolutional Restricted Boltzmann Machine (CRBM) by introducing centered factors in its learning strategy to reduce the source of instabilities. First, the visible units of the network are redefined using centered factors. Then, the hidden units are learned with a modified energy function by utilizing a distribution function, and the visible units are reconstructed using the learned hidden units. In order to achieve better generative ability, the Centered Convolutional Deep Belief Networks (CCDBN) is trained in a greedy layer-wise way. Finally, a softmax regression is incorporated for scene recognition. Extensive experimental evaluations using natural scenes, MIT-indoor scenes, and Caltech 101 datasets show that the proposed approach performs better than other counterparts in terms of stability, generalization, and discrimination. The CCDBN model is more suitable for natural scene image recognition by virtue of its convolutional property. |
2212.12603 | Yao Yao | Yao Yao, Qihang Lin, Tianbao Yang | Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints | Published in AISTATS 2023 | null | null | null | cs.LG math.OC stat.ML | http://creativecommons.org/licenses/by/4.0/ | As machine learning is increasingly used in making high-stakes decisions,
an emerging challenge is to avoid unfair AI systems that lead to discriminatory
decisions for protected populations. A direct approach for obtaining a fair
predictive model is to train the model through optimizing its prediction
performance subject to fairness constraints, which achieves Pareto efficiency
when trading off performance against fairness. Among various fairness metrics,
the ones based on the area under the ROC curve (AUC) are emerging recently
because they are threshold-agnostic and effective for unbalanced data. In this
work, we formulate the training problem of a fairness-aware machine learning
model as an AUC optimization problem subject to a class of AUC-based fairness
constraints. This problem can be reformulated as a min-max optimization problem
with min-max constraints, which we solve by stochastic first-order methods
based on a new Bregman divergence designed for the special structure of the
problem. We numerically demonstrate the effectiveness of our approach on
real-world data under different fairness metrics.
| [
{
"created": "Fri, 23 Dec 2022 22:29:08 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Dec 2022 02:01:30 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Feb 2023 21:26:56 GMT",
"version": "v3"
}
] | 2023-02-24 | [
[
"Yao",
"Yao",
""
],
[
"Lin",
"Qihang",
""
],
[
"Yang",
"Tianbao",
""
]
] | As machine learning is increasingly used in making high-stakes decisions, an emerging challenge is to avoid unfair AI systems that lead to discriminatory decisions for protected populations. A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints, which achieves Pareto efficiency when trading off performance against fairness. Among various fairness metrics, the ones based on the area under the ROC curve (AUC) are emerging recently because they are threshold-agnostic and effective for unbalanced data. In this work, we formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints. This problem can be reformulated as a min-max optimization problem with min-max constraints, which we solve by stochastic first-order methods based on a new Bregman divergence designed for the special structure of the problem. We numerically demonstrate the effectiveness of our approach on real-world data under different fairness metrics. |
2302.03596 | Jaehyeong Jo | Jaehyeong Jo, Dongki Kim, Sung Ju Hwang | Graph Generation with Diffusion Mixture | ICML 2024 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generation of graphs is a major challenge for real-world tasks that require
understanding the complex nature of their non-Euclidean structures. Although
diffusion models have achieved notable success in graph generation recently,
they are ill-suited for modeling the topological properties of graphs since
learning to denoise the noisy samples does not explicitly learn the graph
structures to be generated. To tackle this limitation, we propose a generative
framework that models the topology of graphs by explicitly learning the final
graph structures of the diffusion process. Specifically, we design the
generative process as a mixture of endpoint-conditioned diffusion processes
which is driven toward the predicted graph that results in rapid convergence.
We further introduce a simple parameterization of the mixture process and
develop an objective for learning the final graph structure, which enables
maximum likelihood training. Through extensive experimental validation on
general graph and 2D/3D molecule generation tasks, we show that our method
outperforms previous generative models, generating graphs with correct topology
with both continuous (e.g. 3D coordinates) and discrete (e.g. atom types)
features. Our code is available at https://github.com/harryjo97/GruM.
| [
{
"created": "Tue, 7 Feb 2023 17:07:46 GMT",
"version": "v1"
},
{
"created": "Wed, 24 May 2023 06:09:45 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Feb 2024 02:22:58 GMT",
"version": "v3"
},
{
"created": "Sun, 2 Jun 2024 20:00:20 GMT",
"version": "v4"
}
] | 2024-06-04 | [
[
"Jo",
"Jaehyeong",
""
],
[
"Kim",
"Dongki",
""
],
[
"Hwang",
"Sung Ju",
""
]
] | Generation of graphs is a major challenge for real-world tasks that require understanding the complex nature of their non-Euclidean structures. Although diffusion models have achieved notable success in graph generation recently, they are ill-suited for modeling the topological properties of graphs since learning to denoise the noisy samples does not explicitly learn the graph structures to be generated. To tackle this limitation, we propose a generative framework that models the topology of graphs by explicitly learning the final graph structures of the diffusion process. Specifically, we design the generative process as a mixture of endpoint-conditioned diffusion processes which is driven toward the predicted graph that results in rapid convergence. We further introduce a simple parameterization of the mixture process and develop an objective for learning the final graph structure, which enables maximum likelihood training. Through extensive experimental validation on general graph and 2D/3D molecule generation tasks, we show that our method outperforms previous generative models, generating graphs with correct topology with both continuous (e.g. 3D coordinates) and discrete (e.g. atom types) features. Our code is available at https://github.com/harryjo97/GruM. |
2003.06945 | Cho-Ying Wu | Cho-Ying Wu, Ulrich Neumann | Scene Completeness-Aware Lidar Depth Completion for Driving Scenario | Present at ICASSP 2021; fix typos | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper introduces Scene Completeness-Aware Depth Completion (SCADC) to
complete raw lidar scans into dense depth maps with fine and complete scene
structures. Recent sparse depth completion for lidars only focuses on the lower
scenes and produces irregular estimations on the upper because existing
datasets, such as KITTI, do not provide groundtruth for upper areas. These
areas are considered less important since they are usually sky or trees of less
scene understanding interest. However, we argue that in several driving
scenarios such as large trucks or cars with loads, objects could extend to the
upper parts of scenes. Thus depth maps with structured upper scene estimation
are important for RGBD algorithms. SCADC adopts stereo images that produce
disparities with better scene completeness but are generally less precise than
lidars, to help sparse lidar depth completion. To our knowledge, we are the
first to focus on scene completeness of sparse depth completion. We validate
our SCADC on both depth estimate precision and scene-completeness on KITTI.
Moreover, we experiment on less-explored outdoor RGBD semantic segmentation
with scene completeness-aware D-input to validate our method.
| [
{
"created": "Sun, 15 Mar 2020 23:23:26 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Mar 2020 03:46:18 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Feb 2021 23:50:16 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Jan 2024 05:29:28 GMT",
"version": "v4"
}
] | 2024-01-18 | [
[
"Wu",
"Cho-Ying",
""
],
[
"Neumann",
"Ulrich",
""
]
] | This paper introduces Scene Completeness-Aware Depth Completion (SCADC) to complete raw lidar scans into dense depth maps with fine and complete scene structures. Recent sparse depth completion for lidars only focuses on the lower scenes and produces irregular estimations on the upper because existing datasets, such as KITTI, do not provide groundtruth for upper areas. These areas are considered less important since they are usually sky or trees of less scene understanding interest. However, we argue that in several driving scenarios such as large trucks or cars with loads, objects could extend to the upper parts of scenes. Thus depth maps with structured upper scene estimation are important for RGBD algorithms. SCADC adopts stereo images that produce disparities with better scene completeness but are generally less precise than lidars, to help sparse lidar depth completion. To our knowledge, we are the first to focus on scene completeness of sparse depth completion. We validate our SCADC on both depth estimate precision and scene-completeness on KITTI. Moreover, we experiment on less-explored outdoor RGBD semantic segmentation with scene completeness-aware D-input to validate our method. |
2401.12624 | Yongjun Kim | Yongjun Kim, Sejin Seo, Jihong Park, Mehdi Bennis, Seong-Lyun Kim,
Junil Choi | Knowledge Distillation from Language-Oriented to Emergent Communication
for Multi-Agent Remote Control | null | null | null | null | cs.AI cs.IT cs.LG cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we compare emergent communication (EC) built upon multi-agent
deep reinforcement learning (MADRL) and language-oriented semantic
communication (LSC) empowered by a pre-trained large language model (LLM) using
human language. In a multi-agent remote navigation task, with multimodal input
data comprising location and channel maps, it is shown that EC incurs high
training cost and struggles when using multimodal data, whereas LSC yields high
inference computing cost due to the LLM's large size. To address their
respective bottlenecks, we propose a novel framework of language-guided EC
(LEC) by guiding the EC training using LSC via knowledge distillation (KD).
Simulations corroborate that LEC achieves faster travel time while avoiding
areas with poor channel conditions, as well as speeding up the MADRL training
convergence by up to 61.8% compared to EC.
| [
{
"created": "Tue, 23 Jan 2024 10:23:13 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Mar 2024 14:15:52 GMT",
"version": "v2"
}
] | 2024-03-05 | [
[
"Kim",
"Yongjun",
""
],
[
"Seo",
"Sejin",
""
],
[
"Park",
"Jihong",
""
],
[
"Bennis",
"Mehdi",
""
],
[
"Kim",
"Seong-Lyun",
""
],
[
"Choi",
"Junil",
""
]
] | In this work, we compare emergent communication (EC) built upon multi-agent deep reinforcement learning (MADRL) and language-oriented semantic communication (LSC) empowered by a pre-trained large language model (LLM) using human language. In a multi-agent remote navigation task, with multimodal input data comprising location and channel maps, it is shown that EC incurs high training cost and struggles when using multimodal data, whereas LSC yields high inference computing cost due to the LLM's large size. To address their respective bottlenecks, we propose a novel framework of language-guided EC (LEC) by guiding the EC training using LSC via knowledge distillation (KD). Simulations corroborate that LEC achieves faster travel time while avoiding areas with poor channel conditions, as well as speeding up the MADRL training convergence by up to 61.8% compared to EC. |
2402.08245 | Manh Duong Phung | Duy Nam Bui, Manh Duong Phung, Hung Pham Duy | Self-Reconfigurable V-shape Formation of Multiple UAVs in Narrow Space
Environments | Published in: 2024 IEEE/SICE International Symposium on System
Integration (SII) | null | 10.1109/SII58957.2024.10417519 | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper presents the design and implementation of a self-reconfigurable
V-shape formation controller for multiple unmanned aerial vehicles (UAVs)
navigating through narrow spaces in a dense obstacle environment. The selection
of the V-shape formation is motivated by its maneuverability and visibility
advantages. The main objective is to develop an effective formation control
strategy that allows UAVs to autonomously adjust their positions to form the
desired formation while navigating through obstacles. To achieve this, we
propose a distributed behavior-based control algorithm that combines the
behaviors designed for individual UAVs so that they together navigate the UAVs
to their desired positions. The reconfiguration process is automatic, utilizing
individual UAV sensing within the formation, allowing for dynamic adaptations
such as opening/closing wings or merging into a straight line. Simulation
results show that the self-reconfigurable V-shape formation offers adaptability
and effectiveness for UAV formations in complex operational scenarios.
| [
{
"created": "Tue, 13 Feb 2024 06:19:11 GMT",
"version": "v1"
}
] | 2024-02-14 | [
[
"Bui",
"Duy Nam",
""
],
[
"Phung",
"Manh Duong",
""
],
[
"Duy",
"Hung Pham",
""
]
] | This paper presents the design and implementation of a self-reconfigurable V-shape formation controller for multiple unmanned aerial vehicles (UAVs) navigating through narrow spaces in a dense obstacle environment. The selection of the V-shape formation is motivated by its maneuverability and visibility advantages. The main objective is to develop an effective formation control strategy that allows UAVs to autonomously adjust their positions to form the desired formation while navigating through obstacles. To achieve this, we propose a distributed behavior-based control algorithm that combines the behaviors designed for individual UAVs so that they together navigate the UAVs to their desired positions. The reconfiguration process is automatic, utilizing individual UAV sensing within the formation, allowing for dynamic adaptations such as opening/closing wings or merging into a straight line. Simulation results show that the self-reconfigurable V-shape formation offers adaptability and effectiveness for UAV formations in complex operational scenarios. |
2005.10266 | Liang-Chieh Chen | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D.
Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences
for Urban Scene Segmentation | Accepted to ECCV 2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised learning in large discriminative models is a mainstay for modern
computer vision. Such an approach necessitates investing in large-scale
human-annotated datasets for achieving state-of-the-art results. In turn, the
efficacy of supervised learning may be limited by the size of the human
annotated dataset. This limitation is particularly notable for image
segmentation tasks, where the expense of human annotation is especially large,
yet large amounts of unlabeled data may exist. In this work, we ask if we may
leverage semi-supervised learning in unlabeled video sequences and extra images
to improve the performance on urban scene segmentation, simultaneously tackling
semantic, instance, and panoptic segmentation. The goal of this work is to
avoid the construction of sophisticated, learned architectures specific to
label propagation (e.g., patch matching and optical flow). Instead, we simply
predict pseudo-labels for the unlabeled data and train subsequent models with
both human-annotated and pseudo-labeled data. The procedure is iterated
several times. As a result, our Naive-Student model, trained with such simple
yet effective iterative semi-supervised learning, attains state-of-the-art
results at all three Cityscapes benchmarks, reaching the performance of 67.8%
PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable
step towards building a simple procedure to harness unlabeled video sequences
and extra images to surpass state-of-the-art performance on core computer
vision tasks.
| [
{
"created": "Wed, 20 May 2020 18:00:05 GMT",
"version": "v1"
},
{
"created": "Fri, 22 May 2020 04:38:50 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Jul 2020 16:29:10 GMT",
"version": "v3"
},
{
"created": "Mon, 20 Jul 2020 03:40:38 GMT",
"version": "v4"
}
] | 2020-07-21 | [
[
"Chen",
"Liang-Chieh",
""
],
[
"Lopes",
"Raphael Gontijo",
""
],
[
"Cheng",
"Bowen",
""
],
[
"Collins",
"Maxwell D.",
""
],
[
"Cubuk",
"Ekin D.",
""
],
[
"Zoph",
"Barret",
""
],
[
"Adam",
"Hartwig",
""
],
[
"Shlens",
"Jonathon",
""
]
] | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences and extra images to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences and extra images to surpass state-of-the-art performance on core computer vision tasks. |
1806.02325 | Ay\c{c}a \"Oz\c{c}elikkale | Ayca Ozcelikkale, Mehmet Koseoglu and Mani Srivastava | Optimization vs. Reinforcement Learning for Wirelessly Powered Sensor
Networks | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a sensing application where the sensor nodes are wirelessly
powered by an energy beacon. We focus on the problem of jointly optimizing the
energy allocation of the energy beacon to different sensors and the data
transmission powers of the sensors in order to minimize the field
reconstruction error at the sink. In contrast to the standard ideal linear
energy harvesting (EH) model, we consider practical non-linear EH models. We
investigate this problem under two different frameworks: i) an optimization
approach where the energy beacon knows the utility function of the nodes,
channel state information and the energy harvesting characteristics of the
devices; hence optimal power allocation strategies can be designed using an
optimization problem and ii) a learning approach where the energy beacon
decides on its strategies adaptively with battery level information and
feedback on the utility function. Our results illustrate that the deep
reinforcement learning approach can obtain the same error levels as the
optimization approach and provides a promising alternative to the optimization
framework.
| [
{
"created": "Wed, 6 Jun 2018 17:43:31 GMT",
"version": "v1"
}
] | 2018-06-07 | [
[
"Ozcelikkale",
"Ayca",
""
],
[
"Koseoglu",
"Mehmet",
""
],
[
"Srivastava",
"Mani",
""
]
] | We consider a sensing application where the sensor nodes are wirelessly powered by an energy beacon. We focus on the problem of jointly optimizing the energy allocation of the energy beacon to different sensors and the data transmission powers of the sensors in order to minimize the field reconstruction error at the sink. In contrast to the standard ideal linear energy harvesting (EH) model, we consider practical non-linear EH models. We investigate this problem under two different frameworks: i) an optimization approach where the energy beacon knows the utility function of the nodes, channel state information and the energy harvesting characteristics of the devices; hence optimal power allocation strategies can be designed using an optimization problem and ii) a learning approach where the energy beacon decides on its strategies adaptively with battery level information and feedback on the utility function. Our results illustrate that the deep reinforcement learning approach can obtain the same error levels as the optimization approach and provides a promising alternative to the optimization framework. |
2401.15545 | Simin Chen | Simin Chen, Xiaoning Feng, Xiaohong Han, Cong Liu, Wei Yang | PPM: Automated Generation of Diverse Programming Problems for
Benchmarking Code Generation Models | This paper has been accepted to The ACM International Conference on
the Foundations of Software Engineering FSE 2024 | null | null | null | cs.SE cs.AI cs.CL cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent times, a plethora of Large Code Generation Models (LCGMs) have been
proposed, showcasing significant potential in assisting developers with complex
programming tasks. Benchmarking LCGMs necessitates the creation of a set of
diverse programming problems, and each problem comprises the prompt (including
the task description), canonical solution, and test inputs. The existing
methods for constructing such a problem set can be categorized into two main
types: manual methods and perturbation-based methods. However, manual methods
demand high effort and lack scalability, while also risking data integrity due
to LCGMs' potentially contaminated data collection, and perturbation-based
approaches mainly generate semantically homogeneous problems with the same
canonical solutions and introduce typos that can be easily auto-corrected by
IDE, making them ineffective and unrealistic. In this work, we propose the idea
of programming problem merging (PPM) and provide two implementations of this
idea. We utilize our tool on two widely-used datasets and compare it against
nine baseline methods using eight code generation models. The results
demonstrate the effectiveness of our tool in generating more challenging,
diverse, and natural programming problems, compared to the baselines.
| [
{
"created": "Sun, 28 Jan 2024 02:27:38 GMT",
"version": "v1"
}
] | 2024-01-30 | [
[
"Chen",
"Simin",
""
],
[
"Feng",
"Xiaoning",
""
],
[
"Han",
"Xiaohong",
""
],
[
"Liu",
"Cong",
""
],
[
"Yang",
"Wei",
""
]
] | In recent times, a plethora of Large Code Generation Models (LCGMs) have been proposed, showcasing significant potential in assisting developers with complex programming tasks. Benchmarking LCGMs necessitates the creation of a set of diverse programming problems, and each problem comprises the prompt (including the task description), canonical solution, and test inputs. The existing methods for constructing such a problem set can be categorized into two main types: manual methods and perturbation-based methods. However, manual methods demand high effort and lack scalability, while also risking data integrity due to LCGMs' potentially contaminated data collection, and perturbation-based approaches mainly generate semantically homogeneous problems with the same canonical solutions and introduce typos that can be easily auto-corrected by an IDE, making them ineffective and unrealistic. In this work, we propose the idea of programming problem merging (PPM) and provide two implementations of this idea. We utilize our tool on two widely-used datasets and compare it against nine baseline methods using eight code generation models. The results demonstrate the effectiveness of our tool in generating more challenging, diverse, and natural programming problems, compared to the baselines. |
2305.06236 | Sara Ahmadi Majd | Mohammad Mashayekhi, Sara Ahmadi Majd, Arian Amiramjadi, Babak
Mashayekhi | Radious: Unveiling the Enigma of Dental Radiology with BEIT Adaptor and
Mask2Former in Semantic Segmentation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | X-ray images are the first steps for diagnosing and further treating dental
problems. So, early diagnosis prevents the development and increase of oral and
dental diseases. In this paper, we developed a semantic segmentation algorithm
based on BEIT adaptor and Mask2Former to detect and identify teeth, roots, and
multiple dental diseases and abnormalities such as pulp chamber, restoration,
endodontics, crown, decay, pin, composite, bridge, pulpitis, orthodontics,
radicular cyst, periapical cyst, cyst, implant, and bone graft material in
panoramic, periapical, and bitewing X-ray images. We compared the results of our
algorithm to two state-of-the-art image segmentation algorithms, Deeplabv3+ and
Segformer, on our own data set. We discovered that Radious outperformed those
algorithms, increasing the mIoU scores by 9% and 33% over Deeplabv3+ and
Segformer, respectively.
| [
{
"created": "Wed, 10 May 2023 15:15:09 GMT",
"version": "v1"
}
] | 2023-05-11 | [
[
"Mashayekhi",
"Mohammad",
""
],
[
"Majd",
"Sara Ahmadi",
""
],
[
"Amiramjadi",
"Arian",
""
],
[
"Mashayekhi",
"Babak",
""
]
] | X-ray images are the first steps for diagnosing and further treating dental problems. Thus, early diagnosis prevents the development and progression of oral and dental diseases. In this paper, we developed a semantic segmentation algorithm based on BEIT adaptor and Mask2Former to detect and identify teeth, roots, and multiple dental diseases and abnormalities such as pulp chamber, restoration, endodontics, crown, decay, pin, composite, bridge, pulpitis, orthodontics, radicular cyst, periapical cyst, cyst, implant, and bone graft material in panoramic, periapical, and bitewing X-ray images. We compared the results of our algorithm to two state-of-the-art image segmentation algorithms, Deeplabv3+ and Segformer, on our own data set. We discovered that Radious outperformed those algorithms, increasing the mIoU scores by 9% and 33% over Deeplabv3+ and Segformer, respectively. |
1811.08366 | Stephen Bonner | Stephen Bonner, John Brennan, Ibad Kureshi, Georgios Theodoropoulos,
Andrew Stephen McGough, Boguslaw Obara | Temporal Graph Offset Reconstruction: Towards Temporally Robust Graph
Representation Learning | Accepted as a workshop paper at IEEE Big Data 2018 | null | null | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs are a commonly used construct for representing relationships between
elements in complex high dimensional datasets. Many real-world phenomena are
dynamic in nature, meaning that any graph used to represent them is inherently
temporal. However, many of the machine learning models designed to capture
knowledge about the structure of these graphs ignore this rich temporal
information when creating representations of the graph. This results in models
which do not perform well when used to make predictions about the future state
of the graph -- especially when the delta between time stamps is not small. In
this work, we explore a novel training procedure and an associated unsupervised
model which creates graph representations optimised to predict the future state
of the graph. We make use of graph convolutional neural networks to encode the
graph into a latent representation, which we then use to train our temporal
offset reconstruction method, inspired by auto-encoders, to predict a later
time point -- multiple time steps into the future. Using our method, we
demonstrate superior performance for the task of future link prediction
compared with non-temporal state-of-the-art baselines. We show our approach to
be capable of outperforming non-temporal baselines by 38% on a real-world
dataset.
| [
{
"created": "Tue, 20 Nov 2018 17:01:16 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Bonner",
"Stephen",
""
],
[
"Brennan",
"John",
""
],
[
"Kureshi",
"Ibad",
""
],
[
"Theodoropoulos",
"Georgios",
""
],
[
"McGough",
"Andrew Stephen",
""
],
[
"Obara",
"Boguslaw",
""
]
] | Graphs are a commonly used construct for representing relationships between elements in complex high dimensional datasets. Many real-world phenomena are dynamic in nature, meaning that any graph used to represent them is inherently temporal. However, many of the machine learning models designed to capture knowledge about the structure of these graphs ignore this rich temporal information when creating representations of the graph. This results in models which do not perform well when used to make predictions about the future state of the graph -- especially when the delta between time stamps is not small. In this work, we explore a novel training procedure and an associated unsupervised model which creates graph representations optimised to predict the future state of the graph. We make use of graph convolutional neural networks to encode the graph into a latent representation, which we then use to train our temporal offset reconstruction method, inspired by auto-encoders, to predict a later time point -- multiple time steps into the future. Using our method, we demonstrate superior performance for the task of future link prediction compared with non-temporal state-of-the-art baselines. We show our approach to be capable of outperforming non-temporal baselines by 38% on a real-world dataset. |
2308.13094 | Takamichi Miyata Ph.D. | Takamichi Miyata | Interpretable Image Quality Assessment via CLIP with Multiple
Antonym-Prompt Pairs | 2pages, 1 figure | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | No reference image quality assessment (NR-IQA) is a task to estimate the
perceptual quality of an image without its corresponding original image. It is
even more difficult to perform this task in a zero-shot manner, i.e., without
task-specific training. In this paper, we propose a new zero-shot and
interpretable NR-IQA method that exploits the ability of a pre-trained
vision-language model to estimate the correlation between an image and a textual
prompt. The proposed method employs a prompt pairing strategy and multiple
antonym-prompt pairs corresponding to carefully selected descriptive features
related to the perceptual image quality. Thus, the proposed method is
able to identify not only the perceptual quality evaluation of the image, but
also the cause on which the quality evaluation is based. Experimental results
show that the proposed method outperforms existing zero-shot NR-IQA methods in
terms of accuracy and can evaluate the causes of perceptual quality
degradation.
| [
{
"created": "Thu, 24 Aug 2023 21:37:00 GMT",
"version": "v1"
}
] | 2023-08-28 | [
[
"Miyata",
"Takamichi",
""
]
] | No reference image quality assessment (NR-IQA) is a task to estimate the perceptual quality of an image without its corresponding original image. It is even more difficult to perform this task in a zero-shot manner, i.e., without task-specific training. In this paper, we propose a new zero-shot and interpretable NR-IQA method that exploits the ability of a pre-trained vision-language model to estimate the correlation between an image and a textual prompt. The proposed method employs a prompt pairing strategy and multiple antonym-prompt pairs corresponding to carefully selected descriptive features related to the perceptual image quality. Thus, the proposed method is able to identify not only the perceptual quality evaluation of the image, but also the cause on which the quality evaluation is based. Experimental results show that the proposed method outperforms existing zero-shot NR-IQA methods in terms of accuracy and can evaluate the causes of perceptual quality degradation. |
2305.17773 | Madhav Desai | Madhav P. Desai | An evaluation of a microprocessor with two independent hardware
execution threads coupled through a shared cache | null | null | null | null | cs.AR | http://creativecommons.org/licenses/by/4.0/ | We investigate the utility of augmenting a microprocessor with a single
execution pipeline by adding a second copy of the execution pipeline in
parallel with the existing one. The resulting dual-hardware-threaded
microprocessor has two identical, independent, single-issue in-order execution
pipelines (hardware threads) which share a common memory sub-system (consisting
of instruction and data caches together with a memory management unit). From a
design perspective, the assembly and verification of the dual threaded
processor is simplified by the use of existing verified implementations of the
execution pipeline and a memory unit. Because the memory unit is shared by the
two hardware threads, the relative area overhead of adding the second hardware
thread is 25\% of the area of the existing single threaded processor. Using an
FPGA implementation we evaluate the performance of the dual threaded processor
relative to the single threaded one. On applications which can be parallelized,
we observe speedups of 1.6X to 1.88X. For applications that are not
parallelizable, the speedup is more modest. We also observe that the dual
threaded processor performance is degraded on applications which generate large
numbers of cache misses.
| [
{
"created": "Sun, 28 May 2023 16:47:56 GMT",
"version": "v1"
}
] | 2023-05-30 | [
[
"Desai",
"Madhav P.",
""
]
] | We investigate the utility of augmenting a microprocessor with a single execution pipeline by adding a second copy of the execution pipeline in parallel with the existing one. The resulting dual-hardware-threaded microprocessor has two identical, independent, single-issue in-order execution pipelines (hardware threads) which share a common memory sub-system (consisting of instruction and data caches together with a memory management unit). From a design perspective, the assembly and verification of the dual threaded processor is simplified by the use of existing verified implementations of the execution pipeline and a memory unit. Because the memory unit is shared by the two hardware threads, the relative area overhead of adding the second hardware thread is 25\% of the area of the existing single threaded processor. Using an FPGA implementation we evaluate the performance of the dual threaded processor relative to the single threaded one. On applications which can be parallelized, we observe speedups of 1.6X to 1.88X. For applications that are not parallelizable, the speedup is more modest. We also observe that the dual threaded processor performance is degraded on applications which generate large numbers of cache misses. |
2209.00841 | Zeyong Wei | Honghua Chen, Mingqiang Wei, Jun Wang | Geometric and Learning-based Mesh Denoising: A Comprehensive Survey | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mesh denoising is a fundamental problem in digital geometry processing. It
seeks to remove surface noise, while preserving surface intrinsic signals as
accurately as possible. While the traditional wisdom has been built upon
specialized priors to smooth surfaces, learning-based approaches are making
their debut with great success in generalization and automation. In this work,
we provide a comprehensive review of the advances in mesh denoising, containing
both traditional geometric approaches and recent learning-based methods. First,
to familiarize readers with the denoising tasks, we summarize four common
issues in mesh denoising. We then provide two categorizations of the existing
denoising methods. Furthermore, three important categories, including
optimization-, filter-, and data-driven-based techniques, are introduced and
analyzed in detail, respectively. Both qualitative and quantitative comparisons
are illustrated, to demonstrate the effectiveness of the state-of-the-art
denoising methods. Finally, potential directions of future work are pointed out
to solve the common problems of these approaches. A mesh denoising benchmark is
also built in this work, and future researchers will easily and conveniently
evaluate their methods with the state-of-the-art approaches.
| [
{
"created": "Fri, 2 Sep 2022 06:54:32 GMT",
"version": "v1"
}
] | 2022-09-05 | [
[
"Chen",
"Honghua",
""
],
[
"Wei",
"Mingqiang",
""
],
[
"Wang",
"Jun",
""
]
] | Mesh denoising is a fundamental problem in digital geometry processing. It seeks to remove surface noise, while preserving surface intrinsic signals as accurately as possible. While the traditional wisdom has been built upon specialized priors to smooth surfaces, learning-based approaches are making their debut with great success in generalization and automation. In this work, we provide a comprehensive review of the advances in mesh denoising, containing both traditional geometric approaches and recent learning-based methods. First, to familiarize readers with the denoising tasks, we summarize four common issues in mesh denoising. We then provide two categorizations of the existing denoising methods. Furthermore, three important categories, including optimization-, filter-, and data-driven-based techniques, are introduced and analyzed in detail, respectively. Both qualitative and quantitative comparisons are illustrated, to demonstrate the effectiveness of the state-of-the-art denoising methods. Finally, potential directions of future work are pointed out to solve the common problems of these approaches. A mesh denoising benchmark is also built in this work, and future researchers will easily and conveniently evaluate their methods with the state-of-the-art approaches. |
2011.07932 | Siyeong Lee | Kwanghee Choi and Siyeong Lee | Combating the Instability of Mutual Information-based Losses via
Regularization | Kwanghee Choi and Siyeong Lee contributed equally to this paper.
Accepted for the 38th Conference on Uncertainty in Artificial Intelligence
(UAI 2022) | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Notable progress has been made in numerous fields of machine learning based
on neural network-driven mutual information (MI) bounds. However, utilizing the
conventional MI-based losses is often challenging due to their practical and
mathematical limitations. In this work, we first identify the symptoms behind
their instability: (1) the neural network not converging even after the loss
seemed to converge, and (2) saturating neural network outputs causing the loss
to diverge. We mitigate both issues by adding a novel regularization term to
the existing losses. We theoretically and experimentally demonstrate that added
regularization stabilizes training. Finally, we present a novel benchmark that
evaluates MI-based losses on both the MI estimation power and its capability on
the downstream tasks, closely following the pre-existing supervised and
contrastive learning settings. We evaluate six different MI-based losses and
their regularized counterparts on multiple benchmarks to show that our approach
is simple yet effective.
| [
{
"created": "Mon, 16 Nov 2020 13:29:15 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Feb 2022 14:19:26 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Mar 2022 01:10:24 GMT",
"version": "v3"
},
{
"created": "Sat, 18 Jun 2022 04:01:51 GMT",
"version": "v4"
}
] | 2022-06-22 | [
[
"Choi",
"Kwanghee",
""
],
[
"Lee",
"Siyeong",
""
]
] | Notable progress has been made in numerous fields of machine learning based on neural network-driven mutual information (MI) bounds. However, utilizing the conventional MI-based losses is often challenging due to their practical and mathematical limitations. In this work, we first identify the symptoms behind their instability: (1) the neural network not converging even after the loss seemed to converge, and (2) saturating neural network outputs causing the loss to diverge. We mitigate both issues by adding a novel regularization term to the existing losses. We theoretically and experimentally demonstrate that added regularization stabilizes training. Finally, we present a novel benchmark that evaluates MI-based losses on both the MI estimation power and its capability on the downstream tasks, closely following the pre-existing supervised and contrastive learning settings. We evaluate six different MI-based losses and their regularized counterparts on multiple benchmarks to show that our approach is simple yet effective. |
1706.06457 | Mohsen Imani | Mohsen Imani, Mehrdad Amirabadi, and Matthew Wright | The Evaluation of Circuit Selection Methods on Tor | arXiv admin note: substantial text overlap with arXiv:1608.07343 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tor provides anonymity online by routing traffic through encrypted tunnels,
called circuits, over paths of anonymizing relays. To enable users to connect
to their selected destination servers without waiting for the circuit to be
built, the Tor client maintains a few circuits at all times. Nevertheless, Tor
is slower to use than directly connecting to the destination server. In this
paper, we propose to have the Tor client measure the performance of the
pre-built circuits and select the fastest circuits for users to send their
traffic over. To this end, we define and evaluate nine metrics for selecting
which pre-built circuit to use based on different combinations of circuit
length, Round Trip Time (RTT), and congestion. We also explore the effect on
performance of the number of pre-built circuits at the time of the selection.
Through whole-network experiments in Shadow, we show that using circuit RTT
with at least three pre-built circuits allows the Tor client to identify fast
circuits and improves median time to first byte (TTFB) by 22% over Tor and 15%
over congestion-aware routing, the state-of-the-art in Tor circuit selection.
We evaluate the security of the proposed circuit selection mechanism against
both a relay-level and a network-level adversary and find no loss of security
compared with Tor.
| [
{
"created": "Sat, 17 Jun 2017 23:39:19 GMT",
"version": "v1"
}
] | 2017-06-21 | [
[
"Imani",
"Mohsen",
""
],
[
"Amirabadi",
"Mehrdad",
""
],
[
"Wright",
"Matthew",
""
]
] | Tor provides anonymity online by routing traffic through encrypted tunnels, called circuits, over paths of anonymizing relays. To enable users to connect to their selected destination servers without waiting for the circuit to be built, the Tor client maintains a few circuits at all times. Nevertheless, Tor is slower to use than directly connecting to the destination server. In this paper, we propose to have the Tor client measure the performance of the pre-built circuits and select the fastest circuits for users to send their traffic over. To this end, we define and evaluate nine metrics for selecting which pre-built circuit to use based on different combinations of circuit length, Round Trip Time (RTT), and congestion. We also explore the effect on performance of the number of pre-built circuits at the time of the selection. Through whole-network experiments in Shadow, we show that using circuit RTT with at least three pre-built circuits allows the Tor client to identify fast circuits and improves median time to first byte (TTFB) by 22% over Tor and 15% over congestion-aware routing, the state-of-the-art in Tor circuit selection. We evaluate the security of the proposed circuit selection mechanism against both a relay-level and a network-level adversary and find no loss of security compared with Tor. |
1103.5320 | Francesco De Pellegrini Dr. | Alberto Montresor, Francesco De Pellegrini and Daniele Miorandi | Distributed k-Core Decomposition | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the novel metrics used to study the relative importance of nodes in
complex networks, k-core decomposition has found a number of applications in
areas as diverse as sociology, proteinomics, graph visualization, and
distributed system analysis and design. This paper proposes new distributed
algorithms for the computation of the k-core decomposition of a network, with
the purpose of (i) enabling the run-time computation of k-cores in "live"
distributed systems and (ii) allowing the decomposition, over a set of
connected machines, of very large graphs that cannot be hosted in a single
machine. Lower bounds on the algorithms' complexity are given, and an exhaustive
experimental analysis on real-world graphs is provided.
| [
{
"created": "Mon, 28 Mar 2011 10:27:48 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Mar 2011 10:12:14 GMT",
"version": "v2"
}
] | 2011-03-30 | [
[
"Montresor",
"Alberto",
""
],
[
"De Pellegrini",
"Francesco",
""
],
[
"Miorandi",
"Daniele",
""
]
] | Among the novel metrics used to study the relative importance of nodes in complex networks, k-core decomposition has found a number of applications in areas as diverse as sociology, proteinomics, graph visualization, and distributed system analysis and design. This paper proposes new distributed algorithms for the computation of the k-core decomposition of a network, with the purpose of (i) enabling the run-time computation of k-cores in "live" distributed systems and (ii) allowing the decomposition, over a set of connected machines, of very large graphs that cannot be hosted in a single machine. Lower bounds on the algorithms' complexity are given, and an exhaustive experimental analysis on real-world graphs is provided. |
2305.07244 | Prasad Talasila | Prasad Talasila, Cl\'audio Gomes, Peter H{\o}gh Mikkelsen, Santiago
Gil Arboleda, Eduard Kamburjan, Peter Gorm Larsen | Digital Twin as a Service (DTaaS): A Platform for Digital Twin
Developers and Users | 8 pages, 6 figures. Accepted at Digital Twin 2023 | null | 10.1109/SWC57546.2023.10448890 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Establishing digital twins is a non-trivial endeavour especially when users
face significant challenges in creating them from scratch. Ready availability
of reusable models, data, and tool assets can help with the creation and use of
digital twins. A number of digital twin frameworks exist to facilitate creation
and use of digital twins. In this paper we propose a digital twin framework to
author digital twin assets, create digital twins from reusable assets and make
the digital twins available as a service to other users. The proposed framework
automates the management of reusable assets, storage, provision of compute
infrastructure, communication and monitoring tasks. The users operate at the
level of digital twins and delegate the rest of the work to the digital twin as a
service framework.
| [
{
"created": "Fri, 12 May 2023 04:34:30 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jun 2023 08:59:12 GMT",
"version": "v2"
}
] | 2024-03-11 | [
[
"Talasila",
"Prasad",
""
],
[
"Gomes",
"Cláudio",
""
],
[
"Mikkelsen",
"Peter Høgh",
""
],
[
"Arboleda",
"Santiago Gil",
""
],
[
"Kamburjan",
"Eduard",
""
],
[
"Larsen",
"Peter Gorm",
""
]
] | Establishing digital twins is a non-trivial endeavour especially when users face significant challenges in creating them from scratch. Ready availability of reusable models, data, and tool assets can help with the creation and use of digital twins. A number of digital twin frameworks exist to facilitate creation and use of digital twins. In this paper we propose a digital twin framework to author digital twin assets, create digital twins from reusable assets and make the digital twins available as a service to other users. The proposed framework automates the management of reusable assets, storage, provision of compute infrastructure, communication and monitoring tasks. The users operate at the level of digital twins and delegate the rest of the work to the digital twin as a service framework. |
2303.00109 | Felix Schr\"oder | Stefan Felsner and Hendrik Schrezenmaier and Felix Schr\"oder and
Raphael Steiner | Linear Size Universal Point Sets for Classes of Planar Graphs | null | null | null | null | cs.CG cs.DM math.CO | http://creativecommons.org/licenses/by-sa/4.0/ | A finite set $P$ of points in the plane is $n$-universal with respect to a
class $\mathcal{C}$ of planar graphs if every $n$-vertex graph in $\mathcal{C}$
admits a crossing-free straight-line drawing with vertices at points of $P$.
For the class of all planar graphs the best known upper bound on the size of a
universal point set is quadratic and the best known lower bound is linear in
$n$. Some classes of planar graphs are known to admit universal point sets of
near linear size, however, there are no truly linear bounds for interesting
classes beyond outerplanar graphs.
In this paper, we show that there is a universal point set of size $2n-2$ for
the class of bipartite planar graphs with $n$ vertices. The same point set is
also universal for the class of $n$-vertex planar graphs of maximum degree $3$.
The point set used for the results is what we call an exploding double chain,
and we prove that this point set allows planar straight-line embeddings of many
more planar graphs, namely of all subgraphs of planar graphs admitting a
one-sided Hamiltonian cycle. The result for bipartite graphs also implies that
every $n$-vertex plane graph has a $1$-bend drawing all whose bends and
vertices are contained in a specific point set of size $4n-6$; this improves a
bound of $6n-10$ for the same problem by L\"offler and T\'oth.
| [
{
"created": "Tue, 28 Feb 2023 22:15:38 GMT",
"version": "v1"
}
] | 2023-03-02 | [
[
"Felsner",
"Stefan",
""
],
[
"Schrezenmaier",
"Hendrik",
""
],
[
"Schröder",
"Felix",
""
],
[
"Steiner",
"Raphael",
""
]
] | A finite set $P$ of points in the plane is $n$-universal with respect to a class $\mathcal{C}$ of planar graphs if every $n$-vertex graph in $\mathcal{C}$ admits a crossing-free straight-line drawing with vertices at points of $P$. For the class of all planar graphs the best known upper bound on the size of a universal point set is quadratic and the best known lower bound is linear in $n$. Some classes of planar graphs are known to admit universal point sets of near linear size, however, there are no truly linear bounds for interesting classes beyond outerplanar graphs. In this paper, we show that there is a universal point set of size $2n-2$ for the class of bipartite planar graphs with $n$ vertices. The same point set is also universal for the class of $n$-vertex planar graphs of maximum degree $3$. The point set used for the results is what we call an exploding double chain, and we prove that this point set allows planar straight-line embeddings of many more planar graphs, namely of all subgraphs of planar graphs admitting a one-sided Hamiltonian cycle. The result for bipartite graphs also implies that every $n$-vertex plane graph has a $1$-bend drawing all whose bends and vertices are contained in a specific point set of size $4n-6$, this improves a bound of $6n-10$ for the same problem by L\"offler and T\'oth. |
1608.03580 | Erik Waingarten | Alexandr Andoni and Thijs Laarhoven and Ilya Razenshteyn and Erik
Waingarten | Optimal Hashing-based Time-Space Trade-offs for Approximate Near
Neighbors | 62 pages, 5 figures; a merger of arXiv:1511.07527 [cs.DS] and
arXiv:1605.02701 [cs.DS], which subsumes both of the preprints. New version
contains more elaborated proofs and fixed some typos | null | 10.1137/1.9781611974782.4 | null | cs.DS cs.CC cs.CG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | [See the paper for the full abstract.]
We show tight upper and lower bounds for time-space trade-offs for the
$c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean
space and $n$-point datasets, we develop a data structure with space $n^{1 +
\rho_u + o(1)} + O(dn)$ and query time $n^{\rho_q + o(1)} + d n^{o(1)}$ for
every $\rho_u, \rho_q \geq 0$ such that: \begin{equation} c^2 \sqrt{\rho_q} +
(c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}. \end{equation}
This is the first data structure that achieves sublinear query time and
near-linear space for every approximation factor $c > 1$, improving upon
[Kapralov, PODS 2015]. The data structure is a culmination of a long line of
work on the problem for all space regimes; it builds on Spherical
Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and
data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni,
Razenshteyn, STOC 2015].
Our matching lower bounds are of two types: conditional and unconditional.
First, we prove tightness of the whole above trade-off in a restricted model of
computation, which captures all known hashing-based approaches. We then show
unconditional cell-probe lower bounds for one and two probes that match the
above trade-off for $\rho_q = 0$, improving upon the best known lower bounds
from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first
space lower bound (for any static data structure) for two probes which is not
polynomially smaller than the one-probe bound. To show the result for two
probes, we establish and exploit a connection to locally-decodable codes.
| [
{
"created": "Thu, 11 Aug 2016 19:50:00 GMT",
"version": "v1"
},
{
"created": "Sun, 21 May 2017 16:57:47 GMT",
"version": "v2"
}
] | 2019-10-04 | [
[
"Andoni",
"Alexandr",
""
],
[
"Laarhoven",
"Thijs",
""
],
[
"Razenshteyn",
"Ilya",
""
],
[
"Waingarten",
"Erik",
""
]
] | [See the paper for the full abstract.] We show tight upper and lower bounds for time-space trade-offs for the $c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean space and $n$-point datasets, we develop a data structure with space $n^{1 + \rho_u + o(1)} + O(dn)$ and query time $n^{\rho_q + o(1)} + d n^{o(1)}$ for every $\rho_u, \rho_q \geq 0$ such that: \begin{equation} c^2 \sqrt{\rho_q} + (c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}. \end{equation} This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor $c > 1$, improving upon [Kapralov, PODS 2015]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on Spherical Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni, Razenshteyn, STOC 2015]. Our matching lower bounds are of two types: conditional and unconditional. First, we prove tightness of the whole above trade-off in a restricted model of computation, which captures all known hashing-based approaches. We then show unconditional cell-probe lower bounds for one and two probes that match the above trade-off for $\rho_q = 0$, improving upon the best known lower bounds from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes which is not polynomially smaller than the one-probe bound. To show the result for two probes, we establish and exploit a connection to locally-decodable codes. |
2103.15339 | Haw-Shiuan Chang | Rohan Paul, Haw-Shiuan Chang, Andrew McCallum | Multi-facet Universal Schema | EACL 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Universal schema (USchema) assumes that two sentence patterns that share the
same entity pairs are similar to each other. This assumption is widely adopted
for solving various types of relation extraction (RE) tasks. Nevertheless, each
sentence pattern could contain multiple facets, and not every facet is similar
to all the facets of another sentence pattern co-occurring with the same entity
pair. To address the violation of the USchema assumption, we propose
multi-facet universal schema that uses a neural model to represent each
sentence pattern as multiple facet embeddings and encourage one of these facet
embeddings to be close to that of another sentence pattern if they co-occur
with the same entity pair. In our experiments, we demonstrate that multi-facet
embeddings significantly outperform their single-facet embedding counterpart,
compositional universal schema (CUSchema) (Verga et al., 2016), in distantly
supervised relation extraction tasks. Moreover, we can also use multiple
embeddings to detect the entailment relation between two sentence patterns when
no manual label is available.
| [
{
"created": "Mon, 29 Mar 2021 05:10:10 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Paul",
"Rohan",
""
],
[
"Chang",
"Haw-Shiuan",
""
],
[
"McCallum",
"Andrew",
""
]
] | Universal schema (USchema) assumes that two sentence patterns that share the same entity pairs are similar to each other. This assumption is widely adopted for solving various types of relation extraction (RE) tasks. Nevertheless, each sentence pattern could contain multiple facets, and not every facet is similar to all the facets of another sentence pattern co-occurring with the same entity pair. To address the violation of the USchema assumption, we propose multi-facet universal schema that uses a neural model to represent each sentence pattern as multiple facet embeddings and encourage one of these facet embeddings to be close to that of another sentence pattern if they co-occur with the same entity pair. In our experiments, we demonstrate that multi-facet embeddings significantly outperform their single-facet embedding counterpart, compositional universal schema (CUSchema) (Verga et al., 2016), in distantly supervised relation extraction tasks. Moreover, we can also use multiple embeddings to detect the entailment relation between two sentence patterns when no manual label is available. |
2404.05044 | Morteza Maleki | Morteza Maleki, SeyedAli Ghahari | Clinical Trials Protocol Authoring using LLMs | 29 pages, under review by IEEE Journal | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | This report embarks on a mission to revolutionize clinical trial protocol
development through the integration of advanced AI technologies. With a focus
on leveraging the capabilities of generative AI, specifically GPT-4, this
initiative aimed to streamline and enhance the efficiency and accuracy of
clinical trial protocols. The methodology encompassed a detailed analysis and
preparation of comprehensive drug and study level metadata, followed by the
deployment of GPT-4 for automated protocol section generation. Results
demonstrated a significant improvement in protocol authoring, highlighted by
increases in efficiency, accuracy, and the customization of protocols to
specific trial requirements. Challenges encountered during model selection and
prompt engineering were systematically addressed, leading to refined
methodologies that capitalized on the advanced text generation capabilities of
GPT-4. This project not only showcases the practical applications and benefits
of generative AI in clinical trial design but also sets a foundation for future
innovations in the field.
| [
{
"created": "Sun, 7 Apr 2024 18:59:03 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Aug 2024 20:31:35 GMT",
"version": "v2"
}
] | 2024-08-06 | [
[
"Maleki",
"Morteza",
""
],
[
"Ghahari",
"SeyedAli",
""
]
] | This report embarks on a mission to revolutionize clinical trial protocol development through the integration of advanced AI technologies. With a focus on leveraging the capabilities of generative AI, specifically GPT-4, this initiative aimed to streamline and enhance the efficiency and accuracy of clinical trial protocols. The methodology encompassed a detailed analysis and preparation of comprehensive drug and study level metadata, followed by the deployment of GPT-4 for automated protocol section generation. Results demonstrated a significant improvement in protocol authoring, highlighted by increases in efficiency, accuracy, and the customization of protocols to specific trial requirements. Challenges encountered during model selection and prompt engineering were systematically addressed, leading to refined methodologies that capitalized on the advanced text generation capabilities of GPT-4. This project not only showcases the practical applications and benefits of generative AI in clinical trial design but also sets a foundation for future innovations in the field. |
2204.11842 | Michael Beukman | Michael Beukman and Michael Mitchley and Dean Wookey and Steven James
and George Konidaris | Adaptive Online Value Function Approximation with Wavelets | Accepted to RLDM 2022. Code is located at
https://github.com/Michael-Beukman/WaveletRL | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using function approximation to represent a value function is necessary for
continuous and high-dimensional state spaces. Linear function approximation has
desirable theoretical guarantees and often requires less compute and samples
than neural networks, but most approaches suffer from an exponential growth in
the number of functions as the dimensionality of the state space increases. In
this work, we introduce the wavelet basis for reinforcement learning. Wavelets
can effectively be used as a fixed basis and additionally provide the ability
to adaptively refine the basis set as learning progresses, making it feasible
to start with a minimal basis set. This adaptive method can either increase the
granularity of the approximation at a point in state space, or add in
interactions between different dimensions as necessary. We prove that wavelets
are both necessary and sufficient if we wish to construct a function
approximator that can be adaptively refined without loss of precision. We
further demonstrate that a fixed wavelet basis set performs comparably against
the high-performing Fourier basis on Mountain Car and Acrobot, and that the
adaptive methods provide a convenient approach to addressing an oversized
initial basis set, while demonstrating performance comparable to, or greater
than, the fixed wavelet basis.
| [
{
"created": "Fri, 22 Apr 2022 11:35:57 GMT",
"version": "v1"
}
] | 2022-04-27 | [
[
"Beukman",
"Michael",
""
],
[
"Mitchley",
"Michael",
""
],
[
"Wookey",
"Dean",
""
],
[
"James",
"Steven",
""
],
[
"Konidaris",
"George",
""
]
] | Using function approximation to represent a value function is necessary for continuous and high-dimensional state spaces. Linear function approximation has desirable theoretical guarantees and often requires less compute and samples than neural networks, but most approaches suffer from an exponential growth in the number of functions as the dimensionality of the state space increases. In this work, we introduce the wavelet basis for reinforcement learning. Wavelets can effectively be used as a fixed basis and additionally provide the ability to adaptively refine the basis set as learning progresses, making it feasible to start with a minimal basis set. This adaptive method can either increase the granularity of the approximation at a point in state space, or add in interactions between different dimensions as necessary. We prove that wavelets are both necessary and sufficient if we wish to construct a function approximator that can be adaptively refined without loss of precision. We further demonstrate that a fixed wavelet basis set performs comparably against the high-performing Fourier basis on Mountain Car and Acrobot, and that the adaptive methods provide a convenient approach to addressing an oversized initial basis set, while demonstrating performance comparable to, or greater than, the fixed wavelet basis. |
1904.06268 | Zuxuan Wu | Zuxuan Wu, Xin Wang, Joseph E. Gonzalez, Tom Goldstein, Larry S. Davis | ACE: Adapting to Changing Environments for Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks exhibit exceptional accuracy when they are trained and
tested on the same data distributions. However, neural classifiers are often
extremely brittle when confronted with domain shift---changes in the input
distribution that occur over time. We present ACE, a framework for semantic
segmentation that dynamically adapts to changing environments over time. By
aligning the distribution of labeled training data from the original source
domain with the distribution of incoming data in a shifted domain, ACE
synthesizes labeled training data for environments as it sees them. This
stylized data is then used to update a segmentation model so that it performs
well in new environments. To avoid forgetting knowledge from past environments,
we introduce a memory that stores feature statistics from previously seen
domains. These statistics can be used to replay images in any of the previously
observed domains, thus preventing catastrophic forgetting. In addition to
standard batch training using stochastic gradient descent (SGD), we also
experiment with fast adaptation methods based on adaptive meta-learning.
Extensive experiments are conducted on two datasets from SYNTHIA; the results
demonstrate the effectiveness of the proposed approach when adapting to a
number of tasks.
| [
{
"created": "Fri, 12 Apr 2019 15:15:15 GMT",
"version": "v1"
}
] | 2019-04-15 | [
[
"Wu",
"Zuxuan",
""
],
[
"Wang",
"Xin",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Davis",
"Larry S.",
""
]
] | Deep neural networks exhibit exceptional accuracy when they are trained and tested on the same data distributions. However, neural classifiers are often extremely brittle when confronted with domain shift---changes in the input distribution that occur over time. We present ACE, a framework for semantic segmentation that dynamically adapts to changing environments over time. By aligning the distribution of labeled training data from the original source domain with the distribution of incoming data in a shifted domain, ACE synthesizes labeled training data for environments as it sees them. This stylized data is then used to update a segmentation model so that it performs well in new environments. To avoid forgetting knowledge from past environments, we introduce a memory that stores feature statistics from previously seen domains. These statistics can be used to replay images in any of the previously observed domains, thus preventing catastrophic forgetting. In addition to standard batch training using stochastic gradient descent (SGD), we also experiment with fast adaptation methods based on adaptive meta-learning. Extensive experiments are conducted on two datasets from SYNTHIA; the results demonstrate the effectiveness of the proposed approach when adapting to a number of tasks. |
2111.04798 | Cristina Menghini | Wasu Piriyakulkij and Cristina Menghini and Ross Briden and Nihal V.
Nayak and Jeffrey Zhu and Elaheh Raisi and Stephen H. Bach | TAGLETS: A System for Automatic Semi-Supervised Learning with Auxiliary
Data | Paper published at MLSys 2022. It passed the artifact evaluation
earning two ACM badges: (1) Artifacts Evaluated Functional v1.1 and (2)
Artifacts Available v1.1 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning practitioners often have access to a spectrum of data:
labeled data for the target task (which is often limited), unlabeled data, and
auxiliary data, the many available labeled datasets for other tasks. We
describe TAGLETS, a system built to study techniques for automatically
exploiting all three types of data and creating high-quality, servable
classifiers. The key components of TAGLETS are: (1) auxiliary data organized
according to a knowledge graph, (2) modules encapsulating different methods for
exploiting auxiliary and unlabeled data, and (3) a distillation stage in which
the ensembled modules are combined into a servable model. We compare TAGLETS
with state-of-the-art transfer learning and semi-supervised learning methods on
four image classification tasks. Our study covers a range of settings, varying
the amount of labeled data and the semantic relatedness of the auxiliary data
to the target task. We find that the intelligent incorporation of auxiliary and
unlabeled data into multiple learning techniques enables TAGLETS to match, and
most often significantly surpass, these alternatives. TAGLETS is available as an
open-source system at github.com/BatsResearch/taglets.
| [
{
"created": "Mon, 8 Nov 2021 20:08:45 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Nov 2021 15:33:24 GMT",
"version": "v2"
},
{
"created": "Thu, 5 May 2022 23:49:23 GMT",
"version": "v3"
}
] | 2022-05-09 | [
[
"Piriyakulkij",
"Wasu",
""
],
[
"Menghini",
"Cristina",
""
],
[
"Briden",
"Ross",
""
],
[
"Nayak",
"Nihal V.",
""
],
[
"Zhu",
"Jeffrey",
""
],
[
"Raisi",
"Elaheh",
""
],
[
"Bach",
"Stephen H.",
""
]
] | Machine learning practitioners often have access to a spectrum of data: labeled data for the target task (which is often limited), unlabeled data, and auxiliary data, the many available labeled datasets for other tasks. We describe TAGLETS, a system built to study techniques for automatically exploiting all three types of data and creating high-quality, servable classifiers. The key components of TAGLETS are: (1) auxiliary data organized according to a knowledge graph, (2) modules encapsulating different methods for exploiting auxiliary and unlabeled data, and (3) a distillation stage in which the ensembled modules are combined into a servable model. We compare TAGLETS with state-of-the-art transfer learning and semi-supervised learning methods on four image classification tasks. Our study covers a range of settings, varying the amount of labeled data and the semantic relatedness of the auxiliary data to the target task. We find that the intelligent incorporation of auxiliary and unlabeled data into multiple learning techniques enables TAGLETS to match, and most often significantly surpass, these alternatives. TAGLETS is available as an open-source system at github.com/BatsResearch/taglets. |
2105.04632 | Hunter Priniski | J. Hunter Priniski, Mason McClay, Keith J. Holyoak | Rise of QAnon: A Mental Model of Good and Evil Stews in an Echochamber | 2 figures, 7 pages | null | null | null | cs.SI cs.CY | http://creativecommons.org/licenses/by/4.0/ | The QAnon conspiracy posits that Satan-worshiping Democrats operate a covert
child sex-trafficking operation, which Donald Trump is destined to expose and
annihilate. Emblematic of the ease with which political misconceptions can
spread through social media, QAnon originated in late 2017 and rapidly grew to
shape the political beliefs of millions. To illuminate the process by which a
conspiracy theory spreads, we report two computational studies examining the
social network structure and semantic content of tweets produced by users
central to the early QAnon network on Twitter. Using data mined in the summer
of 2018, we examined over 800,000 tweets about QAnon made by about 100,000
users. The majority of users disseminated rather than produced information,
serving to create an online echochamber. Users appeared to hold a simplistic
mental model in which political events are viewed as a struggle between
antithetical forces, both observed and unobserved, of Good and Evil.
| [
{
"created": "Mon, 10 May 2021 19:34:35 GMT",
"version": "v1"
}
] | 2021-05-12 | [
[
"Priniski",
"J. Hunter",
""
],
[
"McClay",
"Mason",
""
],
[
"Holyoak",
"Keith J.",
""
]
] | The QAnon conspiracy posits that Satan-worshiping Democrats operate a covert child sex-trafficking operation, which Donald Trump is destined to expose and annihilate. Emblematic of the ease with which political misconceptions can spread through social media, QAnon originated in late 2017 and rapidly grew to shape the political beliefs of millions. To illuminate the process by which a conspiracy theory spreads, we report two computational studies examining the social network structure and semantic content of tweets produced by users central to the early QAnon network on Twitter. Using data mined in the summer of 2018, we examined over 800,000 tweets about QAnon made by about 100,000 users. The majority of users disseminated rather than produced information, serving to create an online echochamber. Users appeared to hold a simplistic mental model in which political events are viewed as a struggle between antithetical forces, both observed and unobserved, of Good and Evil. |
2404.13370 | Zihao Yue | Zihao Yue, Yepeng Zhang, Ziheng Wang, Qin Jin | Movie101v2: Improved Movie Narration Benchmark | null | null | null | null | cs.CV cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic movie narration aims to create video-aligned plot descriptions
to assist visually impaired audiences. It differs from standard video
captioning in that it requires not only describing key visual details but also
inferring the plots developed across multiple movie shots, thus posing unique
and ongoing challenges. To advance the development of automatic movie narrating
systems, we first revisit the limitations of existing datasets and develop a
large-scale, bilingual movie narration dataset, Movie101v2. Second, taking into
account the essential difficulties in achieving applicable movie narration, we
break the long-term goal into three progressive stages and tentatively focus on
the initial stages featuring understanding within individual clips. We also
introduce a new narration assessment to align with our staged task goals.
Third, using our new dataset, we baseline several leading large vision-language
models, including GPT-4V, and conduct in-depth investigations into the
challenges current models face for movie narration generation. Our findings
reveal that achieving applicable movie narration generation is a fascinating
goal that requires thorough research.
| [
{
"created": "Sat, 20 Apr 2024 13:15:27 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Yue",
"Zihao",
""
],
[
"Zhang",
"Yepeng",
""
],
[
"Wang",
"Ziheng",
""
],
[
"Jin",
"Qin",
""
]
] | Automatic movie narration aims to create video-aligned plot descriptions to assist visually impaired audiences. It differs from standard video captioning in that it requires not only describing key visual details but also inferring the plots developed across multiple movie shots, thus posing unique and ongoing challenges. To advance the development of automatic movie narrating systems, we first revisit the limitations of existing datasets and develop a large-scale, bilingual movie narration dataset, Movie101v2. Second, taking into account the essential difficulties in achieving applicable movie narration, we break the long-term goal into three progressive stages and tentatively focus on the initial stages featuring understanding within individual clips. We also introduce a new narration assessment to align with our staged task goals. Third, using our new dataset, we baseline several leading large vision-language models, including GPT-4V, and conduct in-depth investigations into the challenges current models face for movie narration generation. Our findings reveal that achieving applicable movie narration generation is a fascinating goal that requires thorough research. |
2404.13749 | Xinyu Huang | Xinyu Huang and Shisheng Hu and Mushu Li and Cheng Huang and Xuemin
Shen | Efficient Digital Twin Data Processing for Low-Latency Multicast Short
Video Streaming | 6 pages, 6 figures, submitted to ICCC 2024 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel efficient digital twin (DT) data processing
scheme to reduce service latency for multicast short video streaming.
Particularly, DT is constructed to emulate and analyze user status for
multicast group update and swipe feature abstraction. Then, a precise
measurement model of DT data processing is developed to characterize the
relationship among DT model size, user dynamics, and user clustering accuracy.
A service latency model, consisting of DT data processing delay, video
transcoding delay, and multicast transmission delay, is constructed by
incorporating the impact of user clustering accuracy. Finally, a joint
optimization problem of DT model size selection and bandwidth allocation is
formulated to minimize the service latency. To efficiently solve this problem,
a diffusion-based resource management algorithm is proposed, which utilizes the
denoising technique to improve the action-generation process in the deep
reinforcement learning algorithm. Simulation results based on the real-world
dataset demonstrate that the proposed DT data processing scheme outperforms
benchmark schemes in terms of service latency.
| [
{
"created": "Sun, 21 Apr 2024 19:12:22 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Huang",
"Xinyu",
""
],
[
"Hu",
"Shisheng",
""
],
[
"Li",
"Mushu",
""
],
[
"Huang",
"Cheng",
""
],
[
"Shen",
"Xuemin",
""
]
] | In this paper, we propose a novel efficient digital twin (DT) data processing scheme to reduce service latency for multicast short video streaming. Particularly, DT is constructed to emulate and analyze user status for multicast group update and swipe feature abstraction. Then, a precise measurement model of DT data processing is developed to characterize the relationship among DT model size, user dynamics, and user clustering accuracy. A service latency model, consisting of DT data processing delay, video transcoding delay, and multicast transmission delay, is constructed by incorporating the impact of user clustering accuracy. Finally, a joint optimization problem of DT model size selection and bandwidth allocation is formulated to minimize the service latency. To efficiently solve this problem, a diffusion-based resource management algorithm is proposed, which utilizes the denoising technique to improve the action-generation process in the deep reinforcement learning algorithm. Simulation results based on the real-world dataset demonstrate that the proposed DT data processing scheme outperforms benchmark schemes in terms of service latency. |
2309.11248 | Xuyang Chen | Xuyang Chen, Dong Wang, Konrad Schindler, Mingwei Sun, Yongliang Wang,
Nicolo Savioli, Liqiu Meng | Box2Poly: Memory-Efficient Polygon Prediction of Arbitrarily Shaped and
Rotated Text | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Transformer-based text detection techniques have sought to predict
polygons by encoding the coordinates of individual boundary vertices using
distinct query features. However, this approach incurs a significant memory
overhead and struggles to effectively capture the intricate relationships
between vertices belonging to the same instance. Consequently, irregular text
layouts often lead to the prediction of outlined vertices, diminishing the
quality of results. To address these challenges, we present an innovative
approach rooted in Sparse R-CNN: a cascade decoding pipeline for polygon
prediction. Our method ensures precision by iteratively refining polygon
predictions, considering both the scale and location of preceding results.
Leveraging this stabilized regression pipeline, even employing just a single
feature vector to guide polygon instance regression yields promising detection
results. Simultaneously, the leverage of instance-level feature proposal
substantially enhances memory efficiency (>50% less vs. the state-of-the-art
method DPText-DETR) and reduces inference speed (>40% less vs. DPText-DETR)
with minor performance drop on benchmarks.
| [
{
"created": "Wed, 20 Sep 2023 12:19:07 GMT",
"version": "v1"
}
] | 2023-09-21 | [
[
"Chen",
"Xuyang",
""
],
[
"Wang",
"Dong",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Sun",
"Mingwei",
""
],
[
"Wang",
"Yongliang",
""
],
[
"Savioli",
"Nicolo",
""
],
[
"Meng",
"Liqiu",
""
]
] | Recently, Transformer-based text detection techniques have sought to predict polygons by encoding the coordinates of individual boundary vertices using distinct query features. However, this approach incurs a significant memory overhead and struggles to effectively capture the intricate relationships between vertices belonging to the same instance. Consequently, irregular text layouts often lead to the prediction of outlined vertices, diminishing the quality of results. To address these challenges, we present an innovative approach rooted in Sparse R-CNN: a cascade decoding pipeline for polygon prediction. Our method ensures precision by iteratively refining polygon predictions, considering both the scale and location of preceding results. Leveraging this stabilized regression pipeline, even employing just a single feature vector to guide polygon instance regression yields promising detection results. Simultaneously, the leverage of instance-level feature proposal substantially enhances memory efficiency (>50% less vs. the state-of-the-art method DPText-DETR) and reduces inference speed (>40% less vs. DPText-DETR) with minor performance drop on benchmarks. |
2303.15754 | Jianping Zhang | Jianping Zhang, Yizhan Huang, Weibin Wu, Michael R. Lyu | Transferable Adversarial Attacks on Vision Transformers with Token
Gradient Regularization | CVPR 2023, Code is available at https://github.com/jpzhang1810/TGR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision transformers (ViTs) have been successfully deployed in a variety of
computer vision tasks, but they are still vulnerable to adversarial samples.
Transfer-based attacks use a local model to generate adversarial samples and
directly transfer them to attack a target black-box model. The high efficiency
of transfer-based attacks makes it a severe security threat to ViT-based
applications. Therefore, it is vital to design effective transfer-based attacks
to identify the deficiencies of ViTs beforehand in security-sensitive
scenarios. Existing efforts generally focus on regularizing the input gradients
to stabilize the updated direction of adversarial samples. However, the
variance of the back-propagated gradients in intermediate blocks of ViTs may
still be large, which may make the generated adversarial samples focus on some
model-specific features and get stuck in poor local optima. To overcome the
shortcomings of existing approaches, we propose the Token Gradient
Regularization (TGR) method. According to the structural characteristics of
ViTs, TGR reduces the variance of the back-propagated gradient in each internal
block of ViTs in a token-wise manner and utilizes the regularized gradient to
generate adversarial samples. Extensive experiments on attacking both ViTs and
CNNs confirm the superiority of our approach. Notably, compared to the
state-of-the-art transfer-based attacks, our TGR offers a performance
improvement of 8.8% on average.
| [
{
"created": "Tue, 28 Mar 2023 06:23:17 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jun 2023 07:25:12 GMT",
"version": "v2"
}
] | 2023-06-06 | [
[
"Zhang",
"Jianping",
""
],
[
"Huang",
"Yizhan",
""
],
[
"Wu",
"Weibin",
""
],
[
"Lyu",
"Michael R.",
""
]
] | Vision transformers (ViTs) have been successfully deployed in a variety of computer vision tasks, but they are still vulnerable to adversarial samples. Transfer-based attacks use a local model to generate adversarial samples and directly transfer them to attack a target black-box model. The high efficiency of transfer-based attacks makes it a severe security threat to ViT-based applications. Therefore, it is vital to design effective transfer-based attacks to identify the deficiencies of ViTs beforehand in security-sensitive scenarios. Existing efforts generally focus on regularizing the input gradients to stabilize the updated direction of adversarial samples. However, the variance of the back-propagated gradients in intermediate blocks of ViTs may still be large, which may make the generated adversarial samples focus on some model-specific features and get stuck in poor local optima. To overcome the shortcomings of existing approaches, we propose the Token Gradient Regularization (TGR) method. According to the structural characteristics of ViTs, TGR reduces the variance of the back-propagated gradient in each internal block of ViTs in a token-wise manner and utilizes the regularized gradient to generate adversarial samples. Extensive experiments on attacking both ViTs and CNNs confirm the superiority of our approach. Notably, compared to the state-of-the-art transfer-based attacks, our TGR offers a performance improvement of 8.8% on average. |
2404.11045 | James Y. Huang | James Y. Huang, Wenxuan Zhou, Fei Wang, Fred Morstatter, Sheng Zhang,
Hoifung Poon, Muhao Chen | Offset Unlearning for Large Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the strong capabilities of Large Language Models (LLMs) to acquire
knowledge from their training corpora, the memorization of sensitive
information in the corpora such as copyrighted, harmful, and private content
has led to ethical and legal concerns. In response to these challenges,
unlearning has emerged as a potential remedy for LLMs affected by problematic
training data. However, previous unlearning techniques are either not
applicable to black-box LLMs due to required access to model internal weights,
or violate data protection principles by retaining sensitive data for
inference-time correction. We propose $\delta$-unlearning, an offset unlearning
framework for black-box LLMs. Instead of tuning the black-box LLM itself,
$\delta$-unlearning learns the logit offset needed for unlearning by
contrasting the logits from a pair of smaller models. Experiments demonstrate
that $\delta$-unlearning can effectively unlearn target data while maintaining
similar or even stronger performance on general out-of-forget-scope tasks.
$\delta$-unlearning also effectively incorporates different unlearning
algorithms, making our approach a versatile solution to adapting various
existing unlearning algorithms to black-box LLMs.
| [
{
"created": "Wed, 17 Apr 2024 03:39:51 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Huang",
"James Y.",
""
],
[
"Zhou",
"Wenxuan",
""
],
[
"Wang",
"Fei",
""
],
[
"Morstatter",
"Fred",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Poon",
"Hoifung",
""
],
[
"Chen",
"Muhao",
""
]
] | Despite the strong capabilities of Large Language Models (LLMs) to acquire knowledge from their training corpora, the memorization of sensitive information in the corpora such as copyrighted, harmful, and private content has led to ethical and legal concerns. In response to these challenges, unlearning has emerged as a potential remedy for LLMs affected by problematic training data. However, previous unlearning techniques are either not applicable to black-box LLMs due to required access to model internal weights, or violate data protection principles by retaining sensitive data for inference-time correction. We propose $\delta$-unlearning, an offset unlearning framework for black-box LLMs. Instead of tuning the black-box LLM itself, $\delta$-unlearning learns the logit offset needed for unlearning by contrasting the logits from a pair of smaller models. Experiments demonstrate that $\delta$-unlearning can effectively unlearn target data while maintaining similar or even stronger performance on general out-of-forget-scope tasks. $\delta$-unlearning also effectively incorporates different unlearning algorithms, making our approach a versatile solution to adapting various existing unlearning algorithms to black-box LLMs. |
2006.15374 | Debashmita Poddar | Gianlorenzo D'Angelo, Debashmita Poddar, Cosimo Vinci | Better Bounds on the Adaptivity Gap of Influence Maximization under
Full-adoption Feedback | 18 pages | The 35th AAAI Conference on Artificial Intelligence (AAAI 2021) | null | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the influence maximization (IM) problem, we are given a social network and
a budget $k$, and we look for a set of $k$ nodes in the network, called seeds,
that maximize the expected number of nodes that are reached by an influence
cascade generated by the seeds, according to some stochastic model for
influence diffusion. In this paper, we study the adaptive IM, where the nodes
are selected sequentially one by one, and the decision on the $i$th seed can be
based on the observed cascade produced by the first $i-1$ seeds. We focus on
the full-adoption feedback in which we can observe the entire cascade of each
previously selected seed and on the independent cascade model where each edge
is associated with an independent probability of diffusing influence.
Our main result is the first sub-linear upper bound that holds for any graph.
Specifically, we show that the adaptivity gap is upper-bounded by $\lceil
n^{1/3}\rceil $, where $n$ is the number of nodes in the graph. Moreover, we
improve over the known upper bound for in-arborescences from
$\frac{2e}{e-1}\approx 3.16$ to $\frac{2e^2}{e^2-1}\approx 2.31$. Finally, we
study $\alpha$-bounded graphs, a class of undirected graphs in which the sum of
node degrees higher than two is at most $\alpha$, and show that the adaptivity
gap is upper-bounded by $\sqrt{\alpha}+O(1)$. Moreover, we show that in
0-bounded graphs, i.e. undirected graphs in which each connected component is a
path or a cycle, the adaptivity gap is at most $\frac{3e^3}{e^3-1}\approx
3.16$. To prove our bounds, we introduce new techniques to relate adaptive
policies with non-adaptive ones that might be of independent interest.
| [
{
"created": "Sat, 27 Jun 2020 14:43:34 GMT",
"version": "v1"
}
] | 2021-05-11 | [
[
"D'Angelo",
"Gianlorenzo",
""
],
[
"Poddar",
"Debashmita",
""
],
[
"Vinci",
"Cosimo",
""
]
] | In the influence maximization (IM) problem, we are given a social network and a budget $k$, and we look for a set of $k$ nodes in the network, called seeds, that maximize the expected number of nodes that are reached by an influence cascade generated by the seeds, according to some stochastic model for influence diffusion. In this paper, we study the adaptive IM, where the nodes are selected sequentially one by one, and the decision on the $i$th seed can be based on the observed cascade produced by the first $i-1$ seeds. We focus on the full-adoption feedback in which we can observe the entire cascade of each previously selected seed and on the independent cascade model where each edge is associated with an independent probability of diffusing influence. Our main result is the first sub-linear upper bound that holds for any graph. Specifically, we show that the adaptivity gap is upper-bounded by $\lceil n^{1/3}\rceil $, where $n$ is the number of nodes in the graph. Moreover, we improve over the known upper bound for in-arborescences from $\frac{2e}{e-1}\approx 3.16$ to $\frac{2e^2}{e^2-1}\approx 2.31$. Finally, we study $\alpha$-bounded graphs, a class of undirected graphs in which the sum of node degrees higher than two is at most $\alpha$, and show that the adaptivity gap is upper-bounded by $\sqrt{\alpha}+O(1)$. Moreover, we show that in 0-bounded graphs, i.e. undirected graphs in which each connected component is a path or a cycle, the adaptivity gap is at most $\frac{3e^3}{e^3-1}\approx 3.16$. To prove our bounds, we introduce new techniques to relate adaptive policies with non-adaptive ones that might be of independent interest. |
2406.15565 | Paridhi Singh | Paridhi Singh, Arun Kumar | Unseen Object Reasoning with Shared Appearance Cues | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | This paper introduces an innovative approach to open world recognition (OWR),
where we leverage knowledge acquired from known objects to address the
recognition of previously unseen objects. The traditional method of object
modeling relies on supervised learning with strict closed-set assumptions,
presupposing that objects encountered during inference are already known at the
training phase. However, this assumption proves inadequate for real-world
scenarios due to the impracticality of accounting for the immense diversity of
objects. Our hypothesis posits that object appearances can be represented as
collections of "shareable" mid-level features, arranged in constellations to
form object instances. By adopting this framework, we can efficiently dissect
and represent both known and unknown objects in terms of their appearance cues.
Our paper introduces a straightforward yet elegant method for modeling novel or
unseen objects, utilizing established appearance cues and accounting for
inherent uncertainties. This representation not only enables the detection of
out-of-distribution objects or novel categories among unseen objects but also
facilitates a deeper level of reasoning, empowering the identification of the
superclass to which an unknown instance belongs. This novel approach holds
promise for advancing open world recognition in diverse applications.
| [
{
"created": "Fri, 21 Jun 2024 18:04:13 GMT",
"version": "v1"
}
] | 2024-06-25 | [
[
"Singh",
"Paridhi",
""
],
[
"Kumar",
"Arun",
""
]
] | This paper introduces an innovative approach to open world recognition (OWR), where we leverage knowledge acquired from known objects to address the recognition of previously unseen objects. The traditional method of object modeling relies on supervised learning with strict closed-set assumptions, presupposing that objects encountered during inference are already known at the training phase. However, this assumption proves inadequate for real-world scenarios due to the impracticality of accounting for the immense diversity of objects. Our hypothesis posits that object appearances can be represented as collections of "shareable" mid-level features, arranged in constellations to form object instances. By adopting this framework, we can efficiently dissect and represent both known and unknown objects in terms of their appearance cues. Our paper introduces a straightforward yet elegant method for modeling novel or unseen objects, utilizing established appearance cues and accounting for inherent uncertainties. This representation not only enables the detection of out-of-distribution objects or novel categories among unseen objects but also facilitates a deeper level of reasoning, empowering the identification of the superclass to which an unknown instance belongs. This novel approach holds promise for advancing open world recognition in diverse applications. |
2310.09166 | Seth Benson | Seth P. Benson and Iain J. Cruickshank | Developing a Natural Language Understanding Model to Characterize Cable
News Bias | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Media bias has been extensively studied by both social and computational
sciences. However, current work still has a large reliance on human input and
subjective assessment to label biases. This is especially true for cable news
research. To address these issues, we develop an unsupervised machine learning
method to characterize the bias of cable news programs without any human input.
This method relies on the analysis of what topics are mentioned through Named
Entity Recognition and how those topics are discussed through Stance Analysis
in order to cluster programs with similar biases together. Applying our method
to 2020 cable news transcripts, we find that program clusters are consistent
over time and roughly correspond to the cable news network of the program. This
method reveals the potential for future tools to objectively assess media bias
and characterize unfamiliar media environments.
| [
{
"created": "Fri, 13 Oct 2023 15:01:17 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2023 22:37:58 GMT",
"version": "v2"
}
] | 2023-10-19 | [
[
"Benson",
"Seth P.",
""
],
[
"Cruickshank",
"Iain J.",
""
]
] | Media bias has been extensively studied by both social and computational sciences. However, current work still has a large reliance on human input and subjective assessment to label biases. This is especially true for cable news research. To address these issues, we develop an unsupervised machine learning method to characterize the bias of cable news programs without any human input. This method relies on the analysis of what topics are mentioned through Named Entity Recognition and how those topics are discussed through Stance Analysis in order to cluster programs with similar biases together. Applying our method to 2020 cable news transcripts, we find that program clusters are consistent over time and roughly correspond to the cable news network of the program. This method reveals the potential for future tools to objectively assess media bias and characterize unfamiliar media environments. |
2312.03721 | Simon Lermen | Simon Lermen and Ond\v{r}ej Kvapil | Exploring the Robustness of Model-Graded Evaluations and Automated
Interpretability | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | There has been increasing interest in evaluations of language models for a
variety of risks and characteristics. Evaluations relying on natural language
understanding for grading can often be performed at scale by using other
language models. We test the robustness of these model-graded evaluations to
injections on different datasets including a new Deception Eval. These
injections resemble direct communication between the testee and the evaluator
to change their grading. We extrapolate that future, more intelligent models
might manipulate or cooperate with their evaluation model. We find significant
susceptibility to these injections in state-of-the-art commercial models on all
examined evaluations. Furthermore, similar injections can be used on automated
interpretability frameworks to produce misleading model-written explanations.
The results inspire future work and should caution against unqualified trust in
evaluations and automated interpretability.
| [
{
"created": "Sun, 26 Nov 2023 17:11:55 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Dec 2023 11:16:39 GMT",
"version": "v2"
}
] | 2023-12-11 | [
[
"Lermen",
"Simon",
""
],
[
"Kvapil",
"Ondřej",
""
]
] | There has been increasing interest in evaluations of language models for a variety of risks and characteristics. Evaluations relying on natural language understanding for grading can often be performed at scale by using other language models. We test the robustness of these model-graded evaluations to injections on different datasets including a new Deception Eval. These injections resemble direct communication between the testee and the evaluator to change their grading. We extrapolate that future, more intelligent models might manipulate or cooperate with their evaluation model. We find significant susceptibility to these injections in state-of-the-art commercial models on all examined evaluations. Furthermore, similar injections can be used on automated interpretability frameworks to produce misleading model-written explanations. The results inspire future work and should caution against unqualified trust in evaluations and automated interpretability. |
2306.06872 | Hao Sun | Hao Sun, Yang Li, Liwei Deng, Bowen Li, Binyuan Hui, Binhua Li, Yunshi
Lan, Yan Zhang, Yongbin Li | History Semantic Graph Enhanced Conversational KBQA with Temporal
Information Modeling | Accepted to ACL 2023 Main Conference | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Context information modeling is an important task in conversational KBQA.
However, existing methods usually assume the independence of utterances and
model them in isolation. In this paper, we propose a History Semantic Graph
Enhanced KBQA model (HSGE) that is able to effectively model long-range
semantic dependencies in conversation history while maintaining low
computational cost. The framework incorporates a context-aware encoder, which
employs a dynamic memory decay mechanism and models context at different levels
of granularity. We evaluate HSGE on a widely used benchmark dataset for complex
sequential question answering. Experimental results demonstrate that it
outperforms existing baselines averaged on all question types.
| [
{
"created": "Mon, 12 Jun 2023 05:10:58 GMT",
"version": "v1"
}
] | 2023-06-13 | [
[
"Sun",
"Hao",
""
],
[
"Li",
"Yang",
""
],
[
"Deng",
"Liwei",
""
],
[
"Li",
"Bowen",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Li",
"Binhua",
""
],
[
"Lan",
"Yunshi",
""
],
[
"Zhang",
"Yan",
""
],
[
"Li",
"Yongbin",
""
]
] | Context information modeling is an important task in conversational KBQA. However, existing methods usually assume the independence of utterances and model them in isolation. In this paper, we propose a History Semantic Graph Enhanced KBQA model (HSGE) that is able to effectively model long-range semantic dependencies in conversation history while maintaining low computational cost. The framework incorporates a context-aware encoder, which employs a dynamic memory decay mechanism and models context at different levels of granularity. We evaluate HSGE on a widely used benchmark dataset for complex sequential question answering. Experimental results demonstrate that it outperforms existing baselines averaged on all question types. |
2404.03344 | Juri Opitz | Juri Opitz | Schroedinger's Threshold: When the AUC doesn't predict Accuracy | LREC-COLING 2024, added more details on data setups, fixed typo | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Area Under Curve measure (AUC) seems apt to evaluate and compare diverse
models, possibly without calibration. An important example of AUC application
is the evaluation and benchmarking of models that predict faithfulness of
generated text. But we show that the AUC yields an academic and optimistic
notion of accuracy that can misalign with the actual accuracy observed in
application, yielding significant changes in benchmark rankings. To paint a
more realistic picture of downstream model performance (and prepare a model for
actual application), we explore different calibration modes, testing
calibration data and method.
| [
{
"created": "Thu, 4 Apr 2024 10:18:03 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 10:33:40 GMT",
"version": "v2"
}
] | 2024-05-28 | [
[
"Opitz",
"Juri",
""
]
] | The Area Under Curve measure (AUC) seems apt to evaluate and compare diverse models, possibly without calibration. An important example of AUC application is the evaluation and benchmarking of models that predict faithfulness of generated text. But we show that the AUC yields an academic and optimistic notion of accuracy that can misalign with the actual accuracy observed in application, yielding significant changes in benchmark rankings. To paint a more realistic picture of downstream model performance (and prepare a model for actual application), we explore different calibration modes, testing calibration data and method. |
1804.09160 | Xin Wang | Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang | No Metrics Are Perfect: Adversarial Reward Learning for Visual
Storytelling | ACL 2018. 15 pages, 10 figures, 4 tables, with supplementary material | null | null | null | cs.CL cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though impressive results have been achieved in visual captioning, the task
of generating abstract stories from photo streams is still a little-tapped
problem. Different from captions, stories have more expressive language styles
and contain many imaginary concepts that do not appear in the images. Thus it
poses challenges to behavioral cloning algorithms. Furthermore, due to the
limitations of automatic metrics on evaluating story quality, reinforcement
learning methods with hand-crafted rewards also face difficulties in gaining an
overall performance boost. Therefore, we propose an Adversarial REward Learning
(AREL) framework to learn an implicit reward function from human
demonstrations, and then optimize policy search with the learned reward
function. Though automatic evaluation indicates slight performance boost over
state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation
shows that our approach achieves significant improvement in generating more
human-like stories than SOTA systems.
| [
{
"created": "Tue, 24 Apr 2018 17:41:24 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Jul 2018 00:15:14 GMT",
"version": "v2"
}
] | 2018-07-10 | [
[
"Wang",
"Xin",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Wang",
"Yuan-Fang",
""
],
[
"Wang",
"William Yang",
""
]
] | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. |
2403.20046 | Yongqi Tong | Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei Teng, Jingbo Shang | Can LLMs Learn from Previous Mistakes? Investigating LLMs' Errors to
Boost for Reasoning | The 62nd Annual Meeting of the Association for Computational
Linguistics (ACL 2024) - Main Conference | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works have shown the benefits to LLMs from fine-tuning golden-standard
Chain-of-Thought (CoT) rationales or using them as correct examples in few-shot
prompting. While humans can indeed imitate correct examples, learning from our
mistakes is another vital aspect of human cognition. Hence, a question
naturally arises: \textit{can LLMs learn and benefit from their mistakes,
especially for their reasoning? } This study investigates this problem from
both the prompting and model-tuning perspectives. We begin by introducing
\textsc{CoTErrorSet}, a new benchmark with 609,432 questions, each designed
with both correct and error references, and demonstrating the types and reasons
for making such mistakes. To explore the effectiveness of those mistakes, we
design two methods: (1) \textbf{Self-rethinking} prompting guides LLMs to
rethink whether they have made similar previous mistakes; and (2)
\textbf{Mistake tuning} involves finetuning models in both correct and
incorrect reasoning domains, rather than only tuning models to learn ground
truth in traditional methodology. We conduct a series of experiments to prove
LLMs can obtain benefits from mistakes in both directions. Our two methods
offer potentially cost-effective strategies by leveraging errors to enhance
reasoning capabilities, which costs significantly less than creating
meticulously hand-crafted golden references. We ultimately make a thorough
analysis of the reasons behind LLMs' errors, which provides directions that
future research needs to overcome. \textsc{CoTErrorSet} will be published soon
on \texttt{\url{https://github.com/YookiTong/Learn-from-Mistakes-CotErrorSet}}.
| [
{
"created": "Fri, 29 Mar 2024 08:30:34 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2024 06:27:50 GMT",
"version": "v2"
}
] | 2024-06-10 | [
[
"Tong",
"Yongqi",
""
],
[
"Li",
"Dawei",
""
],
[
"Wang",
"Sizhe",
""
],
[
"Wang",
"Yujia",
""
],
[
"Teng",
"Fei",
""
],
[
"Shang",
"Jingbo",
""
]
] | Recent works have shown the benefits to LLMs from fine-tuning golden-standard Chain-of-Thought (CoT) rationales or using them as correct examples in few-shot prompting. While humans can indeed imitate correct examples, learning from our mistakes is another vital aspect of human cognition. Hence, a question naturally arises: \textit{can LLMs learn and benefit from their mistakes, especially for their reasoning? } This study investigates this problem from both the prompting and model-tuning perspectives. We begin by introducing \textsc{CoTErrorSet}, a new benchmark with 609,432 questions, each designed with both correct and error references, and demonstrating the types and reasons for making such mistakes. To explore the effectiveness of those mistakes, we design two methods: (1) \textbf{Self-rethinking} prompting guides LLMs to rethink whether they have made similar previous mistakes; and (2) \textbf{Mistake tuning} involves finetuning models in both correct and incorrect reasoning domains, rather than only tuning models to learn ground truth in traditional methodology. We conduct a series of experiments to prove LLMs can obtain benefits from mistakes in both directions. Our two methods offer potentially cost-effective strategies by leveraging errors to enhance reasoning capabilities, which costs significantly less than creating meticulously hand-crafted golden references. We ultimately make a thorough analysis of the reasons behind LLMs' errors, which provides directions that future research needs to overcome. \textsc{CoTErrorSet} will be published soon on \texttt{\url{https://github.com/YookiTong/Learn-from-Mistakes-CotErrorSet}}. |
1902.06034 | Michihiro Yasunaga | Michihiro Yasunaga, John Lafferty | TopicEq: A Joint Topic and Mathematical Equation Model for Scientific
Texts | AAAI 2019 | null | null | null | cs.IR cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientific documents rely on both mathematics and text to communicate ideas.
Inspired by the topical correspondence between mathematical equations and word
contexts observed in scientific texts, we propose a novel topic model that
jointly generates mathematical equations and their surrounding text (TopicEq).
Using an extension of the correlated topic model, the context is generated from
a mixture of latent topics, and the equation is generated by an RNN that
depends on the latent topic activations. To experiment with this model, we
create a corpus of 400K equation-context pairs extracted from a range of
scientific articles from arXiv, and fit the model using a variational
autoencoder approach. Experimental results show that this joint model
significantly outperforms existing topic models and equation models for
scientific texts. Moreover, we qualitatively show that the model effectively
captures the relationship between topics and mathematics, enabling novel
applications such as topic-aware equation generation, equation topic inference,
and topic-aware alignment of mathematical symbols and words.
| [
{
"created": "Sat, 16 Feb 2019 03:39:51 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Feb 2019 16:55:23 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Apr 2019 21:24:05 GMT",
"version": "v3"
}
] | 2019-04-29 | [
[
"Yasunaga",
"Michihiro",
""
],
[
"Lafferty",
"John",
""
]
] | Scientific documents rely on both mathematics and text to communicate ideas. Inspired by the topical correspondence between mathematical equations and word contexts observed in scientific texts, we propose a novel topic model that jointly generates mathematical equations and their surrounding text (TopicEq). Using an extension of the correlated topic model, the context is generated from a mixture of latent topics, and the equation is generated by an RNN that depends on the latent topic activations. To experiment with this model, we create a corpus of 400K equation-context pairs extracted from a range of scientific articles from arXiv, and fit the model using a variational autoencoder approach. Experimental results show that this joint model significantly outperforms existing topic models and equation models for scientific texts. Moreover, we qualitatively show that the model effectively captures the relationship between topics and mathematics, enabling novel applications such as topic-aware equation generation, equation topic inference, and topic-aware alignment of mathematical symbols and words. |
1302.5442 | Weisheng Si | Weisheng Si | Are Yao Graph and Theta Graph Void Free? | 4 pages | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Greedy Forwarding algorithm is a widely-used routing algorithm for wireless
networks. However, it can fail if network topologies (usually modeled by
geometric graphs) contain voids. Since Yao Graph and Theta Graph are two types
of geometric graphs exploited to construct wireless network topologies, this
paper studies whether these two types of graphs can contain voids.
Specifically, this paper shows that when the number of cones in a Yao Graph or
Theta Graph is less than 6, Yao Graph and Theta Graph can have voids, but when
the number of cones equals or exceeds 6, Yao Graph and Theta Graph are free of
voids.
| [
{
"created": "Thu, 21 Feb 2013 22:09:08 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jul 2013 23:04:20 GMT",
"version": "v2"
}
] | 2013-07-04 | [
[
"Si",
"Weisheng",
""
]
] | Greedy Forwarding algorithm is a widely-used routing algorithm for wireless networks. However, it can fail if network topologies (usually modeled by geometric graphs) contain voids. Since Yao Graph and Theta Graph are two types of geometric graphs exploited to construct wireless network topologies, this paper studies whether these two types of graphs can contain voids. Specifically, this paper shows that when the number of cones in a Yao Graph or Theta Graph is less than 6, Yao Graph and Theta Graph can have voids, but when the number of cones equals or exceeds 6, Yao Graph and Theta Graph are free of voids. |
1011.1531 | Jaydip Sen | Jaydip Sen | An Agent-Based Intrusion Detection System for Local Area Networks | 13 pages, 5 figures, 2 tables | International Journal of Communication Networks and Information
Security (IJCNIS), Vol 2, No 2, August 2010 | null | null | cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since it is impossible to predict and identify all the vulnerabilities of a
network beforehand, and penetration into a system by malicious intruders cannot
always be prevented, intrusion detection systems (IDSs) are essential entities
to ensure the security of a networked system. To be effective in carrying out
their functions, the IDSs need to be accurate, adaptive, and extensible. Given
these stringent requirements and the high level of vulnerabilities of
present-day networks, the design of an IDS has become a very challenging
task. Although extensive research has been done on intrusion detection in a
distributed environment, distributed IDSs suffer from a number of drawbacks,
e.g., high rates of false positives, low detection efficiency etc. In this
paper, the design of a distributed IDS is proposed that consists of a group of
autonomous and cooperating agents. In addition to its ability to detect
attacks, the system is capable of identifying and isolating compromised nodes
in the network thereby introducing fault-tolerance in its operations. The
experiments conducted on the system have shown that it has a high detection
efficiency and low false positives compared to some of the currently existing
systems.
| [
{
"created": "Sat, 6 Nov 2010 01:05:20 GMT",
"version": "v1"
}
] | 2010-11-13 | [
[
"Sen",
"Jaydip",
""
]
] | Since it is impossible to predict and identify all the vulnerabilities of a network beforehand, and penetration into a system by malicious intruders cannot always be prevented, intrusion detection systems (IDSs) are essential entities to ensure the security of a networked system. To be effective in carrying out their functions, the IDSs need to be accurate, adaptive, and extensible. Given these stringent requirements and the high level of vulnerabilities of present-day networks, the design of an IDS has become a very challenging task. Although extensive research has been done on intrusion detection in a distributed environment, distributed IDSs suffer from a number of drawbacks, e.g., high rates of false positives, low detection efficiency etc. In this paper, the design of a distributed IDS is proposed that consists of a group of autonomous and cooperating agents. In addition to its ability to detect attacks, the system is capable of identifying and isolating compromised nodes in the network thereby introducing fault-tolerance in its operations. The experiments conducted on the system have shown that it has a high detection efficiency and low false positives compared to some of the currently existing systems. |
1509.02840 | Andrew Knyazev | Andrew Knyazev, Peizhen Zhu, Stefano Di Cairano | Explicit model predictive control accuracy analysis | 6 pages, 7 figures. Accepted to IEEE CDC 2015 | 2015 54th IEEE Conference on Decision and Control (CDC), Osaka,
2015, pp. 2389-2394 | 10.1109/CDC.2015.7402565 | MERL TR2015-149 | cs.SY math.NA math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model Predictive Control (MPC) can efficiently control constrained systems in
real-time applications. MPC feedback law for a linear system with linear
inequality constraints can be explicitly computed off-line, which results in an
off-line partition of the state space into non-overlapped convex regions, with
affine control laws associated to each region of the partition. An actual
implementation of this explicit MPC in low cost micro-controllers requires the
data to be "quantized", i.e. represented with a small number of memory bits. An
aggressive quantization decreases the number of bits and the controller
manufacturing costs, and may increase the speed of the controller, but reduces
accuracy of the control input computation. We derive upper bounds for the
absolute error in the control depending on the number of quantization bits and
system parameters. The bounds can be used to determine how many quantization
bits are needed in order to guarantee a specific level of accuracy in the
control input.
| [
{
"created": "Wed, 9 Sep 2015 16:22:08 GMT",
"version": "v1"
}
] | 2016-06-13 | [
[
"Knyazev",
"Andrew",
""
],
[
"Zhu",
"Peizhen",
""
],
[
"Di Cairano",
"Stefano",
""
]
] | Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapped convex regions, with affine control laws associated to each region of the partition. An actual implementation of this explicit MPC in low cost micro-controllers requires the data to be "quantized", i.e. represented with a small number of memory bits. An aggressive quantization decreases the number of bits and the controller manufacturing costs, and may increase the speed of the controller, but reduces accuracy of the control input computation. We derive upper bounds for the absolute error in the control depending on the number of quantization bits and system parameters. The bounds can be used to determine how many quantization bits are needed in order to guarantee a specific level of accuracy in the control input. |
2407.08334 | TianChen Wang | TianChen Wang | ADMM Based Semi-Structured Pattern Pruning Framework For Transformer | 11 pages, 5 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NLP (natural language processing) has achieved great success through the
transformer model. However, the model has hundreds of millions or billions of
parameters, which is a huge burden for its deployment on personal computers or
small-scale servers. To deal with this, we either make the model's weight
matrix relatively sparser, or compress the attention layer. Pattern pruning,
one of the most important pruning methods, permits selecting a fixed number of
parameters in each divided pattern block and prunes them. However, the effect
of pattern pruning is strictly limited by the sparsity within a region of
weights in each layer. In this paper, we first introduce an Alternating
Direction Method of Multipliers (ADMM) based pattern pruning framework to
reshape the distribution of activation maps. Specifically, we propose to
formulate pattern pruning on the transformer as a constrained optimization and
use ADMM to optimize the problem. In this way, the initially dense feature
maps are transformed into rather regionally sparsified ones. Therefore, we can
then achieve a higher compression ratio with better performance based on the
pattern pruning method. Additionally, this paper provides a theoretical
derivation of the ADMM with local sparsity. Finally, we also extend the
proposed ADMM based framework with SR-STE to demonstrate its generalization
and to avoid the gradient vanishing problem. We conduct extensive experiments
on classification tasks over GLUE datasets. Significantly, we achieve a 50%
compression ratio while maintaining an overall score of 80.1 on the GLUE dataset.
| [
{
"created": "Thu, 11 Jul 2024 09:35:08 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2024 03:36:01 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Jul 2024 03:40:43 GMT",
"version": "v3"
}
] | 2024-07-23 | [
[
"Wang",
"TianChen",
""
]
] | NLP (natural language processing) has achieved great success through the transformer model. However, such models have hundreds of millions or billions of parameters, which is a huge burden for deployment on a personal computer or small-scale server. To deal with this, we either make the model's weight matrices sparser or compress the attention layer. Pattern pruning, one of the most important pruning methods, selects a fixed number of parameters in each divided pattern block and prunes the rest. However, the effect of pattern pruning is strictly limited by the sparsity within a region of weights in each layer. In this paper, we first introduce an Alternating Direction Method of Multipliers (ADMM) based pattern pruning framework to reshape the distribution of the activation map. Specifically, we formulate pattern pruning on the transformer as a constrained optimization problem and use ADMM to solve it. In this way, the initially dense feature maps are transformed into regionally sparsified ones, so we can achieve a higher compression ratio with better performance based on the pattern pruning method. Additionally, this paper provides a theoretical derivation of ADMM with local sparsity. Finally, we also extend the proposed ADMM based framework with SR-STE to demonstrate its generalization and to avoid the gradient vanishing problem. We conduct extensive experiments on classification tasks over the GLUE datasets. Significantly, we achieve a 50% compression ratio while maintaining an overall score of 80.1 on the GLUE dataset. |
2209.13017 | Karish Grover | Karish Grover, S.M. Phaneendra Angara, Md. Shad Akhtar, Tanmoy
Chakraborty | Public Wisdom Matters! Discourse-Aware Hyperbolic Fourier Co-Attention
for Social-Text Classification | NeurIPS 2022 | null | null | null | cs.CL cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | Social media has become the fulcrum of all forms of communication.
Classifying social texts such as fake news, rumour, sarcasm, etc. has gained
significant attention. The surface-level signals expressed by a social-text
itself may not be adequate for such tasks; therefore, recent methods attempted
to incorporate other intrinsic signals such as user behavior and the underlying
graph structure. Oftentimes, the `public wisdom' expressed through the
comments/replies to a social-text acts as a surrogate of crowd-sourced view and
may provide us with complementary signals. State-of-the-art methods on
social-text classification tend to ignore such a rich hierarchical signal.
Here, we propose Hyphen, a discourse-aware hyperbolic spectral co-attention
network. Hyphen is a fusion of hyperbolic graph representation learning with a
novel Fourier co-attention mechanism in an attempt to generalise the
social-text classification tasks by incorporating public discourse. We parse
public discourse as an Abstract Meaning Representation (AMR) graph and use the
powerful hyperbolic geometric representation to model graphs with hierarchical
structure. Finally, we equip it with a novel Fourier co-attention mechanism to
capture the correlation between the source post and public discourse. Extensive
experiments on four different social-text classification tasks, namely
detecting fake news, hate speech, rumour, and sarcasm, show that Hyphen
generalises well, and achieves state-of-the-art results on ten benchmark
datasets. We also employ a sentence-level fact-checked and annotated dataset to
evaluate how Hyphen is capable of producing explanations as analogous evidence
to the final prediction.
| [
{
"created": "Thu, 15 Sep 2022 16:04:32 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Oct 2022 15:57:31 GMT",
"version": "v2"
}
] | 2022-10-12 | [
[
"Grover",
"Karish",
""
],
[
"Angara",
"S. M. Phaneendra",
""
],
[
"Akhtar",
"Md. Shad",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] | Social media has become the fulcrum of all forms of communication. Classifying social texts such as fake news, rumour, sarcasm, etc. has gained significant attention. The surface-level signals expressed by a social-text itself may not be adequate for such tasks; therefore, recent methods attempted to incorporate other intrinsic signals such as user behavior and the underlying graph structure. Oftentimes, the `public wisdom' expressed through the comments/replies to a social-text acts as a surrogate of crowd-sourced view and may provide us with complementary signals. State-of-the-art methods on social-text classification tend to ignore such a rich hierarchical signal. Here, we propose Hyphen, a discourse-aware hyperbolic spectral co-attention network. Hyphen is a fusion of hyperbolic graph representation learning with a novel Fourier co-attention mechanism in an attempt to generalise the social-text classification tasks by incorporating public discourse. We parse public discourse as an Abstract Meaning Representation (AMR) graph and use the powerful hyperbolic geometric representation to model graphs with hierarchical structure. Finally, we equip it with a novel Fourier co-attention mechanism to capture the correlation between the source post and public discourse. Extensive experiments on four different social-text classification tasks, namely detecting fake news, hate speech, rumour, and sarcasm, show that Hyphen generalises well, and achieves state-of-the-art results on ten benchmark datasets. We also employ a sentence-level fact-checked and annotated dataset to evaluate how Hyphen is capable of producing explanations as analogous evidence to the final prediction. |
2407.20530 | Youqiang Zheng | Youqiang Zheng, Weiping Tu, Li Xiao, Xinmeng Xu | SuperCodec: A Neural Speech Codec with Selective Back-Projection Network | Accepted by ICASSP 2024 | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural speech coding is a rapidly developing topic, where state-of-the-art
approaches now exhibit compression performance superior to that of conventional
methods. Despite significant progress, existing methods still have limitations
in preserving and reconstructing fine details for optimal reconstruction,
especially at low bitrates. In this study, we introduce SuperCodec, a neural
speech codec that achieves state-of-the-art performance at low bitrates. It
employs a novel back projection method with selective feature fusion for
augmented representation. Specifically, we propose to use Selective Up-sampling
Back Projection (SUBP) and Selective Down-sampling Back Projection (SDBP)
modules to replace the standard up- and down-sampling layers at the encoder and
decoder, respectively. Experimental results show that our method outperforms
the existing neural speech codecs operating at various bitrates. Specifically,
our proposed method can achieve higher quality reconstructed speech at 1 kbps
than Lyra V2 at 3.2 kbps and Encodec at 6 kbps.
| [
{
"created": "Tue, 30 Jul 2024 04:12:17 GMT",
"version": "v1"
}
] | 2024-07-31 | [
[
"Zheng",
"Youqiang",
""
],
[
"Tu",
"Weiping",
""
],
[
"Xiao",
"Li",
""
],
[
"Xu",
"Xinmeng",
""
]
] | Neural speech coding is a rapidly developing topic, where state-of-the-art approaches now exhibit compression performance superior to that of conventional methods. Despite significant progress, existing methods still have limitations in preserving and reconstructing fine details for optimal reconstruction, especially at low bitrates. In this study, we introduce SuperCodec, a neural speech codec that achieves state-of-the-art performance at low bitrates. It employs a novel back projection method with selective feature fusion for augmented representation. Specifically, we propose to use Selective Up-sampling Back Projection (SUBP) and Selective Down-sampling Back Projection (SDBP) modules to replace the standard up- and down-sampling layers at the encoder and decoder, respectively. Experimental results show that our method outperforms the existing neural speech codecs operating at various bitrates. Specifically, our proposed method can achieve higher quality reconstructed speech at 1 kbps than Lyra V2 at 3.2 kbps and Encodec at 6 kbps. |
2010.14503 | Jean De Dieu Mutangana | Jean de Dieu Mutangana, Ravi Tandon | Topological Interference Management with Confidential Messages | Accepted and published in IEEE Transactions on Information Theory | null | null | null | cs.IT cs.CR math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The topological interference management (TIM) problem refers to the study of
the K-user partially connected interference networks with no channel state
information at the transmitters (CSIT), except for the knowledge of network
topology. In this paper, we study the TIM problem with confidential messages
(TIM-CM), where message confidentiality must be satisfied in addition to
reliability constraints. In particular, each transmitted message must be
decodable at its intended receiver and remain confidential at the remaining
(K-1) receivers.
Our main contribution is to present a comprehensive set of results for the
TIM-CM problem by studying the symmetric secure degrees of freedom (SDoF). To
this end, we first characterize necessary and sufficient conditions for
feasibility of positive symmetric SDoF for any arbitrary topology. We next
present two achievable schemes for the TIM-CM problem: For the first scheme, we
use the concept of secure partition and, for the second one, we use the concept
of secure independent sets. We also present outer bounds on symmetric SDoF for
any arbitrary network topology. Using these bounds, we characterize the optimal
symmetric SDoF of all K=2-user and K=3-user network topologies.
| [
{
"created": "Tue, 27 Oct 2020 17:59:07 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Jan 2023 14:26:58 GMT",
"version": "v2"
}
] | 2023-01-31 | [
[
"Mutangana",
"Jean de Dieu",
""
],
[
"Tandon",
"Ravi",
""
]
] | The topological interference management (TIM) problem refers to the study of the K-user partially connected interference networks with no channel state information at the transmitters (CSIT), except for the knowledge of network topology. In this paper, we study the TIM problem with confidential messages (TIM-CM), where message confidentiality must be satisfied in addition to reliability constraints. In particular, each transmitted message must be decodable at its intended receiver and remain confidential at the remaining (K-1) receivers. Our main contribution is to present a comprehensive set of results for the TIM-CM problem by studying the symmetric secure degrees of freedom (SDoF). To this end, we first characterize necessary and sufficient conditions for feasibility of positive symmetric SDoF for any arbitrary topology. We next present two achievable schemes for the TIM-CM problem: For the first scheme, we use the concept of secure partition and, for the second one, we use the concept of secure independent sets. We also present outer bounds on symmetric SDoF for any arbitrary network topology. Using these bounds, we characterize the optimal symmetric SDoF of all K=2-user and K=3-user network topologies. |
2305.13469 | Saurabh Srivastava | Saurabh Srivastava, Gaurav Singh, Shou Matsumoto, Ali Raz, Paulo
Costa, Joshua Poore, Ziyu Yao | MAILEX: Email Event and Argument Extraction | Accepted at EMNLP 2023 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present the first dataset, MailEx, for performing event
extraction from conversational email threads. To this end, we first proposed a
new taxonomy covering 10 event types and 76 arguments in the email domain. Our
final dataset includes 1.5K email threads and ~4K emails, which are annotated
with totally ~8K event instances. To understand the task challenges, we
conducted a series of experiments comparing three types of approaches, i.e.,
fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot
in-context learning. Our results showed that the task of email event extraction
is far from being addressed, due to challenges lying in, e.g., extracting
non-continuous, shared trigger spans, extracting non-named entity arguments,
and modeling the email conversational history. Our work thus suggests more
future investigations in this domain-specific event extraction task.
| [
{
"created": "Mon, 22 May 2023 20:28:23 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Oct 2023 02:15:22 GMT",
"version": "v2"
}
] | 2023-10-24 | [
[
"Srivastava",
"Saurabh",
""
],
[
"Singh",
"Gaurav",
""
],
[
"Matsumoto",
"Shou",
""
],
[
"Raz",
"Ali",
""
],
[
"Costa",
"Paulo",
""
],
[
"Poore",
"Joshua",
""
],
[
"Yao",
"Ziyu",
""
]
] | In this work, we present the first dataset, MailEx, for performing event extraction from conversational email threads. To this end, we first proposed a new taxonomy covering 10 event types and 76 arguments in the email domain. Our final dataset includes 1.5K email threads and ~4K emails, which are annotated with totally ~8K event instances. To understand the task challenges, we conducted a series of experiments comparing three types of approaches, i.e., fine-tuned sequence labeling, fine-tuned generative extraction, and few-shot in-context learning. Our results showed that the task of email event extraction is far from being addressed, due to challenges lying in, e.g., extracting non-continuous, shared trigger spans, extracting non-named entity arguments, and modeling the email conversational history. Our work thus suggests more future investigations in this domain-specific event extraction task. |
1711.03473 | Michel Melo Silva | Michel Melo Silva, Washington Luis Souza Ramos, Felipe Cadar Chamone,
Jo\~ao Pedro Klock Ferreira, Mario Fernando Montenegro Campos, Erickson
Rangel Nascimento | Making a long story short: A Multi-Importance fast-forwarding egocentric
videos with the emphasis on relevant objects | Accepted to publication in the Journal of Visual Communication and
Image Representation (JVCI) 2018. Project website:
https://www.verlab.dcc.ufmg.br/semantic-hyperlapse | null | 10.1016/j.jvcir.2018.02.013 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of low-cost high-quality personal wearable cameras combined
with the increasing storage capacity of video-sharing websites have evoked a
growing interest in first-person videos, since most videos are composed of
long-running unedited streams which are usually tedious and unpleasant to
watch. State-of-the-art semantic fast-forward methods currently face the
challenge of providing an adequate balance between smoothness in visual flow
and the emphasis on the relevant parts. In this work, we present the
Multi-Importance Fast-Forward (MIFF), a fully automatic methodology to
fast-forward egocentric videos facing these challenges. The dilemma of defining
what is the semantic information of a video is addressed by a learning process
based on the preferences of the user. Results show that the proposed method
keeps over $3$ times more semantic content than the state-of-the-art
fast-forward. Finally, we discuss the need of a particular video stabilization
technique for fast-forward egocentric videos.
| [
{
"created": "Thu, 9 Nov 2017 17:03:29 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Mar 2018 15:56:26 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Mar 2018 17:59:11 GMT",
"version": "v3"
}
] | 2018-03-08 | [
[
"Silva",
"Michel Melo",
""
],
[
"Ramos",
"Washington Luis Souza",
""
],
[
"Chamone",
"Felipe Cadar",
""
],
[
"Ferreira",
"João Pedro Klock",
""
],
[
"Campos",
"Mario Fernando Montenegro",
""
],
[
"Nascimento",
"Erickson Rangel",
""
]
] | The emergence of low-cost high-quality personal wearable cameras combined with the increasing storage capacity of video-sharing websites have evoked a growing interest in first-person videos, since most videos are composed of long-running unedited streams which are usually tedious and unpleasant to watch. State-of-the-art semantic fast-forward methods currently face the challenge of providing an adequate balance between smoothness in visual flow and the emphasis on the relevant parts. In this work, we present the Multi-Importance Fast-Forward (MIFF), a fully automatic methodology to fast-forward egocentric videos facing these challenges. The dilemma of defining what is the semantic information of a video is addressed by a learning process based on the preferences of the user. Results show that the proposed method keeps over $3$ times more semantic content than the state-of-the-art fast-forward. Finally, we discuss the need of a particular video stabilization technique for fast-forward egocentric videos. |
2002.05973 | Dongfang Zhao | Dongfang Zhao | Algebraic Structure of Blockchains: A Group-Theoretical Primer | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although recent advances of blockchain systems, notably in the form of
cryptocurrency, have drawn tremendous interests from both researchers and
practitioners, limited studies existed toward the theoretical foundation of
blockchains. This paper presents the first study on the algebraic structure of
blockchains with an emphasis on the internal properties under algebraic groups.
We axiomatically construct a blockchain group and derive some interesting
properties that can be potentially taken into the design space and parametric
analysis of real-world blockchain systems.
| [
{
"created": "Fri, 14 Feb 2020 11:28:59 GMT",
"version": "v1"
}
] | 2020-02-17 | [
[
"Zhao",
"Dongfang",
""
]
] | Although recent advances of blockchain systems, notably in the form of cryptocurrency, have drawn tremendous interests from both researchers and practitioners, limited studies existed toward the theoretical foundation of blockchains. This paper presents the first study on the algebraic structure of blockchains with an emphasis on the internal properties under algebraic groups. We axiomatically construct a blockchain group and derive some interesting properties that can be potentially taken into the design space and parametric analysis of real-world blockchain systems. |
2210.00058 | Gino Chacon | Gino A. Chacon, Charles Williams, Johann Knechtel, Ozgur Sinanoglu,
Paul V. Gratz | Hardware Trojan Threats to Cache Coherence in Modern 2.5D Chiplet
Systems | null | null | null | null | cs.CR cs.AR | http://creativecommons.org/licenses/by/4.0/ | As industry moves toward chiplet-based designs, the insertion of hardware
Trojans poses a significant threat to the security of these systems. These
systems rely heavily on cache coherence for coherent data communication, making
coherence an attractive target. Critically, unlike prior work, which focuses
only on malicious packet modifications, a Trojan attack that exploits coherence
can modify data in memory that was never touched and is not owned by the
chiplet which contains the Trojan. Further, the Trojan need not even be
physically between the victim and the memory controller to attack the victim's
memory transactions. Here, we explore the fundamental attack vectors possible
in chiplet-based systems and provide an example Trojan implementation capable
of directly modifying victim data in memory. This work aims to highlight the
need for developing mechanisms that can protect and secure the coherence scheme
from these forms of attacks.
| [
{
"created": "Fri, 30 Sep 2022 19:45:04 GMT",
"version": "v1"
}
] | 2022-10-04 | [
[
"Chacon",
"Gino A.",
""
],
[
"Williams",
"Charles",
""
],
[
"Knechtel",
"Johann",
""
],
[
"Sinanoglu",
"Ozgur",
""
],
[
"Gratz",
"Paul V.",
""
]
] | As industry moves toward chiplet-based designs, the insertion of hardware Trojans poses a significant threat to the security of these systems. These systems rely heavily on cache coherence for coherent data communication, making coherence an attractive target. Critically, unlike prior work, which focuses only on malicious packet modifications, a Trojan attack that exploits coherence can modify data in memory that was never touched and is not owned by the chiplet which contains the Trojan. Further, the Trojan need not even be physically between the victim and the memory controller to attack the victim's memory transactions. Here, we explore the fundamental attack vectors possible in chiplet-based systems and provide an example Trojan implementation capable of directly modifying victim data in memory. This work aims to highlight the need for developing mechanisms that can protect and secure the coherence scheme from these forms of attacks. |
1803.05069 | Maofan Yin | Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta, Ittai
Abraham | HotStuff: BFT Consensus in the Lens of Blockchain | a shorter version of this paper has been published in PODC'19, which
does not include interpretation of other protocols using the framework,
system evaluation or additional proofs in appendices | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present HotStuff, a leader-based Byzantine fault-tolerant replication
protocol for the partially synchronous model. Once network communication
becomes synchronous, HotStuff enables a correct leader to drive the protocol to
consensus at the pace of actual (vs. maximum) network delay--a property called
responsiveness--and with communication complexity that is linear in the number
of replicas. To our knowledge, HotStuff is the first partially synchronous BFT
replication protocol exhibiting these combined properties. HotStuff is built
around a novel framework that forms a bridge between classical BFT foundations
and blockchains. It allows the expression of other known protocols (DLS, PBFT,
Tendermint, Casper), and ours, in a common framework.
Our deployment of HotStuff over a network with over 100 replicas achieves
throughput and latency comparable to that of BFT-SMaRt, while enjoying linear
communication footprint during leader failover (vs. quadratic with BFT-SMaRt).
| [
{
"created": "Tue, 13 Mar 2018 23:01:05 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Oct 2018 15:39:12 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Mar 2019 18:21:08 GMT",
"version": "v3"
},
{
"created": "Tue, 2 Apr 2019 00:48:38 GMT",
"version": "v4"
},
{
"created": "Wed, 5 Jun 2019 04:26:20 GMT",
"version": "v5"
},
{
"created": "Tue, 23 Jul 2019 05:19:36 GMT",
"version": "v6"
}
] | 2019-07-24 | [
[
"Yin",
"Maofan",
""
],
[
"Malkhi",
"Dahlia",
""
],
[
"Reiter",
"Michael K.",
""
],
[
"Gueta",
"Guy Golan",
""
],
[
"Abraham",
"Ittai",
""
]
] | We present HotStuff, a leader-based Byzantine fault-tolerant replication protocol for the partially synchronous model. Once network communication becomes synchronous, HotStuff enables a correct leader to drive the protocol to consensus at the pace of actual (vs. maximum) network delay--a property called responsiveness--and with communication complexity that is linear in the number of replicas. To our knowledge, HotStuff is the first partially synchronous BFT replication protocol exhibiting these combined properties. HotStuff is built around a novel framework that forms a bridge between classical BFT foundations and blockchains. It allows the expression of other known protocols (DLS, PBFT, Tendermint, Casper), and ours, in a common framework. Our deployment of HotStuff over a network with over 100 replicas achieves throughput and latency comparable to that of BFT-SMaRt, while enjoying linear communication footprint during leader failover (vs. quadratic with BFT-SMaRt). |