| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2402.19012 | Matteo Palazzo | Matteo Palazzo (1), Luca Roversi (1) ((1) Universit\`a di Torino) | Algorithmically Expressive, Always-Terminating Model for Reversible
Computation | 16 pages, 4 figures, 2 listings | null | null | null | cs.PL cs.LO | http://creativecommons.org/licenses/by/4.0/ | Concerning classical computational models able to express all the Primitive
Recursive Functions (PRF), there are interesting results regarding limits on
their algorithmic expressiveness or, equivalently, efficiency, namely the
ability to express algorithms with minimal computational cost. By introducing
the reversible programming model Forest, at our knowledge, we provide a first
study of analogous properties, adapted to the context of reversible
computational models that can represent all the functions in PRF. Firstly, we
show that Forest extends Matos' linear reversible computational model MSRL, the
very extension being a guaranteed terminating iteration that can be halted by
means of logical predicates. The consequence is that Forest is PRF complete,
because MSRL is. Secondly, we show that Forest is strictly algorithmically more
expressive than MSRL: it can encode a reversible algorithm for the minimum
between two integers in optimal time, while MSRL cannot.
| [
{
"created": "Thu, 29 Feb 2024 10:15:58 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Palazzo",
"Matteo",
"",
"Università di Torino"
],
[
"Roversi",
"Luca",
"",
"Università di Torino"
]
] | Concerning classical computational models able to express all the Primitive Recursive Functions (PRF), there are interesting results regarding limits on their algorithmic expressiveness or, equivalently, efficiency, namely the ability to express algorithms with minimal computational cost. By introducing the reversible programming model Forest, at our knowledge, we provide a first study of analogous properties, adapted to the context of reversible computational models that can represent all the functions in PRF. Firstly, we show that Forest extends Matos' linear reversible computational model MSRL, the very extension being a guaranteed terminating iteration that can be halted by means of logical predicates. The consequence is that Forest is PRF complete, because MSRL is. Secondly, we show that Forest is strictly algorithmically more expressive than MSRL: it can encode a reversible algorithm for the minimum between two integers in optimal time, while MSRL cannot. |
2102.01197 | Rami Ezzine | Rami Ezzine and Moritz Wiese and Christian Deppe and Holger Boche | Common Randomness Generation over Slow Fading Channels | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper analyzes the problem of common randomness (CR) generation from
correlated discrete sources aided by unidirectional communication over
Single-Input Single-Output (SISO) slow fading channels with additive white
Gaussian noise (AWGN) and arbitrary state distribution. Slow fading channels
are practically relevant for wireless communications.
We completely solve the SISO slow fading case by establishing its
corresponding outage CR capacity using our characterization of its channel
outage capacity.
The generated CR could be exploited to improve the performance gain in the
identification scheme. The latter is known to be more efficient than the
classical transmission scheme in many new applications, which demand
ultra-reliable low latency communication.
| [
{
"created": "Mon, 1 Feb 2021 21:51:18 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Feb 2021 13:02:54 GMT",
"version": "v2"
},
{
"created": "Mon, 10 May 2021 17:23:24 GMT",
"version": "v3"
},
{
"created": "Mon, 7 Jun 2021 13:11:44 GMT",
"version": "v4"
}
] | 2021-06-08 | [
[
"Ezzine",
"Rami",
""
],
[
"Wiese",
"Moritz",
""
],
[
"Deppe",
"Christian",
""
],
[
"Boche",
"Holger",
""
]
] | This paper analyzes the problem of common randomness (CR) generation from correlated discrete sources aided by unidirectional communication over Single-Input Single-Output (SISO) slow fading channels with additive white Gaussian noise (AWGN) and arbitrary state distribution. Slow fading channels are practically relevant for wireless communications. We completely solve the SISO slow fading case by establishing its corresponding outage CR capacity using our characterization of its channel outage capacity. The generated CR could be exploited to improve the performance gain in the identification scheme. The latter is known to be more efficient than the classical transmission scheme in many new applications, which demand ultra-reliable low latency communication. |
1002.0678 | Andreas Faatz Dr. | Andreas Faatz, Andreas Zinnen | FORMT: Form-based Mutation Testing of Logical Specifications | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The draft paper defines a system, which is capable of maintaining bases of
test cases for logical specifications. The specifications, which are subject to
this system are transformed from their original shape in first-order logic to
form-based expressions as originally introduced in logics of George
Spencer-Brown. The innovation comes from the operations the system provides
when injecting faults - so-called mutations - to the specifications. The system
presented here applies to logical specifications from areas as different as
programming, ontologies or hardware specifications.
| [
{
"created": "Wed, 3 Feb 2010 09:35:21 GMT",
"version": "v1"
}
] | 2010-02-04 | [
[
"Faatz",
"Andreas",
""
],
[
"Zinnen",
"Andreas",
""
]
] | The draft paper defines a system, which is capable of maintaining bases of test cases for logical specifications. The specifications, which are subject to this system are transformed from their original shape in first-order logic to form-based expressions as originally introduced in logics of George Spencer-Brown. The innovation comes from the operations the system provides when injecting faults - so-called mutations - to the specifications. The system presented here applies to logical specifications from areas as different as programming, ontologies or hardware specifications. |
2107.09423 | Marcin Kozik | Libor Barto and Marcin Kozik | Combinatorial Gap Theorem and Reductions between Promise CSPs | null | null | null | null | cs.CC cs.LO | http://creativecommons.org/licenses/by/4.0/ | A value of a CSP instance is typically defined as a fraction of constraints
that can be simultaneously met. We propose an alternative definition of a value
of an instance and show that, for purely combinatorial reasons, a value of an
unsolvable instance is bounded away from one; we call this fact a gap theorem.
We show that the gap theorem implies NP-hardness of a gap version of the
Layered Label Cover Problem. The same result can be derived from the PCP
Theorem, but a full, self-contained proof of our reduction is quite short and
the result can still provide PCP-free NP-hardness proofs for numerous problems.
The simplicity of our reasoning also suggests that weaker versions of
Unique-Games-type conjectures, e.g., the d-to-1 conjecture, might be accessible
and serve as an intermediate step for proving these conjectures in their full
strength.
As the second, main application we provide a sufficient condition under which
a fixed template Promise Constraint Satisfaction Problem (PCSP) reduces to
another PCSP. The correctness of the reduction hinges on the gap theorem, but
the reduction itself is very simple. As a consequence, we obtain that every CSP
can be canonically reduced to most of the known NP-hard PCSPs, such as the
approximate hypergraph coloring problem.
| [
{
"created": "Tue, 20 Jul 2021 11:36:17 GMT",
"version": "v1"
}
] | 2021-07-21 | [
[
"Barto",
"Libor",
""
],
[
"Kozik",
"Marcin",
""
]
] | A value of a CSP instance is typically defined as a fraction of constraints that can be simultaneously met. We propose an alternative definition of a value of an instance and show that, for purely combinatorial reasons, a value of an unsolvable instance is bounded away from one; we call this fact a gap theorem. We show that the gap theorem implies NP-hardness of a gap version of the Layered Label Cover Problem. The same result can be derived from the PCP Theorem, but a full, self-contained proof of our reduction is quite short and the result can still provide PCP-free NP-hardness proofs for numerous problems. The simplicity of our reasoning also suggests that weaker versions of Unique-Games-type conjectures, e.g., the d-to-1 conjecture, might be accessible and serve as an intermediate step for proving these conjectures in their full strength. As the second, main application we provide a sufficient condition under which a fixed template Promise Constraint Satisfaction Problem (PCSP) reduces to another PCSP. The correctness of the reduction hinges on the gap theorem, but the reduction itself is very simple. As a consequence, we obtain that every CSP can be canonically reduced to most of the known NP-hard PCSPs, such as the approximate hypergraph coloring problem. |
1004.1789 | Rdv Ijcsis | H. B. Kekre, Saylee Gharge, Tanuja K. Sarode | SAR Image Segmentation using Vector Quantization Technique on Entropy
Images | IEEE Publication format, International Journal of Computer Science
and Information Security, IJCSIS, Vol. 7 No. 3, March 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/ | null | null | null | cs.MM cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The development and application of various remote sensing platforms result in
the production of huge amounts of satellite image data. Therefore, there is an
increasing need for effective querying and browsing in these image databases.
In order to take advantage and make good use of satellite images data, we must
be able to extract meaningful information from the imagery. Hence we proposed a
new algorithm for SAR image segmentation. In this paper we propose segmentation
using vector quantization technique on entropy image. Initially, we obtain
entropy image and in second step we use Kekre's Fast Codebook Generation (KFCG)
algorithm for segmentation of the entropy image. Thereafter, a codebook of size
128 was generated for the Entropy image. These code vectors were further
clustered in 8 clusters using same KFCG algorithm and converted into 8 images.
These 8 images were displayed as a result. This approach does not lead to over
segmentation or under segmentation. We compared these results with well known
Gray Level Co-occurrence Matrix. The proposed algorithm gives better
segmentation with less complexity.
| [
{
"created": "Sun, 11 Apr 2010 11:05:33 GMT",
"version": "v1"
}
] | 2010-04-13 | [
[
"Kekre",
"H. B.",
""
],
[
"Gharge",
"Saylee",
""
],
[
"Sarode",
"Tanuja K.",
""
]
] | The development and application of various remote sensing platforms result in the production of huge amounts of satellite image data. Therefore, there is an increasing need for effective querying and browsing in these image databases. In order to take advantage and make good use of satellite images data, we must be able to extract meaningful information from the imagery. Hence we proposed a new algorithm for SAR image segmentation. In this paper we propose segmentation using vector quantization technique on entropy image. Initially, we obtain entropy image and in second step we use Kekre's Fast Codebook Generation (KFCG) algorithm for segmentation of the entropy image. Thereafter, a codebook of size 128 was generated for the Entropy image. These code vectors were further clustered in 8 clusters using same KFCG algorithm and converted into 8 images. These 8 images were displayed as a result. This approach does not lead to over segmentation or under segmentation. We compared these results with well known Gray Level Co-occurrence Matrix. The proposed algorithm gives better segmentation with less complexity. |
2012.12310 | Alan Kaplan | Alan D. Kaplan, Qi Cheng, K. Aditya Mohan, Lindsay D. Nelson, Sonia
Jain, Harvey Levin, Abel Torres-Espin, Austin Chou, J. Russell Huie, Adam R.
Ferguson, Michael McCrea, Joseph Giacino, Shivshankar Sundaram, Amy J.
Markowitz, Geoffrey T. Manley | Mixture Model Framework for Traumatic Brain Injury Prognosis Using
Heterogeneous Clinical and Outcome Data | 12 pages, 5 figures | null | 10.1109/JBHI.2021.3099745 | null | cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prognoses of Traumatic Brain Injury (TBI) outcomes are neither easily nor
accurately determined from clinical indicators. This is due in part to the
heterogeneity of damage inflicted to the brain, ultimately resulting in diverse
and complex outcomes. Using a data-driven approach on many distinct data
elements may be necessary to describe this large set of outcomes and thereby
robustly depict the nuanced differences among TBI patients' recovery. In this
work, we develop a method for modeling large heterogeneous data types relevant
to TBI. Our approach is geared toward the probabilistic representation of mixed
continuous and discrete variables with missing values. The model is trained on
a dataset encompassing a variety of data types, including demographics,
blood-based biomarkers, and imaging findings. In addition, it includes a set of
clinical outcome assessments at 3, 6, and 12 months post-injury. The model is
used to stratify patients into distinct groups in an unsupervised learning
setting. We use the model to infer outcomes using input data, and show that the
collection of input data reduces uncertainty of outcomes over a baseline
approach. In addition, we quantify the performance of a likelihood scoring
technique that can be used to self-evaluate the extrapolation risk of prognosis
on unseen patients.
| [
{
"created": "Tue, 22 Dec 2020 19:31:03 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Apr 2021 02:21:30 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Jul 2021 23:57:44 GMT",
"version": "v3"
}
] | 2022-03-24 | [
[
"Kaplan",
"Alan D.",
""
],
[
"Cheng",
"Qi",
""
],
[
"Mohan",
"K. Aditya",
""
],
[
"Nelson",
"Lindsay D.",
""
],
[
"Jain",
"Sonia",
""
],
[
"Levin",
"Harvey",
""
],
[
"Torres-Espin",
"Abel",
""
],
[
"Chou",
"Austin",
""
],
[
"Huie",
"J. Russell",
""
],
[
"Ferguson",
"Adam R.",
""
],
[
"McCrea",
"Michael",
""
],
[
"Giacino",
"Joseph",
""
],
[
"Sundaram",
"Shivshankar",
""
],
[
"Markowitz",
"Amy J.",
""
],
[
"Manley",
"Geoffrey T.",
""
]
] | Prognoses of Traumatic Brain Injury (TBI) outcomes are neither easily nor accurately determined from clinical indicators. This is due in part to the heterogeneity of damage inflicted to the brain, ultimately resulting in diverse and complex outcomes. Using a data-driven approach on many distinct data elements may be necessary to describe this large set of outcomes and thereby robustly depict the nuanced differences among TBI patients' recovery. In this work, we develop a method for modeling large heterogeneous data types relevant to TBI. Our approach is geared toward the probabilistic representation of mixed continuous and discrete variables with missing values. The model is trained on a dataset encompassing a variety of data types, including demographics, blood-based biomarkers, and imaging findings. In addition, it includes a set of clinical outcome assessments at 3, 6, and 12 months post-injury. The model is used to stratify patients into distinct groups in an unsupervised learning setting. We use the model to infer outcomes using input data, and show that the collection of input data reduces uncertainty of outcomes over a baseline approach. In addition, we quantify the performance of a likelihood scoring technique that can be used to self-evaluate the extrapolation risk of prognosis on unseen patients. |
1812.10119 | Salah Zaiem | Salah Zaiem and Fatiha Sadat | Sequence to Sequence Learning for Query Expansion | 8 pages, 2 figures, AAAI-19 Student Abstract and Poster Program | null | null | null | cs.IR cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using sequence to sequence algorithms for query expansion has not been
explored yet in Information Retrieval literature nor in Question-Answering's.
We tried to fill this gap in the literature with a custom Query Expansion
engine trained and tested on open datasets. Starting from open datasets, we
built a Query Expansion training set using sentence-embeddings-based Keyword
Extraction. We therefore assessed the ability of the Sequence to Sequence
neural networks to capture expanding relations in the words embeddings' space.
| [
{
"created": "Tue, 25 Dec 2018 15:24:04 GMT",
"version": "v1"
}
] | 2018-12-27 | [
[
"Zaiem",
"Salah",
""
],
[
"Sadat",
"Fatiha",
""
]
] | Using sequence to sequence algorithms for query expansion has not been explored yet in Information Retrieval literature nor in Question-Answering's. We tried to fill this gap in the literature with a custom Query Expansion engine trained and tested on open datasets. Starting from open datasets, we built a Query Expansion training set using sentence-embeddings-based Keyword Extraction. We therefore assessed the ability of the Sequence to Sequence neural networks to capture expanding relations in the words embeddings' space. |
2008.12002 | Victor Joos De Ter Beerst | Antoine Vanderschueren, Victor Joos, Christophe De Vleeschouwer | How semantic and geometric information mutually reinforce each other in
ToF object localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach to localize a 3D object from the intensity and
depth information images provided by a Time-of-Flight (ToF) sensor. Our method
uses two CNNs. The first one uses raw depth and intensity images as input, to
segment the floor pixels, from which the extrinsic parameters of the camera are
estimated. The second CNN is in charge of segmenting the object-of-interest. As
a main innovation, it exploits the calibration estimated from the prediction of
the first CNN to represent the geometric depth information in a coordinate
system that is attached to the ground, and is thus independent of the camera
elevation. In practice, both the height of pixels with respect to the ground,
and the orientation of normals to the point cloud are provided as input to the
second CNN. Given the segmentation predicted by the second CNN, the object is
localized based on point cloud alignment with a reference model. Our
experiments demonstrate that our proposed two-step approach improves
segmentation and localization accuracy by a significant margin compared to a
conventional CNN architecture, ignoring calibration and height maps, but also
compared to PointNet++.
| [
{
"created": "Thu, 27 Aug 2020 09:13:26 GMT",
"version": "v1"
}
] | 2020-08-28 | [
[
"Vanderschueren",
"Antoine",
""
],
[
"Joos",
"Victor",
""
],
[
"De Vleeschouwer",
"Christophe",
""
]
] | We propose a novel approach to localize a 3D object from the intensity and depth information images provided by a Time-of-Flight (ToF) sensor. Our method uses two CNNs. The first one uses raw depth and intensity images as input, to segment the floor pixels, from which the extrinsic parameters of the camera are estimated. The second CNN is in charge of segmenting the object-of-interest. As a main innovation, it exploits the calibration estimated from the prediction of the first CNN to represent the geometric depth information in a coordinate system that is attached to the ground, and is thus independent of the camera elevation. In practice, both the height of pixels with respect to the ground, and the orientation of normals to the point cloud are provided as input to the second CNN. Given the segmentation predicted by the second CNN, the object is localized based on point cloud alignment with a reference model. Our experiments demonstrate that our proposed two-step approach improves segmentation and localization accuracy by a significant margin compared to a conventional CNN architecture, ignoring calibration and height maps, but also compared to PointNet++. |
2003.03658 | Brian Powell | Brian A. Powell | Securing LSB embedding against structural steganalysis | 23 pages, 6 figures. Section 3 added; revisions made to Section 6.3.
Version accepted by Journal of Computer Security | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work explores the extent to which LSB embedding can be made secure
against structural steganalysis through a modification of cover image
statistics prior to message embedding. Natural images possess symmetries that
are expressed through approximately equal cardinalities of certain sets of
$k$-tuples of consecutive pixels. LSB embedding disturbs this balance and a
$k^{\rm th}$-order structural attack infers the presence of a hidden message
with a length in proportion to the size of the imbalance amongst sets of
$k$-tuples. To protect against $k^{\rm th}$-order structural attacks, cover
modifications involve the redistribution of $k$-tuples among the different sets
so that symmetries of the cover image are broken, then repaired through the act
of LSB embedding so that the stego image bears the statistics of the original
cover. To protect against all orders up to some order $k$, the statistics of
$n$-tuples must be preserved where $n$ is the least common multiple of all
orders $\leq k$. We find that this is only feasible for securing against up to
$3^{\rm rd}$-order attacks (Sample Pairs and Triples analyses) since
higher-order protections result in virtually zero embedding capacities.
Securing up to $3^{\rm rd}$-order requires redistribution of sextuplets: rather
than perform these $6^{\rm th}$-order cover modifications, which result in tiny
embedding capacities, we reduce the problem to the redistribution of triplets
in a manner that also preserves the statistics of pairs. This is done by
embedding into only certain pixels of each sextuplet, constraining the maximum
embedding rate to be $\leq 2/3$ bits per channel. Testing on a variety of image
formats, we report best performance for JPEG-compressed images with a mean
maximum embedding rate undetectable by $2^{\rm nd}$- and $3^{\rm rd}$-order
attacks of 0.21 bits per channel.
| [
{
"created": "Sat, 7 Mar 2020 20:41:18 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2021 20:21:52 GMT",
"version": "v2"
}
] | 2021-10-07 | [
[
"Powell",
"Brian A.",
""
]
] | This work explores the extent to which LSB embedding can be made secure against structural steganalysis through a modification of cover image statistics prior to message embedding. Natural images possess symmetries that are expressed through approximately equal cardinalities of certain sets of $k$-tuples of consecutive pixels. LSB embedding disturbs this balance and a $k^{\rm th}$-order structural attack infers the presence of a hidden message with a length in proportion to the size of the imbalance amongst sets of $k$-tuples. To protect against $k^{\rm th}$-order structural attacks, cover modifications involve the redistribution of $k$-tuples among the different sets so that symmetries of the cover image are broken, then repaired through the act of LSB embedding so that the stego image bears the statistics of the original cover. To protect against all orders up to some order $k$, the statistics of $n$-tuples must be preserved where $n$ is the least common multiple of all orders $\leq k$. We find that this is only feasible for securing against up to $3^{\rm rd}$-order attacks (Sample Pairs and Triples analyses) since higher-order protections result in virtually zero embedding capacities. Securing up to $3^{\rm rd}$-order requires redistribution of sextuplets: rather than perform these $6^{\rm th}$-order cover modifications, which result in tiny embedding capacities, we reduce the problem to the redistribution of triplets in a manner that also preserves the statistics of pairs. This is done by embedding into only certain pixels of each sextuplet, constraining the maximum embedding rate to be $\leq 2/3$ bits per channel. Testing on a variety of image formats, we report best performance for JPEG-compressed images with a mean maximum embedding rate undetectable by $2^{\rm nd}$- and $3^{\rm rd}$-order attacks of 0.21 bits per channel. |
1808.02939 | Yuan Gong | Yuan Gong, Christian Poellabauer | Towards Learning Fine-Grained Disentangled Representations from Speech | null | null | null | null | cs.SD cs.CL cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning disentangled representations of high-dimensional data is currently
an active research area. However, compared to the field of computer vision,
less work has been done for speech processing. In this paper, we provide a
review of two representative efforts on this topic and propose the novel
concept of fine-grained disentangled speech representation learning.
| [
{
"created": "Wed, 8 Aug 2018 20:59:26 GMT",
"version": "v1"
}
] | 2018-08-10 | [
[
"Gong",
"Yuan",
""
],
[
"Poellabauer",
"Christian",
""
]
] | Learning disentangled representations of high-dimensional data is currently an active research area. However, compared to the field of computer vision, less work has been done for speech processing. In this paper, we provide a review of two representative efforts on this topic and propose the novel concept of fine-grained disentangled speech representation learning. |
1705.02777 | Bin Han | Bin Han, Vincenzo Sciancalepore, Oliver Holland, Mischa Dohler and
Hans D. Schotten | D2D-Based Grouped Random Access to Mitigate Mobile Access Congestion in
5G Sensor Networks | First submission to IEEE Communications Magazine on Oct.28.2017.
Accepted on Aug.18.2019. This is the camera-ready version | IEEE Communications Magazine ( Volume: 57, Issue: 9, September
2019) | 10.1109/MCOM.001.1701032 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Fifth Generation (5G) wireless service of sensor networks involves
significant challenges when dealing with the coordination of ever-increasing
number of devices accessing shared resources. This has drawn major interest
from the research community as many existing works focus on the radio access
network congestion control to efficiently manage resources in the context of
device-to-device (D2D) interaction in huge sensor networks. In this context,
this paper pioneers a study on the impact of D2D link reliability in
group-assisted random access protocols, by shedding the light on beneficial
performance and potential limitations of approaches of this kind against
tunable parameters such as group size, number of sensors and reliability of D2D
links. Additionally, we leverage on the association with a Geolocation Database
(GDB) capability to assist the grouping decisions by drawing parallels with
recent regulatory-driven initiatives around GDBs and arguing benefits of the
suggested proposal. Finally, the proposed method is approved to significantly
reduce the delay over random access channels, by means of an exhaustive
simulation campaign.
| [
{
"created": "Mon, 8 May 2017 08:41:27 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2017 11:33:19 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Oct 2017 13:13:37 GMT",
"version": "v3"
},
{
"created": "Fri, 21 Sep 2018 15:51:16 GMT",
"version": "v4"
},
{
"created": "Thu, 18 Jul 2019 17:54:07 GMT",
"version": "v5"
}
] | 2021-11-30 | [
[
"Han",
"Bin",
""
],
[
"Sciancalepore",
"Vincenzo",
""
],
[
"Holland",
"Oliver",
""
],
[
"Dohler",
"Mischa",
""
],
[
"Schotten",
"Hans D.",
""
]
] | The Fifth Generation (5G) wireless service of sensor networks involves significant challenges when dealing with the coordination of ever-increasing number of devices accessing shared resources. This has drawn major interest from the research community as many existing works focus on the radio access network congestion control to efficiently manage resources in the context of device-to-device (D2D) interaction in huge sensor networks. In this context, this paper pioneers a study on the impact of D2D link reliability in group-assisted random access protocols, by shedding the light on beneficial performance and potential limitations of approaches of this kind against tunable parameters such as group size, number of sensors and reliability of D2D links. Additionally, we leverage on the association with a Geolocation Database (GDB) capability to assist the grouping decisions by drawing parallels with recent regulatory-driven initiatives around GDBs and arguing benefits of the suggested proposal. Finally, the proposed method is approved to significantly reduce the delay over random access channels, by means of an exhaustive simulation campaign. |
2107.13693 | Yaohai Zhou | Zhiyuan Ren, Yaohai Zhou, Yizhe Chen, Ruisong Zhou, Yayu Gao | Efficient Human Pose Estimation by Maximizing Fusion and High-Level
Spatial Attention | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an efficient human pose estimation network -- SFM
(slender fusion model) by fusing multi-level features and adding lightweight
attention blocks -- HSA (High-Level Spatial Attention). Many existing methods
on efficient network have already taken feature fusion into consideration,
which largely boosts the performance. However, its performance is far inferior
to large network such as ResNet and HRNet due to its limited fusion operation
in the network. Specifically, we expand the number of fusion operation by
building bridges between two pyramid frameworks without adding layers.
Meanwhile, to capture long-range dependency, we propose a lightweight attention
block -- HSA, which computes second-order attention map. In summary, SFM
maximizes the number of feature fusion in a limited number of layers. HSA
learns high precise spatial information by computing the attention of spatial
attention map. With the help of SFM and HSA, our network is able to generate
multi-level feature and extract precise global spatial information with little
computing resource. Thus, our method achieve comparable or even better accuracy
with less parameters and computational cost. Our SFM achieve 89.0 in PCKh@0.5,
42.0 in PCKh@0.1 on MPII validation set and 71.7 in AP, 90.7 in AP@0.5 on COCO
validation with only 1.7G FLOPs and 1.5M parameters. The source code will be
public soon.
| [
{
"created": "Thu, 29 Jul 2021 00:55:17 GMT",
"version": "v1"
}
] | 2021-07-30 | [
[
"Ren",
"Zhiyuan",
""
],
[
"Zhou",
"Yaohai",
""
],
[
"Chen",
"Yizhe",
""
],
[
"Zhou",
"Ruisong",
""
],
[
"Gao",
"Yayu",
""
]
] | In this paper, we propose an efficient human pose estimation network -- SFM (slender fusion model) by fusing multi-level features and adding lightweight attention blocks -- HSA (High-Level Spatial Attention). Many existing methods on efficient network have already taken feature fusion into consideration, which largely boosts the performance. However, its performance is far inferior to large network such as ResNet and HRNet due to its limited fusion operation in the network. Specifically, we expand the number of fusion operation by building bridges between two pyramid frameworks without adding layers. Meanwhile, to capture long-range dependency, we propose a lightweight attention block -- HSA, which computes second-order attention map. In summary, SFM maximizes the number of feature fusion in a limited number of layers. HSA learns high precise spatial information by computing the attention of spatial attention map. With the help of SFM and HSA, our network is able to generate multi-level feature and extract precise global spatial information with little computing resource. Thus, our method achieve comparable or even better accuracy with less parameters and computational cost. Our SFM achieve 89.0 in PCKh@0.5, 42.0 in PCKh@0.1 on MPII validation set and 71.7 in AP, 90.7 in AP@0.5 on COCO validation with only 1.7G FLOPs and 1.5M parameters. The source code will be public soon. |
2105.10830 | Blai Bonet | Ivan D. Rodriguez, Blai Bonet, Javier Romero, Hector Geffner | Learning First-Order Representations for Planning from Black-Box States:
New Results | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently Bonet and Geffner have shown that first-order representations for
planning domains can be learned from the structure of the state space without
any prior knowledge about the action schemas or domain predicates. For this,
the learning problem is formulated as the search for a simplest first-order
domain description D that along with information about instances I_i (number of
objects and initial state) determine state space graphs G(P_i) that match the
observed state graphs G_i where P_i = (D, I_i). The search is cast and solved
approximately by means of a SAT solver that is called over a large family of
propositional theories that differ just in the parameters encoding the possible
number of action schemas and domain predicates, their arities, and the number
of objects. In this work, we push the limits of these learners by moving to an
answer set programming (ASP) encoding using the CLINGO system. The new
encodings are more transparent and concise, extending the range of possible
models while facilitating their exploration. We show that the domains
introduced by Bonet and Geffner can be solved more efficiently in the new
approach, often optimally, and furthermore, that the approach can be easily
extended to handle partial information about the state graphs as well as noise
that prevents some states from being distinguished.
| [
{
"created": "Sun, 23 May 2021 00:08:42 GMT",
"version": "v1"
}
] | 2021-05-25 | [
[
"Rodriguez",
"Ivan D.",
""
],
[
"Bonet",
"Blai",
""
],
[
"Romero",
"Javier",
""
],
[
"Geffner",
"Hector",
""
]
] | Recently Bonet and Geffner have shown that first-order representations for planning domains can be learned from the structure of the state space without any prior knowledge about the action schemas or domain predicates. For this, the learning problem is formulated as the search for a simplest first-order domain description D that along with information about instances I_i (number of objects and initial state) determine state space graphs G(P_i) that match the observed state graphs G_i where P_i = (D, I_i). The search is cast and solved approximately by means of a SAT solver that is called over a large family of propositional theories that differ just in the parameters encoding the possible number of action schemas and domain predicates, their arities, and the number of objects. In this work, we push the limits of these learners by moving to an answer set programming (ASP) encoding using the CLINGO system. The new encodings are more transparent and concise, extending the range of possible models while facilitating their exploration. We show that the domains introduced by Bonet and Geffner can be solved more efficiently in the new approach, often optimally, and furthermore, that the approach can be easily extended to handle partial information about the state graphs as well as noise that prevents some states from being distinguished. |
2403.00809 | Abdelhak Kelious | Abdelhak Kelious, Mounir Okirim | Abdelhak at SemEval-2024 Task 9 : Decoding Brainteasers, The Efficacy of
Dedicated Models Versus ChatGPT | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study introduces a dedicated model aimed at solving the BRAINTEASER task
9, a novel challenge designed to assess models' lateral thinking capabilities
through sentence and word puzzles. Our model demonstrates remarkable efficacy,
securing Rank 1 in sentence puzzle solving during the test phase with an
overall score of 0.98. Additionally, we explore the comparative performance of
ChatGPT, specifically analyzing how variations in temperature settings affect
its ability to engage in lateral thinking and problem-solving. Our findings
indicate a notable performance disparity between the dedicated model and
ChatGPT, underscoring the potential of specialized approaches in enhancing
creative reasoning in AI.
| [
{
"created": "Sat, 24 Feb 2024 20:00:03 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Kelious",
"Abdelhak",
""
],
[
"Okirim",
"Mounir",
""
]
] | This study introduces a dedicated model aimed at solving the BRAINTEASER task 9, a novel challenge designed to assess models' lateral thinking capabilities through sentence and word puzzles. Our model demonstrates remarkable efficacy, securing Rank 1 in sentence puzzle solving during the test phase with an overall score of 0.98. Additionally, we explore the comparative performance of ChatGPT, specifically analyzing how variations in temperature settings affect its ability to engage in lateral thinking and problem-solving. Our findings indicate a notable performance disparity between the dedicated model and ChatGPT, underscoring the potential of specialized approaches in enhancing creative reasoning in AI. |
1603.06668 | Gustav Larsson | Gustav Larsson, Michael Maire, Gregory Shakhnarovich | Learning Representations for Automatic Colorization | ECCV 2016 (Project page:
http://people.cs.uchicago.edu/~larsson/colorization/) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a fully automatic image colorization system. Our approach
leverages recent advances in deep networks, exploiting both low-level and
semantic representations. As many scene elements naturally appear according to
multimodal color distributions, we train our model to predict per-pixel color
histograms. This intermediate output can be used to automatically generate a
color image, or further manipulated prior to image formation. On both fully and
partially automatic colorization tasks, we outperform existing methods. We also
explore colorization as a vehicle for self-supervised visual representation
learning.
| [
{
"created": "Tue, 22 Mar 2016 04:08:01 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jul 2016 07:28:21 GMT",
"version": "v2"
},
{
"created": "Sun, 13 Aug 2017 17:50:50 GMT",
"version": "v3"
}
] | 2017-08-15 | [
[
"Larsson",
"Gustav",
""
],
[
"Maire",
"Michael",
""
],
[
"Shakhnarovich",
"Gregory",
""
]
] | We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning. |
2007.03184 | Yang Fang | Yang Fang, Xiang Zhao, Yifan Chen, Weidong Xiao, Maarten de Rijke | Pre-Trained Models for Heterogeneous Information Networks | Submitted to TKDE | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In network representation learning we learn how to represent heterogeneous
information networks in a low-dimensional space so as to facilitate effective
search, classification, and prediction solutions. Previous network
representation learning methods typically require sufficient task-specific
labeled data to address domain-specific problems. The trained model usually
cannot be transferred to out-of-domain datasets. We propose a self-supervised
pre-training and fine-tuning framework, PF-HIN, to capture the features of a
heterogeneous information network. Unlike traditional network representation
learning models that have to train the entire model all over again for every
downstream task and dataset, PF-HIN only needs to fine-tune the model and a
small number of extra task-specific parameters, thus improving model efficiency
and effectiveness. During pre-training, we first transform the neighborhood of
a given node into a sequence. PF-HIN is pre-trained based on two
self-supervised tasks, masked node modeling and adjacent node prediction. We
adopt deep bi-directional transformer encoders to train the model, and leverage
factorized embedding parameterization and cross-layer parameter sharing to
reduce the parameters. In the fine-tuning stage, we choose four benchmark
downstream tasks, i.e., link prediction, similarity search, node
classification, and node clustering. PF-HIN consistently and significantly
outperforms state-of-the-art alternatives on each of these tasks, on four
datasets.
| [
{
"created": "Tue, 7 Jul 2020 03:36:28 GMT",
"version": "v1"
},
{
"created": "Tue, 18 May 2021 09:53:57 GMT",
"version": "v2"
}
] | 2021-05-19 | [
[
"Fang",
"Yang",
""
],
[
"Zhao",
"Xiang",
""
],
[
"Chen",
"Yifan",
""
],
[
"Xiao",
"Weidong",
""
],
[
"de Rijke",
"Maarten",
""
]
] | In network representation learning we learn how to represent heterogeneous information networks in a low-dimensional space so as to facilitate effective search, classification, and prediction solutions. Previous network representation learning methods typically require sufficient task-specific labeled data to address domain-specific problems. The trained model usually cannot be transferred to out-of-domain datasets. We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network. Unlike traditional network representation learning models that have to train the entire model all over again for every downstream task and dataset, PF-HIN only needs to fine-tune the model and a small number of extra task-specific parameters, thus improving model efficiency and effectiveness. During pre-training, we first transform the neighborhood of a given node into a sequence. PF-HIN is pre-trained based on two self-supervised tasks, masked node modeling and adjacent node prediction. We adopt deep bi-directional transformer encoders to train the model, and leverage factorized embedding parameterization and cross-layer parameter sharing to reduce the parameters. In the fine-tuning stage, we choose four benchmark downstream tasks, i.e., link prediction, similarity search, node classification, and node clustering. PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets. |
2007.06059 | Mark Hamilton | Mark Hamilton, Evan Shelhamer, William T. Freeman | It Is Likely That Your Loss Should be a Likelihood | null | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | Many common loss functions such as mean-squared-error, cross-entropy, and
reconstruction loss are unnecessarily rigid. Under a probabilistic
interpretation, these common losses correspond to distributions with fixed
shapes and scales. We instead argue for optimizing full likelihoods that
include parameters like the normal variance and softmax temperature. Joint
optimization of these "likelihood parameters" with model parameters can
adaptively tune the scales and shapes of losses in addition to the strength of
regularization. We explore and systematically evaluate how to parameterize and
apply likelihood parameters for robust modeling, outlier-detection, and
re-calibration. Additionally, we propose adaptively tuning $L_2$ and $L_1$
weights by fitting the scale parameters of normal and Laplace priors and
introduce more flexible element-wise regularizers.
| [
{
"created": "Sun, 12 Jul 2020 18:25:17 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Oct 2020 14:39:37 GMT",
"version": "v2"
}
] | 2020-10-05 | [
[
"Hamilton",
"Mark",
""
],
[
"Shelhamer",
"Evan",
""
],
[
"Freeman",
"William T.",
""
]
] | Many common loss functions such as mean-squared-error, cross-entropy, and reconstruction loss are unnecessarily rigid. Under a probabilistic interpretation, these common losses correspond to distributions with fixed shapes and scales. We instead argue for optimizing full likelihoods that include parameters like the normal variance and softmax temperature. Joint optimization of these "likelihood parameters" with model parameters can adaptively tune the scales and shapes of losses in addition to the strength of regularization. We explore and systematically evaluate how to parameterize and apply likelihood parameters for robust modeling, outlier-detection, and re-calibration. Additionally, we propose adaptively tuning $L_2$ and $L_1$ weights by fitting the scale parameters of normal and Laplace priors and introduce more flexible element-wise regularizers. |
1907.10628 | Vinod Kumar Kurmi | Vinod Kumar Kurmi, Vipul Bajaj, Venkatesh K Subramanian, Vinay P
Namboodiri | Curriculum based Dropout Discriminator for Domain Adaptation | BMVC 2019 Accepted, Project Page:
https://delta-lab-iitk.github.io/CD3A/ | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain adaptation is essential to enable wide usage of deep learning based
networks trained using large labeled datasets. Adversarial learning based
techniques have shown their utility towards solving this problem using a
discriminator that ensures source and target distributions are close. However,
here we suggest that rather than using a point estimate, it would be useful if
a distribution based discriminator could be used to bridge this gap. This could
be achieved using multiple classifiers or using traditional ensemble methods.
In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator
could suffice to obtain the distribution based discriminator. Specifically, we
propose a curriculum based dropout discriminator that gradually increases the
variance of the sample based distribution and the corresponding reverse
gradients are used to align the source and target feature representations. The
detailed results and thorough ablation analysis show that our model outperforms
state-of-the-art results.
| [
{
"created": "Wed, 24 Jul 2019 18:00:12 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Oct 2019 19:43:26 GMT",
"version": "v2"
}
] | 2019-10-22 | [
[
"Kurmi",
"Vinod Kumar",
""
],
[
"Bajaj",
"Vipul",
""
],
[
"Subramanian",
"Venkatesh K",
""
],
[
"Namboodiri",
"Vinay P",
""
]
] | Domain adaptation is essential to enable wide usage of deep learning based networks trained using large labeled datasets. Adversarial learning based techniques have shown their utility towards solving this problem using a discriminator that ensures source and target distributions are close. However, here we suggest that rather than using a point estimate, it would be useful if a distribution based discriminator could be used to bridge this gap. This could be achieved using multiple classifiers or using traditional ensemble methods. In contrast, we suggest that a Monte Carlo dropout based ensemble discriminator could suffice to obtain the distribution based discriminator. Specifically, we propose a curriculum based dropout discriminator that gradually increases the variance of the sample based distribution and the corresponding reverse gradients are used to align the source and target feature representations. The detailed results and thorough ablation analysis show that our model outperforms state-of-the-art results. |
2111.05039 | Gerald Kembellec | G\'erald Kembellec (DHIP = IHA, DICEN-IDF) | Multimodal intelligibility of scholarly hypertext: the documentalist's
contribution. A required collaboration for serial documentisation in the
scientific editorial process | in French, H2PTM, Oct 2021, Paris, France | null | null | null | cs.CL cs.IR cs.IT cs.SI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article shows that the boundaries between the editing and online
publishing professions are losing their strength. In this context it would only
make sense that the way hypertexts are documented be renewed, especially in the
face of the Web's evolution. We are thinking in particular of the trickier
process of documenting scholarly hypertexts - specifically in scientific or
cultural contexts. The purpose of this article is to demonstrate that,
considering the numerous branches of the Web, the hypertext enhancement of a
quality document can only be done through a proper dialogue between authors,
editors, and broadcasters. It would satisfy the readership, as they could reach
the appropriate information. It will also be shown that each actor in this
auctorial-editorial process would be a gainer. Indeed, a qualitative
formalization work would be coupled with a strong broadcasting scope. Finally,
we will point out that this work of mediation must be led by an actor of
information-communication, to make the text understandable to both humans and
machines. This mediative act is designated here under the term of serial
documentarisation.
| [
{
"created": "Tue, 9 Nov 2021 10:28:01 GMT",
"version": "v1"
}
] | 2021-11-10 | [
[
"Kembellec",
"Gérald",
"",
"DHIP = IHA, DICEN-IDF"
]
] | This article shows that the boundaries between the editing and online publishing professions are losing their strength. In this context it would only make sense that the way hypertexts are documented be renewed, especially in the face of the Web's evolution. We are thinking in particular of the trickier process of documenting scholarly hypertexts - specifically in scientific or cultural contexts. The purpose of this article is to demonstrate that, considering the numerous branches of the Web, the hypertext enhancement of a quality document can only be done through a proper dialogue between authors, editors, and broadcasters. It would satisfy the readership, as they could reach the appropriate information. It will also be shown that each actor in this auctorial-editorial process would be a gainer. Indeed, a qualitative formalization work would be coupled with a strong broadcasting scope. Finally, we will point out that this work of mediation must be led by an actor of information-communication, to make the text understandable to both humans and machines. This mediative act is designated here under the term of serial documentarisation. |
1007.5425 | Zhong Fan | Zhong Fan | Distributed Demand Response and User Adaptation in Smart Grids | null | null | null | null | cs.NI cs.DC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper proposes a distributed framework for demand response and user
adaptation in smart grid networks. In particular, we borrow the concept of
congestion pricing in Internet traffic control and show that pricing
information is very useful to regulate user demand and hence balance network
load. User preference is modeled as a willingness to pay parameter which can be
seen as an indicator of differential quality of service. Both analysis and
simulation results are presented to demonstrate the dynamics and convergence
behavior of the algorithm.
| [
{
"created": "Fri, 30 Jul 2010 12:16:45 GMT",
"version": "v1"
}
] | 2010-08-02 | [
[
"Fan",
"Zhong",
""
]
] | This paper proposes a distributed framework for demand response and user adaptation in smart grid networks. In particular, we borrow the concept of congestion pricing in Internet traffic control and show that pricing information is very useful to regulate user demand and hence balance network load. User preference is modeled as a willingness to pay parameter which can be seen as an indicator of differential quality of service. Both analysis and simulation results are presented to demonstrate the dynamics and convergence behavior of the algorithm. |
cs/0701140 | Corina P?s?reanu | Corina S. Pasareanu, Radek Pelanek, Willem Visser | Predicate Abstraction with Under-approximation Refinement | 22 pages, 3 figures, accepted for publication in Logical Methods in
Computer Science journal (special issue CAV 2005) | Logical Methods in Computer Science, Volume 3, Issue 1 (February
26, 2007) lmcs:2227 | 10.2168/LMCS-3(1:5)2007 | null | cs.GT | null | We propose an abstraction-based model checking method which relies on
refinement of an under-approximation of the feasible behaviors of the system
under analysis. The method preserves errors to safety properties, since all
analyzed behaviors are feasible by definition. The method does not require an
abstract transition relation to be generated, but instead executes the concrete
transitions while storing abstract versions of the concrete states, as
specified by a set of abstraction predicates. For each explored transition the
method checks, with the help of a theorem prover, whether there is any loss of
precision introduced by abstraction. The results of these checks are used to
decide termination or to refine the abstraction by generating new abstraction
predicates. If the (possibly infinite) concrete system under analysis has a
finite bisimulation quotient, then the method is guaranteed to eventually
explore an equivalent finite bisimilar structure. We illustrate the application
of the approach for checking concurrent programs.
| [
{
"created": "Mon, 22 Jan 2007 21:29:37 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2007 10:52:16 GMT",
"version": "v2"
}
] | 2017-01-11 | [
[
"Pasareanu",
"Corina S.",
""
],
[
"Pelanek",
"Radek",
""
],
[
"Visser",
"Willem",
""
]
] | We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. |
2307.11783 | Zongmin Liu | Zongmin Liu, Jirui Wang, Jie Li, Zufeng Li, Kai Ren, Peng Shi | A novel integrated method of detection-grasping for specific object
based on the box coordinate matching | null | null | null | null | cs.RO cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | To better care for the elderly and disabled, it is essential for service
robots to have an effective fusion method of object detection and grasp
estimation. However, limited research has been observed on the combination of
object detection and grasp estimation. To overcome this technical difficulty, a
novel integrated method of detection-grasping for specific objects based on
box coordinate matching is proposed in this paper. Firstly, the SOLOv2 instance
segmentation model is improved by adding channel attention module (CAM) and
spatial attention module (SAM). Then, the atrous spatial pyramid pooling (ASPP)
and CAM are added to the generative residual convolutional neural network
(GR-CNN) model to optimize grasp estimation. Furthermore, a detection-grasping
integrated algorithm based on box coordinate matching (DG-BCM) is proposed to
obtain the fusion model of object detection and grasp estimation. For
verification, experiments on object detection and grasp estimation are
conducted separately to verify the superiority of improved models.
Additionally, grasping tasks for several specific objects are implemented on a
simulation platform, demonstrating the feasibility and effectiveness of DG-BCM
algorithm proposed in this paper.
| [
{
"created": "Thu, 20 Jul 2023 12:23:12 GMT",
"version": "v1"
}
] | 2023-07-25 | [
[
"Liu",
"Zongmin",
""
],
[
"Wang",
"Jirui",
""
],
[
"Li",
"Jie",
""
],
[
"Li",
"Zufeng",
""
],
[
"Ren",
"Kai",
""
],
[
"Shi",
"Peng",
""
]
] | To better care for the elderly and disabled, it is essential for service robots to have an effective fusion method of object detection and grasp estimation. However, limited research has been observed on the combination of object detection and grasp estimation. To overcome this technical difficulty, a novel integrated method of detection-grasping for specific object based on the box coordinate matching is proposed in this paper. Firstly, the SOLOv2 instance segmentation model is improved by adding channel attention module (CAM) and spatial attention module (SAM). Then, the atrous spatial pyramid pooling (ASPP) and CAM are added to the generative residual convolutional neural network (GR-CNN) model to optimize grasp estimation. Furthermore, a detection-grasping integrated algorithm based on box coordinate matching (DG-BCM) is proposed to obtain the fusion model of object detection and grasp estimation. For verification, experiments on object detection and grasp estimation are conducted separately to verify the superiority of improved models. Additionally, grasping tasks for several specific objects are implemented on a simulation platform, demonstrating the feasibility and effectiveness of DG-BCM algorithm proposed in this paper. |
2102.02767 | Lukas Bernreiter | Lukas Bernreiter, Lionel Ott, Juan Nieto, Roland Siegwart and Cesar
Cadena | PHASER: a Robust and Correspondence-free Global Pointcloud Registration | null | IEEE Robotics and Automation Letters ( Volume: 6, Issue: 2, April
2021) | 10.1109/LRA.2021.3052418 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose PHASER, a correspondence-free global registration of
sensor-centric pointclouds that is robust to noise, sparsity, and partial
overlaps. Our method can seamlessly handle multimodal information and does not
rely on keypoint nor descriptor preprocessing modules. By exploiting properties
of Fourier analysis, PHASER operates directly on the sensor's signal, fusing
the spectra of multiple channels and computing the 6-DoF transformation based
on correlation. Our registration pipeline starts by finding the most likely
rotation followed by computing the most likely translation. Both estimates are
distributed according to a probability distribution that takes the underlying
manifold into account, i.e., a Bingham and Gaussian distribution, respectively.
This further allows our approach to consider the periodic-nature of rotations
and naturally represent its uncertainty. We extensively compare PHASER against
several well-known registration algorithms on both simulated datasets, and
real-world data acquired using different sensor configurations. Our results
show that PHASER can globally align pointclouds in less than 100ms with an
average accuracy of 2cm and 0.5deg, is resilient against noise, and can handle
partial overlap.
| [
{
"created": "Wed, 3 Feb 2021 11:07:37 GMT",
"version": "v1"
}
] | 2021-02-05 | [
[
"Bernreiter",
"Lukas",
""
],
[
"Ott",
"Lionel",
""
],
[
"Nieto",
"Juan",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Cadena",
"Cesar",
""
]
] | We propose PHASER, a correspondence-free global registration of sensor-centric pointclouds that is robust to noise, sparsity, and partial overlaps. Our method can seamlessly handle multimodal information and does not rely on keypoint nor descriptor preprocessing modules. By exploiting properties of Fourier analysis, PHASER operates directly on the sensor's signal, fusing the spectra of multiple channels and computing the 6-DoF transformation based on correlation. Our registration pipeline starts by finding the most likely rotation followed by computing the most likely translation. Both estimates are distributed according to a probability distribution that takes the underlying manifold into account, i.e., a Bingham and Gaussian distribution, respectively. This further allows our approach to consider the periodic-nature of rotations and naturally represent its uncertainty. We extensively compare PHASER against several well-known registration algorithms on both simulated datasets, and real-world data acquired using different sensor configurations. Our results show that PHASER can globally align pointclouds in less than 100ms with an average accuracy of 2cm and 0.5deg, is resilient against noise, and can handle partial overlap. |
2403.02151 | Zixuan Huang | Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam
Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, Yan-Pei
Cao | TripoSR: Fast 3D Object Reconstruction from a Single Image | Model: https://huggingface.co/stabilityai/TripoSR Code:
https://github.com/VAST-AI-Research/TripoSR Demo:
https://huggingface.co/spaces/stabilityai/TripoSR | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This technical report introduces TripoSR, a 3D reconstruction model
leveraging transformer architecture for fast feed-forward 3D generation,
producing 3D mesh from a single image in under 0.5 seconds. Building upon the
LRM network architecture, TripoSR integrates substantial improvements in data
processing, model design, and training techniques. Evaluations on public
datasets show that TripoSR exhibits superior performance, both quantitatively
and qualitatively, compared to other open-source alternatives. Released under
the MIT license, TripoSR is intended to empower researchers, developers, and
creatives with the latest advancements in 3D generative AI.
| [
{
"created": "Mon, 4 Mar 2024 16:00:56 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Tochilkin",
"Dmitry",
""
],
[
"Pankratz",
"David",
""
],
[
"Liu",
"Zexiang",
""
],
[
"Huang",
"Zixuan",
""
],
[
"Letts",
"Adam",
""
],
[
"Li",
"Yangguang",
""
],
[
"Liang",
"Ding",
""
],
[
"Laforte",
"Christian",
""
],
[
"Jampani",
"Varun",
""
],
[
"Cao",
"Yan-Pei",
""
]
] | This technical report introduces TripoSR, a 3D reconstruction model leveraging transformer architecture for fast feed-forward 3D generation, producing 3D mesh from a single image in under 0.5 seconds. Building upon the LRM network architecture, TripoSR integrates substantial improvements in data processing, model design, and training techniques. Evaluations on public datasets show that TripoSR exhibits superior performance, both quantitatively and qualitatively, compared to other open-source alternatives. Released under the MIT license, TripoSR is intended to empower researchers, developers, and creatives with the latest advancements in 3D generative AI. |
2004.09735 | Tatsuya Akutsu | Avraham A. Melkman, Sini Guo, Wai-Ki Ching, Pengyu Liu, Tatsuya Akutsu | On the Compressive Power of Boolean Threshold Autoencoders | 13 pages, 3 figures, 1 table | IEEE Transactions on Neural Networks and Learning Systems, 2021 | 10.1109/TNNLS.2021.3104646 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An autoencoder is a layered neural network whose structure can be viewed as
consisting of an encoder, which compresses an input vector of dimension $D$ to
a vector of low dimension $d$, and a decoder which transforms the
low-dimensional vector back to the original input vector (or one that is very
similar). In this paper we explore the compressive power of autoencoders that
are Boolean threshold networks by studying the numbers of nodes and layers that
are required to ensure that each vector in a given set of distinct input binary vectors is
transformed back to its original. We show that for any set of $n$ distinct
vectors there exists a seven-layer autoencoder with the smallest possible
middle layer, (i.e., its size is logarithmic in $n$), but that there is a set
of $n$ vectors for which there is no three-layer autoencoder with a middle
layer of the same size. In addition we present a kind of trade-off: if a
considerably larger middle layer is permissible then a five-layer autoencoder
does exist. We also study encoding by itself. The results we obtain suggest
that it is the decoding that constitutes the bottleneck of autoencoding. For
example, there always is a three-layer Boolean threshold encoder that
compresses $n$ vectors into a dimension that is reduced to twice the logarithm
of $n$.
| [
{
"created": "Tue, 21 Apr 2020 03:21:43 GMT",
"version": "v1"
}
] | 2023-09-21 | [
[
"Melkman",
"Avraham A.",
""
],
[
"Guo",
"Sini",
""
],
[
"Ching",
"Wai-Ki",
""
],
[
"Liu",
"Pengyu",
""
],
[
"Akutsu",
"Tatsuya",
""
]
] | An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector of dimension $D$ to a vector of low dimension $d$, and a decoder which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this paper we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers that are required to ensure that each vector in a given set of distinct input binary vectors is transformed back to its original. We show that for any set of $n$ distinct vectors there exists a seven-layer autoencoder with the smallest possible middle layer (i.e., its size is logarithmic in $n$), but that there is a set of $n$ vectors for which there is no three-layer autoencoder with a middle layer of the same size. In addition we present a kind of trade-off: if a considerably larger middle layer is permissible then a five-layer autoencoder does exist. We also study encoding by itself. The results we obtain suggest that it is the decoding that constitutes the bottleneck of autoencoding. For example, there always is a three-layer Boolean threshold encoder that compresses $n$ vectors into a dimension that is reduced to twice the logarithm of $n$. |
2210.03618 | Benjamin Doerr | Benjamin Doerr, Omar El Hadri, Adrien Pinard | The $(1+(\lambda,\lambda))$ Global SEMO Algorithm | Author generated version of a paper at GECCO 2022 | The (1 + ({\lambda}, {\lambda})) global SEMO algorithm. GECCO
2022: 520-528. ACM | 10.1145/3512290.3528868 | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The $(1+(\lambda,\lambda))$ genetic algorithm is a recently proposed
single-objective evolutionary algorithm with several interesting properties. We
show that its main working principle, mutation with a high rate and crossover
as repair mechanism, can also be transported to multi-objective evolutionary
computation. We define the $(1+(\lambda,\lambda))$ global SEMO algorithm, a
variant of the classic global SEMO algorithm, and prove that it optimizes the
OneMinMax benchmark asymptotically faster than the global SEMO. Following the
single-objective example, we design a one-fifth rule inspired dynamic parameter
setting (to the best of our knowledge for the first time in discrete
multi-objective optimization) and prove that it further improves the runtime to
$O(n^2)$, whereas the best runtime guarantee for the global SEMO is only $O(n^2
\log n)$.
| [
{
"created": "Fri, 7 Oct 2022 15:18:32 GMT",
"version": "v1"
}
] | 2022-10-10 | [
[
"Doerr",
"Benjamin",
""
],
[
"Hadri",
"Omar El",
""
],
[
"Pinard",
"Adrien",
""
]
] | The $(1+(\lambda,\lambda))$ genetic algorithm is a recently proposed single-objective evolutionary algorithm with several interesting properties. We show that its main working principle, mutation with a high rate and crossover as repair mechanism, can be transported also to multi-objective evolutionary computation. We define the $(1+(\lambda,\lambda))$ global SEMO algorithm, a variant of the classic global SEMO algorithm, and prove that it optimizes the OneMinMax benchmark asymptotically faster than the global SEMO. Following the single-objective example, we design a one-fifth rule inspired dynamic parameter setting (to the best of our knowledge for the first time in discrete multi-objective optimization) and prove that it further improves the runtime to $O(n^2)$, whereas the best runtime guarantee for the global SEMO is only $O(n^2 \log n)$. |
2006.09265 | Wenda Li | Wenda Li and Lei Yu and Yuhuai Wu and Lawrence C. Paulson | IsarStep: a Benchmark for High-level Mathematical Reasoning | 9 pages, published at ICLR 2021 | null | null | null | cs.LO cs.AI cs.CL cs.LG cs.PL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A well-defined benchmark is essential for measuring and accelerating research
progress of machine learning models. In this paper, we present a benchmark for
high-level mathematical reasoning and study the reasoning capabilities of
neural sequence-to-sequence models. We build a non-synthetic dataset from the
largest repository of proofs written by human experts in a theorem prover. The
dataset has a broad coverage of undergraduate and research-level mathematical
and computer science theorems. In our defined task, a model is required to fill
in a missing intermediate proposition given surrounding proofs. This task
provides a starting point for the long-term goal of having machines generate
human-readable proofs automatically. Our experiments and analysis reveal that
while the task is challenging, neural models can capture non-trivial
mathematical reasoning. We further design a hierarchical transformer that
outperforms the transformer baseline.
| [
{
"created": "Sat, 13 Jun 2020 21:09:23 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Mar 2021 16:45:18 GMT",
"version": "v2"
}
] | 2021-03-25 | [
[
"Li",
"Wenda",
""
],
[
"Yu",
"Lei",
""
],
[
"Wu",
"Yuhuai",
""
],
[
"Paulson",
"Lawrence C.",
""
]
] | A well-defined benchmark is essential for measuring and accelerating research progress of machine learning models. In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models. We build a non-synthetic dataset from the largest repository of proofs written by human experts in a theorem prover. The dataset has a broad coverage of undergraduate and research-level mathematical and computer science theorems. In our defined task, a model is required to fill in a missing intermediate proposition given surrounding proofs. This task provides a starting point for the long-term goal of having machines generate human-readable proofs automatically. Our experiments and analysis reveal that while the task is challenging, neural models can capture non-trivial mathematical reasoning. We further design a hierarchical transformer that outperforms the transformer baseline. |
2312.05534 | Yansheng Wu | Yansheng Wu, Cunsheng Ding, Tingfang Chen | Extended codes and deep holes of MDS codes | 22 pages, submitted for possible publication | null | null | null | cs.IT math.CO math.IT | http://creativecommons.org/licenses/by/4.0/ | For a given linear code $\C$ of length $n$ over $\gf(q)$ and a nonzero vector
$\bu$ in $\gf(q)^n$, Sun, Ding and Chen defined an extended linear code
$\overline{\C}(\bu)$ of $\C$, which is a generalisation of the classical
extended code $\overline{\C}(-\bone)$ of $\C$ and called the second kind of an
extended code of $\C$ (see arXiv:2307.04076 and arXiv:2307.08053). They
developed some general theory of the extended codes $\overline{\C}(\bu)$ and
studied the extended codes $\overline{\C}(\bu)$ of several families of linear
codes, including cyclic codes, projective two-weight codes, nonbinary Hamming
codes, and a family of reversible MDS cyclic codes. The objective of this paper
is to investigate the extended codes $\overline{\C}(\bu)$ of MDS codes $\C$
over finite fields. The main result of this paper is that the extended code
$\overline{\C}(\bu)$ of an MDS $[n,k]$ code $\C$ remains MDS if and only if the
covering radius $\rho(\mathcal{C}^{\bot})=k$ and the vector $\bu$ is a deep
hole of the dual code $\C^\perp$. As applications of this main result, the
extended codes of the GRS codes and extended GRS codes are investigated and the
covering radii of several families of MDS codes are determined.
| [
{
"created": "Sat, 9 Dec 2023 10:55:11 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Wu",
"Yansheng",
""
],
[
"Ding",
"Cunsheng",
""
],
[
"Chen",
"Tingfang",
""
]
] | For a given linear code $\C$ of length $n$ over $\gf(q)$ and a nonzero vector $\bu$ in $\gf(q)^n$, Sun, Ding and Chen defined an extended linear code $\overline{\C}(\bu)$ of $\C$, which is a generalisation of the classical extended code $\overline{\C}(-\bone)$ of $\C$ and called the second kind of an extended code of $\C$ (see arXiv:2307.04076 and arXiv:2307.08053). They developed some general theory of the extended codes $\overline{\C}(\bu)$ and studied the extended codes $\overline{\C}(\bu)$ of several families of linear codes, including cyclic codes, projective two-weight codes, nonbinary Hamming codes, and a family of reversible MDS cyclic codes. The objective of this paper is to investigate the extended codes $\overline{\C}(\bu)$ of MDS codes $\C$ over finite fields. The main result of this paper is that the extended code $\overline{\C}(\bu)$ of an MDS $[n,k]$ code $\C$ remains MDS if and only if the covering radius $\rho(\mathcal{C}^{\bot})=k$ and the vector $\bu$ is a deep hole of the dual code $\C^\perp$. As applications of this main result, the extended codes of the GRS codes and extended GRS codes are investigated and the covering radii of several families of MDS codes are determined. |
1702.05251 | Robert Falkenberg | Robert Falkenberg and Benjamin Sliwa and Christian Wietfeld | Rushing Full Speed with LTE-Advanced is Economical -- A Power
Consumption Analysis | null | 2017 IEEE 85th Vehicular Technology Conference (VTC Spring) | 10.1109/VTCSpring.2017.8108515 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boosting data rates in LTE mobile networks is one of the key features of
LTE-Advanced. This improved user experience is achieved by Carrier Aggregation
(CA), in which the available spectrum of an operator is bundled out of several
frequency bands. Accordingly, the user equipment has to supply multiple
reception chains and therefore consumes considerably more power during a
transmission. On the other hand, transmissions terminate faster, which enables
a quick switchover into energy-saving mode. To examine these opposing
effects, empirical analyses of existing devices are first carried out.
Subsequently, we present a new CA enhancement of an existing context-aware
power consumption model which incorporates the development density of the
environment and the mobile device mobility. Based on the extended model we
perform a detailed power consumption analysis and show that CA leads to power
savings of 31% if the data rate doubled for large file transmissions. In
addition, we show that CA can lead to power savings even from a data rate
increase of 25%, regardless of mobility and urban development density. Besides,
the measurement results show that CA operated in the same band leads to a lower
power consumption than inter-band CA.
| [
{
"created": "Fri, 17 Feb 2017 08:20:40 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Feb 2020 09:16:46 GMT",
"version": "v2"
}
] | 2020-02-21 | [
[
"Falkenberg",
"Robert",
""
],
[
"Sliwa",
"Benjamin",
""
],
[
"Wietfeld",
"Christian",
""
]
] | Boosting data rates in LTE mobile networks is one of the key features of LTE-Advanced. This improved user experience is achieved by Carrier Aggregation (CA), in which the available spectrum of an operator is bundled out of several frequency bands. Accordingly, the user equipment has to supply multiple reception chains and therefore consumes considerably more power during a transmission. On the other hand, transmissions terminate faster, which enables a quick switchover into energy-saving mode. In order to examine these opposed facts, empirical analyses of existing devices are first carried out. Subsequently, we present a new CA enhancement of an existing context-aware power consumption model which incorporates the development density of the environment and the mobile device mobility. Based on the extended model we perform a detailed power consumption analysis and show that CA leads to power savings of 31% if the data rate doubled for large file transmissions. In addition, we show that CA can lead to power savings even from a data rate increase of 25%, regardless of mobility and urban development density. Besides, the measurement results show that CA operated in the same band leads to a lower power consumption than inter-band CA. |
2302.14817 | Peng Wang | Weihua Wu, Peng Wang, Yuan Zhang, Weijia Han, He Yi and Tony Q. S.
Quek | A Cooperative Content Dissemination Framework for Fog-Based Internet of
Vehicles | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the fog-based internet of vehicles (IoV) is equipped with rich perception,
computing, communication and storage resources, it provides a new solution for
bulk data processing. However, the impact caused by the mobility of
vehicles brings a challenge to the content scheduling and resource allocation
of content dissemination service. In this paper, we propose a time-varying
resource relationship graph to model the intertwined impact of the perception,
computation, communication and storage resources across multiple snapshots on
the content dissemination process of IoV. Based on this graph model, the
content dissemination process is modeled as a mathematical optimization
problem, where the quality of service of both delay tolerant and delay
sensitive services is considered. Owing to its NP-completeness, the
optimization problem is decomposed into a joint link and subchannel scheduling
subproblem as well as a joint power and flow control subproblem. Then, a
cascaded low complexity scheduling algorithm is proposed for the joint link and
subchannel scheduling subproblem. Moreover, a robust resource management
algorithm is developed for the power and flow control subproblem, where the
channel uncertainties in future snapshots are fully considered in the
algorithm. Finally, we conduct simulations to show that the proposed
approaches outperform other state-of-the-art approaches.
| [
{
"created": "Mon, 20 Feb 2023 13:26:03 GMT",
"version": "v1"
}
] | 2023-03-01 | [
[
"Wu",
"Weihua",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Han",
"Weijia",
""
],
[
"Yi",
"He",
""
],
[
"Quek",
"Tony Q. S.",
""
]
] | As the fog-based internet of vehicles (IoV) is equipped with rich perception, computing, communication and storage resources, it provides a new solution for bulk data processing. However, the impact caused by the mobility of vehicles brings a challenge to the content scheduling and resource allocation of content dissemination service. In this paper, we propose a time-varying resource relationship graph to model the intertwined impact of the perception, computation, communication and storage resources across multiple snapshots on the content dissemination process of IoV. Based on this graph model, the content dissemination process is modeled as a mathematical optimization problem, where the quality of service of both delay tolerant and delay sensitive services is considered. Owing to its NP-completeness, the optimization problem is decomposed into a joint link and subchannel scheduling subproblem as well as a joint power and flow control subproblem. Then, a cascaded low complexity scheduling algorithm is proposed for the joint link and subchannel scheduling subproblem. Moreover, a robust resource management algorithm is developed for the power and flow control subproblem, where the channel uncertainties in future snapshots are fully considered in the algorithm. Finally, we conduct simulations to show that the proposed approaches outperform other state-of-the-art approaches. |
2111.14596 | Jun Zhao Dr | Anirudh Ekambaranathan and Jun Zhao and Max Van Kleek | "Money makes the world go around": Identifying Barriers to Better
Privacy in Children's Apps From Developers' Perspectives | 15 pages, 4 tables, Proceedings of the 2021 ACM Conference on Human
Factors in Computing Systems | Proceedings of the 2021 ACM Conference on Human Factors in
Computing Systems | 10.1145/3411764.3445599 | null | cs.HC cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The industry for children's apps is thriving at the cost of children's
privacy: these apps routinely disclose children's data to multiple data
trackers and ad networks. As children spend increasing time online, such
exposure accumulates to long-term privacy risks. In this paper, we used a
mixed-methods approach to investigate why this is happening and how developers
might change their practices. We base our analysis against 5 leading data
protection frameworks that set out requirements and recommendations for data
collection in children's apps. To understand developers' perspectives and
constraints, we conducted 134 surveys and 20 semi-structured interviews with
popular Android children's app developers. Our analysis revealed that
developers largely respect children's best interests; however, they have to
make compromises due to limited monetisation options, perceived harmlessness of
certain third-party libraries, and lack of availability of design guidelines.
We identified concrete approaches and directions for future research to help
overcome these barriers.
| [
{
"created": "Mon, 29 Nov 2021 15:27:55 GMT",
"version": "v1"
}
] | 2021-11-30 | [
[
"Ekambaranathan",
"Anirudh",
""
],
[
"Zhao",
"Jun",
""
],
[
"Van Kleek",
"Max",
""
]
] | The industry for children's apps is thriving at the cost of children's privacy: these apps routinely disclose children's data to multiple data trackers and ad networks. As children spend increasing time online, such exposure accumulates to long-term privacy risks. In this paper, we used a mixed-methods approach to investigate why this is happening and how developers might change their practices. We base our analysis against 5 leading data protection frameworks that set out requirements and recommendations for data collection in children's apps. To understand developers' perspectives and constraints, we conducted 134 surveys and 20 semi-structured interviews with popular Android children's app developers. Our analysis revealed that developers largely respect children's best interests; however, they have to make compromises due to limited monetisation options, perceived harmlessness of certain third-party libraries, and lack of availability of design guidelines. We identified concrete approaches and directions for future research to help overcome these barriers. |
2205.00388 | Shutong Ni | Shutong Ni | Abnormal-aware Multi-person Evaluation System with Improved Fuzzy
Weighting | 13 pages, 5 figures | null | null | null | cs.CY cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subjectivity is highly present in the daily
evaluation process. Our research primarily concentrates on a multi-person
evaluation system with anomaly detection to minimize the possible inaccuracy
that subjective assessment brings. We choose the two-stage screening method,
which consists of rough screening and score-weighted Kendall-$\tau$ Distance to
winnow out abnormal data, coupled with hypothesis testing to narrow global
discrepancy. Then we use the Fuzzy Synthetic Evaluation Method (FSE) to determine
the significance of scores given by reviewers as well as their reliability,
culminating in a more impartial weight for each reviewer in the final
conclusion. The results demonstrate a clear and comprehensive ranking instead
of unilateral scores, with efficient filtering of abnormal data and a
reasonably objective weight-determination mechanism. Our study suggests that a
multi-person evaluation system can be adapted to attain both equity and a
relatively superior competitive atmosphere.
| [
{
"created": "Sun, 1 May 2022 03:42:43 GMT",
"version": "v1"
}
] | 2022-05-03 | [
[
"Ni",
"Shutong",
""
]
] | Subjectivity is highly present in the daily evaluation process. Our research primarily concentrates on a multi-person evaluation system with anomaly detection to minimize the possible inaccuracy that subjective assessment brings. We choose the two-stage screening method, which consists of rough screening and score-weighted Kendall-$\tau$ Distance to winnow out abnormal data, coupled with hypothesis testing to narrow global discrepancy. Then we use the Fuzzy Synthetic Evaluation Method (FSE) to determine the significance of scores given by reviewers as well as their reliability, culminating in a more impartial weight for each reviewer in the final conclusion. The results demonstrate a clear and comprehensive ranking instead of unilateral scores, with efficient filtering of abnormal data and a reasonably objective weight-determination mechanism. Our study suggests that a multi-person evaluation system can be adapted to attain both equity and a relatively superior competitive atmosphere. |
2306.03571 | Florian Adriaens | Florian Adriaens, Honglian Wang, Aristides Gionis | Minimizing Hitting Time between Disparate Groups with Shortcut Edges | To appear in KDD 2023 | null | null | null | cs.SI cs.DS | http://creativecommons.org/licenses/by/4.0/ | Structural bias or segregation of networks refers to situations where two or
more disparate groups are present in the network, so that the groups are highly
connected internally, but loosely connected to each other. In many cases it is
of interest to increase the connectivity of disparate groups so as to, e.g.,
minimize social friction, or expose individuals to diverse viewpoints. A
commonly-used mechanism for increasing the network connectivity is to add edge
shortcuts between pairs of nodes. In many applications of interest, edge
shortcuts typically translate to recommendations, e.g., what video to watch, or
what news article to read next. The problem of reducing structural bias or
segregation via edge shortcuts has recently been studied in the literature, and
random walks have been an essential tool for modeling navigation and
connectivity in the underlying networks. Existing methods, however, either do
not offer approximation guarantees, or engineer the objective so that it
satisfies certain desirable properties that simplify the optimization task. In
this paper we address the problem of adding a given number of shortcut edges in
the network so as to directly minimize the average hitting time and the maximum
hitting time between two disparate groups. Our algorithm for minimizing average
hitting time is a greedy bicriteria algorithm that relies on supermodularity.
In contrast, maximum hitting time is not supermodular. Despite this, we develop
an approximation algorithm for that objective as well, by leveraging connections
with average hitting time and the asymmetric k-center problem.
| [
{
"created": "Tue, 6 Jun 2023 10:37:37 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Jun 2023 18:32:49 GMT",
"version": "v2"
}
] | 2023-06-21 | [
[
"Adriaens",
"Florian",
""
],
[
"Wang",
"Honglian",
""
],
[
"Gionis",
"Aristides",
""
]
] | Structural bias or segregation of networks refers to situations where two or more disparate groups are present in the network, so that the groups are highly connected internally, but loosely connected to each other. In many cases it is of interest to increase the connectivity of disparate groups so as to, e.g., minimize social friction, or expose individuals to diverse viewpoints. A commonly-used mechanism for increasing the network connectivity is to add edge shortcuts between pairs of nodes. In many applications of interest, edge shortcuts typically translate to recommendations, e.g., what video to watch, or what news article to read next. The problem of reducing structural bias or segregation via edge shortcuts has recently been studied in the literature, and random walks have been an essential tool for modeling navigation and connectivity in the underlying networks. Existing methods, however, either do not offer approximation guarantees, or engineer the objective so that it satisfies certain desirable properties that simplify the optimization task. In this paper we address the problem of adding a given number of shortcut edges in the network so as to directly minimize the average hitting time and the maximum hitting time between two disparate groups. Our algorithm for minimizing average hitting time is a greedy bicriteria algorithm that relies on supermodularity. In contrast, maximum hitting time is not supermodular. Despite this, we develop an approximation algorithm for that objective as well, by leveraging connections with average hitting time and the asymmetric k-center problem. |
1506.00330 | Shuqiao Jia | Shuqiao Jia and Behnaam Aazhang | Signaling Design of Two-Way MIMO Full-Duplex Channel: Optimality Under
Imperfect Transmit Front-End Chain | submitted to IEEE transactions on wireless communications | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We derive the optimal signaling for a multiple input multiple output (MIMO)
full-duplex two-way channel under the imperfect transmit front-end chain. We
characterize the two-way rates of the channel by using a game-theoretical
approach, where we focus on the Pareto boundary of the achievable rate region
and Nash equilibria (NE). For a MISO full-duplex two-way channel, we prove that
beamforming is an optimal transmission strategy which can achieve any point on
the Pareto boundary. Furthermore, we present a closed-form expression for the
optimal beamforming weights. In our numerical examples we quantify gains in the
achievable rates of the proposed beamforming over the zero-forcing beamforming.
For a general MIMO full-duplex channel, we establish the existence of NE and
present a condition for the uniqueness of NE. We then propose an iterative
water-filling algorithm which is capable of reaching NE. Through simulations
the threshold of the self-interference level is found, below which the
full-duplex NE outperforms the half-duplex TDMA.
| [
{
"created": "Mon, 1 Jun 2015 02:37:33 GMT",
"version": "v1"
}
] | 2015-06-02 | [
[
"Jia",
"Shuqiao",
""
],
[
"Aazhang",
"Behnaam",
""
]
] | We derive the optimal signaling for a multiple input multiple output (MIMO) full-duplex two-way channel under the imperfect transmit front-end chain. We characterize the two-way rates of the channel by using a game-theoretical approach, where we focus on the Pareto boundary of the achievable rate region and Nash equilibria (NE). For a MISO full-duplex two-way channel, we prove that beamforming is an optimal transmission strategy which can achieve any point on the Pareto boundary. Furthermore, we present a closed-form expression for the optimal beamforming weights. In our numerical examples we quantify gains in the achievable rates of the proposed beamforming over the zero-forcing beamforming. For a general MIMO full-duplex channel, we establish the existence of NE and present a condition for the uniqueness of NE. We then propose an iterative water-filling algorithm which is capable of reaching NE. Through simulations the threshold of the self-interference level is found, below which the full-duplex NE outperforms the half-duplex TDMA. |
1409.5114 | Shuxin Ouyang | Shuxin Ouyang, Timothy Hospedales, Yi-Zhe Song, Xueming Li | A Survey on Heterogeneous Face Recognition: Sketch, Infra-red, 3D and
Low-resolution | survey paper(35 pages) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heterogeneous face recognition (HFR) refers to matching face imagery across
different domains. It has received much interest from the research community as
a result of its profound implications in law enforcement. A wide variety of new
invariant features, cross-modality matching models and heterogeneous datasets
have been established in recent years. This survey provides a comprehensive review
of established techniques and recent developments in HFR. Moreover, we offer a
detailed account of datasets and benchmarks commonly used for evaluation. We
finish by assessing the state of the field and discussing promising directions
for future research.
| [
{
"created": "Wed, 17 Sep 2014 19:55:34 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Oct 2014 13:23:30 GMT",
"version": "v2"
}
] | 2014-10-13 | [
[
"Ouyang",
"Shuxin",
""
],
[
"Hospedales",
"Timothy",
""
],
[
"Song",
"Yi-Zhe",
""
],
[
"Li",
"Xueming",
""
]
] | Heterogeneous face recognition (HFR) refers to matching face imagery across different domains. It has received much interest from the research community as a result of its profound implications in law enforcement. A wide variety of new invariant features, cross-modality matching models and heterogeneous datasets have been established in recent years. This survey provides a comprehensive review of established techniques and recent developments in HFR. Moreover, we offer a detailed account of datasets and benchmarks commonly used for evaluation. We finish by assessing the state of the field and discussing promising directions for future research. |
2205.12538 | Mihir Parmar | Pruthvi Patel, Swaroop Mishra, Mihir Parmar, Chitta Baral | Is a Question Decomposition Unit All We Need? | EMNLP 2022 (17 pages) | null | null | null | cs.CL cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LMs) have achieved state-of-the-art performance on
many Natural Language Processing (NLP) benchmarks. With the growing number of
new benchmarks, we build bigger and more complex LMs. However, building new LMs
may not be an ideal option owing to the cost, time and environmental impact
associated with it. We explore an alternative route: can we modify data by
expressing it in terms of the model's strengths, so that a question becomes
easier for models to answer? We investigate if humans can decompose a hard
question into a set of simpler questions that are relatively easier for models
to solve. We analyze a range of datasets involving various forms of reasoning
and find that it is indeed possible to significantly improve model performance
(24% for GPT3 and 29% for RoBERTa-SQuAD along with a symbolic calculator) via
decomposition. Our approach provides a viable option to involve people in NLP
research in a meaningful way. Our findings indicate that Human-in-the-loop
Question Decomposition (HQD) can potentially provide an alternate path to
building large LMs. Code and data are available at
https://github.com/Pruthvi98/QuestionDecomposition
| [
{
"created": "Wed, 25 May 2022 07:24:09 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Oct 2022 21:32:49 GMT",
"version": "v2"
}
] | 2022-10-28 | [
[
"Patel",
"Pruthvi",
""
],
[
"Mishra",
"Swaroop",
""
],
[
"Parmar",
"Mihir",
""
],
[
"Baral",
"Chitta",
""
]
] | Large Language Models (LMs) have achieved state-of-the-art performance on many Natural Language Processing (NLP) benchmarks. With the growing number of new benchmarks, we build bigger and more complex LMs. However, building new LMs may not be an ideal option owing to the cost, time and environmental impact associated with it. We explore an alternative route: can we modify data by expressing it in terms of the model's strengths, so that a question becomes easier for models to answer? We investigate if humans can decompose a hard question into a set of simpler questions that are relatively easier for models to solve. We analyze a range of datasets involving various forms of reasoning and find that it is indeed possible to significantly improve model performance (24% for GPT3 and 29% for RoBERTa-SQuAD along with a symbolic calculator) via decomposition. Our approach provides a viable option to involve people in NLP research in a meaningful way. Our findings indicate that Human-in-the-loop Question Decomposition (HQD) can potentially provide an alternate path to building large LMs. Code and data is available at https://github.com/Pruthvi98/QuestionDecomposition |
2211.12051 | Hao Shen | Hao Shen, Zhong-Qiu Zhao, Wandi Zhang | Adaptive Dynamic Filtering Network for Image Denoising | 9 pages, Accepted in AAAI Conference on Artificial Intelligence
(AAAI) 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In image denoising networks, feature scaling is widely used to enlarge the
receptive field size and reduce computational costs. This practice, however,
also leads to the loss of high-frequency information and fails to consider
within-scale characteristics. Recently, dynamic convolution has exhibited
powerful capabilities in processing high-frequency information (e.g., edges,
corners, textures), but previous works lack sufficient spatial contextual
information in filter generation. To alleviate these issues, we propose to
employ dynamic convolution to improve the learning of high-frequency and
multi-scale features. Specifically, we design a spatially enhanced kernel
generation (SEKG) module to improve dynamic convolution, enabling the learning
of spatial context information with a very low computational complexity. Based
on the SEKG module, we propose a dynamic convolution block (DCB) and a
multi-scale dynamic convolution block (MDCB). The former enhances the
high-frequency information via dynamic convolution and preserves low-frequency
information via skip connections. The latter utilizes shared adaptive dynamic
kernels and the idea of dilated convolution to achieve efficient multi-scale
feature extraction. The proposed multi-dimension feature integration (MFI)
mechanism further fuses the multi-scale features, providing precise and
contextually enriched feature representations. Finally, we build an efficient
denoising network with the proposed DCB and MDCB, named ADFNet. It achieves
better performance with low computational complexity on real-world and
synthetic Gaussian noisy datasets. The source code is available at
https://github.com/it-hao/ADFNet.
| [
{
"created": "Tue, 22 Nov 2022 06:54:27 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Nov 2022 05:56:20 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Apr 2023 01:14:24 GMT",
"version": "v3"
}
] | 2023-04-04 | [
[
"Shen",
"Hao",
""
],
[
"Zhao",
"Zhong-Qiu",
""
],
[
"Zhang",
"Wandi",
""
]
] | In image denoising networks, feature scaling is widely used to enlarge the receptive field size and reduce computational costs. This practice, however, also leads to the loss of high-frequency information and fails to consider within-scale characteristics. Recently, dynamic convolution has exhibited powerful capabilities in processing high-frequency information (e.g., edges, corners, textures), but previous works lack sufficient spatial contextual information in filter generation. To alleviate these issues, we propose to employ dynamic convolution to improve the learning of high-frequency and multi-scale features. Specifically, we design a spatially enhanced kernel generation (SEKG) module to improve dynamic convolution, enabling the learning of spatial context information with a very low computational complexity. Based on the SEKG module, we propose a dynamic convolution block (DCB) and a multi-scale dynamic convolution block (MDCB). The former enhances the high-frequency information via dynamic convolution and preserves low-frequency information via skip connections. The latter utilizes shared adaptive dynamic kernels and the idea of dilated convolution to achieve efficient multi-scale feature extraction. The proposed multi-dimension feature integration (MFI) mechanism further fuses the multi-scale features, providing precise and contextually enriched feature representations. Finally, we build an efficient denoising network with the proposed DCB and MDCB, named ADFNet. It achieves better performance with low computational complexity on real-world and synthetic Gaussian noisy datasets. The source code is available at https://github.com/it-hao/ADFNet. |
2406.15808 | Mike Perkins | Jasper Roe (1), Mike Perkins (2), Daniel Ruelle (3) ((1) James Cook
University Singapore, (2) British University Vietnam, (3) VinUniversity) | Understanding Student and Academic Staff Perceptions of AI Use in
Assessment and Feedback | null | null | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rise of Artificial Intelligence (AI) and Generative Artificial
Intelligence (GenAI) in higher education necessitates assessment reform. This
study addresses a critical gap by exploring student and academic staff
experiences with AI and GenAI tools, focusing on their familiarity and comfort
with current and potential future applications in learning and assessment. An
online survey collected data from 35 academic staff and 282 students across two
universities in Vietnam and one in Singapore, examining GenAI familiarity,
perceptions of its use in assessment marking and feedback, knowledge checking
and participation, and experiences of GenAI text detection.
Descriptive statistics and reflexive thematic analysis revealed a generally
low familiarity with GenAI among both groups. GenAI feedback was viewed
negatively; however, it was viewed more positively when combined with
instructor feedback. Academic staff were more accepting of GenAI text detection
tools and grade adjustments based on detection results compared to students.
Qualitative analysis identified three themes: unclear understanding of text
detection tools, variability in experiences with GenAI detectors, and mixed
feelings about GenAI's future impact on educational assessment. These findings
have major implications regarding the development of policies and practices for
GenAI-enabled assessment and feedback in higher education.
| [
{
"created": "Sat, 22 Jun 2024 10:25:01 GMT",
"version": "v1"
}
] | 2024-06-25 | [
[
"Roe",
"Jasper",
""
],
[
"Perkins",
"Mike",
""
],
[
"Ruelle",
"Daniel",
""
]
] | The rise of Artificial Intelligence (AI) and Generative Artificial Intelligence (GenAI) in higher education necessitates assessment reform. This study addresses a critical gap by exploring student and academic staff experiences with AI and GenAI tools, focusing on their familiarity and comfort with current and potential future applications in learning and assessment. An online survey collected data from 35 academic staff and 282 students across two universities in Vietnam and one in Singapore, examining GenAI familiarity, perceptions of its use in assessment marking and feedback, knowledge checking and participation, and experiences of GenAI text detection. Descriptive statistics and reflexive thematic analysis revealed a generally low familiarity with GenAI among both groups. GenAI feedback was viewed negatively; however, it was viewed more positively when combined with instructor feedback. Academic staff were more accepting of GenAI text detection tools and grade adjustments based on detection results compared to students. Qualitative analysis identified three themes: unclear understanding of text detection tools, variability in experiences with GenAI detectors, and mixed feelings about GenAI's future impact on educational assessment. These findings have major implications regarding the development of policies and practices for GenAI-enabled assessment and feedback in higher education. |
cs/0511106 | Sergiu Chelcea | Sergiu Theodor Chelcea (INRIA Rocquencourt / INRIA Sophia Antipolis),
Alzennyr Da Silva (INRIA Rocquencourt / INRIA Sophia Antipolis), Yves
Lechevallier (INRIA Rocquencourt / INRIA Sophia Antipolis), Doru Tanasa
(INRIA Rocquencourt / INRIA Sophia Antipolis), Brigitte Trousse (INRIA
Rocquencourt / INRIA Sophia Antipolis) | Benefits of InterSite Pre-Processing and Clustering Methods in
E-Commerce Domain | null | In Proceedings of the ECML/PKDD2005 Discovery Challenge, A
Collaborative Effort in Knowledge Discovery from Databases | null | null | cs.DB | null | This paper presents our preprocessing and clustering analysis on the
clickstream dataset proposed for the ECMLPKDD 2005 Discovery Challenge. The
main contributions of this article are twofold. First, after presenting the
clickstream dataset, we show how we build a rich data warehouse based on
advanced preprocessing. We take into account the intersite aspects in the given
e-commerce domain, which offers an interesting data structuring. A preliminary
statistical analysis based on time-period clickstreams is given, emphasizing the
importance of intersite user visits in such a context. Secondly, we describe
our crossed-clustering method, which is applied to data generated from our data
warehouse. Our preliminary results are interesting and promising, illustrating
the benefits of our WUM methods, even if more investigations are needed on the
same dataset.
| [
{
"created": "Wed, 30 Nov 2005 16:12:38 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Chelcea",
"Sergiu Theodor",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Da Silva",
"Alzennyr",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Lechevallier",
"Yves",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Tanasa",
"Doru",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Trousse",
"Brigitte",
"",
"INRIA\n Rocquencourt / INRIA Sophia Antipolis"
]
] | This paper presents our preprocessing and clustering analysis on the clickstream dataset proposed for the ECMLPKDD 2005 Discovery Challenge. The main contributions of this article are twofold. First, after presenting the clickstream dataset, we show how we build a rich data warehouse based on advanced preprocessing. We take into account the intersite aspects in the given e-commerce domain, which offers an interesting data structuring. A preliminary statistical analysis based on time-period clickstreams is given, emphasizing the importance of intersite user visits in such a context. Secondly, we describe our crossed-clustering method, which is applied to data generated from our data warehouse. Our preliminary results are interesting and promising, illustrating the benefits of our WUM methods, even if more investigations are needed on the same dataset. |
2106.02009 | Mario Graff | Eric S. Tellez, Sabino Miranda-Jim\'enez, Mario Graff, Daniela
Moctezuma, Oscar S. Siodia, and Elio A. Villase\~nor | A Case Study of Spanish Text Transformations for Twitter Sentiment
Analysis | null | null | 10.1016/j.eswa.2017.03.071 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment analysis is a text mining task that determines the polarity of a
given text, i.e., its positiveness or negativeness. Recently, it has received a
lot of attention given the interest in opinion mining in micro-blogging
platforms. These new forms of textual expressions present new challenges to
analyze text given the use of slang, orthographic and grammatical errors, among
others. Along with these challenges, a practical sentiment classifier should be
able to efficiently handle large workloads.
The aim of this research is to identify which text transformations
(lemmatization, stemming, entity removal, among others), tokenizers (e.g.,
word $n$-grams), and token weighting schemes most impact the accuracy of
a classifier (Support Vector Machine) trained on two Spanish corpora. The
methodology used is to exhaustively analyze all the combinations of the text
transformations and their respective parameters to find out which
characteristics the best performing classifiers have in common. Furthermore,
among the different text transformations studied, we introduce a novel approach
based on the combination of word based $n$-grams and character based $q$-grams.
The results show that this novel combination of words and characters produces a
classifier that outperforms the traditional word based combination by $11.17\%$
and $5.62\%$ on the INEGI and TASS'15 dataset, respectively.
| [
{
"created": "Thu, 3 Jun 2021 17:24:31 GMT",
"version": "v1"
}
] | 2021-06-04 | [
[
"Tellez",
"Eric S.",
""
],
[
"Miranda-Jiménez",
"Sabino",
""
],
[
"Graff",
"Mario",
""
],
[
"Moctezuma",
"Daniela",
""
],
[
"Siodia",
"Oscar S.",
""
],
[
"Villaseñor",
"Elio A.",
""
]
] | Sentiment analysis is a text mining task that determines the polarity of a given text, i.e., its positiveness or negativeness. Recently, it has received a lot of attention given the interest in opinion mining in micro-blogging platforms. These new forms of textual expressions present new challenges to analyze text given the use of slang, orthographic and grammatical errors, among others. Along with these challenges, a practical sentiment classifier should be able to efficiently handle large workloads. The aim of this research is to identify which text transformations (lemmatization, stemming, entity removal, among others), tokenizers (e.g., word $n$-grams), and token weighting schemes most impact the accuracy of a classifier (Support Vector Machine) trained on two Spanish corpora. The methodology used is to exhaustively analyze all the combinations of the text transformations and their respective parameters to find out which characteristics the best performing classifiers have in common. Furthermore, among the different text transformations studied, we introduce a novel approach based on the combination of word based $n$-grams and character based $q$-grams. The results show that this novel combination of words and characters produces a classifier that outperforms the traditional word based combination by $11.17\%$ and $5.62\%$ on the INEGI and TASS'15 dataset, respectively. |
2311.02629 | Alessandro Barro | Alessandro Barro | Pointer Networks with Q-Learning for Combinatorial Optimization | null | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that
integrates model-free Q-value policy approximation with Pointer Networks
(Ptr-Nets) to enhance the optimality of attention-based sequence generation,
focusing on long-term outcomes. This integration proves particularly effective
in solving combinatorial optimization (CO) tasks, especially the Travelling
Salesman Problem (TSP), which is the focus of our study. We address this
challenge by defining a Markov Decision Process (MDP) compatible with PQN,
which involves iterative graph embedding, encoding and decoding by an
LSTM-based recurrent neural network. This process generates a context vector
and computes raw attention scores, which are dynamically adjusted by Q-values
calculated for all available state-action pairs before applying softmax. The
resulting attention vector is utilized as an action distribution, with actions
selected according to the dynamic exploration-exploitation adaptability of PQN. Our
empirical results demonstrate the efficacy of this approach, also testing the
model in unstable environments.
| [
{
"created": "Sun, 5 Nov 2023 12:03:58 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2024 09:15:36 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jun 2024 10:27:54 GMT",
"version": "v3"
}
] | 2024-06-18 | [
[
"Barro",
"Alessandro",
""
]
] | We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets) to enhance the optimality of attention-based sequence generation, focusing on long-term outcomes. This integration proves particularly effective in solving combinatorial optimization (CO) tasks, especially the Travelling Salesman Problem (TSP), which is the focus of our study. We address this challenge by defining a Markov Decision Process (MDP) compatible with PQN, which involves iterative graph embedding, encoding and decoding by an LSTM-based recurrent neural network. This process generates a context vector and computes raw attention scores, which are dynamically adjusted by Q-values calculated for all available state-action pairs before applying softmax. The resulting attention vector is utilized as an action distribution, with actions selected according to the dynamic exploration-exploitation adaptability of PQN. Our empirical results demonstrate the efficacy of this approach, also testing the model in unstable environments. |
2405.06945 | Ancheng Lin | Ancheng Lin, Jun Li | Direct Learning of Mesh and Appearance via 3D Gaussian Splatting | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately reconstructing a 3D scene including explicit geometry information
is both attractive and challenging. Geometry reconstruction can benefit from
incorporating differentiable appearance models, such as Neural Radiance Fields
and 3D Gaussian Splatting (3DGS). In this work, we propose a learnable scene
model that incorporates 3DGS with an explicit geometry representation, namely a
mesh. Our model learns the mesh and appearance in an end-to-end manner, where
we bind 3D Gaussians to the mesh faces and perform differentiable rendering of
3DGS to obtain photometric supervision. The model creates an effective
information pathway to supervise the learning of the scene, including the mesh.
Experimental results demonstrate that the learned scene model not only achieves
state-of-the-art rendering quality but also supports manipulation using the
explicit mesh. In addition, our model has a unique advantage in adapting to
scene updates, thanks to the end-to-end learning of both mesh and appearance.
| [
{
"created": "Sat, 11 May 2024 07:56:19 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Lin",
"Ancheng",
""
],
[
"Li",
"Jun",
""
]
] | Accurately reconstructing a 3D scene including explicit geometry information is both attractive and challenging. Geometry reconstruction can benefit from incorporating differentiable appearance models, such as Neural Radiance Fields and 3D Gaussian Splatting (3DGS). In this work, we propose a learnable scene model that incorporates 3DGS with an explicit geometry representation, namely a mesh. Our model learns the mesh and appearance in an end-to-end manner, where we bind 3D Gaussians to the mesh faces and perform differentiable rendering of 3DGS to obtain photometric supervision. The model creates an effective information pathway to supervise the learning of the scene, including the mesh. Experimental results demonstrate that the learned scene model not only achieves state-of-the-art rendering quality but also supports manipulation using the explicit mesh. In addition, our model has a unique advantage in adapting to scene updates, thanks to the end-to-end learning of both mesh and appearance. |
2312.03126 | Minqi Jiang | Minqi Jiang | Learning Curricula in Open-Ended Worlds | PhD dissertation | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep reinforcement learning (RL) provides powerful methods for training
optimal sequential decision-making agents. As collecting real-world
interactions can entail additional costs and safety risks, the common paradigm
of sim2real conducts training in a simulator, followed by real-world
deployment. Unfortunately, RL agents easily overfit to the choice of simulated
training environments, and worse still, learning ends when the agent masters
the specific set of simulated environments. In contrast, the real world is
highly open-ended, featuring endlessly evolving environments and challenges,
making such RL approaches unsuitable. Simply randomizing over simulated
environments is insufficient, as it requires making arbitrary distributional
assumptions and can be combinatorially less likely to sample specific
environment instances that are useful for learning. An ideal learning process
should automatically adapt the training environment to maximize the learning
potential of the agent over an open-ended task space that matches or surpasses
the complexity of the real world. This thesis develops a class of methods
called Unsupervised Environment Design (UED), which aim to produce such
open-ended processes. Given an environment design space, UED automatically
generates an infinite sequence or curriculum of training environments at the
frontier of the learning agent's capabilities. Through extensive empirical
studies and theoretical arguments founded on minimax-regret decision theory and
game theory, the findings in this thesis show that UED autocurricula can
produce RL agents exhibiting significantly improved robustness and
generalization to previously unseen environment instances. Such autocurricula
are promising paths toward open-ended learning systems that achieve more
general intelligence by continually generating and mastering additional
challenges of their own design.
| [
{
"created": "Sun, 3 Dec 2023 16:44:00 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Dec 2023 01:49:54 GMT",
"version": "v2"
}
] | 2023-12-11 | [
[
"Jiang",
"Minqi",
""
]
] | Deep reinforcement learning (RL) provides powerful methods for training optimal sequential decision-making agents. As collecting real-world interactions can entail additional costs and safety risks, the common paradigm of sim2real conducts training in a simulator, followed by real-world deployment. Unfortunately, RL agents easily overfit to the choice of simulated training environments, and worse still, learning ends when the agent masters the specific set of simulated environments. In contrast, the real world is highly open-ended, featuring endlessly evolving environments and challenges, making such RL approaches unsuitable. Simply randomizing over simulated environments is insufficient, as it requires making arbitrary distributional assumptions and can be combinatorially less likely to sample specific environment instances that are useful for learning. An ideal learning process should automatically adapt the training environment to maximize the learning potential of the agent over an open-ended task space that matches or surpasses the complexity of the real world. This thesis develops a class of methods called Unsupervised Environment Design (UED), which aim to produce such open-ended processes. Given an environment design space, UED automatically generates an infinite sequence or curriculum of training environments at the frontier of the learning agent's capabilities. Through extensive empirical studies and theoretical arguments founded on minimax-regret decision theory and game theory, the findings in this thesis show that UED autocurricula can produce RL agents exhibiting significantly improved robustness and generalization to previously unseen environment instances. Such autocurricula are promising paths toward open-ended learning systems that achieve more general intelligence by continually generating and mastering additional challenges of their own design. |
2009.09852 | Karl-Ludwig Besser | Eduard A. Jorswieck, Karl-Ludwig Besser | Copula-Based Bounds for Multi-User Communications -- Part I: Average
Performance | 5 pages, 2 figures | IEEE Communications Letters, vol. 25, no. 1, pp. 3-7, Jan. 2021 | 10.1109/LCOMM.2020.3023056 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistically independent or positively correlated fading models are usually
applied to compute the average performance of wireless communications. However,
there exist scenarios with negative dependency and it is therefore of interest
how different performance metrics behave for different general dependency
structures of the channels. Especially best-case and worst-case bounds are
practically relevant as a system design guideline. In this two-part letter, we
present methods and tools from dependency modeling which can be applied to
analyze and design multi-user communications systems exploiting and creating
dependencies of the effective fading channels. The first part focuses on fast
fading with average performance metrics, while the second part considers slow
fading with outage performance metrics.
| [
{
"created": "Mon, 21 Sep 2020 13:30:09 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Jorswieck",
"Eduard A.",
""
],
[
"Besser",
"Karl-Ludwig",
""
]
] | Statistically independent or positively correlated fading models are usually applied to compute the average performance of wireless communications. However, there exist scenarios with negative dependency and it is therefore of interest how different performance metrics behave for different general dependency structures of the channels. Especially best-case and worst-case bounds are practically relevant as a system design guideline. In this two-part letter, we present methods and tools from dependency modeling which can be applied to analyze and design multi-user communications systems exploiting and creating dependencies of the effective fading channels. The first part focuses on fast fading with average performance metrics, while the second part considers slow fading with outage performance metrics. |
2111.08400 | Yi-Chang Chen | Yi-Chang Chen, Chun-Yen Cheng, Chien-An Chen, Ming-Chieh Sung and
Yi-Ren Yeh | Integrated Semantic and Phonetic Post-correction for Chinese Speech
Recognition | null | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the recent advances of natural language processing, several works have
applied the pre-trained masked language model (MLM) of BERT to the
post-correction of speech recognition. However, existing pre-trained models
only consider the semantic correction while the phonetic features of words are
neglected. The semantic-only post-correction will consequently decrease the
performance since homophonic errors are fairly common in Chinese ASR. In this
paper, we propose a novel approach to collectively exploit the contextualized
representation and the phonetic information between the error and its replacing
candidates to alleviate the error rate of Chinese ASR. Our experiment results
on real world speech recognition datasets showed that our proposed method has
evidently lower CER than the baseline model, which utilized a pre-trained BERT
MLM as the corrector.
| [
{
"created": "Tue, 16 Nov 2021 11:55:27 GMT",
"version": "v1"
}
] | 2021-11-17 | [
[
"Chen",
"Yi-Chang",
""
],
[
"Cheng",
"Chun-Yen",
""
],
[
"Chen",
"Chien-An",
""
],
[
"Sung",
"Ming-Chieh",
""
],
[
"Yeh",
"Yi-Ren",
""
]
] | Due to the recent advances of natural language processing, several works have applied the pre-trained masked language model (MLM) of BERT to the post-correction of speech recognition. However, existing pre-trained models only consider the semantic correction while the phonetic features of words are neglected. The semantic-only post-correction will consequently decrease the performance since homophonic errors are fairly common in Chinese ASR. In this paper, we propose a novel approach to collectively exploit the contextualized representation and the phonetic information between the error and its replacing candidates to alleviate the error rate of Chinese ASR. Our experiment results on real world speech recognition datasets showed that our proposed method has evidently lower CER than the baseline model, which utilized a pre-trained BERT MLM as the corrector. |
2106.15597 | Dewen Zeng | Dewen Zeng, Mingqi Li, Yukun Ding, Xiaowei Xu, Qiu Xie, Ruixue Xu,
Hongwen Fei, Meiping Huang, Jian Zhuang and Yiyu Shi | Segmentation with Multiple Acceptable Annotations: A Case Study of
Myocardial Segmentation in Contrast Echocardiography | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing deep learning-based frameworks for image segmentation assume
that a unique ground truth is known and can be used for performance evaluation.
This is true for many applications, but not all. Myocardial segmentation of
Myocardial Contrast Echocardiography (MCE), a critical task in automatic
myocardial perfusion analysis, is an example. Due to the low resolution and
serious artifacts in MCE data, annotations from different cardiologists can
vary significantly, and it is hard to tell which one is the best. In this case,
how can we find a good way to evaluate segmentation performance and how do we
train the neural network? In this paper, we address the first problem by
proposing a new extended Dice to effectively evaluate the segmentation
performance when multiple accepted ground truth is available. Then based on our
proposed metric, we solve the second problem by further incorporating the new
metric into a loss function that enables neural networks to flexibly learn
general features of myocardium. Experiment results on our clinical MCE data set
demonstrate that the neural network trained with the proposed loss function
outperforms those existing ones that try to obtain a unique ground truth from
multiple annotations, both quantitatively and qualitatively. Finally, our
grading study shows that using extended Dice as an evaluation metric can better
identify segmentation results that need manual correction compared with using
Dice.
| [
{
"created": "Tue, 29 Jun 2021 17:32:24 GMT",
"version": "v1"
}
] | 2021-06-30 | [
[
"Zeng",
"Dewen",
""
],
[
"Li",
"Mingqi",
""
],
[
"Ding",
"Yukun",
""
],
[
"Xu",
"Xiaowei",
""
],
[
"Xie",
"Qiu",
""
],
[
"Xu",
"Ruixue",
""
],
[
"Fei",
"Hongwen",
""
],
[
"Huang",
"Meiping",
""
],
[
"Zhuang",
"Jian",
""
],
[
"Shi",
"Yiyu",
""
]
] | Most existing deep learning-based frameworks for image segmentation assume that a unique ground truth is known and can be used for performance evaluation. This is true for many applications, but not all. Myocardial segmentation of Myocardial Contrast Echocardiography (MCE), a critical task in automatic myocardial perfusion analysis, is an example. Due to the low resolution and serious artifacts in MCE data, annotations from different cardiologists can vary significantly, and it is hard to tell which one is the best. In this case, how can we find a good way to evaluate segmentation performance and how do we train the neural network? In this paper, we address the first problem by proposing a new extended Dice to effectively evaluate the segmentation performance when multiple accepted ground truths are available. Then based on our proposed metric, we solve the second problem by further incorporating the new metric into a loss function that enables neural networks to flexibly learn general features of myocardium. Experiment results on our clinical MCE data set demonstrate that the neural network trained with the proposed loss function outperforms those existing ones that try to obtain a unique ground truth from multiple annotations, both quantitatively and qualitatively. Finally, our grading study shows that using extended Dice as an evaluation metric can better identify segmentation results that need manual correction compared with using Dice. |
1902.00375 | Benjamin Paassen | Benjamin Paa{\ss}en and Astrid Bunge and Carolin Hainke and Leon
Sindelar and Matthias Vogelsang | Dynamic fairness - Breaking vicious cycles in automatic decision making | preprint of a paper accepted for oral presentation at the 27th
European Symposium on Artificial Neural Networks (ESANN 2019) | Proc. ESANN (2019), 477-482 | null | null | cs.LG cs.CY stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, machine learning techniques have been increasingly applied
in sensitive decision making processes, raising fairness concerns. Past
research has shown that machine learning may reproduce and even exacerbate
human bias due to biased training data or flawed model assumptions, and thus
may lead to discriminatory actions. To counteract such biased models,
researchers have proposed multiple mathematical definitions of fairness
according to which classifiers can be optimized. However, it has also been
shown that the outcomes generated by some fairness notions may be
unsatisfactory.
In this contribution, we add to this research by considering decision making
processes in time. We establish a theoretic model in which even perfectly
accurate classifiers which adhere to almost all common fairness definitions
lead to stable long-term inequalities due to vicious cycles. Only demographic
parity, which enforces equal rates of positive decisions across groups, avoids
these effects and establishes a virtuous cycle, which leads to perfectly
accurate and fair classification in the long term.
| [
{
"created": "Fri, 1 Feb 2019 14:47:01 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Feb 2019 16:29:34 GMT",
"version": "v2"
}
] | 2019-05-16 | [
[
"Paaßen",
"Benjamin",
""
],
[
"Bunge",
"Astrid",
""
],
[
"Hainke",
"Carolin",
""
],
[
"Sindelar",
"Leon",
""
],
[
"Vogelsang",
"Matthias",
""
]
] | In recent years, machine learning techniques have been increasingly applied in sensitive decision making processes, raising fairness concerns. Past research has shown that machine learning may reproduce and even exacerbate human bias due to biased training data or flawed model assumptions, and thus may lead to discriminatory actions. To counteract such biased models, researchers have proposed multiple mathematical definitions of fairness according to which classifiers can be optimized. However, it has also been shown that the outcomes generated by some fairness notions may be unsatisfactory. In this contribution, we add to this research by considering decision making processes in time. We establish a theoretic model in which even perfectly accurate classifiers which adhere to almost all common fairness definitions lead to stable long-term inequalities due to vicious cycles. Only demographic parity, which enforces equal rates of positive decisions across groups, avoids these effects and establishes a virtuous cycle, which leads to perfectly accurate and fair classification in the long term. |
1804.01824 | Victor Escorcia | Victor Escorcia and Cuong D. Dao and Mihir Jain and Bernard Ghanem and
Cees Snoek | Guess Where? Actor-Supervision for Spatiotemporal Action Localization | cvpr version | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper addresses the problem of spatiotemporal localization of actions in
videos. Compared to leading approaches, which all learn to localize based on
carefully annotated boxes on training video frames, we adhere to a
weakly-supervised solution that only requires a video class label. We introduce
an actor-supervised architecture that exploits the inherent compositionality of
actions in terms of actor transformations, to localize actions. We make two
contributions. First, we propose actor proposals derived from a detector for
human and non-human actors intended for images, which is linked over time by
Siamese similarity matching to account for actor deformations. Second, we
propose an actor-based attention mechanism that enables the localization of the
actions from action class labels and actor proposals and is end-to-end
trainable. Experiments on three human and non-human action datasets show actor
supervision is state-of-the-art for weakly-supervised action localization and
is even competitive to some fully-supervised alternatives.
| [
{
"created": "Thu, 5 Apr 2018 13:08:25 GMT",
"version": "v1"
}
] | 2018-04-06 | [
[
"Escorcia",
"Victor",
""
],
[
"Dao",
"Cuong D.",
""
],
[
"Jain",
"Mihir",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Snoek",
"Cees",
""
]
] | This paper addresses the problem of spatiotemporal localization of actions in videos. Compared to leading approaches, which all learn to localize based on carefully annotated boxes on training video frames, we adhere to a weakly-supervised solution that only requires a video class label. We introduce an actor-supervised architecture that exploits the inherent compositionality of actions in terms of actor transformations, to localize actions. We make two contributions. First, we propose actor proposals derived from a detector for human and non-human actors intended for images, which is linked over time by Siamese similarity matching to account for actor deformations. Second, we propose an actor-based attention mechanism that enables the localization of the actions from action class labels and actor proposals and is end-to-end trainable. Experiments on three human and non-human action datasets show actor supervision is state-of-the-art for weakly-supervised action localization and is even competitive to some fully-supervised alternatives. |
2103.14330 | Hao Li | Hao Li, Xueliang Zhang, Guanglai Gao | Guided Training: A Simple Method for Single-channel Speaker Separation | 5 pages | null | null | null | cs.SD cs.AI eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has shown great potential for speech separation, especially
for speech and non-speech separation. However, it encounters the permutation
problem in multi-speaker separation, where both the target and the interference
are speech. Permutation Invariant Training (PIT) was proposed to solve this
problem by permuting the order of the multiple speakers. Another way is to use
an anchor speech, a short speech of the target speaker, to model the speaker
identity. In this paper, we propose a simple strategy to train a long
short-term memory (LSTM) model to solve the permutation problem in speaker
separation. Specifically, we insert a short speech of the target speaker at the
beginning of a mixture as guide information. So, the first appearing speaker is
defined as the target. Due to its powerful capability in sequence modeling,
LSTM can use its memory cells to track and separate the target speech from the
interfering speech. Experimental results show that the proposed training
strategy is effective for speaker separation.
| [
{
"created": "Fri, 26 Mar 2021 08:46:50 GMT",
"version": "v1"
}
] | 2021-03-29 | [
[
"Li",
"Hao",
""
],
[
"Zhang",
"Xueliang",
""
],
[
"Gao",
"Guanglai",
""
]
] ] | Deep learning has shown great potential for speech separation, especially for speech and non-speech separation. However, it encounters the permutation problem in multi-speaker separation, where both the target and the interference are speech. Permutation Invariant Training (PIT) was proposed to solve this problem by permuting the order of the multiple speakers. Another way is to use an anchor speech, a short speech of the target speaker, to model the speaker identity. In this paper, we propose a simple strategy to train a long short-term memory (LSTM) model to solve the permutation problem in speaker separation. Specifically, we insert a short speech of the target speaker at the beginning of a mixture as guide information. So, the first appearing speaker is defined as the target. Due to its powerful capability in sequence modeling, LSTM can use its memory cells to track and separate the target speech from the interfering speech. Experimental results show that the proposed training strategy is effective for speaker separation. |
2308.07972 | Yuan Huang | Yuan Huang | PKE-RRT: Efficient Multi-Goal Path Finding Algorithm Driven by
Multi-Task Learning Model | 8 pages, 12 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-goal path finding (MGPF) aims to find a closed and collision-free path
to visit a sequence of goals orderly. As a physical travelling salesman
problem, an undirected complete graph with accurate weights is crucial for
determining the visiting order. Lack of prior knowledge of local paths between
vertices poses challenges in meeting the optimality and efficiency requirements
of algorithms. In this study, a multi-task learning model, designated Prior
Knowledge Extraction (PKE), is designed to estimate the local path length
between pairwise vertices as the weights of the graph. Simultaneously, a
promising region and a guideline are predicted as heuristics for the
path-finding process. Utilizing the outputs of the PKE model, a variant of
the Rapidly-exploring Random Tree (RRT), known as PKE-RRT, is proposed. It
effectively tackles the MGPF problem with a local planner incorporating a
prioritized visiting order, which is obtained from the complete graph.
Furthermore, the predicted region and guideline facilitate efficient
exploration of the tree structure, enabling the algorithm to rapidly provide a
sub-optimal solution. Extensive numerical experiments demonstrate the
outstanding performance of PKE-RRT on the MGPF problem with different
numbers of goals, in terms of calculation time, path cost, sample number, and
success rate.
| [
{
"created": "Tue, 15 Aug 2023 18:21:08 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Dec 2023 12:22:12 GMT",
"version": "v2"
}
] | 2023-12-25 | [
[
"Huang",
"Yuan",
""
]
] ] | Multi-goal path finding (MGPF) aims to find a closed and collision-free path to visit a sequence of goals orderly. As a physical travelling salesman problem, an undirected complete graph with accurate weights is crucial for determining the visiting order. Lack of prior knowledge of local paths between vertices poses challenges in meeting the optimality and efficiency requirements of algorithms. In this study, a multi-task learning model, designated Prior Knowledge Extraction (PKE), is designed to estimate the local path length between pairwise vertices as the weights of the graph. Simultaneously, a promising region and a guideline are predicted as heuristics for the path-finding process. Utilizing the outputs of the PKE model, a variant of the Rapidly-exploring Random Tree (RRT), known as PKE-RRT, is proposed. It effectively tackles the MGPF problem with a local planner incorporating a prioritized visiting order, which is obtained from the complete graph. Furthermore, the predicted region and guideline facilitate efficient exploration of the tree structure, enabling the algorithm to rapidly provide a sub-optimal solution. Extensive numerical experiments demonstrate the outstanding performance of PKE-RRT on the MGPF problem with different numbers of goals, in terms of calculation time, path cost, sample number, and success rate. |
1709.03439 | Hansang Lee | Han S. Lee, Alex A. Agarwal, Junmo Kim | Why Do Deep Neural Networks Still Not Recognize These Images?: A
Qualitative Analysis on Failure Cases of ImageNet Classification | Poster presented at CVPR 2017 Scene Understanding Workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade, ImageNet has become the most notable and powerful
benchmark database in the computer vision and machine learning community. As
ImageNet has emerged as a representative benchmark for evaluating the
performance of novel deep learning models, its evaluation tends to include only
quantitative measures such as error rate, rather than qualitative analysis.
Thus, there are few studies that analyze the failure cases of deep learning
models in ImageNet, though there are numerous works analyzing the networks
themselves and visualizing them. In this abstract, we qualitatively analyze the
failure cases of ImageNet classification results from a recent deep learning
model, and categorize these cases according to certain image patterns.
Through this failure analysis, we believe that it can be discovered what the
final challenges are in the ImageNet database, to which the current deep
learning model is still vulnerable.
| [
{
"created": "Mon, 11 Sep 2017 15:35:05 GMT",
"version": "v1"
}
] | 2017-09-12 | [
[
"Lee",
"Han S.",
""
],
[
"Agarwal",
"Alex A.",
""
],
[
"Kim",
"Junmo",
""
]
] ] | Over the past decade, ImageNet has become the most notable and powerful benchmark database in the computer vision and machine learning community. As ImageNet has emerged as a representative benchmark for evaluating the performance of novel deep learning models, its evaluation tends to include only quantitative measures such as error rate, rather than qualitative analysis. Thus, there are few studies that analyze the failure cases of deep learning models in ImageNet, though there are numerous works analyzing the networks themselves and visualizing them. In this abstract, we qualitatively analyze the failure cases of ImageNet classification results from a recent deep learning model, and categorize these cases according to certain image patterns. Through this failure analysis, we believe that it can be discovered what the final challenges are in the ImageNet database, to which the current deep learning model is still vulnerable. |
2105.07605 | Shenghao Yang | Yanyan Dong, Sheng Jin, Yanzuo Chen, Shenghao Yang and Hoover H. F.
Yin | Utility Maximization for Multihop Wireless Networks Employing BATS Codes | This paper was presented in part at 2020 IEEE International
Conference on Communications | null | null | null | cs.IT cs.NI math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | BATS (BATched Sparse) codes are a class of efficient random linear network
coding variations that have been studied for multihop wireless networks, mostly
in scenarios of a single communication flow. Towards sophisticated multi-flow
network communications, we formulate a network utility maximization (NUM)
problem that jointly optimizes the BATS code parameters of all the flows and
network scheduling. The NUM problem adopts a batch-wise packet loss model that
can be obtained from the network local statistics without any constraints on
packet loss patterns. Moreover, the NUM problem allows a different number of
recoded packets to be transmitted for different batches in a flow, which is
called adaptive recoding. Due to both the probably nonconcave objective and the
BATS code-related variables, the algorithms developed for the existing flow
optimization problems cannot be applied directly to solve our NUM problem. We
introduce a two-step algorithm to solve our NUM problem, where the first step
solves the problem with nonadaptive recoding schemes, and the second step
optimizes adaptive recoding hop-by-hop from upstream to downstream in each
flow. We perform various numerical evaluations and simulations to verify the
effectiveness and efficiency of the algorithm.
| [
{
"created": "Mon, 17 May 2021 04:23:26 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 12:10:33 GMT",
"version": "v2"
}
] | 2021-09-16 | [
[
"Dong",
"Yanyan",
""
],
[
"Jin",
"Sheng",
""
],
[
"Chen",
"Yanzuo",
""
],
[
"Yang",
"Shenghao",
""
],
[
"Yin",
"Hoover H. F.",
""
]
] ] | BATS (BATched Sparse) codes are a class of efficient random linear network coding variations that have been studied for multihop wireless networks, mostly in scenarios of a single communication flow. Towards sophisticated multi-flow network communications, we formulate a network utility maximization (NUM) problem that jointly optimizes the BATS code parameters of all the flows and network scheduling. The NUM problem adopts a batch-wise packet loss model that can be obtained from the network local statistics without any constraints on packet loss patterns. Moreover, the NUM problem allows a different number of recoded packets to be transmitted for different batches in a flow, which is called adaptive recoding. Due to both the probably nonconcave objective and the BATS code-related variables, the algorithms developed for the existing flow optimization problems cannot be applied directly to solve our NUM problem. We introduce a two-step algorithm to solve our NUM problem, where the first step solves the problem with nonadaptive recoding schemes, and the second step optimizes adaptive recoding hop-by-hop from upstream to downstream in each flow. We perform various numerical evaluations and simulations to verify the effectiveness and efficiency of the algorithm. |
1804.03437 | Wojciech Skaba | Wojciech Skaba | The AGINAO Self-Programming Engine | Journal of Artificial General Intelligence | Journal of Artificial General Intelligence 3(3) 2012 | 10.2478/v10229-011-0018-0 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The AGINAO is a project to create a human-level artificial general
intelligence system (HL AGI) embodied in the Aldebaran Robotics' NAO humanoid
robot. The dynamical and open-ended cognitive engine of the robot is
represented by an embedded and multi-threaded control program, that is
self-crafted rather than hand-crafted, and is executed on a simulated Universal
Turing Machine (UTM). The actual structure of the cognitive engine emerges as a
result of placing the robot in a natural preschool-like environment and running
a core start-up system that executes self-programming of the cognitive layer on
top of the core layer. The data from the robot's sensory devices supplies the
training samples for the machine learning methods, while the commands sent to
actuators enable testing hypotheses and getting feedback. The individual
self-created subroutines are supposed to reflect the patterns and concepts of
the real world, while the overall program structure reflects the spatial and
temporal hierarchy of the world dependencies. This paper focuses on the details
of the self-programming approach, limiting the discussion of the applied
cognitive architecture to a necessary minimum.
| [
{
"created": "Tue, 10 Apr 2018 10:29:14 GMT",
"version": "v1"
}
] | 2018-04-11 | [
[
"Skaba",
"Wojciech",
""
]
] ] | The AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics' NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is represented by an embedded and multi-threaded control program, that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supplies the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and getting feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum. |
1006.2348 | Thomas Unger | Susanne Pumpluen and Thomas Unger | Space-time block codes from nonassociative division algebras | 23 pages; final version; to appear in Advances in Mathematics of
Communications | Adv. Math. Commun. 5 (2011), no. 3, 449-471 | 10.3934/amc.2011.5.449 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Associative division algebras are a rich source of fully diverse space-time
block codes (STBCs). In this paper the systematic construction of fully diverse
STBCs from nonassociative algebras is discussed. As examples, families of fully
diverse $2\times 2$, $2\times 4$ multiblock and $4\times 4$ STBCs are designed,
employing nonassociative quaternion division algebras.
| [
{
"created": "Fri, 11 Jun 2010 17:11:06 GMT",
"version": "v1"
},
{
"created": "Thu, 19 May 2011 16:07:31 GMT",
"version": "v2"
},
{
"created": "Tue, 31 May 2011 21:46:53 GMT",
"version": "v3"
},
{
"created": "Thu, 9 Jun 2011 16:59:27 GMT",
"version": "v4"
}
] | 2012-02-07 | [
[
"Pumpluen",
"Susanne",
""
],
[
"Unger",
"Thomas",
""
]
] ] | Associative division algebras are a rich source of fully diverse space-time block codes (STBCs). In this paper the systematic construction of fully diverse STBCs from nonassociative algebras is discussed. As examples, families of fully diverse $2\times 2$, $2\times 4$ multiblock and $4\times 4$ STBCs are designed, employing nonassociative quaternion division algebras. |
1912.08812 | Teofilo de Campos | Frederico Guth and Teofilo Emidio de-Campos | Research Frontiers in Transfer Learning -- a systematic and bibliometric
review | 19 pages, 9 figures | null | null | null | cs.DL cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can learn from very few samples, demonstrating an outstanding
generalization ability that learning algorithms are still far from reaching.
Currently, the most successful models demand enormous amounts of well-labeled
data, which are expensive and difficult to obtain, becoming one of the biggest
obstacles to the use of machine learning in practice. This scenario shows the
massive potential for Transfer Learning, which aims to harness previously
acquired knowledge to the learning of new tasks more effectively and
efficiently. In this systematic review, we apply a quantitative method to
select the main contributions to the field and make use of bibliographic
coupling metrics to identify research frontiers. We further analyze the
linguistic variation between the classics of the field and the frontier and map
promising research directions.
| [
{
"created": "Wed, 18 Dec 2019 15:08:19 GMT",
"version": "v1"
}
] | 2019-12-20 | [
[
"Guth",
"Frederico",
""
],
[
"de-Campos",
"Teofilo Emidio",
""
]
] | Humans can learn from very few samples, demonstrating an outstanding generalization ability that learning algorithms are still far from reaching. Currently, the most successful models demand enormous amounts of well-labeled data, which are expensive and difficult to obtain, becoming one of the biggest obstacles to the use of machine learning in practice. This scenario shows the massive potential for Transfer Learning, which aims to harness previously acquired knowledge to the learning of new tasks more effectively and efficiently. In this systematic review, we apply a quantitative method to select the main contributions to the field and make use of bibliographic coupling metrics to identify research frontiers. We further analyze the linguistic variation between the classics of the field and the frontier and map promising research directions. |
2104.08635 | Andrei Paraschiv | Andrei Paraschiv, Dumitru-Clementin Cercel, Mihai Dascalu | UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans
Detection | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The real-world impact of polarization and toxicity in the online sphere
marked the end of 2020 and the beginning of this year in a negative way.
Semeval-2021, Task 5 - Toxic Spans Detection is based on a novel annotation of
a subset of the Jigsaw Unintended Bias dataset and is the first language
toxicity detection task dedicated to identifying the toxicity-level spans. For
this task, participants had to automatically detect character spans in short
comments that render the message as toxic. Our model considers applying Virtual
Adversarial Training in a semi-supervised setting during the fine-tuning
process of several Transformer-based models (i.e., BERT and RoBERTa), in
combination with Conditional Random Fields. Our approach leads to performance
improvements and more robust models, enabling us to achieve an F1-score of
65.73% in the official submission and an F1-score of 66.13% after further
tuning during post-evaluation.
| [
{
"created": "Sat, 17 Apr 2021 19:42:12 GMT",
"version": "v1"
}
] | 2021-04-20 | [
[
"Paraschiv",
"Andrei",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
],
[
"Dascalu",
"Mihai",
""
]
] | The real-world impact of polarization and toxicity in the online sphere marked the end of 2020 and the beginning of this year in a negative way. Semeval-2021, Task 5 - Toxic Spans Detection is based on a novel annotation of a subset of the Jigsaw Unintended Bias dataset and is the first language toxicity detection task dedicated to identifying the toxicity-level spans. For this task, participants had to automatically detect character spans in short comments that render the message as toxic. Our model considers applying Virtual Adversarial Training in a semi-supervised setting during the fine-tuning process of several Transformer-based models (i.e., BERT and RoBERTa), in combination with Conditional Random Fields. Our approach leads to performance improvements and more robust models, enabling us to achieve an F1-score of 65.73% in the official submission and an F1-score of 66.13% after further tuning during post-evaluation. |
1008.5170 | Abdelsalam Amer Mr | Abdelsalam Amer and Fayez Gebali | General Model for Single and Multiple Channels WLANs with Quality of
Service Support | 19 pages, 12 figures | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we develop an integrated model for the request mechanism and
data transmission in the uplink phase in the presence of channel noise. This
model supports quality of service. The wireless channel is prone to many
impairments; thus, certain techniques have to be developed to deliver data to
the receiver. We calculated the performance parameters for single- and
multichannel wireless networks, such as the request throughput, data
throughput, request acceptance probability, and data acceptance probability.
The proposed model is a general model, since it can be applied to different
wireless networks such as IEEE 802.11a, IEEE 802.16e, CDMA-operated networks,
and HiperLAN/2.
| [
{
"created": "Mon, 30 Aug 2010 21:21:22 GMT",
"version": "v1"
}
] | 2010-09-01 | [
[
"Amer",
"Abdelsalam",
""
],
[
"Gebali",
"Fayez",
""
]
] ] | In this paper we develop an integrated model for the request mechanism and data transmission in the uplink phase in the presence of channel noise. This model supports quality of service. The wireless channel is prone to many impairments; thus, certain techniques have to be developed to deliver data to the receiver. We calculated the performance parameters for single- and multichannel wireless networks, such as the request throughput, data throughput, request acceptance probability, and data acceptance probability. The proposed model is a general model, since it can be applied to different wireless networks such as IEEE 802.11a, IEEE 802.16e, CDMA-operated networks, and HiperLAN/2. |
1412.0057 | David Howey | Ross Drummond, David A. Howey, Stephen R. Duncan | Low-Order Mathematical Modelling of Electric Double Layer
Supercapacitors Using Spectral Methods | Pre-print submitted to Journal of Power Sources | null | 10.1016/j.jpowsour.2014.11.116 | null | cs.SY physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work investigates two physics-based models that simulate the non-linear
partial differential algebraic equations describing an electric double layer
supercapacitor. In one model the linear dependence between electrolyte
concentration and conductivity is accounted for, while in the other model it is
not. A spectral element method is used to discretise the model equations and it
is found that the error convergence rate with respect to the number of elements
is faster compared to a finite difference method. The increased accuracy of the
spectral element approach means that, for a similar level of solution accuracy,
the model simulation computing time is approximately 50% of that of the finite
difference method. This suggests that the spectral element model could be used
for control and state estimation purposes. For a typical supercapacitor
charging profile, the numerical solutions from both models closely match
experimental voltage and current data. However, when the electrolyte is dilute
or where there is a long charging time, a noticeable difference between the
numerical solutions of the two models is observed. Electrical impedance
spectroscopy simulations show that the capacitance of the two models rapidly
decreases when the frequency of the perturbation current exceeds an upper
threshold.
| [
{
"created": "Sat, 29 Nov 2014 01:28:35 GMT",
"version": "v1"
}
] | 2014-12-09 | [
[
"Drummond",
"Ross",
""
],
[
"Howey",
"David A.",
""
],
[
"Duncan",
"Stephen R.",
""
]
] | This work investigates two physics-based models that simulate the non-linear partial differential algebraic equations describing an electric double layer supercapacitor. In one model the linear dependence between electrolyte concentration and conductivity is accounted for, while in the other model it is not. A spectral element method is used to discretise the model equations and it is found that the error convergence rate with respect to the number of elements is faster compared to a finite difference method. The increased accuracy of the spectral element approach means that, for a similar level of solution accuracy, the model simulation computing time is approximately 50% of that of the finite difference method. This suggests that the spectral element model could be used for control and state estimation purposes. For a typical supercapacitor charging profile, the numerical solutions from both models closely match experimental voltage and current data. However, when the electrolyte is dilute or where there is a long charging time, a noticeable difference between the numerical solutions of the two models is observed. Electrical impedance spectroscopy simulations show that the capacitance of the two models rapidly decreases when the frequency of the perturbation current exceeds an upper threshold. |
2006.10085 | Samira Samadi | Mehrdad Ghadiri, Samira Samadi, Santosh Vempala | Socially Fair k-Means Clustering | 12 pages, 11 figures | null | null | null | cs.LG cs.AI cs.CG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the popular k-means clustering algorithm (Lloyd's heuristic),
used for a variety of scientific data, can result in outcomes that are
unfavorable to subgroups of data (e.g., demographic groups). Such biased
clusterings can have deleterious implications for human-centric applications
such as resource allocation. We present a fair k-means objective and algorithm
to choose cluster centers that provide equitable costs for different groups.
The algorithm, Fair-Lloyd, is a modification of Lloyd's heuristic for k-means,
inheriting its simplicity, efficiency, and stability. In comparison with
standard Lloyd's, we find that on benchmark datasets, Fair-Lloyd exhibits
unbiased performance by ensuring that all groups have equal costs in the output
k-clustering, while incurring a negligible increase in running time, thus
making it a viable fair option wherever k-means is currently used.
| [
{
"created": "Wed, 17 Jun 2020 18:05:17 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2020 16:03:50 GMT",
"version": "v2"
}
] | 2020-10-30 | [
[
"Ghadiri",
"Mehrdad",
""
],
[
"Samadi",
"Samira",
""
],
[
"Vempala",
"Santosh",
""
]
] | We show that the popular k-means clustering algorithm (Lloyd's heuristic), used for a variety of scientific data, can result in outcomes that are unfavorable to subgroups of data (e.g., demographic groups). Such biased clusterings can have deleterious implications for human-centric applications such as resource allocation. We present a fair k-means objective and algorithm to choose cluster centers that provide equitable costs for different groups. The algorithm, Fair-Lloyd, is a modification of Lloyd's heuristic for k-means, inheriting its simplicity, efficiency, and stability. In comparison with standard Lloyd's, we find that on benchmark datasets, Fair-Lloyd exhibits unbiased performance by ensuring that all groups have equal costs in the output k-clustering, while incurring a negligible increase in running time, thus making it a viable fair option wherever k-means is currently used. |
2004.05569 | Veronica Latcinnik | Veronica Latcinnik, Jonathan Berant | Explaining Question Answering Models through Text Generation | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large pre-trained language models (LMs) have been shown to perform
surprisingly well when fine-tuned on tasks that require commonsense and world
knowledge. However, in end-to-end architectures, it is difficult to explain
what knowledge in the LM allows it to make a correct prediction. In
this work, we propose a model for multi-choice question answering, where a
LM-based generator generates a textual hypothesis that is later used by a
classifier to answer the question. The hypothesis provides a window into the
information used by the fine-tuned LM that can be inspected by humans. A key
challenge in this setup is how to constrain the model to generate hypotheses
that are meaningful to humans. We tackle this by (a) joint training with a
simple similarity classifier that encourages meaningful hypotheses, and (b)
adding loss functions that encourage natural text without repetitions. We show
on several tasks that our model reaches performance that is comparable to
end-to-end architectures, while producing hypotheses that elucidate the
knowledge used by the LM for answering the question.
| [
{
"created": "Sun, 12 Apr 2020 09:06:46 GMT",
"version": "v1"
}
] | 2020-04-14 | [
[
"Latcinnik",
"Veronica",
""
],
[
"Berant",
"Jonathan",
""
]
] | Large pre-trained language models (LMs) have been shown to perform surprisingly well when fine-tuned on tasks that require commonsense and world knowledge. However, in end-to-end architectures, it is difficult to explain what knowledge in the LM allows it to make a correct prediction. In this work, we propose a model for multi-choice question answering, where an LM-based generator generates a textual hypothesis that is later used by a classifier to answer the question. The hypothesis provides a window into the information used by the fine-tuned LM that can be inspected by humans. A key challenge in this setup is how to constrain the model to generate hypotheses that are meaningful to humans. We tackle this by (a) joint training with a simple similarity classifier that encourages meaningful hypotheses, and (b) adding loss functions that encourage natural text without repetitions. We show on several tasks that our model reaches performance that is comparable to end-to-end architectures, while producing hypotheses that elucidate the knowledge used by the LM for answering the question. |
1912.06248 | Sayandev Mukherjee | Sayandev Mukherjee | General Information Bottleneck Objectives and their Applications to
Machine Learning | null | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We view the Information Bottleneck Principle (IBP: Tishby et al., 1999;
Schwartz-Ziv and Tishby, 2017) and Predictive Information Bottleneck Principle
(PIBP: Still et al., 2007; Alemi, 2019) as special cases of a family of general
information bottleneck objectives (IBOs). Each IBO corresponds to a particular
constrained optimization problem where the constraints apply to: (a) the mutual
information between the training data and the learned model parameters or
extracted representation of the data, and (b) the mutual information between
the learned model parameters or extracted representation of the data and the
test data (if any). The heuristics behind the IBP and PIBP are shown to yield
different constraints in the corresponding constrained optimization problem
formulations. We show how other heuristics lead to a new IBO, different from
both the IBP and PIBP, and use the techniques from (Alemi, 2019) to derive and
optimize a variational upper bound on the new IBO.
We then apply the theory of general IBOs to resolve the seeming contradiction
between, on the one hand, the recommendations of IBP and PIBP to maximize the
mutual information between the model parameters and test data, and on the
other, recent information-theoretic results (see Xu and Raginsky, 2017)
suggesting that this mutual information should be minimized. The key insight is
that the heuristics (and thus the constraints in the constrained optimization
problems) of IBP and PIBP are not applicable to the scenario analyzed by (Xu
and Raginsky, 2017) because the latter makes the additional assumption that the
parameters of the trained model have been selected to minimize the empirical
loss function. Aided by this insight, we formulate a new IBO that accounts for
this property of the parameters of the trained model, and derive and optimize a
variational bound on this IBO.
| [
{
"created": "Thu, 12 Dec 2019 22:46:58 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Dec 2019 22:07:33 GMT",
"version": "v2"
}
] | 2019-12-24 | [
[
"Mukherjee",
"Sayandev",
""
]
] | We view the Information Bottleneck Principle (IBP: Tishby et al., 1999; Schwartz-Ziv and Tishby, 2017) and Predictive Information Bottleneck Principle (PIBP: Still et al., 2007; Alemi, 2019) as special cases of a family of general information bottleneck objectives (IBOs). Each IBO corresponds to a particular constrained optimization problem where the constraints apply to: (a) the mutual information between the training data and the learned model parameters or extracted representation of the data, and (b) the mutual information between the learned model parameters or extracted representation of the data and the test data (if any). The heuristics behind the IBP and PIBP are shown to yield different constraints in the corresponding constrained optimization problem formulations. We show how other heuristics lead to a new IBO, different from both the IBP and PIBP, and use the techniques from (Alemi, 2019) to derive and optimize a variational upper bound on the new IBO. We then apply the theory of general IBOs to resolve the seeming contradiction between, on the one hand, the recommendations of IBP and PIBP to maximize the mutual information between the model parameters and test data, and on the other, recent information-theoretic results (see Xu and Raginsky, 2017) suggesting that this mutual information should be minimized. The key insight is that the heuristics (and thus the constraints in the constrained optimization problems) of IBP and PIBP are not applicable to the scenario analyzed by (Xu and Raginsky, 2017) because the latter makes the additional assumption that the parameters of the trained model have been selected to minimize the empirical loss function. Aided by this insight, we formulate a new IBO that accounts for this property of the parameters of the trained model, and derive and optimize a variational bound on this IBO. |
2202.04367 | Laure Crochepierre | Laure Crochepierre (RTE, LORIA, ORPAILLEUR, UL), Lydia
Boudjeloud-Assala (LORIA, ORPAILLEUR, UL), Vincent Barbesant (RTE) | A Reinforcement Learning Approach to Domain-Knowledge Inclusion Using
Grammar Guided Symbolic Regression | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, symbolic regression has been of wide interest to provide an
interpretable symbolic representation of potentially large data relationships.
Initially confined to genetic algorithms, symbolic regression methods now
include a variety of Deep Learning based alternatives. However, these methods
still do not generalize well to real-world data, mainly because they rarely
include domain knowledge or consider physical relationships between variables
such as known equations and units. Regarding these issues, we propose a
Reinforcement-Based Grammar-Guided Symbolic Regression (RBG2-SR) method that
constrains the representational space with domain-knowledge using context-free
grammar as reinforcement action space. We detail a Partially-Observable Markov
Decision Process (POMDP) modeling of the problem and benchmark our approach
against state-of-the-art methods. We also analyze the POMDP state definition
and propose a physical equation search use case on which we compare our
approach to grammar-based and non-grammar-based symbolic regression methods. The
experimental results show that our method is competitive against other
state-of-the-art methods on the benchmarks and offers the best error-complexity
trade-off, highlighting the interest of using a grammar-based method in a
real-world scenario.
| [
{
"created": "Wed, 9 Feb 2022 10:13:14 GMT",
"version": "v1"
}
] | 2022-02-10 | [
[
"Crochepierre",
"Laure",
"",
"RTE, LORIA, ORPAILLEUR, UL"
],
[
"Boudjeloud-Assala",
"Lydia",
"",
"LORIA, ORPAILLEUR, UL"
],
[
"Barbesant",
"Vincent",
"",
"RTE"
]
] | In recent years, symbolic regression has been of wide interest to provide an interpretable symbolic representation of potentially large data relationships. Initially confined to genetic algorithms, symbolic regression methods now include a variety of Deep Learning based alternatives. However, these methods still do not generalize well to real-world data, mainly because they rarely include domain knowledge or consider physical relationships between variables such as known equations and units. Regarding these issues, we propose a Reinforcement-Based Grammar-Guided Symbolic Regression (RBG2-SR) method that constrains the representational space with domain-knowledge using context-free grammar as reinforcement action space. We detail a Partially-Observable Markov Decision Process (POMDP) modeling of the problem and benchmark our approach against state-of-the-art methods. We also analyze the POMDP state definition and propose a physical equation search use case on which we compare our approach to grammar-based and non-grammar-based symbolic regression methods. The experimental results show that our method is competitive against other state-of-the-art methods on the benchmarks and offers the best error-complexity trade-off, highlighting the interest of using a grammar-based method in a real-world scenario. |
2303.12964 | Tao Yang | Tao Yang | Continuous Indeterminate Probability Neural Network | 8 pages | null | null | null | cs.LG cs.AI cs.CV stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper introduces a general model called CIPNN - Continuous Indeterminate
Probability Neural Network, and this model is based on IPNN, which is used for
discrete latent random variables. Currently, the posterior of continuous latent
variables is regarded as intractable; with the new theory proposed by IPNN, this
problem can be solved. Our contributions are four-fold. First, we derive the
analytical solution of the posterior calculation of continuous latent random
variables and propose a general classification model (CIPNN). Second, we
propose a general auto-encoder called CIPAE - Continuous Indeterminate
Probability Auto-Encoder; the decoder part is not a neural network and uses a
fully probabilistic inference model for the first time. Third, we propose a new
method to visualize the latent random variables: we use one of the N
dimensional latent variables as a decoder to reconstruct the input image, which
works even for classification tasks; in this way, we can see what each latent
variable has learned. Fourth, IPNN has shown great classification capability;
CIPNN pushes this classification capability to infinity. Theoretical
advantages are reflected in experimental results.
| [
{
"created": "Thu, 23 Mar 2023 00:11:17 GMT",
"version": "v1"
}
] | 2023-03-24 | [
[
"Yang",
"Tao",
""
]
] | This paper introduces a general model called CIPNN - Continuous Indeterminate Probability Neural Network, and this model is based on IPNN, which is used for discrete latent random variables. Currently, the posterior of continuous latent variables is regarded as intractable; with the new theory proposed by IPNN, this problem can be solved. Our contributions are four-fold. First, we derive the analytical solution of the posterior calculation of continuous latent random variables and propose a general classification model (CIPNN). Second, we propose a general auto-encoder called CIPAE - Continuous Indeterminate Probability Auto-Encoder; the decoder part is not a neural network and uses a fully probabilistic inference model for the first time. Third, we propose a new method to visualize the latent random variables: we use one of the N dimensional latent variables as a decoder to reconstruct the input image, which works even for classification tasks; in this way, we can see what each latent variable has learned. Fourth, IPNN has shown great classification capability; CIPNN pushes this classification capability to infinity. Theoretical advantages are reflected in experimental results. |
2104.02865 | Art Owen | Sifan Liu and Art B. Owen | Quasi-Newton Quasi-Monte Carlo for variational Bayes | null | null | null | null | cs.LG cs.NA math.NA stat.ML | http://creativecommons.org/licenses/by/4.0/ | Many machine learning problems optimize an objective that must be measured
with noise. The primary method is a first order stochastic gradient descent
using one or more Monte Carlo (MC) samples at each step. There are settings
where ill-conditioning makes second order methods such as L-BFGS more
effective. We study the use of randomized quasi-Monte Carlo (RQMC) sampling for
such problems. When MC sampling has a root mean squared error (RMSE) of
$O(n^{-1/2})$ then RQMC has an RMSE of $o(n^{-1/2})$ that can be close to
$O(n^{-3/2})$ in favorable settings. We prove that improved sampling accuracy
translates directly to improved optimization. In our empirical investigations
for variational Bayes, using RQMC with stochastic L-BFGS greatly speeds up the
optimization, and sometimes finds a better parameter value than MC does.
| [
{
"created": "Wed, 7 Apr 2021 02:34:03 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Apr 2021 00:58:02 GMT",
"version": "v2"
}
] | 2021-04-22 | [
[
"Liu",
"Sifan",
""
],
[
"Owen",
"Art B.",
""
]
] | Many machine learning problems optimize an objective that must be measured with noise. The primary method is a first order stochastic gradient descent using one or more Monte Carlo (MC) samples at each step. There are settings where ill-conditioning makes second order methods such as L-BFGS more effective. We study the use of randomized quasi-Monte Carlo (RQMC) sampling for such problems. When MC sampling has a root mean squared error (RMSE) of $O(n^{-1/2})$ then RQMC has an RMSE of $o(n^{-1/2})$ that can be close to $O(n^{-3/2})$ in favorable settings. We prove that improved sampling accuracy translates directly to improved optimization. In our empirical investigations for variational Bayes, using RQMC with stochastic L-BFGS greatly speeds up the optimization, and sometimes finds a better parameter value than MC does. |
1203.1833 | Paul Hines Ph.D. | Josh C. Bongard, Paul D. H. Hines, Dylan Conger, Peter Hurd, and
Zhenyu Lu | Crowdsourcing Predictors of Behavioral Outcomes | null | IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol.
43, no. 1, pp. 176 - 185, 2013 | 10.1109/TSMCA.2012.2195168 | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating models from large data sets -- and determining which subsets of
data to mine -- is becoming increasingly automated. However, choosing what data
to collect in the first place requires human intuition or experience, usually
supplied by a domain expert. This paper describes a new approach to machine
science which demonstrates for the first time that non-domain experts can
collectively formulate features, and provide values for those features such
that they are predictive of some behavioral outcome of interest. This was
accomplished by building a web platform in which human groups interact to both
respond to questions likely to help predict a behavioral outcome and pose new
questions to their peers. This results in a dynamically-growing online survey,
but the result of this cooperative behavior also leads to models that can
predict users' outcomes based on their responses to the user-generated survey
questions. Here we describe two web-based experiments that instantiate this
approach: the first site led to models that can predict users' monthly electric
energy consumption; the other led to models that can predict users' body mass
index. As exponential increases in content are often observed in successful
online collaborative communities, the proposed methodology may, in the future,
lead to similar exponential rises in discovery and insight into the causal
factors of behavioral outcomes.
| [
{
"created": "Thu, 8 Mar 2012 15:56:22 GMT",
"version": "v1"
}
] | 2014-05-20 | [
[
"Bongard",
"Josh C.",
""
],
[
"Hines",
"Paul D. H.",
""
],
[
"Conger",
"Dylan",
""
],
[
"Hurd",
"Peter",
""
],
[
"Lu",
"Zhenyu",
""
]
] | Generating models from large data sets -- and determining which subsets of data to mine -- is becoming increasingly automated. However, choosing what data to collect in the first place requires human intuition or experience, usually supplied by a domain expert. This paper describes a new approach to machine science which demonstrates for the first time that non-domain experts can collectively formulate features, and provide values for those features such that they are predictive of some behavioral outcome of interest. This was accomplished by building a web platform in which human groups interact to both respond to questions likely to help predict a behavioral outcome and pose new questions to their peers. This results in a dynamically-growing online survey, but the result of this cooperative behavior also leads to models that can predict users' outcomes based on their responses to the user-generated survey questions. Here we describe two web-based experiments that instantiate this approach: the first site led to models that can predict users' monthly electric energy consumption; the other led to models that can predict users' body mass index. As exponential increases in content are often observed in successful online collaborative communities, the proposed methodology may, in the future, lead to similar exponential rises in discovery and insight into the causal factors of behavioral outcomes. |
2207.04188 | Joao P. A. Dantas | Joao P. A. Dantas, Andre N. Costa, Felipe L. L. Medeiros, Diego
Geraldo, Marcos R. O. A. Maximo and Takashi Yoneyama | Supervised Machine Learning for Effective Missile Launch Based on Beyond
Visual Range Air Combat Simulations | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work compares supervised machine learning methods using reliable data
from constructive simulations to estimate the most effective moment for
launching missiles during air combat. We employed resampling techniques to
improve the predictive model, analyzing accuracy, precision, recall, and
f1-score. Indeed, we could identify the remarkable performance of the models
based on decision trees and the significant sensitivity of other algorithms to
resampling techniques. The models with the best f1-score brought values of
0.379 and 0.465 without and with the resampling technique, respectively, which
is an increase of 22.69%. Thus, if desirable, resampling techniques can improve
the model's recall and f1-score with a slight decline in accuracy and
precision. Therefore, through data obtained through constructive simulations,
it is possible to develop decision support tools based on machine learning
models, which may improve the flight quality in BVR air combat, increasing the
effectiveness of offensive missions to hit a particular target.
| [
{
"created": "Sat, 9 Jul 2022 04:06:00 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Dantas",
"Joao P. A.",
""
],
[
"Costa",
"Andre N.",
""
],
[
"Medeiros",
"Felipe L. L.",
""
],
[
"Geraldo",
"Diego",
""
],
[
"Maximo",
"Marcos R. O. A.",
""
],
[
"Yoneyama",
"Takashi",
""
]
] | This work compares supervised machine learning methods using reliable data from constructive simulations to estimate the most effective moment for launching missiles during air combat. We employed resampling techniques to improve the predictive model, analyzing accuracy, precision, recall, and f1-score. Indeed, we could identify the remarkable performance of the models based on decision trees and the significant sensitivity of other algorithms to resampling techniques. The models with the best f1-score brought values of 0.379 and 0.465 without and with the resampling technique, respectively, which is an increase of 22.69%. Thus, if desirable, resampling techniques can improve the model's recall and f1-score with a slight decline in accuracy and precision. Therefore, through data obtained through constructive simulations, it is possible to develop decision support tools based on machine learning models, which may improve the flight quality in BVR air combat, increasing the effectiveness of offensive missions to hit a particular target. |
2303.08652 | Nestor Prieto-Chavana | Nestor Prieto-Chavana, Julie Weeds, David Weir | Automated Query Generation for Evidence Collection from Web Search
Engines | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is widely accepted that so-called facts can be checked by searching for
information on the Internet. This process requires a fact-checker to formulate
a search query based on the fact and to present it to a search engine. Then,
relevant and believable passages need to be identified in the search results
before a decision is made. This process is carried out by sub-editors at many
news and media organisations on a daily basis. Here, we ask the question as to
whether it is possible to automate the first step, that of query generation.
Can we automatically formulate search queries based on factual statements which
are similar to those formulated by human experts? Here, we consider similarity
both in terms of textual similarity and with respect to relevant documents
being returned by a search engine. First, we introduce a moderate-sized
evidence collection dataset which includes 390 factual statements together with
associated human-generated search queries and search results. Then, we
investigate generating queries using a number of rule-based and automatic text
generation methods based on pre-trained large language models (LLMs). We show
that these methods have different merits and propose a hybrid approach which
has superior performance in practice.
| [
{
"created": "Wed, 15 Mar 2023 14:32:00 GMT",
"version": "v1"
}
] | 2023-03-16 | [
[
"Prieto-Chavana",
"Nestor",
""
],
[
"Weeds",
"Julie",
""
],
[
"Weir",
"David",
""
]
] | It is widely accepted that so-called facts can be checked by searching for information on the Internet. This process requires a fact-checker to formulate a search query based on the fact and to present it to a search engine. Then, relevant and believable passages need to be identified in the search results before a decision is made. This process is carried out by sub-editors at many news and media organisations on a daily basis. Here, we ask the question as to whether it is possible to automate the first step, that of query generation. Can we automatically formulate search queries based on factual statements which are similar to those formulated by human experts? Here, we consider similarity both in terms of textual similarity and with respect to relevant documents being returned by a search engine. First, we introduce a moderate-sized evidence collection dataset which includes 390 factual statements together with associated human-generated search queries and search results. Then, we investigate generating queries using a number of rule-based and automatic text generation methods based on pre-trained large language models (LLMs). We show that these methods have different merits and propose a hybrid approach which has superior performance in practice. |
2208.06242 | Pegah Rokhforoz | Pegah Rokhforoz, Olga Fink | Multi-Agent Reinforcement Learning with Graph Convolutional Neural
Networks for optimal Bidding Strategies of Generation Units in Electricity
Markets | null | null | null | null | cs.AI cs.LG cs.MA | http://creativecommons.org/licenses/by/4.0/ | Finding optimal bidding strategies for generation units in electricity
markets would result in higher profit. However, it is a challenging problem due
to the system uncertainty arising from the unknown strategies of the other
generation units. Distributed optimization, where each entity or agent decides on its
bid individually, has become state of the art. However, it cannot overcome the
challenges of system uncertainties. Deep reinforcement learning is a promising
approach to learn the optimal strategy in uncertain environments. Nevertheless,
it is not able to integrate the information on the spatial system topology in
the learning process. This paper proposes a distributed learning algorithm
based on deep reinforcement learning (DRL) combined with a graph convolutional
neural network (GCN). In fact, the proposed framework helps the agents to
update their decisions by getting feedback from the environment so that it can
overcome the challenges of the uncertainties. In this proposed algorithm, the
state and connection between nodes are the inputs of the GCN, which can make
agents aware of the structure of the system. This information on the system
topology helps the agents to improve their bidding strategies and increase the
profit. We evaluate the proposed algorithm on the IEEE 30-bus system under
different scenarios. Also, to investigate the generalization ability of the
proposed approach, we test the trained model on the IEEE 39-bus system. The
results show that the proposed algorithm generalizes better than DRL alone and
can result in higher profit when the topology of the system changes.
| [
{
"created": "Thu, 11 Aug 2022 09:29:31 GMT",
"version": "v1"
}
] | 2022-08-15 | [
[
"Rokhforoz",
"Pegah",
""
],
[
"Fink",
"Olga",
""
]
] | Finding optimal bidding strategies for generation units in electricity markets would result in higher profit. However, it is a challenging problem due to the system uncertainty arising from the unknown strategies of the other generation units. Distributed optimization, where each entity or agent decides on its bid individually, has become state of the art. However, it cannot overcome the challenges of system uncertainties. Deep reinforcement learning is a promising approach to learn the optimal strategy in uncertain environments. Nevertheless, it is not able to integrate the information on the spatial system topology in the learning process. This paper proposes a distributed learning algorithm based on deep reinforcement learning (DRL) combined with a graph convolutional neural network (GCN). In fact, the proposed framework helps the agents to update their decisions by getting feedback from the environment so that it can overcome the challenges of the uncertainties. In this proposed algorithm, the state and connection between nodes are the inputs of the GCN, which can make agents aware of the structure of the system. This information on the system topology helps the agents to improve their bidding strategies and increase the profit. We evaluate the proposed algorithm on the IEEE 30-bus system under different scenarios. Also, to investigate the generalization ability of the proposed approach, we test the trained model on the IEEE 39-bus system. The results show that the proposed algorithm generalizes better than DRL alone and can result in higher profit when the topology of the system changes. |
2105.10227 | Sani Abdullahi | Sani M. Abdullahi and Sun Shuifa | Random Hash Code Generation for Cancelable Fingerprint Templates using
Vector Permutation and Shift-order Process | 10 pages, 5 figures | null | null | null | cs.CR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cancelable biometric techniques have been used to prevent the compromise of
biometric data by generating and using their corresponding cancelable templates
for user authentication. However, the non-invertible distance preserving
transformation methods employed in various schemes are often vulnerable to
information leakage since matching is performed in the transformed domain. In
this paper, we propose a non-invertible distance preserving scheme based on
vector permutation and shift-order process. First, the dimension of feature
vectors is reduced using kernelized principal component analysis (KPCA) prior
to randomly permuting the extracted vector features. A shift-order process is
then applied to the generated features in order to achieve non-invertibility
and combat similarity-based attacks. The generated hash codes are resilient to
different security and privacy attacks whilst fulfilling the major revocability
and unlinkability requirements. Experimental evaluation conducted on 6 datasets
of FVC2002 and FVC2004 reveals that the proposed scheme achieves higher
accuracy than other existing state-of-the-art schemes.
| [
{
"created": "Fri, 21 May 2021 09:37:54 GMT",
"version": "v1"
}
] | 2021-05-25 | [
[
"Abdullahi",
"Sani M.",
""
],
[
"Shuifa",
"Sun",
""
]
] | Cancelable biometric techniques have been used to prevent the compromise of biometric data by generating and using their corresponding cancelable templates for user authentication. However, the non-invertible distance preserving transformation methods employed in various schemes are often vulnerable to information leakage since matching is performed in the transformed domain. In this paper, we propose a non-invertible distance preserving scheme based on vector permutation and shift-order process. First, the dimension of feature vectors is reduced using kernelized principal component analysis (KPCA) prior to randomly permuting the extracted vector features. A shift-order process is then applied to the generated features in order to achieve non-invertibility and combat similarity-based attacks. The generated hash codes are resilient to different security and privacy attacks whilst fulfilling the major revocability and unlinkability requirements. Experimental evaluation conducted on 6 datasets of FVC2002 and FVC2004 reveals that the proposed scheme achieves higher accuracy than other existing state-of-the-art schemes. |
1907.05514 | Sing-Ho Bae | Abdul Muqeet, Md Tauhid Bin Iqbal, and Sung-Ho Bae | Hybrid Residual Attention Network for Single Image Super Resolution | 12 pages, 5 figures | null | 10.1109/ACCESS.2019.2942346 | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The extraction and proper utilization of convolution neural network (CNN)
features have a significant impact on the performance of image super-resolution
(SR). Although CNN features contain both the spatial and channel information,
current deep techniques for SR often fail to maximize performance due to using
either the spatial or the channel information. Moreover, they integrate such
information within a deep or wide network rather than exploiting all the
available features, eventually resulting in high computational complexity. To
address these issues, we present a binarized feature fusion (BFF) structure
that utilizes the extracted features from residual groups (RG) in an effective
way. Each residual group (RG) consists of multiple hybrid residual attention
blocks (HRAB) that effectively integrate the multiscale feature extraction
module and channel attention mechanism in a single block. Furthermore, we use
dilated convolutions with different dilation factors to extract multiscale
features. We also propose to adopt global, short and long skip connections and
residual groups (RG) structure to ease the flow of information without losing
important feature details. In the paper, we call this overall network
architecture the hybrid residual attention network (HRAN). In experiments, we
have observed the efficacy of our method against state-of-the-art methods in
both quantitative and qualitative comparisons.
| [
{
"created": "Thu, 11 Jul 2019 22:48:23 GMT",
"version": "v1"
}
] | 2019-10-23 | [
[
"Muqeet",
"Abdul",
""
],
[
"Iqbal",
"Md Tauhid Bin",
""
],
[
"Bae",
"Sung-Ho",
""
]
] | The extraction and proper utilization of convolution neural network (CNN) features have a significant impact on the performance of image super-resolution (SR). Although CNN features contain both the spatial and channel information, current deep techniques for SR often fail to maximize performance due to using either the spatial or the channel information. Moreover, they integrate such information within a deep or wide network rather than exploiting all the available features, eventually resulting in high computational complexity. To address these issues, we present a binarized feature fusion (BFF) structure that utilizes the extracted features from residual groups (RG) in an effective way. Each residual group (RG) consists of multiple hybrid residual attention blocks (HRAB) that effectively integrate the multiscale feature extraction module and channel attention mechanism in a single block. Furthermore, we use dilated convolutions with different dilation factors to extract multiscale features. We also propose to adopt global, short and long skip connections and residual groups (RG) structure to ease the flow of information without losing important feature details. In the paper, we call this overall network architecture the hybrid residual attention network (HRAN). In experiments, we have observed the efficacy of our method against state-of-the-art methods in both quantitative and qualitative comparisons. |
2402.04999 | Francesco Corso | Francesco Corso, Giuseppe Russo, Francesco Pierri | A Longitudinal Study of Italian and French Reddit Conversations Around
the Russian Invasion of Ukraine | 18 pages, 10 figures, Accepted at ACM WEBSCI'24 - Update: Added a
reference | null | 10.1145/3614419.3644012 | null | cs.SI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Global events like wars and pandemics can intensify online discussions,
fostering information sharing and connection among individuals. However, the
divisive nature of such events may lead to polarization within online
communities, shaping the dynamics of online interactions. Our study delves into
the conversations within the largest Italian and French Reddit communities,
specifically examining how the Russian invasion of Ukraine affected online
interactions. We use a dataset with over 3 million posts (i.e., comments and
submissions) to (1) describe the patterns of moderation activity and (2)
characterize war-related discussions in the subreddits. We found a change in
moderators' behavior: they became more active during the first month of the war.
Moreover, we identified a connection between the daily sentiment of comments
and the prevalence of war-related discussions. These discussions were not only
more negative and toxic compared to non-war-related ones but also did not
involve a specific demographic group. Our research reveals that there is no
tendency for users with similar characteristics to interact more. Overall, our
study reveals how the war in Ukraine had a negative influence on daily
conversations in the analyzed communities. This sheds light on how users
responded to this significant event, providing insights into the dynamics of
online discussions during events of global relevance.
| [
{
"created": "Wed, 7 Feb 2024 16:15:52 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Feb 2024 09:31:50 GMT",
"version": "v2"
}
] | 2024-02-20 | [
[
"Corso",
"Francesco",
""
],
[
"Russo",
"Giuseppe",
""
],
[
"Pierri",
"Francesco",
""
]
] | Global events like wars and pandemics can intensify online discussions, fostering information sharing and connection among individuals. However, the divisive nature of such events may lead to polarization within online communities, shaping the dynamics of online interactions. Our study delves into the conversations within the largest Italian and French Reddit communities, specifically examining how the Russian invasion of Ukraine affected online interactions. We use a dataset with over 3 million posts (i.e., comments and submissions) to (1) describe the patterns of moderation activity and (2) characterize war-related discussions in the subreddits. We found a change in moderators' behavior: they became more active during the first month of the war. Moreover, we identified a connection between the daily sentiment of comments and the prevalence of war-related discussions. These discussions were not only more negative and toxic compared to non-war-related ones but also did not involve a specific demographic group. Our research reveals that there is no tendency for users with similar characteristics to interact more. Overall, our study reveals how the war in Ukraine had a negative influence on daily conversations in the analyzed communities. This sheds light on how users responded to this significant event, providing insights into the dynamics of online discussions during events of global relevance. |
1909.00989 | Andreas Pavlogiannis | Krishnendu Chatterjee and Andreas Pavlogiannis and Viktor Toman | Value-centric Dynamic Partial Order Reduction | null | null | null | null | cs.PL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The verification of concurrent programs remains an open challenge, as thread
interaction has to be accounted for, which leads to state-space explosion.
Stateless model checking battles this problem by exploring traces rather than
states of the program. As there are exponentially many traces, dynamic
partial-order reduction (DPOR) techniques are used to partition the trace space
into equivalence classes, and explore a few representatives from each class.
The standard equivalence that underlies most DPOR techniques is the
happens-before equivalence; however, recent works have spawned a vivid interest
in coarser equivalences. The efficiency of such approaches is a product of
two parameters: (i) the size of the partitioning induced by the equivalence,
and (ii) the time spent by the exploration algorithm in each class of the
partitioning.
In this work, we present a new equivalence, called value-happens-before, and
show that it has two appealing features. First, value-happens-before is always
at least as coarse as the happens-before equivalence, and can be even
exponentially coarser. Second, the value-happens-before partitioning is
efficiently explorable when the number of threads is bounded. We present an
algorithm called value-centric DPOR (VCDPOR), which explores the underlying
partitioning using polynomial time per class. Finally, we perform an
experimental evaluation of VCDPOR on various benchmarks, and compare it against
other state-of-the-art approaches. Our results show that value-happens-before
typically induces a significant reduction in the size of the underlying
partitioning, which leads to a considerable reduction in the running time for
exploring the whole partitioning.
| [
{
"created": "Tue, 3 Sep 2019 08:06:03 GMT",
"version": "v1"
}
] | 2019-09-04 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Pavlogiannis",
"Andreas",
""
],
[
"Toman",
"Viktor",
""
]
] | The verification of concurrent programs remains an open challenge, as thread interaction has to be accounted for, which leads to state-space explosion. Stateless model checking battles this problem by exploring traces rather than states of the program. As there are exponentially many traces, dynamic partial-order reduction (DPOR) techniques are used to partition the trace space into equivalence classes, and explore a few representatives from each class. The standard equivalence that underlies most DPOR techniques is the happens-before equivalence; however, recent works have spawned a vivid interest in coarser equivalences. The efficiency of such approaches is a product of two parameters: (i) the size of the partitioning induced by the equivalence, and (ii) the time spent by the exploration algorithm in each class of the partitioning. In this work, we present a new equivalence, called value-happens-before, and show that it has two appealing features. First, value-happens-before is always at least as coarse as the happens-before equivalence, and can be even exponentially coarser. Second, the value-happens-before partitioning is efficiently explorable when the number of threads is bounded. We present an algorithm called value-centric DPOR (VCDPOR), which explores the underlying partitioning using polynomial time per class. Finally, we perform an experimental evaluation of VCDPOR on various benchmarks, and compare it against other state-of-the-art approaches. Our results show that value-happens-before typically induces a significant reduction in the size of the underlying partitioning, which leads to a considerable reduction in the running time for exploring the whole partitioning. |
2306.12144 | Ying Li | Ying Li, Xiaodong Lee, Botao Peng, Themis Palpanas, Jingan Xue | PrivSketch: A Private Sketch-based Frequency Estimation Protocol for
Data Streams | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Local differential privacy (LDP) has recently become a popular
technique for privacy-preserving data collection. The
main problem of data stream collection under LDP is the poor utility due to
multi-item collection from a very large domain. This paper proposes PrivSketch,
a high-utility frequency estimation protocol taking advantage of sketches,
suitable for private data stream collection. Combining the proposed background
information and a decode-first collection-side workflow, PrivSketch improves
the utility by reducing the errors introduced by the sketching algorithm and
the privacy budget utilization when collecting multiple items. We analytically
prove the superior accuracy and privacy characteristics of PrivSketch, and also
evaluate them experimentally. Our evaluation, with several diverse synthetic
and real datasets, demonstrates that PrivSketch is 1-3 orders of magnitude
better than the competitors in terms of utility in both frequency estimation
and frequent item estimation, while being up to ~100x faster.
| [
{
"created": "Wed, 21 Jun 2023 09:42:13 GMT",
"version": "v1"
}
] | 2023-06-22 | [
[
"Li",
"Ying",
""
],
[
"Lee",
"Xiaodong",
""
],
[
"Peng",
"Botao",
""
],
[
"Palpanas",
"Themis",
""
],
[
"Xue",
"Jingan",
""
]
] | Local differential privacy (LDP) has recently become a popular privacy-preserving data collection technique protecting users' privacy. The main problem of data stream collection under LDP is the poor utility due to multi-item collection from a very large domain. This paper proposes PrivSketch, a high-utility frequency estimation protocol taking advantage of sketches, suitable for private data stream collection. Combining the proposed background information and a decode-first collection-side workflow, PrivSketch improves the utility by reducing the errors introduced by the sketching algorithm and the privacy budget utilization when collecting multiple items. We analytically prove the superior accuracy and privacy characteristics of PrivSketch, and also evaluate them experimentally. Our evaluation, with several diverse synthetic and real datasets, demonstrates that PrivSketch is 1-3 orders of magnitude better than the competitors in terms of utility in both frequency estimation and frequent item estimation, while being up to ~100x faster. |
2111.05990 | Bo Wang | Bo Wang, Reza Mohajerpoor, Chen Cai, Inhi Kim, Hai L. Vu | Traffic4cast -- Large-scale Traffic Prediction using 3DResNet and
Sparse-UNet | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The IARAI competition Traffic4cast 2021 aims to predict short-term city-wide
high-resolution traffic states given the static and dynamic traffic information
obtained previously. The aim is to build a machine learning model for
predicting the normalized average traffic speed and flow of the subregions of
multiple large-scale cities using historical data points. The model is intended
to be generic, so that it can be applied to new cities. By considering
spatiotemporal feature learning and modeling efficiency, we explore 3DResNet
and Sparse-UNet approaches for the tasks in this competition. The
3DResNet-based models use 3D convolution to learn the spatiotemporal features and apply
sequential convolutional layers to enhance the temporal relationship of the
outputs. The Sparse-UNet model uses sparse convolutions as the backbone for
spatiotemporal feature learning. Since the latter algorithm mainly focuses on
non-zero data points of the inputs, it dramatically reduces the computation
time, while maintaining a competitive accuracy. Our results show that both of
the proposed models achieve much better performance than the baseline
algorithms. The codes and pretrained models are available at
https://github.com/resuly/Traffic4Cast-2021.
| [
{
"created": "Wed, 10 Nov 2021 23:40:52 GMT",
"version": "v1"
}
] | 2021-11-12 | [
[
"Wang",
"Bo",
""
],
[
"Mohajerpoor",
"Reza",
""
],
[
"Cai",
"Chen",
""
],
[
"Kim",
"Inhi",
""
],
[
"Vu",
"Hai L.",
""
]
] | The IARAI competition Traffic4cast 2021 aims to predict short-term city-wide high-resolution traffic states given the static and dynamic traffic information obtained previously. The aim is to build a machine learning model for predicting the normalized average traffic speed and flow of the subregions of multiple large-scale cities using historical data points. The model is supposed to be generic, in a way that it can be applied to new cities. By considering spatiotemporal feature learning and modeling efficiency, we explore 3DResNet and Sparse-UNet approaches for the tasks in this competition. The 3DResNet based models use 3D convolution to learn the spatiotemporal features and apply sequential convolutional layers to enhance the temporal relationship of the outputs. The Sparse-UNet model uses sparse convolutions as the backbone for spatiotemporal feature learning. Since the latter algorithm mainly focuses on non-zero data points of the inputs, it dramatically reduces the computation time, while maintaining a competitive accuracy. Our results show that both of the proposed models achieve much better performance than the baseline algorithms. The codes and pretrained models are available at https://github.com/resuly/Traffic4Cast-2021. |
1002.0215 | Mauro Gaio | Marie-No\"elle Bessagnet (LIUPPA), Eric Kergosien (LIUPPA), Mauro Gaio
(LIUPPA) | Extraction de termes, reconnaissance et labellisation de relations dans
un th\'esaurus | null | CIDE'12: 12e Colloque International sur le Document Electronique,
Montr\'eal : Canada (2009) | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within the documentary system domain, the integration of thesauri for
the indexing and information retrieval steps is common. In libraries, documents carry
rich descriptive information produced by librarians as descriptive notices based
on the Rameau thesaurus. We exploit two kinds of information in order to create a
first semantic structure. A step of conceptualization allows us to define the
various modules used to automatically build the semantic structure of the
indexation work. Our current work focuses on an approach that aims to define an
ontology based on a thesaurus. We hope to integrate new knowledge
characterizing the territory of our structure (adding "toponyms" and links
between concepts) thanks to a geographic information system (GIS).
| [
{
"created": "Mon, 1 Feb 2010 10:17:55 GMT",
"version": "v1"
}
] | 2010-10-06 | [
[
"Bessagnet",
"Marie-Noëlle",
"",
"LIUPPA"
],
[
"Kergosien",
"Eric",
"",
"LIUPPA"
],
[
"Gaio",
"Mauro",
"",
"LIUPPA"
]
] | Within the documentary system domain, the integration of thesauri for the indexing and information retrieval steps is common. In libraries, documents carry rich descriptive information produced by librarians as descriptive notices based on the Rameau thesaurus. We exploit two kinds of information in order to create a first semantic structure. A step of conceptualization allows us to define the various modules used to automatically build the semantic structure of the indexation work. Our current work focuses on an approach that aims to define an ontology based on a thesaurus. We hope to integrate new knowledge characterizing the territory of our structure (adding "toponyms" and links between concepts) thanks to a geographic information system (GIS). |
2008.05777 | Tianyi Ko | Tianyi Ko | A Tendon-driven Robot Gripper with Passively Switchable Underactuated
Surface and its Physics Simulation Based Parameter Optimization | null | IEEE Robotics and Automation Letters, vol. 5, no. 4, pp.
5002-5009, Oct. 2020 | 10.1109/LRA.2020.3005131 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a single-actuator gripper that can lift thin
objects lying on a flat surface, in addition to functioning as a standard
parallel gripper. The key is a crawler on the fingertip, which is underactuated
together with the other finger joints and switched with a passive, spring-loaded
mechanism. While the idea of a crawling finger is not new, this paper
contributes a realization of crawling without an additional motor. The gripper can
passively change the mode from the parallel approach mode to the pull-in mode,
then finally to the power grasp mode, according to the grasping state. To
optimize the highly underactuated system, we take a combination of black-box
optimization and physics simulation of the whole grasp process. We show that
this simulation-based approach can effectively consider the precontact motion,
in-hand manipulation, power grasp stability, and even failure mode, which is
difficult for the static-equilibrium-analysis-based approaches. In the last
part of the paper, we demonstrate that a prototype gripper with the proposed
structure and design parameters optimized under the proposed process
successfully power-grasped a thin sheet, a softcover book, and a cylinder lying
on a flat surface.
| [
{
"created": "Thu, 13 Aug 2020 09:44:28 GMT",
"version": "v1"
}
] | 2020-08-14 | [
[
"Ko",
"Tianyi",
""
]
] | In this paper, we propose a single-actuator gripper that can lift thin objects lying on a flat surface, in addition to functioning as a standard parallel gripper. The key is a crawler on the fingertip, which is underactuated together with the other finger joints and switched with a passive, spring-loaded mechanism. While the idea of a crawling finger is not new, this paper contributes a realization of crawling without an additional motor. The gripper can passively change the mode from the parallel approach mode to the pull-in mode, then finally to the power grasp mode, according to the grasping state. To optimize the highly underactuated system, we take a combination of black-box optimization and physics simulation of the whole grasp process. We show that this simulation-based approach can effectively consider the precontact motion, in-hand manipulation, power grasp stability, and even failure mode, which is difficult for the static-equilibrium-analysis-based approaches. In the last part of the paper, we demonstrate that a prototype gripper with the proposed structure and design parameters optimized under the proposed process successfully power-grasped a thin sheet, a softcover book, and a cylinder lying on a flat surface. |
2404.11470 | Tharindu Ranasinghe Dr | Marcos Zampieri, Damith Premasiri, Tharindu Ranasinghe | A Federated Learning Approach to Privacy Preserving Offensive Language
Identification | Accepted to TRAC 2024 (Fourth Workshop on Threat, Aggression and
Cyberbullying) at LREC-COLING 2024 (The 2024 Joint International Conference
on Computational Linguistics, Language Resources and Evaluation) | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | The spread of various forms of offensive speech online is an important
concern in social media. While platforms have been investing heavily in ways of
coping with this problem, the question of privacy remains largely unaddressed.
Models trained to detect offensive language on social media are trained and/or
fine-tuned using large amounts of data often stored in centralized servers.
Since most social media data originates from end users, we propose a privacy
preserving decentralized architecture for identifying offensive language online
by introducing Federated Learning (FL) in the context of offensive language
identification. FL is a decentralized architecture that allows multiple models
to be trained locally without the need for data sharing, hence preserving users'
privacy. We propose a model fusion approach to perform FL. We trained multiple
deep learning models on four publicly available English benchmark datasets
(AHSD, HASOC, HateXplain, OLID) and evaluated their performance in detail. We
also present initial cross-lingual experiments in English and Spanish. We show
that the proposed model fusion approach outperforms baselines in all the
datasets while preserving privacy.
| [
{
"created": "Wed, 17 Apr 2024 15:23:12 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Zampieri",
"Marcos",
""
],
[
"Premasiri",
"Damith",
""
],
[
"Ranasinghe",
"Tharindu",
""
]
] | The spread of various forms of offensive speech online is an important concern in social media. While platforms have been investing heavily in ways of coping with this problem, the question of privacy remains largely unaddressed. Models trained to detect offensive language on social media are trained and/or fine-tuned using large amounts of data often stored in centralized servers. Since most social media data originates from end users, we propose a privacy preserving decentralized architecture for identifying offensive language online by introducing Federated Learning (FL) in the context of offensive language identification. FL is a decentralized architecture that allows multiple models to be trained locally without the need for data sharing hence preserving users' privacy. We propose a model fusion approach to perform FL. We trained multiple deep learning models on four publicly available English benchmark datasets (AHSD, HASOC, HateXplain, OLID) and evaluated their performance in detail. We also present initial cross-lingual experiments in English and Spanish. We show that the proposed model fusion approach outperforms baselines in all the datasets while preserving privacy. |
1603.09638 | Berkay Celik | Z. Berkay Celik, Patrick McDaniel, Rauf Izmailov, Nicolas Papernot,
Ryan Sheatsley, Raquel Alvarez, Ananthram Swami | Detection under Privileged Information | A short version of this paper is accepted to ASIACCS 2018 | null | null | null | cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For well over a quarter century, detection systems have been driven by models
learned from input features collected from real or simulated environments. An
artifact (e.g., network event, potential malware sample, suspicious email) is
deemed malicious or non-malicious based on its similarity to the learned model
at runtime. However, the training of the models has been historically limited
to only those features available at runtime. In this paper, we consider an
alternate learning approach that trains models using "privileged"
information--features available at training time but not at runtime--to improve
the accuracy and resilience of detection systems. In particular, we adapt and
extend recent advances in knowledge transfer, model influence, and distillation
to enable the use of forensic or other data unavailable at runtime in a range
of security domains. An empirical evaluation shows that privileged information
increases precision and recall over a system with no privileged information: we
observe up to 7.7% relative decrease in detection error for fast-flux bot
detection, 8.6% for malware traffic detection, 7.3% for malware classification,
and 16.9% for face recognition. We explore the limitations and applications of
different privileged information techniques in detection systems. Such
techniques provide a new means for detection systems to learn from data that
would otherwise not be available at runtime.
| [
{
"created": "Thu, 31 Mar 2016 15:28:45 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jul 2016 13:59:01 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Nov 2016 01:17:06 GMT",
"version": "v3"
},
{
"created": "Sat, 31 Mar 2018 02:12:21 GMT",
"version": "v4"
}
] | 2018-04-03 | [
[
"Celik",
"Z. Berkay",
""
],
[
"McDaniel",
"Patrick",
""
],
[
"Izmailov",
"Rauf",
""
],
[
"Papernot",
"Nicolas",
""
],
[
"Sheatsley",
"Ryan",
""
],
[
"Alvarez",
"Raquel",
""
],
[
"Swami",
"Ananthram",
""
]
] | For well over a quarter century, detection systems have been driven by models learned from input features collected from real or simulated environments. An artifact (e.g., network event, potential malware sample, suspicious email) is deemed malicious or non-malicious based on its similarity to the learned model at runtime. However, the training of the models has been historically limited to only those features available at runtime. In this paper, we consider an alternate learning approach that trains models using "privileged" information--features available at training time but not at runtime--to improve the accuracy and resilience of detection systems. In particular, we adapt and extend recent advances in knowledge transfer, model influence, and distillation to enable the use of forensic or other data unavailable at runtime in a range of security domains. An empirical evaluation shows that privileged information increases precision and recall over a system with no privileged information: we observe up to 7.7% relative decrease in detection error for fast-flux bot detection, 8.6% for malware traffic detection, 7.3% for malware classification, and 16.9% for face recognition. We explore the limitations and applications of different privileged information techniques in detection systems. Such techniques provide a new means for detection systems to learn from data that would otherwise not be available at runtime. |
1904.11112 | Chaoyang Wang | Chaoyang Wang, Simon Lucey, Federico Perazzi, Oliver Wang | Web Stereo Video Supervision for Depth Prediction from Dynamic Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a fully data-driven method to compute depth from diverse monocular
video sequences that contain large amounts of non-rigid objects, e.g., people.
In order to learn reconstruction cues for non-rigid scenes, we introduce a new
dataset consisting of stereo videos scraped in-the-wild. This dataset has a
wide variety of scene types, and features large amounts of nonrigid objects,
especially people. From this, we compute disparity maps to be used as
supervision to train our approach. We propose a loss function that allows us to
generate a depth prediction even with unknown camera intrinsics and stereo
baselines in the dataset. We validate the use of large amounts of Internet
video by evaluating our method on existing video datasets with depth
supervision, including SINTEL and KITTI, and show that our approach
generalizes better to natural scenes.
| [
{
"created": "Thu, 25 Apr 2019 01:13:16 GMT",
"version": "v1"
}
] | 2019-04-26 | [
[
"Wang",
"Chaoyang",
""
],
[
"Lucey",
"Simon",
""
],
[
"Perazzi",
"Federico",
""
],
[
"Wang",
"Oliver",
""
]
] | We present a fully data-driven method to compute depth from diverse monocular video sequences that contain large amounts of non-rigid objects, e.g., people. In order to learn reconstruction cues for non-rigid scenes, we introduce a new dataset consisting of stereo videos scraped in-the-wild. This dataset has a wide variety of scene types, and features large amounts of nonrigid objects, especially people. From this, we compute disparity maps to be used as supervision to train our approach. We propose a loss function that allows us to generate a depth prediction even with unknown camera intrinsics and stereo baselines in the dataset. We validate the use of large amounts of Internet video by evaluating our method on existing video datasets with depth supervision, including SINTEL, and KITTI, and show that our approach generalizes better to natural scenes. |
1005.4014 | William Jackson | Gilbert Phuah Leong Siang, Nor Azman Ismail and Pang Yee Yong | A Study on Potential of Integrating Multimodal Interaction into Musical
Conducting Education | http://www.journalofcomputing.org | Journal of Computing, Volume 2, Issue 5, May 2010 | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of computer technology, computer music has begun
to appear in the laboratory, and its potential uses are gradually increasing.
The purpose of this paper is to analyze the possibility of integrating
multimodal interaction, such as vision-based hand gesture and speech
interaction, into musical conducting education. To achieve this purpose, this
paper discusses related research and traditional musical conducting education.
To do so, six musical conductors were interviewed to share their experience of
learning and teaching musical conducting. These interviews are analyzed in this
paper to show the syllabus and the focus of musical conducting education for
beginners.
| [
{
"created": "Fri, 21 May 2010 17:18:29 GMT",
"version": "v1"
}
] | 2010-05-24 | [
[
"Siang",
"Gilbert Phuah Leong",
""
],
[
"Ismail",
"Nor Azman",
""
],
[
"Yong",
"Pang Yee",
""
]
] | With the rapid development of computer technology, computer music has begun to appear in the laboratory, and its potential uses are gradually increasing. The purpose of this paper is to analyze the possibility of integrating multimodal interaction, such as vision-based hand gesture and speech interaction, into musical conducting education. To achieve this purpose, this paper discusses related research and traditional musical conducting education. To do so, six musical conductors were interviewed to share their experience of learning and teaching musical conducting. These interviews are analyzed in this paper to show the syllabus and the focus of musical conducting education for beginners. |
1805.10695 | Anelia Somekh-Baruch | Anelia Somekh-Baruch | Converse Theorems for the DMC with Mismatched Decoding | This work was supported by the Israel Science Foundation (ISF) under
grant 2013/919. Some of the results of this paper were presented at the IEEE
International Symposium on Information Theory (ISIT) 2015 and at the 2016
International Zurich Seminar on Communications. This paper was accepted for
publication in the IEEE Transactions on Information Theory | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of mismatched decoding with an additive metric $q$ for a discrete
memoryless channel $W$ is addressed. The "product-space" improvement of the
random coding lower bound on the mismatch capacity, $C_q^{(\infty)}(W)$, was
introduced by Csisz\'ar and Narayan.
We study two kinds of decoders. The {\it $\delta$-margin mismatched decoder}
outputs a message whose metric with the channel output exceeds that of all the
other codewords by at least $\delta$. The {\it $\tau$-threshold decoder}
outputs a single message whose metric with the channel output exceeds a
threshold $\tau$. Both decoders declare an error if they fail to find a message
that meets the requirement. It is assumed that $q$ is bounded.
It is proved that $C_q^{(\infty)}(W)$ is equal to the mismatch capacity with
a constant margin decoder. We next consider sequences of $P$-constant
composition codebooks, whose empirical distribution of the codewords are at
least $o(n^{-1/2})$ close in the $L_1$ distance sense to $P$. Using the Central
Limit Theorem, it is shown that for such sequences of codebooks the supremum of
achievable rates with constant threshold decoding is upper bounded by the
supremum of the achievable rates with a constant margin decoder, and therefore
also by $C_q^{(\infty)}(W)$.
Further, a soft converse is proved stating that if the average probability of
error of a sequence of codebooks converges to zero sufficiently fast, the rate
of the code sequence is upper bounded by $C_q^{(\infty)}(W)$. In particular, if
$q$ is a bounded rational metric, and the average probability of error
converges to zero faster than $O(n^{-1})$, then $R\leq C_q^{(\infty)}(W)$.
Finally, a max-min multi-letter upper bound on the mismatch capacity that bears
some resemblance to $C_q^{(\infty)}(W)$ is presented.
| [
{
"created": "Sun, 27 May 2018 21:33:55 GMT",
"version": "v1"
}
] | 2018-05-29 | [
[
"Somekh-Baruch",
"Anelia",
""
]
] | The problem of mismatched decoding with an additive metric $q$ for a discrete memoryless channel $W$ is addressed. The "product-space" improvement of the random coding lower bound on the mismatch capacity, $C_q^{(\infty)}(W)$, was introduced by Csisz\'ar and Narayan. We study two kinds of decoders. The {\it $\delta$-margin mismatched decoder} outputs a message whose metric with the channel output exceeds that of all the other codewords by at least $\delta$. The {\it $\tau$-threshold decoder} outputs a single message whose metric with the channel output exceeds a threshold $\tau$. Both decoders declare an error if they fail to find a message that meets the requirement. It is assumed that $q$ is bounded. It is proved that $C_q^{(\infty)}(W)$ is equal to the mismatch capacity with a constant margin decoder. We next consider sequences of $P$-constant composition codebooks, whose empirical distribution of the codewords are at least $o(n^{-1/2})$ close in the $L_1$ distance sense to $P$. Using the Central Limit Theorem, it is shown that for such sequences of codebooks the supremum of achievable rates with constant threshold decoding is upper bounded by the supremum of the achievable rates with a constant margin decoder, and therefore also by $C_q^{(\infty)}(W)$. Further, a soft converse is proved stating that if the average probability of error of a sequence of codebooks converges to zero sufficiently fast, the rate of the code sequence is upper bounded by $C_q^{(\infty)}(W)$. In particular, if $q$ is a bounded rational metric, and the average probability of error converges to zero faster than $O(n^{-1})$, then $R\leq C_q^{(\infty)}(W)$. Finally, a max-min multi-letter upper bound on the mismatch capacity that bears some resemblance to $C_q^{(\infty)}(W)$ is presented. |
cs/0411014 | Paul Vitanyi | Nikolai K. Vereshchagin (Moscow State Univ.), Paul M.B. Vitanyi (CWI
and University of Amsterdam) | Rate Distortion and Denoising of Individual Data Using Kolmogorov
complexity | LaTeX, 31 pages, 2 figures. The new version is again completely
rewritten, newly titled, and adds new results | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the structure of families of distortion balls from the perspective
of Kolmogorov complexity. Special attention is paid to the canonical
rate-distortion function of a source word which returns the minimal Kolmogorov
complexity of all distortion balls containing that word subject to a bound on
their cardinality. This canonical rate-distortion function is related to the
more standard algorithmic rate-distortion function for the given distortion
measure. Examples are given of list distortion, Hamming distortion, and
Euclidean distortion. The algorithmic rate-distortion function can behave
differently from Shannon's rate-distortion function. To this end, we show that
the canonical rate-distortion function can and does assume a wide class of
shapes (unlike Shannon's); we relate low algorithmic mutual information to low
Kolmogorov complexity (and consequently suggest that certain aspects of the
mutual information formulation of Shannon's rate-distortion function behave
differently than would an analogous formulation using algorithmic mutual
information); we explore the notion that low Kolmogorov complexity distortion
balls containing a given word capture the interesting properties of that word
(which is hard to formalize in Shannon's theory) and this suggests an approach
to denoising; and, finally, we show that the different behavior of the
rate-distortion curves of individual source words to some extent disappears
after averaging over the source words.
| [
{
"created": "Sun, 7 Nov 2004 04:05:25 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Nov 2005 18:01:36 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Dec 2005 16:31:24 GMT",
"version": "v3"
},
{
"created": "Thu, 26 Nov 2009 15:39:35 GMT",
"version": "v4"
}
] | 2009-11-26 | [
[
"Vereshchagin",
"Nikolai K.",
"",
"Moscow State Univ."
],
[
"Vitanyi",
"Paul M. B.",
"",
"CWI\n and University of Amsterdam"
]
] | We examine the structure of families of distortion balls from the perspective of Kolmogorov complexity. Special attention is paid to the canonical rate-distortion function of a source word which returns the minimal Kolmogorov complexity of all distortion balls containing that word subject to a bound on their cardinality. This canonical rate-distortion function is related to the more standard algorithmic rate-distortion function for the given distortion measure. Examples are given of list distortion, Hamming distortion, and Euclidean distortion. The algorithmic rate-distortion function can behave differently from Shannon's rate-distortion function. To this end, we show that the canonical rate-distortion function can and does assume a wide class of shapes (unlike Shannon's); we relate low algorithmic mutual information to low Kolmogorov complexity (and consequently suggest that certain aspects of the mutual information formulation of Shannon's rate-distortion function behave differently than would an analogous formulation using algorithmic mutual information); we explore the notion that low Kolmogorov complexity distortion balls containing a given word capture the interesting properties of that word (which is hard to formalize in Shannon's theory) and this suggests an approach to denoising; and, finally, we show that the different behavior of the rate-distortion curves of individual source words to some extent disappears after averaging over the source words. |
2304.05365 | Raaz Dwivedi | Susobhan Ghosh, Raphael Kim, Prasidh Chhabria, Raaz Dwivedi, Predrag
Klasnja, Peng Liao, Kelly Zhang, Susan Murphy | Did we personalize? Assessing personalization by an online reinforcement
learning algorithm using resampling | The first two authors contributed equally | null | null | null | cs.LG stat.AP stat.ME stat.ML | http://creativecommons.org/licenses/by/4.0/ | There is a growing interest in using reinforcement learning (RL) to
personalize sequences of treatments in digital health to support users in
adopting healthier behaviors. Such sequential decision-making problems involve
decisions about when to treat and how to treat based on the user's context
(e.g., prior activity level, location, etc.). Online RL is a promising
data-driven approach for this problem as it learns based on each user's
historical responses and uses that knowledge to personalize these decisions.
However, to decide whether the RL algorithm should be included in an
``optimized'' intervention for real-world deployment, we must assess the data
evidence indicating that the RL algorithm is actually personalizing the
treatments to its users. Due to the stochasticity in the RL algorithm, one may
get a false impression that it is learning in certain states and using this
learning to provide specific treatments. We use a working definition of
personalization and introduce a resampling-based methodology for investigating
whether the personalization exhibited by the RL algorithm is an artifact of the
RL algorithm stochasticity. We illustrate our methodology with a case study by
analyzing the data from a physical activity clinical trial called HeartSteps,
which included the use of an online RL algorithm. We demonstrate how our
approach enhances data-driven truth-in-advertising of algorithm personalization
both across all users and within specific users in the study.
| [
{
"created": "Tue, 11 Apr 2023 17:20:37 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Apr 2023 18:46:08 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Apr 2023 08:39:05 GMT",
"version": "v3"
},
{
"created": "Tue, 23 May 2023 17:05:45 GMT",
"version": "v4"
},
{
"created": "Tue, 30 May 2023 17:02:10 GMT",
"version": "v5"
},
{
"created": "Mon, 7 Aug 2023 15:39:37 GMT",
"version": "v6"
}
] | 2023-08-08 | [
[
"Ghosh",
"Susobhan",
""
],
[
"Kim",
"Raphael",
""
],
[
"Chhabria",
"Prasidh",
""
],
[
"Dwivedi",
"Raaz",
""
],
[
"Klasnja",
"Predrag",
""
],
[
"Liao",
"Peng",
""
],
[
"Zhang",
"Kelly",
""
],
[
"Murphy",
"Susan",
""
]
] | There is a growing interest in using reinforcement learning (RL) to personalize sequences of treatments in digital health to support users in adopting healthier behaviors. Such sequential decision-making problems involve decisions about when to treat and how to treat based on the user's context (e.g., prior activity level, location, etc.). Online RL is a promising data-driven approach for this problem as it learns based on each user's historical responses and uses that knowledge to personalize these decisions. However, to decide whether the RL algorithm should be included in an ``optimized'' intervention for real-world deployment, we must assess the data evidence indicating that the RL algorithm is actually personalizing the treatments to its users. Due to the stochasticity in the RL algorithm, one may get a false impression that it is learning in certain states and using this learning to provide specific treatments. We use a working definition of personalization and introduce a resampling-based methodology for investigating whether the personalization exhibited by the RL algorithm is an artifact of the RL algorithm stochasticity. We illustrate our methodology with a case study by analyzing the data from a physical activity clinical trial called HeartSteps, which included the use of an online RL algorithm. We demonstrate how our approach enhances data-driven truth-in-advertising of algorithm personalization both across all users and within specific users in the study. |
1605.04634 | Changzhe Jiao | Changzhe Jiao, Princess Lyons, Alina Zare, Licet Rosales, Marjorie
Skubic | Heart Beat Characterization from Ballistocardiogram Signals using
Extended Functions of Multiple Instances | IEEE EMBC 2016, pp. 1-5 | IEEE EMBC 2016, pp. 1-5 | 10.1109/EMBC.2016.7590812 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A multiple instance learning (MIL) method, extended Function of Multiple
Instances ($e$FUMI), is applied to ballistocardiogram (BCG) signals produced by
a hydraulic bed sensor. The goal of this approach is to learn a personalized
heartbeat "concept" for an individual. This heartbeat concept is a prototype
(or "signature") that characterizes the heartbeat pattern for an individual in
ballistocardiogram data. The $e$FUMI method models the problem of learning a
heartbeat concept from a BCG signal as a MIL problem. This approach elegantly
addresses the uncertainty inherent in a BCG signal, e.g., misalignment between
training data and ground truth, mis-collection of heartbeat by some
transducers, etc. Given a BCG training signal coupled with a ground truth
signal (e.g., a pulse finger sensor), training "bags" labeled with only binary
labels denoting if a training bag contains a heartbeat signal or not can be
generated. Then, using these bags, $e$FUMI learns a personalized concept of
heartbeat for a subject as well as several non-heartbeat background concepts.
After learning the heartbeat concept, heartbeat detection and heart rate
estimation can be applied to test data. Experimental results show that the
estimated heartbeat concept found by $e$FUMI is more representative and a more
discriminative prototype of the heartbeat signals than those found by
comparison MIL methods in the literature.
| [
{
"created": "Mon, 16 May 2016 02:30:28 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Jiao",
"Changzhe",
""
],
[
"Lyons",
"Princess",
""
],
[
"Zare",
"Alina",
""
],
[
"Rosales",
"Licet",
""
],
[
"Skubic",
"Marjorie",
""
]
] | A multiple instance learning (MIL) method, extended Function of Multiple Instances ($e$FUMI), is applied to ballistocardiogram (BCG) signals produced by a hydraulic bed sensor. The goal of this approach is to learn a personalized heartbeat "concept" for an individual. This heartbeat concept is a prototype (or "signature") that characterizes the heartbeat pattern for an individual in ballistocardiogram data. The $e$FUMI method models the problem of learning a heartbeat concept from a BCG signal as a MIL problem. This approach elegantly addresses the uncertainty inherent in a BCG signal, e.g., misalignment between training data and ground truth, mis-collection of heartbeat by some transducers, etc. Given a BCG training signal coupled with a ground truth signal (e.g., a pulse finger sensor), training "bags" labeled with only binary labels denoting if a training bag contains a heartbeat signal or not can be generated. Then, using these bags, $e$FUMI learns a personalized concept of heartbeat for a subject as well as several non-heartbeat background concepts. After learning the heartbeat concept, heartbeat detection and heart rate estimation can be applied to test data. Experimental results show that the estimated heartbeat concept found by $e$FUMI is more representative and a more discriminative prototype of the heartbeat signals than those found by comparison MIL methods in the literature. |
1909.10812 | Kim Hammar | Kim Hammar, Shatha Jaradat, Nima Dokoohaki and Mihhail Matskin | Deep Text Mining of Instagram Data Without Strong Supervision | 8 pages, 5 figures. Pre-print for paper to appear in conference
proceedings for the Web Intelligence Conference | null | 10.1109/WI.2018.00-94 | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of social media, our online feeds increasingly consist of
short, informal, and unstructured text. This textual data can be analyzed for
the purpose of improving user recommendations and detecting trends. Instagram
is one of the largest social media platforms, containing both text and images.
However, most of the prior research on text processing in social media is
focused on analyzing Twitter data, and little attention has been paid to text
mining of Instagram data. Moreover, many text mining methods rely on annotated
training data, which in practice is both difficult and expensive to obtain. In
this paper, we present methods for unsupervised mining of fashion attributes
from Instagram text, which can enable a new kind of user recommendation in the
fashion domain. In this context, we analyze a corpus of Instagram posts from
the fashion domain, introduce a system for extracting fashion attributes from
Instagram, and train a deep clothing classifier with weak supervision to
classify Instagram posts based on the associated text.
With our experiments, we confirm that word embeddings are a useful asset for
information extraction. Experimental results show that information extraction
using word embeddings outperforms a baseline that uses Levenshtein distance.
The results also show the benefit of combining weak supervision signals using
generative models instead of majority voting. Using weak supervision and
generative modeling, an F1 score of 0.61 is achieved on the task of classifying
the image contents of Instagram posts based solely on the associated text,
which is on level with human performance. Finally, our empirical study provides
one of the few available studies on Instagram text and shows that the text is
noisy, that the text distribution exhibits the long-tail phenomenon, and that
comment sections on Instagram are multi-lingual.
| [
{
"created": "Tue, 24 Sep 2019 11:04:02 GMT",
"version": "v1"
}
] | 2019-09-25 | [
[
"Hammar",
"Kim",
""
],
[
"Jaradat",
"Shatha",
""
],
[
"Dokoohaki",
"Nima",
""
],
[
"Matskin",
"Mihhail",
""
]
] | With the advent of social media, our online feeds increasingly consist of short, informal, and unstructured text. This textual data can be analyzed for the purpose of improving user recommendations and detecting trends. Instagram is one of the largest social media platforms, containing both text and images. However, most of the prior research on text processing in social media is focused on analyzing Twitter data, and little attention has been paid to text mining of Instagram data. Moreover, many text mining methods rely on annotated training data, which in practice is both difficult and expensive to obtain. In this paper, we present methods for unsupervised mining of fashion attributes from Instagram text, which can enable a new kind of user recommendation in the fashion domain. In this context, we analyze a corpus of Instagram posts from the fashion domain, introduce a system for extracting fashion attributes from Instagram, and train a deep clothing classifier with weak supervision to classify Instagram posts based on the associated text. With our experiments, we confirm that word embeddings are a useful asset for information extraction. Experimental results show that information extraction using word embeddings outperforms a baseline that uses Levenshtein distance. The results also show the benefit of combining weak supervision signals using generative models instead of majority voting. Using weak supervision and generative modeling, an F1 score of 0.61 is achieved on the task of classifying the image contents of Instagram posts based solely on the associated text, which is on level with human performance. Finally, our empirical study provides one of the few available studies on Instagram text and shows that the text is noisy, that the text distribution exhibits the long-tail phenomenon, and that comment sections on Instagram are multi-lingual. |
2312.05726 | Kaiming Shen | Kaiming Shen, Ziping Zhao, Yannan Chen, Zepeng Zhang, Hei Victor Cheng | Accelerating Quadratic Transform and WMMSE | 15 pages | IEEE Journal on Selected Areas in Communications 2024 | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fractional programming (FP) arises in various communications and signal
processing problems because several key quantities in the field are
fractionally structured, e.g., the Cram\'{e}r-Rao bound, the Fisher
information, and the signal-to-interference-plus-noise ratio (SINR). A recently
proposed method called the quadratic transform has been applied to the FP
problems extensively. The main contributions of the present paper are two-fold.
First, we investigate how fast the quadratic transform converges. To the best
of our knowledge, this is the first work that analyzes the convergence rate for
the quadratic transform as well as its special case the weighted minimum mean
square error (WMMSE) algorithm. Second, we accelerate the existing quadratic
transform via a novel use of Nesterov's extrapolation scheme [1]. Specifically,
by generalizing the minorization-maximization (MM) approach in [2], we
establish a nontrivial connection between the quadratic transform and the
gradient projection, thereby further incorporating the gradient extrapolation
into the quadratic transform to make it converge more rapidly. Moreover, the
paper showcases the practical use of the accelerated quadratic transform with
two frontier wireless applications: integrated sensing and communications
(ISAC) and massive multiple-input multiple-output (MIMO).
| [
{
"created": "Sun, 10 Dec 2023 02:16:49 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 06:55:49 GMT",
"version": "v2"
}
] | 2024-05-29 | [
[
"Shen",
"Kaiming",
""
],
[
"Zhao",
"Ziping",
""
],
[
"Chen",
"Yannan",
""
],
[
"Zhang",
"Zepeng",
""
],
[
"Cheng",
"Hei Victor",
""
]
] | Fractional programming (FP) arises in various communications and signal processing problems because several key quantities in the field are fractionally structured, e.g., the Cram\'{e}r-Rao bound, the Fisher information, and the signal-to-interference-plus-noise ratio (SINR). A recently proposed method called the quadratic transform has been applied to the FP problems extensively. The main contributions of the present paper are two-fold. First, we investigate how fast the quadratic transform converges. To the best of our knowledge, this is the first work that analyzes the convergence rate for the quadratic transform as well as its special case the weighted minimum mean square error (WMMSE) algorithm. Second, we accelerate the existing quadratic transform via a novel use of Nesterov's extrapolation scheme [1]. Specifically, by generalizing the minorization-maximization (MM) approach in [2], we establish a nontrivial connection between the quadratic transform and the gradient projection, thereby further incorporating the gradient extrapolation into the quadratic transform to make it converge more rapidly. Moreover, the paper showcases the practical use of the accelerated quadratic transform with two frontier wireless applications: integrated sensing and communications (ISAC) and massive multiple-input multiple-output (MIMO). |
1309.5262 | Eirik Rosnes | \'Angela I. Barbero, Eirik Rosnes, Guang Yang, and {\O}yvind Ytrehus | Near-Field Passive RFID Communication: Channel Model and Code Design | Submitted to IEEE Transactions on Communications. Accepted March 5,
2014 | null | 10.1109/TCOMM.2014.032314.130723 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses a new channel model and code design for the
reader-to-tag channel in near-field passive radio frequency identification
(RFID) systems using inductive coupling as a power transfer mechanism. If the
receiver resynchronizes its internal clock each time a bit is detected, the
bit-shift channel used previously in the literature to model the reader-to-tag
channel needs to be modified. In particular, we propose a discretized Gaussian
shift channel as a new channel model in this scenario. We introduce the concept
of quantifiable error avoidance, which is much simpler than error correction.
The capacity is computed numerically, and we also design some new simple codes
for error avoidance on this channel model based on insights gained from the
capacity calculations. Finally, some simulation results are presented to
compare the proposed codes to the Manchester code and two previously proposed
codes for the bit-shift channel model.
| [
{
"created": "Fri, 20 Sep 2013 13:22:44 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Mar 2014 08:02:02 GMT",
"version": "v2"
}
] | 2016-11-17 | [
[
"Barbero",
"Ángela I.",
""
],
[
"Rosnes",
"Eirik",
""
],
[
"Yang",
"Guang",
""
],
[
"Ytrehus",
"Øyvind",
""
]
] | This paper discusses a new channel model and code design for the reader-to-tag channel in near-field passive radio frequency identification (RFID) systems using inductive coupling as a power transfer mechanism. If the receiver resynchronizes its internal clock each time a bit is detected, the bit-shift channel used previously in the literature to model the reader-to-tag channel needs to be modified. In particular, we propose a discretized Gaussian shift channel as a new channel model in this scenario. We introduce the concept of quantifiable error avoidance, which is much simpler than error correction. The capacity is computed numerically, and we also design some new simple codes for error avoidance on this channel model based on insights gained from the capacity calculations. Finally, some simulation results are presented to compare the proposed codes to the Manchester code and two previously proposed codes for the bit-shift channel model. |
2009.04932 | Alberto Sabater | Alberto Sabater, Luis Montesano, Ana C. Murillo | Performance of object recognition in wearable videos | Emerging Technologies and Factory Automation, ETFA, 2019 | null | 10.1109/ETFA.2019.8869019 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wearable technologies are enabling plenty of new applications of computer
vision, from life logging to health assistance. Many of them are required to
recognize the elements of interest in the scene captured by the camera. This
work studies the problem of object detection and localization on videos
captured by this type of camera. Wearable videos are a much more challenging
scenario for object detection than standard images or even other types of
videos, due to lower quality images (e.g. poor focus) or high clutter and
occlusion common in wearable recordings. Existing work typically focuses on
detecting the objects of focus or those being manipulated by the user wearing
the camera. We perform a more general evaluation of the task of object
detection in this type of video, because numerous applications, such as
marketing studies, also need detecting objects which are not in focus by the
user. This work presents a thorough study of the well known YOLO architecture,
that offers an excellent trade-off between accuracy and speed, for the
particular case of object detection in wearable video. We focus our study on
the public ADL Dataset, but we also use additional public data for
complementary evaluations. We run an exhaustive set of experiments with
different variations of the original architecture and its training strategy.
Our experiments drive to several conclusions about the most promising
directions for our goal and point us to further research steps to improve
detection in wearable videos.
| [
{
"created": "Thu, 10 Sep 2020 15:20:17 GMT",
"version": "v1"
}
] | 2020-09-11 | [
[
"Sabater",
"Alberto",
""
],
[
"Montesano",
"Luis",
""
],
[
"Murillo",
"Ana C.",
""
]
] | Wearable technologies are enabling plenty of new applications of computer vision, from life logging to health assistance. Many of them are required to recognize the elements of interest in the scene captured by the camera. This work studies the problem of object detection and localization on videos captured by this type of camera. Wearable videos are a much more challenging scenario for object detection than standard images or even other types of videos, due to lower quality images (e.g. poor focus) or high clutter and occlusion common in wearable recordings. Existing work typically focuses on detecting the objects of focus or those being manipulated by the user wearing the camera. We perform a more general evaluation of the task of object detection in this type of video, because numerous applications, such as marketing studies, also need detecting objects which are not in focus by the user. This work presents a thorough study of the well known YOLO architecture, that offers an excellent trade-off between accuracy and speed, for the particular case of object detection in wearable video. We focus our study on the public ADL Dataset, but we also use additional public data for complementary evaluations. We run an exhaustive set of experiments with different variations of the original architecture and its training strategy. Our experiments drive to several conclusions about the most promising directions for our goal and point us to further research steps to improve detection in wearable videos. |
2107.14662 | Ahmad Kourani | Ahmad Kourani and Naseem Daher | Marine Locomotion: A Tethered UAV$-$Buoy System with Surge Velocity
Control | 16 pages, 9 figures, accepted after revision at Robotics and
Autonomous Systems (RAS) | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned aerial vehicles (UAVs) are reaching offshore. In this work, we
formulate the novel problem of a marine locomotive quadrotor UAV, which
manipulates the surge velocity of a floating buoy by means of a cable. The
proposed robotic system can have a variety of novel applications for UAVs where
their high speed and maneuverability, as well as their ease of deployment and
wide field of vision, give them a superior advantage. In addition, the major
limitation of limited flight time of quadrotor UAVs is typically addressed
through an umbilical power cable, which naturally integrates with the proposed
system. A detailed high-fidelity dynamic model is presented for the buoy, UAV,
and water environment. In addition, a stable control system design is proposed
to manipulate the surge velocity of the buoy within certain constraints that
keep the buoy in contact with the water surface. Polar coordinates are used in
the controller design process since they outperform traditional Cartesian-based
velocity controllers when it comes to ensuring correlated effects on the
tracking performance, where each control channel independently affects one
control parameter. The system model and controller design are validated in
numerical simulation under different wave scenarios.
| [
{
"created": "Fri, 30 Jul 2021 14:33:07 GMT",
"version": "v1"
}
] | 2021-08-02 | [
[
"Kourani",
"Ahmad",
""
],
[
"Daher",
"Naseem",
""
]
] | Unmanned aerial vehicles (UAVs) are reaching offshore. In this work, we formulate the novel problem of a marine locomotive quadrotor UAV, which manipulates the surge velocity of a floating buoy by means of a cable. The proposed robotic system can have a variety of novel applications for UAVs where their high speed and maneuverability, as well as their ease of deployment and wide field of vision, give them a superior advantage. In addition, the major limitation of limited flight time of quadrotor UAVs is typically addressed through an umbilical power cable, which naturally integrates with the proposed system. A detailed high-fidelity dynamic model is presented for the buoy, UAV, and water environment. In addition, a stable control system design is proposed to manipulate the surge velocity of the buoy within certain constraints that keep the buoy in contact with the water surface. Polar coordinates are used in the controller design process since they outperform traditional Cartesian-based velocity controllers when it comes to ensuring correlated effects on the tracking performance, where each control channel independently affects one control parameter. The system model and controller design are validated in numerical simulation under different wave scenarios. |
1806.10692 | Arya Farahi | Jacob Abernethy, Alex Chojnacki, Arya Farahi, Eric Schwartz, Jared
Webb | ActiveRemediation: The Search for Lead Pipes in Flint, Michigan | 10 pages, 10 figures, To appear in KDD 2018, For associated
promotional video, see https://www.youtube.com/watch?v=YbIn_axYu9E | null | 10.1145/3219819.3219896 | null | cs.LG cs.CY stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We detail our ongoing work in Flint, Michigan to detect pipes made of lead
and other hazardous metals. After elevated levels of lead were detected in
residents' drinking water, followed by an increase in blood lead levels in area
children, the state and federal governments directed over $125 million to
replace water service lines, the pipes connecting each home to the water
system. In the absence of accurate records, and with the high cost of
determining buried pipe materials, we put forth a number of predictive and
procedural tools to aid in the search and removal of lead infrastructure.
Alongside these statistical and machine learning approaches, we describe our
interactions with government officials in recommending homes for both
inspection and replacement, with a focus on the statistical model that adapts
to incoming information. Finally, in light of discussions about increased
spending on infrastructure development by the federal government, we explore
how our approach generalizes beyond Flint to other municipalities nationwide.
| [
{
"created": "Sun, 10 Jun 2018 13:04:53 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Aug 2018 17:10:17 GMT",
"version": "v2"
}
] | 2018-08-20 | [
[
"Abernethy",
"Jacob",
""
],
[
"Chojnacki",
"Alex",
""
],
[
"Farahi",
"Arya",
""
],
[
"Schwartz",
"Eric",
""
],
[
"Webb",
"Jared",
""
]
] | We detail our ongoing work in Flint, Michigan to detect pipes made of lead and other hazardous metals. After elevated levels of lead were detected in residents' drinking water, followed by an increase in blood lead levels in area children, the state and federal governments directed over $125 million to replace water service lines, the pipes connecting each home to the water system. In the absence of accurate records, and with the high cost of determining buried pipe materials, we put forth a number of predictive and procedural tools to aid in the search and removal of lead infrastructure. Alongside these statistical and machine learning approaches, we describe our interactions with government officials in recommending homes for both inspection and replacement, with a focus on the statistical model that adapts to incoming information. Finally, in light of discussions about increased spending on infrastructure development by the federal government, we explore how our approach generalizes beyond Flint to other municipalities nationwide. |
1810.03099 | Hamid Mohammadi | Hamid Mohammadi, Amin Nikoukaran | Multi-reference Cosine: A New Approach to Text Similarity Measurement in
Large Collections | 8 pages, 18 figures, 1 table, 3 equations | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The importance of an efficient and scalable document similarity detection
system is undeniable nowadays. Search engines need batch text similarity
measures to detect duplicated and near-duplicated web pages in their indexes in
order to prevent indexing a web page multiple times. Furthermore, in the
scoring phase, search engines need similarity measures to detect duplicated
contents on web pages so as to increase the quality of their results. In this
paper, a new approach to batch text similarity detection is proposed by
combining some ideas from dimensionality reduction techniques and information
gain theory. The new approach is focused on search engines' need to detect
duplicated and near-duplicated web pages. The new approach is evaluated on the
NEWS20 dataset and the results show that the new approach is faster than the
cosine text similarity algorithm in terms of speed and performance. On top of
that, it is faster and more accurate than the other rival method, the Simhash
similarity algorithm.
| [
{
"created": "Sun, 7 Oct 2018 08:04:18 GMT",
"version": "v1"
}
] | 2018-10-09 | [
[
"Mohammadi",
"Hamid",
""
],
[
"Nikoukaran",
"Amin",
""
]
] | The importance of an efficient and scalable document similarity detection system is undeniable nowadays. Search engines need batch text similarity measures to detect duplicated and near-duplicated web pages in their indexes in order to prevent indexing a web page multiple times. Furthermore, in the scoring phase, search engines need similarity measures to detect duplicated contents on web pages so as to increase the quality of their results. In this paper, a new approach to batch text similarity detection is proposed by combining some ideas from dimensionality reduction techniques and information gain theory. The new approach is focused on search engines' need to detect duplicated and near-duplicated web pages. The new approach is evaluated on the NEWS20 dataset and the results show that the new approach outperforms the cosine text similarity algorithm in terms of speed and performance. On top of that, it is faster and more accurate than the rival Simhash similarity algorithm. |
2101.10579 | Kuan-Hao Huang | Kuan-Hao Huang, Kai-Wei Chang | Generating Syntactically Controlled Paraphrases without Using Annotated
Parallel Pairs | EACL 2021 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Paraphrase generation plays an essential role in natural language processing
(NLP), and it has many downstream applications. However, training supervised
paraphrase models requires many annotated paraphrase pairs, which are usually
costly to obtain. On the other hand, the paraphrases generated by existing
unsupervised approaches are usually syntactically similar to the source
sentences and are limited in diversity. In this paper, we demonstrate that it
is possible to generate syntactically various paraphrases without the need for
annotated paraphrase pairs. We propose Syntactically controlled Paraphrase
Generator (SynPG), an encoder-decoder based model that learns to disentangle
the semantics and the syntax of a sentence from a collection of unannotated
texts. The disentanglement enables SynPG to control the syntax of output
paraphrases by manipulating the embedding in the syntactic space. Extensive
experiments using automatic metrics and human evaluation show that SynPG
performs better syntactic control than unsupervised baselines, while the
quality of the generated paraphrases is competitive. We also demonstrate that
the performance of SynPG is competitive or even better than supervised models
when the unannotated data is large. Finally, we show that the syntactically
controlled paraphrases generated by SynPG can be utilized for data augmentation
to improve the robustness of NLP models.
| [
{
"created": "Tue, 26 Jan 2021 06:13:52 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Huang",
"Kuan-Hao",
""
],
[
"Chang",
"Kai-Wei",
""
]
] | Paraphrase generation plays an essential role in natural language processing (NLP), and it has many downstream applications. However, training supervised paraphrase models requires many annotated paraphrase pairs, which are usually costly to obtain. On the other hand, the paraphrases generated by existing unsupervised approaches are usually syntactically similar to the source sentences and are limited in diversity. In this paper, we demonstrate that it is possible to generate syntactically various paraphrases without the need for annotated paraphrase pairs. We propose Syntactically controlled Paraphrase Generator (SynPG), an encoder-decoder based model that learns to disentangle the semantics and the syntax of a sentence from a collection of unannotated texts. The disentanglement enables SynPG to control the syntax of output paraphrases by manipulating the embedding in the syntactic space. Extensive experiments using automatic metrics and human evaluation show that SynPG performs better syntactic control than unsupervised baselines, while the quality of the generated paraphrases is competitive. We also demonstrate that the performance of SynPG is competitive or even better than supervised models when the unannotated data is large. Finally, we show that the syntactically controlled paraphrases generated by SynPG can be utilized for data augmentation to improve the robustness of NLP models. |
2406.16695 | Min-Seop Kwak | Min-Seop Kwak, Donghoon Ahn, Ines Hyeonsu Kim, Jin-Hwa Kim, Seungryong
Kim | Geometry-Aware Score Distillation via 3D Consistent Noising and Gradient
Consistency Modeling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Score distillation sampling (SDS), the methodology in which the score from
pretrained 2D diffusion models is distilled into 3D representation, has
recently brought significant advancements in the text-to-3D generation task.
However, this approach is still confronted with critical geometric
inconsistency problems such as the Janus problem. Starting from a hypothesis
that such inconsistency problems may be induced by multiview inconsistencies
between 2D scores predicted from various viewpoints, we introduce GSD, a simple
and general plug-and-play framework for incorporating 3D consistency and
therefore geometry awareness into the SDS process. Our methodology is composed
of three components: 3D consistent noising, designed to produce 3D consistent
noise maps that perfectly follow the standard Gaussian distribution,
geometry-based gradient warping for identifying correspondences between
predicted gradients of different viewpoints, and novel gradient consistency
loss to optimize the scene geometry toward producing more consistent gradients.
We demonstrate that our method significantly improves performance, successfully
addressing the geometric inconsistency problems in the text-to-3D generation
task with minimal computational cost, while remaining compatible with existing
score distillation-based models. Our project page is available at
https://ku-cvlab.github.io/GSD/.
| [
{
"created": "Mon, 24 Jun 2024 14:58:17 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jul 2024 01:55:12 GMT",
"version": "v2"
}
] | 2024-07-02 | [
[
"Kwak",
"Min-Seop",
""
],
[
"Ahn",
"Donghoon",
""
],
[
"Kim",
"Ines Hyeonsu",
""
],
[
"Kim",
"Jin-Hwa",
""
],
[
"Kim",
"Seungryong",
""
]
] | Score distillation sampling (SDS), the methodology in which the score from pretrained 2D diffusion models is distilled into 3D representation, has recently brought significant advancements in the text-to-3D generation task. However, this approach is still confronted with critical geometric inconsistency problems such as the Janus problem. Starting from a hypothesis that such inconsistency problems may be induced by multiview inconsistencies between 2D scores predicted from various viewpoints, we introduce GSD, a simple and general plug-and-play framework for incorporating 3D consistency and therefore geometry awareness into the SDS process. Our methodology is composed of three components: 3D consistent noising, designed to produce 3D consistent noise maps that perfectly follow the standard Gaussian distribution, geometry-based gradient warping for identifying correspondences between predicted gradients of different viewpoints, and novel gradient consistency loss to optimize the scene geometry toward producing more consistent gradients. We demonstrate that our method significantly improves performance, successfully addressing the geometric inconsistency problems in the text-to-3D generation task with minimal computational cost, while remaining compatible with existing score distillation-based models. Our project page is available at https://ku-cvlab.github.io/GSD/. |
1105.1515 | Kostas Pentikousis | Kostas Pentikousis, Ram\'on Ag\"uero, Jens Gebert, Jos\'e Antonio
Galache, Oliver Blume, Pekka P\"a\"akk\"onen | The Ambient Networks Heterogeneous Access Selection Architecture | Proc. First Ambient Networks Workshop on Mobility, Multiaccess, and
Network Management (M2NM), Sydney, Australia, October 2007, pp. 49-54 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Forthcoming wireless communications will be characterized by the ubiquity of
multiaccess. Despite the inherently increased complexity, end-users should be
able to take advantage of the most suitable access network. Thus, access
selection in an environment with different overlapping radio technologies is of
central interest and an architecture is needed that performs equally well on
single- and multi-operator scenarios, considers several parameters, and
respects the principle of layering. In this paper, we introduce the Ambient
Networks heterogeneous access selection architecture explaining how it meets
such requirements. We present the essential architectural components and
explain their interactions. We illustrate how the proposed architecture works
in practice and discuss recent results from our prototype-based validation.
| [
{
"created": "Sun, 8 May 2011 13:40:19 GMT",
"version": "v1"
}
] | 2011-05-10 | [
[
"Pentikousis",
"Kostas",
""
],
[
"Agüero",
"Ramón",
""
],
[
"Gebert",
"Jens",
""
],
[
"Galache",
"José Antonio",
""
],
[
"Blume",
"Oliver",
""
],
[
"Pääkkönen",
"Pekka",
""
]
] | Forthcoming wireless communications will be characterized by the ubiquity of multiaccess. Despite the inherently increased complexity, end-users should be able to take advantage of the most suitable access network. Thus, access selection in an environment with different overlapping radio technologies is of central interest and an architecture is needed that performs equally well on single- and multi-operator scenarios, considers several parameters, and respects the principle of layering. In this paper, we introduce the Ambient Networks heterogeneous access selection architecture explaining how it meets such requirements. We present the essential architectural components and explain their interactions. We illustrate how the proposed architecture works in practice and discuss recent results from our prototype-based validation. |
2211.16402 | Farzan Byramji | Farzan Byramji | Query complexity of Boolean functions on slices | null | null | null | null | cs.CC math.CO | http://creativecommons.org/licenses/by/4.0/ | We study the deterministic query complexity of Boolean functions on slices of
the hypercube. The $k^{th}$ slice $\binom{[n]}{k}$ of the hypercube $\{0,1\}^n$
is the set of all $n$-bit strings with Hamming weight $k$. We show that there
exists a function on the balanced slice $\binom{[n]}{n/2}$ requiring $n -
O(\log \log n)$ queries. We give an explicit function on the balanced slice
requiring $n - O(\log n)$ queries based on independent sets in Johnson graphs.
On the weight-2 slice, we show that hard functions are closely related to
Ramsey graphs. Further we describe a simple way of transforming functions on
the hypercube to functions on the balanced slice while preserving several
complexity measures.
| [
{
"created": "Tue, 29 Nov 2022 17:26:46 GMT",
"version": "v1"
}
] | 2022-11-30 | [
[
"Byramji",
"Farzan",
""
]
] | We study the deterministic query complexity of Boolean functions on slices of the hypercube. The $k^{th}$ slice $\binom{[n]}{k}$ of the hypercube $\{0,1\}^n$ is the set of all $n$-bit strings with Hamming weight $k$. We show that there exists a function on the balanced slice $\binom{[n]}{n/2}$ requiring $n - O(\log \log n)$ queries. We give an explicit function on the balanced slice requiring $n - O(\log n)$ queries based on independent sets in Johnson graphs. On the weight-2 slice, we show that hard functions are closely related to Ramsey graphs. Further we describe a simple way of transforming functions on the hypercube to functions on the balanced slice while preserving several complexity measures. |
2304.01567 | Hannes Fassold | Hannes Fassold, Karlheinz Gutjahr, Anna Weber, Roland Perko | A real-time algorithm for human action recognition in RGB and thermal
video | Accepted for SPIE Real-Time Image Processing and Deep Learning
Conference 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Monitoring the movement and actions of humans in video in real-time is an
important task. We present a deep learning based algorithm for human action
recognition for both RGB and thermal cameras. It is able to detect and track
humans and recognize four basic actions (standing, walking, running, lying) in
real-time on a notebook with a NVIDIA GPU. For this, it combines state of the
art components for object detection (Scaled YoloV4), optical flow (RAFT) and
pose estimation (EvoSkeleton). Qualitative experiments on a set of tunnel
videos show that the proposed algorithm works robustly for both RGB and thermal
video.
| [
{
"created": "Tue, 4 Apr 2023 06:44:13 GMT",
"version": "v1"
}
] | 2023-04-05 | [
[
"Fassold",
"Hannes",
""
],
[
"Gutjahr",
"Karlheinz",
""
],
[
"Weber",
"Anna",
""
],
[
"Perko",
"Roland",
""
]
] | Monitoring the movement and actions of humans in video in real-time is an important task. We present a deep learning based algorithm for human action recognition for both RGB and thermal cameras. It is able to detect and track humans and recognize four basic actions (standing, walking, running, lying) in real-time on a notebook with a NVIDIA GPU. For this, it combines state of the art components for object detection (Scaled YoloV4), optical flow (RAFT) and pose estimation (EvoSkeleton). Qualitative experiments on a set of tunnel videos show that the proposed algorithm works robustly for both RGB and thermal video. |
2003.03917 | Ala Shaabana | Yuma Rao, Jacob Steeves, Ala Shaabana, Daniel Attevelt, Matthew
McAteer | BitTensor: A Peer-to-Peer Intelligence Market | This paper is incomplete. A more complete version is being worked on
at the moment. Additionally one of the authors (daniel attevelt) has been
removed from the work and so this paper is now obsolete from both a content
and an author perspective. Please help us remove it | null | null | null | cs.AI cs.LG cs.MA | http://creativecommons.org/publicdomain/zero/1.0/ | As with other commodities, markets could help us efficiently produce machine
intelligence. We propose a market where intelligence is priced by other
intelligence systems peer-to-peer across the internet. Peers rank each other by
training neural networks which learn the value of their neighbors. Scores
accumulate on a digital ledger where high ranking peers are monetarily rewarded
with additional weight in the network. However, this form of peer-ranking is
not resistant to collusion, which could disrupt the accuracy of the mechanism.
The solution is a connectivity-based regularization which exponentially rewards
trusted peers, making the system resistant to collusion of up to 50 percent of
the network weight. The result is a collectively run intelligence market which
continually produces newly trained models and pays contributors who create
information theoretic value.
| [
{
"created": "Mon, 9 Mar 2020 04:04:18 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Mar 2021 20:36:51 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Nov 2021 02:05:10 GMT",
"version": "v3"
}
] | 2021-11-11 | [
[
"Rao",
"Yuma",
""
],
[
"Steeves",
"Jacob",
""
],
[
"Shaabana",
"Ala",
""
],
[
"Attevelt",
"Daniel",
""
],
[
"McAteer",
"Matthew",
""
]
] | As with other commodities, markets could help us efficiently produce machine intelligence. We propose a market where intelligence is priced by other intelligence systems peer-to-peer across the internet. Peers rank each other by training neural networks which learn the value of their neighbors. Scores accumulate on a digital ledger where high ranking peers are monetarily rewarded with additional weight in the network. However, this form of peer-ranking is not resistant to collusion, which could disrupt the accuracy of the mechanism. The solution is a connectivity-based regularization which exponentially rewards trusted peers, making the system resistant to collusion of up to 50 percent of the network weight. The result is a collectively run intelligence market which continually produces newly trained models and pays contributors who create information theoretic value. |
2304.14396 | Anastasis Stathopoulos | Anastasis Stathopoulos, Georgios Pavlakos, Ligong Han, Dimitris
Metaxas | Learning Articulated Shape with Keypoint Pseudo-labels from Web Images | CVPR 2023 (project page:
https://statho.github.io/projects/animals3d/index.html) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper shows that it is possible to learn models for monocular 3D
reconstruction of articulated objects (e.g., horses, cows, sheep), using as few
as 50-150 images labeled with 2D keypoints. Our proposed approach involves
training category-specific keypoint estimators, generating 2D keypoint
pseudo-labels on unlabeled web images, and using both the labeled and
self-labeled sets to train 3D reconstruction models. It is based on two key
insights: (1) 2D keypoint estimation networks trained on as few as 50-150
images of a given object category generalize well and generate reliable
pseudo-labels; (2) a data selection mechanism can automatically create a
"curated" subset of the unlabeled web images that can be used for training --
we evaluate four data selection methods. Coupling these two insights enables us
to train models that effectively utilize web images, resulting in improved 3D
reconstruction performance for several articulated object categories beyond the
fully-supervised baseline. Our approach can quickly bootstrap a model and
requires only a few images labeled with 2D keypoints. This requirement can be
easily satisfied for any new object category. To showcase the practicality of
our approach for predicting the 3D shape of arbitrary object categories, we
annotate 2D keypoints on giraffe and bear images from COCO -- the annotation
process takes less than 1 minute per image.
| [
{
"created": "Thu, 27 Apr 2023 17:57:19 GMT",
"version": "v1"
}
] | 2023-04-28 | [
[
"Stathopoulos",
"Anastasis",
""
],
[
"Pavlakos",
"Georgios",
""
],
[
"Han",
"Ligong",
""
],
[
"Metaxas",
"Dimitris",
""
]
] | This paper shows that it is possible to learn models for monocular 3D reconstruction of articulated objects (e.g., horses, cows, sheep), using as few as 50-150 images labeled with 2D keypoints. Our proposed approach involves training category-specific keypoint estimators, generating 2D keypoint pseudo-labels on unlabeled web images, and using both the labeled and self-labeled sets to train 3D reconstruction models. It is based on two key insights: (1) 2D keypoint estimation networks trained on as few as 50-150 images of a given object category generalize well and generate reliable pseudo-labels; (2) a data selection mechanism can automatically create a "curated" subset of the unlabeled web images that can be used for training -- we evaluate four data selection methods. Coupling these two insights enables us to train models that effectively utilize web images, resulting in improved 3D reconstruction performance for several articulated object categories beyond the fully-supervised baseline. Our approach can quickly bootstrap a model and requires only a few images labeled with 2D keypoints. This requirement can be easily satisfied for any new object category. To showcase the practicality of our approach for predicting the 3D shape of arbitrary object categories, we annotate 2D keypoints on giraffe and bear images from COCO -- the annotation process takes less than 1 minute per image. |
2308.04224 | Wang Yao | Wang Yao, Muhammad Ali Farooq, Joseph Lemley and Peter Corcoran | Will your Doorbell Camera still recognize you as you grow old | The Paper is accepted in 25th Irish Machine Vision and Image
Processing Conference (IMVIP23) | null | 10.5281/zenodo.8208368 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Robust authentication for low-power consumer devices such as doorbell cameras
poses a valuable and unique challenge. This work explores the effect of age and
aging on the performance of facial authentication methods. Two public age
datasets, AgeDB and Morph-II, have been used as baselines in this work. A
photo-realistic age transformation method has been employed to augment a set of
high-quality facial images with various age effects. Then the effect of these
synthetic aging data on the high-performance deep-learning-based face
recognition model is quantified by using various metrics including Receiver
Operating Characteristic (ROC) curves and match score distributions.
Experimental results demonstrate that long-term age effects are still a
significant challenge for the state-of-the-art facial authentication method.
| [
{
"created": "Tue, 8 Aug 2023 12:43:26 GMT",
"version": "v1"
}
] | 2023-08-09 | [
[
"Yao",
"Wang",
""
],
[
"Farooq",
"Muhammad Ali",
""
],
[
"Lemley",
"Joseph",
""
],
[
"Corcoran",
"Peter",
""
]
] | Robust authentication for low-power consumer devices such as doorbell cameras poses a valuable and unique challenge. This work explores the effect of age and aging on the performance of facial authentication methods. Two public age datasets, AgeDB and Morph-II, have been used as baselines in this work. A photo-realistic age transformation method has been employed to augment a set of high-quality facial images with various age effects. Then the effect of these synthetic aging data on the high-performance deep-learning-based face recognition model is quantified by using various metrics including Receiver Operating Characteristic (ROC) curves and match score distributions. Experimental results demonstrate that long-term age effects are still a significant challenge for the state-of-the-art facial authentication method. |
0808.3546 | Ioan Raicu | Ioan Raicu, Yong Zhao, Ian Foster, Alex Szalay | Accelerating Large-scale Data Exploration through Data Diffusion | IEEE/ACM International Workshop on Data-Aware Distributed Computing
2008 | null | 10.1145/1383519.1383521 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-intensive applications often require exploratory analysis of large
datasets. If analysis is performed on distributed resources, data locality can
be crucial to high throughput and performance. We propose a "data diffusion"
approach that acquires compute and storage resources dynamically, replicates
data in response to demand, and schedules computations close to data. As demand
increases, more resources are acquired, thus allowing faster response to
subsequent requests that refer to the same data; when demand drops, resources
are released. This approach can provide the benefits of dedicated hardware
without the associated high costs, depending on workload and resource
characteristics. The approach is reminiscent of cooperative caching,
web-caching, and peer-to-peer storage systems, but addresses different
application demands. Other data-aware scheduling approaches assume dedicated
resources, which can be expensive and/or inefficient if load varies
significantly. To explore the feasibility of the data diffusion approach, we
have extended the Falkon resource provisioning and task scheduling system to
support data caching and data-aware scheduling. Performance results from both
micro-benchmarks and a large scale astronomy application demonstrate that our
approach improves performance relative to alternative approaches, as well as
provides improved scalability as aggregated I/O bandwidth scales linearly with
the number of data cache nodes.
| [
{
"created": "Tue, 26 Aug 2008 16:02:50 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Raicu",
"Ioan",
""
],
[
"Zhao",
"Yong",
""
],
[
"Foster",
"Ian",
""
],
[
"Szalay",
"Alex",
""
]
] | Data-intensive applications often require exploratory analysis of large datasets. If analysis is performed on distributed resources, data locality can be crucial to high throughput and performance. We propose a "data diffusion" approach that acquires compute and storage resources dynamically, replicates data in response to demand, and schedules computations close to data. As demand increases, more resources are acquired, thus allowing faster response to subsequent requests that refer to the same data; when demand drops, resources are released. This approach can provide the benefits of dedicated hardware without the associated high costs, depending on workload and resource characteristics. The approach is reminiscent of cooperative caching, web-caching, and peer-to-peer storage systems, but addresses different application demands. Other data-aware scheduling approaches assume dedicated resources, which can be expensive and/or inefficient if load varies significantly. To explore the feasibility of the data diffusion approach, we have extended the Falkon resource provisioning and task scheduling system to support data caching and data-aware scheduling. Performance results from both micro-benchmarks and a large scale astronomy application demonstrate that our approach improves performance relative to alternative approaches, as well as provides improved scalability as aggregated I/O bandwidth scales linearly with the number of data cache nodes. |