id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2402.09939
|
Ridwan Taiwo
|
Ridwan Taiwo, Idris Temitope Bello, Sulemana Fatoama Abdulai,
Abdul-Mugis Yussif, Babatunde Abiodun Salami, Abdullahi Saka, Tarek Zayed
|
Generative AI in the Construction Industry: A State-of-the-art Analysis
|
74 pages, 11 figures, 20 tables
| null | null | null |
cs.AI cs.CL cs.HC cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The construction industry is a vital sector of the global economy, but it
faces many productivity challenges in various processes, such as design,
planning, procurement, inspection, and maintenance. Generative artificial
intelligence (AI), which can create novel and realistic data or content, such
as text, image, video, or code, based on some input or prior knowledge, offers
innovative and disruptive solutions to address these challenges. However, there
is a gap in the literature on the current state, opportunities, and challenges
of generative AI in the construction industry. This study aims to fill this gap
by providing a state-of-the-art analysis of generative AI in construction, with
three objectives: (1) to review and categorize the existing and emerging
generative AI opportunities and challenges in the construction industry; (2) to
propose a framework for construction firms to build customized generative AI
solutions using their own data, comprising steps such as data collection,
dataset curation, training custom large language model (LLM), model evaluation,
and deployment; and (3) to demonstrate the framework via a case study of
developing a generative model for querying contract documents. The results show
that retrieval augmented generation (RAG) improves the baseline LLM by 5.2,
9.4, and 4.8% in terms of quality, relevance, and reproducibility. This study
provides academics and construction professionals with a comprehensive analysis
and practical framework to guide the adoption of generative AI techniques to
enhance productivity, quality, safety, and sustainability across the
construction industry.
|
[
{
"created": "Thu, 15 Feb 2024 13:39:55 GMT",
"version": "v1"
}
] |
2024-02-16
|
[
[
"Taiwo",
"Ridwan",
""
],
[
"Bello",
"Idris Temitope",
""
],
[
"Abdulai",
"Sulemana Fatoama",
""
],
[
"Yussif",
"Abdul-Mugis",
""
],
[
"Salami",
"Babatunde Abiodun",
""
],
[
"Saka",
"Abdullahi",
""
],
[
"Zayed",
"Tarek",
""
]
] |
The construction industry is a vital sector of the global economy, but it faces many productivity challenges in various processes, such as design, planning, procurement, inspection, and maintenance. Generative artificial intelligence (AI), which can create novel and realistic data or content, such as text, image, video, or code, based on some input or prior knowledge, offers innovative and disruptive solutions to address these challenges. However, there is a gap in the literature on the current state, opportunities, and challenges of generative AI in the construction industry. This study aims to fill this gap by providing a state-of-the-art analysis of generative AI in construction, with three objectives: (1) to review and categorize the existing and emerging generative AI opportunities and challenges in the construction industry; (2) to propose a framework for construction firms to build customized generative AI solutions using their own data, comprising steps such as data collection, dataset curation, training custom large language model (LLM), model evaluation, and deployment; and (3) to demonstrate the framework via a case study of developing a generative model for querying contract documents. The results show that retrieval augmented generation (RAG) improves the baseline LLM by 5.2, 9.4, and 4.8% in terms of quality, relevance, and reproducibility. This study provides academics and construction professionals with a comprehensive analysis and practical framework to guide the adoption of generative AI techniques to enhance productivity, quality, safety, and sustainability across the construction industry.
|
2109.01947
|
Jim Apple
|
Jim Apple
|
Stretching Your Data With Taffy Filters
|
15 pages, 15 figures
| null | null | null |
cs.DS cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Popular approximate membership query structures such as Bloom filters and
cuckoo filters are widely used in databases, security, and networking. These
structures represent sets approximately, and support at least two operations -
insert and lookup; lookup always returns true on elements inserted into the
structure; it also returns true with some probability $0 < \varepsilon < 1$ on
elements not inserted into the structure. These latter elements are called
false positives. Compensatory for these false positives, filters can be much
smaller than hash tables that represent the same set. However, unlike hash
tables, cuckoo filters and Bloom filters must be initialized with the intended
number of inserts to be performed, and cannot grow larger - inserts beyond this
number fail or significantly increase the false positive probability. This
paper presents designs and implementations of filters than can grow without
inserts failing and without meaningfully increasing the false positive
probability, even if the filters are created with a small initial size. The
resulting code is available on GitHub under a permissive open source license.
|
[
{
"created": "Sat, 4 Sep 2021 22:52:16 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Sep 2021 15:09:07 GMT",
"version": "v2"
},
{
"created": "Sun, 19 Dec 2021 05:03:12 GMT",
"version": "v3"
},
{
"created": "Fri, 14 Jan 2022 03:24:29 GMT",
"version": "v4"
}
] |
2022-01-17
|
[
[
"Apple",
"Jim",
""
]
] |
Popular approximate membership query structures such as Bloom filters and cuckoo filters are widely used in databases, security, and networking. These structures represent sets approximately, and support at least two operations - insert and lookup; lookup always returns true on elements inserted into the structure; it also returns true with some probability $0 < \varepsilon < 1$ on elements not inserted into the structure. These latter elements are called false positives. Compensatory for these false positives, filters can be much smaller than hash tables that represent the same set. However, unlike hash tables, cuckoo filters and Bloom filters must be initialized with the intended number of inserts to be performed, and cannot grow larger - inserts beyond this number fail or significantly increase the false positive probability. This paper presents designs and implementations of filters than can grow without inserts failing and without meaningfully increasing the false positive probability, even if the filters are created with a small initial size. The resulting code is available on GitHub under a permissive open source license.
|
1904.05814
|
Tolga Birdal
|
Tolga Birdal and Umut \c{S}im\c{s}ekli
|
Probabilistic Permutation Synchronization using the Riemannian Structure
of the Birkhoff Polytope
|
To appear as oral presentation at CVPR 2019. 20 pages including the
supplementary material
| null | null | null |
cs.CV cs.GR cs.LG cs.NA cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an entirely new geometric and probabilistic approach to
synchronization of correspondences across multiple sets of objects or images.
In particular, we present two algorithms: (1) Birkhoff-Riemannian L-BFGS for
optimizing the relaxed version of the combinatorially intractable cycle
consistency loss in a principled manner, (2) Birkhoff-Riemannian Langevin Monte
Carlo for generating samples on the Birkhoff Polytope and estimating the
confidence of the found solutions. To this end, we first introduce the very
recently developed Riemannian geometry of the Birkhoff Polytope. Next, we
introduce a new probabilistic synchronization model in the form of a Markov
Random Field (MRF). Finally, based on the first order retraction operators, we
formulate our problem as simulating a stochastic differential equation and
devise new integrators. We show on both synthetic and real datasets that we
achieve high quality multi-graph matching results with faster convergence and
reliable confidence/uncertainty estimates.
|
[
{
"created": "Thu, 11 Apr 2019 16:12:50 GMT",
"version": "v1"
}
] |
2019-04-12
|
[
[
"Birdal",
"Tolga",
""
],
[
"Şimşekli",
"Umut",
""
]
] |
We present an entirely new geometric and probabilistic approach to synchronization of correspondences across multiple sets of objects or images. In particular, we present two algorithms: (1) Birkhoff-Riemannian L-BFGS for optimizing the relaxed version of the combinatorially intractable cycle consistency loss in a principled manner, (2) Birkhoff-Riemannian Langevin Monte Carlo for generating samples on the Birkhoff Polytope and estimating the confidence of the found solutions. To this end, we first introduce the very recently developed Riemannian geometry of the Birkhoff Polytope. Next, we introduce a new probabilistic synchronization model in the form of a Markov Random Field (MRF). Finally, based on the first order retraction operators, we formulate our problem as simulating a stochastic differential equation and devise new integrators. We show on both synthetic and real datasets that we achieve high quality multi-graph matching results with faster convergence and reliable confidence/uncertainty estimates.
|
1612.03959
|
Tomoyoshi Shimobaba Dr.
|
Tomoyoshi Shimobaba, Yutaka Endo, Ryuji Hirayama, Yuki Nagahama,
Takayuki Takahashi, Takashi Nishitsuji, Takashi Kakue, Atsushi Shiraki, Naoki
Takada, Nobuyuki Masuda, Tomoyoshi Ito
|
Autoencoder-based holographic image restoration
| null | null |
10.1364/AO.56.000F27
| null |
cs.CV physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a holographic image restoration method using an autoencoder, which
is an artificial neural network. Because holographic reconstructed images are
often contaminated by direct light, conjugate light, and speckle noise, the
discrimination of reconstructed images may be difficult. In this paper, we
demonstrate the restoration of reconstructed images from holograms that record
page data in holographic memory and QR codes by using the proposed method.
|
[
{
"created": "Mon, 12 Dec 2016 22:49:03 GMT",
"version": "v1"
}
] |
2017-04-05
|
[
[
"Shimobaba",
"Tomoyoshi",
""
],
[
"Endo",
"Yutaka",
""
],
[
"Hirayama",
"Ryuji",
""
],
[
"Nagahama",
"Yuki",
""
],
[
"Takahashi",
"Takayuki",
""
],
[
"Nishitsuji",
"Takashi",
""
],
[
"Kakue",
"Takashi",
""
],
[
"Shiraki",
"Atsushi",
""
],
[
"Takada",
"Naoki",
""
],
[
"Masuda",
"Nobuyuki",
""
],
[
"Ito",
"Tomoyoshi",
""
]
] |
We propose a holographic image restoration method using an autoencoder, which is an artificial neural network. Because holographic reconstructed images are often contaminated by direct light, conjugate light, and speckle noise, the discrimination of reconstructed images may be difficult. In this paper, we demonstrate the restoration of reconstructed images from holograms that record page data in holographic memory and QR codes by using the proposed method.
|
2306.15128
|
Kalyani Marathe
|
Kalyani Marathe, Mahtab Bigverdi, Nishat Khan, Tuhin Kundu, Patrick
Howe, Sharan Ranjit S, Anand Bhattad, Aniruddha Kembhavi, Linda G. Shapiro,
Ranjay Krishna
|
MIMIC: Masked Image Modeling with Image Correspondences
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Dense pixel-specific representation learning at scale has been bottlenecked
due to the unavailability of large-scale multi-view datasets. Current methods
for building effective pretraining datasets heavily rely on annotated 3D
meshes, point clouds, and camera parameters from simulated environments,
preventing them from building datasets from real-world data sources where such
metadata is lacking. We propose a pretraining dataset-curation approach that
does not require any additional annotations. Our method allows us to generate
multi-view datasets from both real-world videos and simulated environments at
scale. Specifically, we experiment with two scales: MIMIC-1M with 1.3M and
MIMIC-3M with 3.1M multi-view image pairs. We train multiple models with
different masked image modeling objectives to showcase the following findings:
Representations trained on our automatically generated MIMIC-3M outperform
those learned from expensive crowdsourced datasets (ImageNet-1K) and those
learned from synthetic environments (MULTIVIEW-HABITAT) on two dense geometric
tasks: depth estimation on NYUv2 (1.7%), and surface normals estimation on
Taskonomy (2.05%). For dense tasks which also require object understanding, we
outperform MULTIVIEW-HABITAT, on semantic segmentation on ADE20K (3.89%), pose
estimation on MSCOCO (9.4%), and reduce the gap with models pre-trained on the
object-centric expensive ImageNet-1K. We outperform even when the
representations are frozen, and when downstream training data is limited to
few-shot. Larger dataset (MIMIC-3M) significantly improves performance, which
is promising since our curation method can arbitrarily scale to produce even
larger datasets. MIMIC code, dataset, and pretrained models are open-sourced at
https://github.com/RAIVNLab/MIMIC.
|
[
{
"created": "Tue, 27 Jun 2023 00:40:12 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jun 2023 16:10:48 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Oct 2023 02:15:22 GMT",
"version": "v3"
},
{
"created": "Thu, 16 May 2024 03:03:37 GMT",
"version": "v4"
}
] |
2024-05-17
|
[
[
"Marathe",
"Kalyani",
""
],
[
"Bigverdi",
"Mahtab",
""
],
[
"Khan",
"Nishat",
""
],
[
"Kundu",
"Tuhin",
""
],
[
"Howe",
"Patrick",
""
],
[
"S",
"Sharan Ranjit",
""
],
[
"Bhattad",
"Anand",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Shapiro",
"Linda G.",
""
],
[
"Krishna",
"Ranjay",
""
]
] |
Dense pixel-specific representation learning at scale has been bottlenecked due to the unavailability of large-scale multi-view datasets. Current methods for building effective pretraining datasets heavily rely on annotated 3D meshes, point clouds, and camera parameters from simulated environments, preventing them from building datasets from real-world data sources where such metadata is lacking. We propose a pretraining dataset-curation approach that does not require any additional annotations. Our method allows us to generate multi-view datasets from both real-world videos and simulated environments at scale. Specifically, we experiment with two scales: MIMIC-1M with 1.3M and MIMIC-3M with 3.1M multi-view image pairs. We train multiple models with different masked image modeling objectives to showcase the following findings: Representations trained on our automatically generated MIMIC-3M outperform those learned from expensive crowdsourced datasets (ImageNet-1K) and those learned from synthetic environments (MULTIVIEW-HABITAT) on two dense geometric tasks: depth estimation on NYUv2 (1.7%), and surface normals estimation on Taskonomy (2.05%). For dense tasks which also require object understanding, we outperform MULTIVIEW-HABITAT, on semantic segmentation on ADE20K (3.89%), pose estimation on MSCOCO (9.4%), and reduce the gap with models pre-trained on the object-centric expensive ImageNet-1K. We outperform even when the representations are frozen, and when downstream training data is limited to few-shot. Larger dataset (MIMIC-3M) significantly improves performance, which is promising since our curation method can arbitrarily scale to produce even larger datasets. MIMIC code, dataset, and pretrained models are open-sourced at https://github.com/RAIVNLab/MIMIC.
|
2307.01902
|
Jonathan Kelly
|
Oliver Limoyo and Filip Mari\'c and Matthew Giamou and Petra Alexson
and Ivan Petrovi\'c and Jonathan Kelly
|
Euclidean Equivariant Models for Generative Graphical Inverse Kinematics
|
Proceedings of the Robotics: Science and Systems (RSS'23) Workshop on
Symmetries in Robot Learning, Daegu, Republic of Korea, Jul. 10, 2023. arXiv
admin note: substantial text overlap with arXiv:2209.08812
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quickly and reliably finding accurate inverse kinematics (IK) solutions
remains a challenging problem for robotic manipulation. Existing numerical
solvers typically produce a single solution only and rely on local search
techniques to minimize a highly nonconvex objective function. Recently,
learning-based approaches that approximate the entire feasible set of solutions
have shown promise as a means to generate multiple fast and accurate IK results
in parallel. However, existing learning-based techniques have a significant
drawback: each robot of interest requires a specialized model that must be
trained from scratch. To address this shortcoming, we investigate a novel
distance-geometric robot representation coupled with a graph structure that
allows us to leverage the flexibility of graph neural networks (GNNs). We use
this approach to train a generative graphical inverse kinematics solver (GGIK)
that is able to produce a large number of diverse solutions in parallel while
also generalizing well -- a single learned model can be used to produce IK
solutions for a variety of different robots. The graphical formulation
elegantly exposes the symmetry and Euclidean equivariance of the IK problem
that stems from the spatial nature of robot manipulators. We exploit this
symmetry by encoding it into the architecture of our learned model, yielding a
flexible solver that is able to produce sets of IK solutions for multiple
robots.
|
[
{
"created": "Tue, 4 Jul 2023 20:12:02 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Limoyo",
"Oliver",
""
],
[
"Marić",
"Filip",
""
],
[
"Giamou",
"Matthew",
""
],
[
"Alexson",
"Petra",
""
],
[
"Petrović",
"Ivan",
""
],
[
"Kelly",
"Jonathan",
""
]
] |
Quickly and reliably finding accurate inverse kinematics (IK) solutions remains a challenging problem for robotic manipulation. Existing numerical solvers typically produce a single solution only and rely on local search techniques to minimize a highly nonconvex objective function. Recently, learning-based approaches that approximate the entire feasible set of solutions have shown promise as a means to generate multiple fast and accurate IK results in parallel. However, existing learning-based techniques have a significant drawback: each robot of interest requires a specialized model that must be trained from scratch. To address this shortcoming, we investigate a novel distance-geometric robot representation coupled with a graph structure that allows us to leverage the flexibility of graph neural networks (GNNs). We use this approach to train a generative graphical inverse kinematics solver (GGIK) that is able to produce a large number of diverse solutions in parallel while also generalizing well -- a single learned model can be used to produce IK solutions for a variety of different robots. The graphical formulation elegantly exposes the symmetry and Euclidean equivariance of the IK problem that stems from the spatial nature of robot manipulators. We exploit this symmetry by encoding it into the architecture of our learned model, yielding a flexible solver that is able to produce sets of IK solutions for multiple robots.
|
1909.11336
|
Jakub Radoszewski
|
Patryk Czajka and Jakub Radoszewski
|
Experimental Evaluation of Algorithms for Computing Quasiperiods
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quasiperiodicity is a generalization of periodicity that was introduced in
the early 1990s. Since then, dozens of algorithms for computing various types
of quasiperiodicity were proposed. Our work is a step towards answering the
question: "Which algorithm for computing quasiperiods to choose in practice?".
The central notions of quasiperiodicity are covers and seeds. We implement
algorithms for computing covers and seeds in the original and in new simplified
versions and compare their efficiency on various types of data. We also discuss
other known types of quasiperiodicity, distinguish partial covers as currently
the most promising for large real-world data, and check their effectiveness
using real-world data.
|
[
{
"created": "Wed, 25 Sep 2019 08:22:35 GMT",
"version": "v1"
}
] |
2019-09-26
|
[
[
"Czajka",
"Patryk",
""
],
[
"Radoszewski",
"Jakub",
""
]
] |
Quasiperiodicity is a generalization of periodicity that was introduced in the early 1990s. Since then, dozens of algorithms for computing various types of quasiperiodicity were proposed. Our work is a step towards answering the question: "Which algorithm for computing quasiperiods to choose in practice?". The central notions of quasiperiodicity are covers and seeds. We implement algorithms for computing covers and seeds in the original and in new simplified versions and compare their efficiency on various types of data. We also discuss other known types of quasiperiodicity, distinguish partial covers as currently the most promising for large real-world data, and check their effectiveness using real-world data.
|
1911.10194
|
Bowen Cheng
|
Bowen Cheng and Maxwell D. Collins and Yukun Zhu and Ting Liu and
Thomas S. Huang and Hartwig Adam and Liang-Chieh Chen
|
Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up
Panoptic Segmentation
|
CVPR 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast
system for panoptic segmentation, aiming to establish a solid baseline for
bottom-up methods that can achieve comparable performance of two-stage methods
while yielding fast inference speed. In particular, Panoptic-DeepLab adopts the
dual-ASPP and dual-decoder structures specific to semantic, and instance
segmentation, respectively. The semantic segmentation branch is the same as the
typical design of any semantic segmentation model (e.g., DeepLab), while the
instance segmentation branch is class-agnostic, involving a simple instance
center regression. As a result, our single Panoptic-DeepLab simultaneously
ranks first at all three Cityscapes benchmarks, setting the new state-of-art of
84.2% mIoU, 39.0% AP, and 65.5% PQ on test set. Additionally, equipped with
MobileNetV3, Panoptic-DeepLab runs nearly in real-time with a single 1025x2049
image (15.8 frames per second), while achieving a competitive performance on
Cityscapes (54.1 PQ% on test set). On Mapillary Vistas test set, our ensemble
of six models attains 42.7% PQ, outperforming the challenge winner in 2018 by a
healthy margin of 1.5%. Finally, our Panoptic-DeepLab also performs on par with
several top-down approaches on the challenging COCO dataset. For the first
time, we demonstrate a bottom-up approach could deliver state-of-the-art
results on panoptic segmentation.
|
[
{
"created": "Fri, 22 Nov 2019 18:59:51 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Dec 2019 17:45:21 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Mar 2020 17:59:11 GMT",
"version": "v3"
}
] |
2020-03-12
|
[
[
"Cheng",
"Bowen",
""
],
[
"Collins",
"Maxwell D.",
""
],
[
"Zhu",
"Yukun",
""
],
[
"Liu",
"Ting",
""
],
[
"Huang",
"Thomas S.",
""
],
[
"Adam",
"Hartwig",
""
],
[
"Chen",
"Liang-Chieh",
""
]
] |
In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast system for panoptic segmentation, aiming to establish a solid baseline for bottom-up methods that can achieve comparable performance of two-stage methods while yielding fast inference speed. In particular, Panoptic-DeepLab adopts the dual-ASPP and dual-decoder structures specific to semantic, and instance segmentation, respectively. The semantic segmentation branch is the same as the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression. As a result, our single Panoptic-DeepLab simultaneously ranks first at all three Cityscapes benchmarks, setting the new state-of-art of 84.2% mIoU, 39.0% AP, and 65.5% PQ on test set. Additionally, equipped with MobileNetV3, Panoptic-DeepLab runs nearly in real-time with a single 1025x2049 image (15.8 frames per second), while achieving a competitive performance on Cityscapes (54.1 PQ% on test set). On Mapillary Vistas test set, our ensemble of six models attains 42.7% PQ, outperforming the challenge winner in 2018 by a healthy margin of 1.5%. Finally, our Panoptic-DeepLab also performs on par with several top-down approaches on the challenging COCO dataset. For the first time, we demonstrate a bottom-up approach could deliver state-of-the-art results on panoptic segmentation.
|
2311.17693
|
Amr Gomaa
|
Amr Gomaa and Bilal Mahdy and Niko Kleer and Antonio Kr\"uger
|
Toward a Surgeon-in-the-Loop Ophthalmic Robotic Apprentice using
Reinforcement and Imitation Learning
|
Accepted at IROS'24
| null | null | null |
cs.RO cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robot-assisted surgical systems have demonstrated significant potential in
enhancing surgical precision and minimizing human errors. However, existing
systems cannot accommodate individual surgeons' unique preferences and
requirements. Additionally, they primarily focus on general surgeries (e.g.,
laparoscopy) and are unsuitable for highly precise microsurgeries, such as
ophthalmic procedures. Thus, we propose an image-guided approach for
surgeon-centered autonomous agents that can adapt to the individual surgeon's
skill level and preferred surgical techniques during ophthalmic cataract
surgery. Our approach trains reinforcement and imitation learning agents
simultaneously using curriculum learning approaches guided by image data to
perform all tasks of the incision phase of cataract surgery. By integrating the
surgeon's actions and preferences into the training process, our approach
enables the robot to implicitly learn and adapt to the individual surgeon's
unique techniques through surgeon-in-the-loop demonstrations. This results in a
more intuitive and personalized surgical experience for the surgeon while
ensuring consistent performance for the autonomous robotic apprentice. We
define and evaluate the effectiveness of our approach in a simulated
environment using our proposed metrics and highlight the trade-off between a
generic agent and a surgeon-centered adapted agent. Finally, our approach has
the potential to extend to other ophthalmic and microsurgical procedures,
opening the door to a new generation of surgeon-in-the-loop autonomous surgical
robots. We provide an open-source simulation framework for future development
and reproducibility at
https://github.com/amrgomaaelhady/CataractAdaptSurgRobot.
|
[
{
"created": "Wed, 29 Nov 2023 15:00:06 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2024 18:24:46 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Aug 2024 16:52:09 GMT",
"version": "v3"
}
] |
2024-08-13
|
[
[
"Gomaa",
"Amr",
""
],
[
"Mahdy",
"Bilal",
""
],
[
"Kleer",
"Niko",
""
],
[
"Krüger",
"Antonio",
""
]
] |
Robot-assisted surgical systems have demonstrated significant potential in enhancing surgical precision and minimizing human errors. However, existing systems cannot accommodate individual surgeons' unique preferences and requirements. Additionally, they primarily focus on general surgeries (e.g., laparoscopy) and are unsuitable for highly precise microsurgeries, such as ophthalmic procedures. Thus, we propose an image-guided approach for surgeon-centered autonomous agents that can adapt to the individual surgeon's skill level and preferred surgical techniques during ophthalmic cataract surgery. Our approach trains reinforcement and imitation learning agents simultaneously using curriculum learning approaches guided by image data to perform all tasks of the incision phase of cataract surgery. By integrating the surgeon's actions and preferences into the training process, our approach enables the robot to implicitly learn and adapt to the individual surgeon's unique techniques through surgeon-in-the-loop demonstrations. This results in a more intuitive and personalized surgical experience for the surgeon while ensuring consistent performance for the autonomous robotic apprentice. We define and evaluate the effectiveness of our approach in a simulated environment using our proposed metrics and highlight the trade-off between a generic agent and a surgeon-centered adapted agent. Finally, our approach has the potential to extend to other ophthalmic and microsurgical procedures, opening the door to a new generation of surgeon-in-the-loop autonomous surgical robots. We provide an open-source simulation framework for future development and reproducibility at https://github.com/amrgomaaelhady/CataractAdaptSurgRobot.
|
1912.10726
|
Yudie Wang
|
Yudie Wang, Zhiwei Li, Chao Zeng, Gui-Song Xia, Huanfeng Shen
|
An Urban Water Extraction Method Combining Deep Learning and Google
Earth Engine
|
This manuscript has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13,
pp. 769-782, 2020
|
IEEE Journal of Selected Topics in Applied Earth Observations and
Remote Sensing, vol. 13, pp. 769-782, 2020
|
10.1109/JSTARS.2020.2971783
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Urban water is important for the urban ecosystem. Accurate and efficient
detection of urban water with remote sensing data is of great significance for
urban management and planning. In this paper, we proposed a new method to
combine Google Earth Engine (GEE) with multiscale convolutional neural network
(MSCNN) to extract urban water from Landsat images, which is summarized as
offline training and online prediction (OTOP). That is, the training of MSCNN
was completed offline, and the process of urban water extraction was
implemented on GEE with the trained parameters of MSCNN. The OTOP can give full
play to the respective advantages of GEE and CNN, and make the use of deep
learning method on GEE more flexible. It can process available satellite images
with high performance without data download and storage, and the overall
performance of urban water extraction is also higher than that of the modified
normalized difference water index (MNDWI) and random forest. The mean kappa,
F1-score and intersection over union (IoU) of urban water extraction with the
OTOP in Changchun, Wuhan, Kunming and Guangzhou reached 0.924, 0.930 and 0.869,
respectively. The results of the extended validation in the other major cities
of China also show that the OTOP is robust and can be used to extract different
types of urban water, which benefits from the structural design and training of
the MSCNN. Therefore, the OTOP is especially suitable for the study of
large-scale and long-term urban water change detection in the background of
urbanization.
|
[
{
"created": "Mon, 23 Dec 2019 10:50:03 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2024 03:18:19 GMT",
"version": "v2"
}
] |
2024-05-21
|
[
[
"Wang",
"Yudie",
""
],
[
"Li",
"Zhiwei",
""
],
[
"Zeng",
"Chao",
""
],
[
"Xia",
"Gui-Song",
""
],
[
"Shen",
"Huanfeng",
""
]
] |
Urban water is important for the urban ecosystem. Accurate and efficient detection of urban water with remote sensing data is of great significance for urban management and planning. In this paper, we proposed a new method to combine Google Earth Engine (GEE) with multiscale convolutional neural network (MSCNN) to extract urban water from Landsat images, which is summarized as offline training and online prediction (OTOP). That is, the training of MSCNN was completed offline, and the process of urban water extraction was implemented on GEE with the trained parameters of MSCNN. The OTOP can give full play to the respective advantages of GEE and CNN, and make the use of deep learning method on GEE more flexible. It can process available satellite images with high performance without data download and storage, and the overall performance of urban water extraction is also higher than that of the modified normalized difference water index (MNDWI) and random forest. The mean kappa, F1-score and intersection over union (IoU) of urban water extraction with the OTOP in Changchun, Wuhan, Kunming and Guangzhou reached 0.924, 0.930 and 0.869, respectively. The results of the extended validation in the other major cities of China also show that the OTOP is robust and can be used to extract different types of urban water, which benefits from the structural design and training of the MSCNN. Therefore, the OTOP is especially suitable for the study of large-scale and long-term urban water change detection in the background of urbanization.
|
2011.01097
|
Marta R. Costa-juss\`a
|
Carlos Escolano, Marta R. Costa-juss\`a, Jos\'e A. R. Fonollosa,
Carlos Segura
|
Enabling Zero-shot Multilingual Spoken Language Translation with
Language-Specific Encoders and Decoders
| null |
IEEE Workshop on Automatic Speech Recognition and Understanding
2021
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current end-to-end approaches to Spoken Language Translation (SLT) rely on
limited training resources, especially for multilingual settings. On the other
hand, Multilingual Neural Machine Translation (MultiNMT) approaches rely on
higher-quality and more massive data sets. Our proposed method extends a
MultiNMT architecture based on language-specific encoders-decoders to the task
of Multilingual SLT (MultiSLT). Our method entirely eliminates the dependency
from MultiSLT data and it is able to translate while training only on ASR and
MultiNMT data.
Our experiments on four different languages show that coupling the speech
encoder to the MultiNMT architecture produces similar quality translations
compared to a bilingual baseline ($\pm 0.2$ BLEU) while effectively allowing
for zero-shot MultiSLT. Additionally, we propose using an Adapter module for
coupling the speech inputs. This Adapter module produces consistent
improvements up to +6 BLEU points on the proposed architecture and +1 BLEU
point on the end-to-end baseline.
|
[
{
"created": "Mon, 2 Nov 2020 16:31:14 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 18:42:21 GMT",
"version": "v2"
}
] |
2021-09-17
|
[
[
"Escolano",
"Carlos",
""
],
[
"Costa-jussà",
"Marta R.",
""
],
[
"Fonollosa",
"José A. R.",
""
],
[
"Segura",
"Carlos",
""
]
] |
Current end-to-end approaches to Spoken Language Translation (SLT) rely on limited training resources, especially for multilingual settings. On the other hand, Multilingual Neural Machine Translation (MultiNMT) approaches rely on higher-quality and more massive data sets. Our proposed method extends a MultiNMT architecture based on language-specific encoders-decoders to the task of Multilingual SLT (MultiSLT). Our method entirely eliminates the dependency from MultiSLT data and it is able to translate while training only on ASR and MultiNMT data. Our experiments on four different languages show that coupling the speech encoder to the MultiNMT architecture produces similar quality translations compared to a bilingual baseline ($\pm 0.2$ BLEU) while effectively allowing for zero-shot MultiSLT. Additionally, we propose using an Adapter module for coupling the speech inputs. This Adapter module produces consistent improvements up to +6 BLEU points on the proposed architecture and +1 BLEU point on the end-to-end baseline.
|
1407.6580
|
EPTCS
|
Roderick Bloem (Graz University of Technology, Austria), Swen Jacobs
(Graz University of Technology, Austria), Ayrat Khalimov (Graz University of
Technology, Austria)
|
Parameterized Synthesis Case Study: AMBA AHB
|
Conference version of arXiv:1406.7608. In Proceedings SYNT 2014,
arXiv:1407.4937
|
EPTCS 157, 2014, pp. 68-83
|
10.4204/EPTCS.157.9
| null |
cs.LO cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the AMBA AHB case study that has been used as a benchmark for
several reactive synthesis tools. Synthesizing AMBA AHB implementations that
can serve a large number of masters is still a difficult problem. We
demonstrate how to use parameterized synthesis in token rings to obtain an
implementation for a component that serves a single master, and can be arranged
in a ring of arbitrarily many components. We describe new tricks - property
decompositional synthesis, and direct encoding of simple GR(1) - that together
with previously described optimizations allowed us to synthesize a component
model with 14 states in about 1 hour.
|
[
{
"created": "Mon, 21 Jul 2014 07:28:39 GMT",
"version": "v1"
}
] |
2014-07-25
|
[
[
"Bloem",
"Roderick",
"",
"Graz University of Technology, Austria"
],
[
"Jacobs",
"Swen",
"",
"Graz University of Technology, Austria"
],
[
"Khalimov",
"Ayrat",
"",
"Graz University of\n Technology, Austria"
]
] |
We revisit the AMBA AHB case study that has been used as a benchmark for several reactive synthesis tools. Synthesizing AMBA AHB implementations that can serve a large number of masters is still a difficult problem. We demonstrate how to use parameterized synthesis in token rings to obtain an implementation for a component that serves a single master, and can be arranged in a ring of arbitrarily many components. We describe new tricks - property decompositional synthesis, and direct encoding of simple GR(1) - that together with previously described optimizations allowed us to synthesize a component model with 14 states in about 1 hour.
|
2201.00012
|
Markus Peschl
|
Markus Peschl, Arkady Zgonnikov, Frans A. Oliehoek, Luciano C. Siebert
|
MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced
Active Learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Inferring reward functions from demonstrations and pairwise preferences are
auspicious approaches for aligning Reinforcement Learning (RL) agents with
human intentions. However, state-of-the art methods typically focus on learning
a single reward model, thus rendering it difficult to trade off different
reward functions from multiple experts. We propose Multi-Objective Reinforced
Active Learning (MORAL), a novel method for combining diverse demonstrations of
social norms into a Pareto-optimal policy. Through maintaining a distribution
over scalarization weights, our approach is able to interactively tune a deep
RL agent towards a variety of preferences, while eliminating the need for
computing multiple policies. We empirically demonstrate the effectiveness of
MORAL in two scenarios, which model a delivery and an emergency task that
require an agent to act in the presence of normative conflicts. Overall, we
consider our research a step towards multi-objective RL with learned rewards,
bridging the gap between current reward learning and machine ethics literature.
|
[
{
"created": "Thu, 30 Dec 2021 19:21:03 GMT",
"version": "v1"
}
] |
2022-01-04
|
[
[
"Peschl",
"Markus",
""
],
[
"Zgonnikov",
"Arkady",
""
],
[
"Oliehoek",
"Frans A.",
""
],
[
"Siebert",
"Luciano C.",
""
]
] |
Inferring reward functions from demonstrations and pairwise preferences are auspicious approaches for aligning Reinforcement Learning (RL) agents with human intentions. However, state-of-the art methods typically focus on learning a single reward model, thus rendering it difficult to trade off different reward functions from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. Through maintaining a distribution over scalarization weights, our approach is able to interactively tune a deep RL agent towards a variety of preferences, while eliminating the need for computing multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios, which model a delivery and an emergency task that require an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between current reward learning and machine ethics literature.
|
2211.12857
|
Stefan Kolek
|
Stefan Kolek, Robert Windesheim, Hector Andrade Loarca, Gitta
Kutyniok, Ron Levie
|
Explaining Image Classifiers with Multiscale Directional Image
Representation
| null |
CVPR 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Image classifiers are known to be difficult to interpret and therefore
require explanation methods to understand their decisions. We present
ShearletX, a novel mask explanation method for image classifiers based on the
shearlet transform -- a multiscale directional image representation. Current
mask explanation methods are regularized by smoothness constraints that protect
against undesirable fine-grained explanation artifacts. However, the smoothness
of a mask limits its ability to separate fine-detail patterns, that are
relevant for the classifier, from nearby nuisance patterns, that do not affect
the classifier. ShearletX solves this problem by avoiding smoothness
regularization all together, replacing it by shearlet sparsity constraints. The
resulting explanations consist of a few edges, textures, and smooth parts of
the original image, that are the most relevant for the decision of the
classifier. To support our method, we propose a mathematical definition for
explanation artifacts and an information theoretic score to evaluate the
quality of mask explanations. We demonstrate the superiority of ShearletX over
previous mask based explanation methods using these new metrics, and present
exemplary situations where separating fine-detail patterns allows explaining
phenomena that were not explainable before.
|
[
{
"created": "Tue, 22 Nov 2022 09:24:45 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Nov 2022 09:20:55 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Apr 2023 12:58:15 GMT",
"version": "v3"
}
] |
2023-05-01
|
[
[
"Kolek",
"Stefan",
""
],
[
"Windesheim",
"Robert",
""
],
[
"Loarca",
"Hector Andrade",
""
],
[
"Kutyniok",
"Gitta",
""
],
[
"Levie",
"Ron",
""
]
] |
Image classifiers are known to be difficult to interpret and therefore require explanation methods to understand their decisions. We present ShearletX, a novel mask explanation method for image classifiers based on the shearlet transform -- a multiscale directional image representation. Current mask explanation methods are regularized by smoothness constraints that protect against undesirable fine-grained explanation artifacts. However, the smoothness of a mask limits its ability to separate fine-detail patterns, that are relevant for the classifier, from nearby nuisance patterns, that do not affect the classifier. ShearletX solves this problem by avoiding smoothness regularization all together, replacing it by shearlet sparsity constraints. The resulting explanations consist of a few edges, textures, and smooth parts of the original image, that are the most relevant for the decision of the classifier. To support our method, we propose a mathematical definition for explanation artifacts and an information theoretic score to evaluate the quality of mask explanations. We demonstrate the superiority of ShearletX over previous mask based explanation methods using these new metrics, and present exemplary situations where separating fine-detail patterns allows explaining phenomena that were not explainable before.
|
2405.05080
|
Helena Amalie Haxvig
|
Helena A. Haxvig
|
Concerns on Bias in Large Language Models when Creating Synthetic
Personae
|
4 pages, accepted at the "LLM-Based Synthetic Personae and Data in
HCI" workshop at CHI2024
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This position paper explores the benefits, drawbacks, and ethical
considerations of incorporating synthetic personae in HCI research,
particularly focusing on the customization challenges beyond the limitations of
current Large Language Models (LLMs). These perspectives are derived from the
initial results of a sub-study employing vignettes to showcase the existence of
bias within black-box LLMs and explore methods for manipulating them. The study
aims to establish a foundation for understanding the challenges associated with
these models, emphasizing the necessity of thorough testing before utilizing
them to create synthetic personae for HCI research.
|
[
{
"created": "Wed, 8 May 2024 14:24:11 GMT",
"version": "v1"
}
] |
2024-05-09
|
[
[
"Haxvig",
"Helena A.",
""
]
] |
This position paper explores the benefits, drawbacks, and ethical considerations of incorporating synthetic personae in HCI research, particularly focusing on the customization challenges beyond the limitations of current Large Language Models (LLMs). These perspectives are derived from the initial results of a sub-study employing vignettes to showcase the existence of bias within black-box LLMs and explore methods for manipulating them. The study aims to establish a foundation for understanding the challenges associated with these models, emphasizing the necessity of thorough testing before utilizing them to create synthetic personae for HCI research.
|
2001.08361
|
Samuel McCandlish
|
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin
Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, Dario Amodei
|
Scaling Laws for Neural Language Models
|
19 pages, 15 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study empirical scaling laws for language model performance on the
cross-entropy loss. The loss scales as a power-law with model size, dataset
size, and the amount of compute used for training, with some trends spanning
more than seven orders of magnitude. Other architectural details such as
network width or depth have minimal effects within a wide range. Simple
equations govern the dependence of overfitting on model/dataset size and the
dependence of training speed on model size. These relationships allow us to
determine the optimal allocation of a fixed compute budget. Larger models are
significantly more sample-efficient, such that optimally compute-efficient
training involves training very large models on a relatively modest amount of
data and stopping significantly before convergence.
|
[
{
"created": "Thu, 23 Jan 2020 03:59:20 GMT",
"version": "v1"
}
] |
2020-01-24
|
[
[
"Kaplan",
"Jared",
""
],
[
"McCandlish",
"Sam",
""
],
[
"Henighan",
"Tom",
""
],
[
"Brown",
"Tom B.",
""
],
[
"Chess",
"Benjamin",
""
],
[
"Child",
"Rewon",
""
],
[
"Gray",
"Scott",
""
],
[
"Radford",
"Alec",
""
],
[
"Wu",
"Jeffrey",
""
],
[
"Amodei",
"Dario",
""
]
] |
We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
|
1611.03895
|
George Nomikos
|
George Nomikos and Xenofontas Dimitropoulos
|
traIXroute: Detecting IXPs in traceroute paths
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet eXchange Points (IXP) are critical components of the Internet
infrastructure that affect its performance, evolution, security and economics.
In this work, we introduce techniques to augment the well-known traceroute tool
with the capability of identifying if and where exactly IXPs are crossed in
endto- end paths. Knowing this information can help end-users have more
transparency over how their traffic flows in the Internet. Our tool, called
traIXroute, exploits data from the PeeringDB (PDB) and the Packet Clearing
House (PCH) about IXP IP addresses of BGP routers, IXP members, and IXP
prefixes. We show that the used data are both rich, i.e., we find 12,716 IP
addresses of BGP routers in 460 IXPs, and mostly accurate, i.e., our validation
shows 92-93% accuracy. In addition, 78.2% of the detected IXPs in our data are
based on multiple diverse evidence and therefore help have higher confidence on
the detected IXPs than when relying solely on IXP prefixes. To demonstrate the
utility of our tool, we use it to show that one out of five paths in our data
cross an IXP and that paths do not normally cross more than a single IXP, as it
is expected based on the valley-free model about Internet policies.
Furthermore, although the top IXPs both in terms of paths and members are
located in Europe, US IXPs attract many more paths than their number of members
indicates.
|
[
{
"created": "Fri, 11 Nov 2016 22:06:12 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Nomikos",
"George",
""
],
[
"Dimitropoulos",
"Xenofontas",
""
]
] |
Internet eXchange Points (IXP) are critical components of the Internet infrastructure that affect its performance, evolution, security and economics. In this work, we introduce techniques to augment the well-known traceroute tool with the capability of identifying if and where exactly IXPs are crossed in endto- end paths. Knowing this information can help end-users have more transparency over how their traffic flows in the Internet. Our tool, called traIXroute, exploits data from the PeeringDB (PDB) and the Packet Clearing House (PCH) about IXP IP addresses of BGP routers, IXP members, and IXP prefixes. We show that the used data are both rich, i.e., we find 12,716 IP addresses of BGP routers in 460 IXPs, and mostly accurate, i.e., our validation shows 92-93% accuracy. In addition, 78.2% of the detected IXPs in our data are based on multiple diverse evidence and therefore help have higher confidence on the detected IXPs than when relying solely on IXP prefixes. To demonstrate the utility of our tool, we use it to show that one out of five paths in our data cross an IXP and that paths do not normally cross more than a single IXP, as it is expected based on the valley-free model about Internet policies. Furthermore, although the top IXPs both in terms of paths and members are located in Europe, US IXPs attract many more paths than their number of members indicates.
|
2406.16696
|
Gilad Abiri
|
Gilad Abiri
|
Public Constitutional AI
| null | null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We are increasingly subjected to the power of AI authorities. As AI decisions
become inescapable, entering domains such as healthcare, education, and law, we
must confront a vital question: how can we ensure AI systems have the
legitimacy necessary for effective governance? This essay argues that to secure
AI legitimacy, we need methods that engage the public in designing and
constraining AI systems, ensuring these technologies reflect the community's
shared values. Constitutional AI, proposed by Anthropic, represents a step
towards this goal, offering a model for democratic control of AI. However,
while Constitutional AI's commitment to hardcoding explicit principles into AI
models enhances transparency and accountability, it falls short in two crucial
aspects: addressing the opacity of individual AI decisions and fostering
genuine democratic legitimacy. To overcome these limitations, this essay
proposes "Public Constitutional AI." This approach envisions a participatory
process where diverse stakeholders, including ordinary citizens, deliberate on
the principles guiding AI development. The resulting "AI Constitution" would
carry the legitimacy of popular authorship, grounding AI governance in the
public will. Furthermore, the essay proposes "AI Courts" to develop "AI case
law," providing concrete examples for operationalizing constitutional
principles in AI training. This evolving combination of constitutional
principles and case law aims to make AI governance more responsive to public
values. By grounding AI governance in deliberative democratic processes, Public
Constitutional AI offers a path to imbue automated authorities with genuine
democratic legitimacy, addressing the unique challenges posed by increasingly
powerful AI systems while ensuring their alignment with the public interest.
|
[
{
"created": "Mon, 24 Jun 2024 15:00:01 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Abiri",
"Gilad",
""
]
] |
We are increasingly subjected to the power of AI authorities. As AI decisions become inescapable, entering domains such as healthcare, education, and law, we must confront a vital question: how can we ensure AI systems have the legitimacy necessary for effective governance? This essay argues that to secure AI legitimacy, we need methods that engage the public in designing and constraining AI systems, ensuring these technologies reflect the community's shared values. Constitutional AI, proposed by Anthropic, represents a step towards this goal, offering a model for democratic control of AI. However, while Constitutional AI's commitment to hardcoding explicit principles into AI models enhances transparency and accountability, it falls short in two crucial aspects: addressing the opacity of individual AI decisions and fostering genuine democratic legitimacy. To overcome these limitations, this essay proposes "Public Constitutional AI." This approach envisions a participatory process where diverse stakeholders, including ordinary citizens, deliberate on the principles guiding AI development. The resulting "AI Constitution" would carry the legitimacy of popular authorship, grounding AI governance in the public will. Furthermore, the essay proposes "AI Courts" to develop "AI case law," providing concrete examples for operationalizing constitutional principles in AI training. This evolving combination of constitutional principles and case law aims to make AI governance more responsive to public values. By grounding AI governance in deliberative democratic processes, Public Constitutional AI offers a path to imbue automated authorities with genuine democratic legitimacy, addressing the unique challenges posed by increasingly powerful AI systems while ensuring their alignment with the public interest.
|
2008.05459
|
Jun Qi
|
Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee
|
Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network
Based Vector-to-Vector Regression
| null |
IEEE Transactions on Signal Processing, Vol 68, pp. 3411-3422,
2020
|
10.1109/TSP.2020.2993164
| null |
cs.LG eess.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we show that, in vector-to-vector regression utilizing deep
neural networks (DNNs), a generalized loss of mean absolute error (MAE) between
the predicted and expected feature vectors is upper bounded by the sum of an
approximation error, an estimation error, and an optimization error. Leveraging
upon error decomposition techniques in statistical learning theory and
non-convex optimization theory, we derive upper bounds for each of the three
aforementioned errors and impose necessary constraints on DNN models. Moreover,
we assess our theoretical results through a set of image de-noising and speech
enhancement experiments. Our proposed upper bounds of MAE for DNN based
vector-to-vector regression are corroborated by the experimental results and
the upper bounds are valid with and without the "over-parametrization"
technique.
|
[
{
"created": "Tue, 4 Aug 2020 19:39:41 GMT",
"version": "v1"
}
] |
2020-08-13
|
[
[
"Qi",
"Jun",
""
],
[
"Du",
"Jun",
""
],
[
"Siniscalchi",
"Sabato Marco",
""
],
[
"Ma",
"Xiaoli",
""
],
[
"Lee",
"Chin-Hui",
""
]
] |
In this paper, we show that, in vector-to-vector regression utilizing deep neural networks (DNNs), a generalized loss of mean absolute error (MAE) between the predicted and expected feature vectors is upper bounded by the sum of an approximation error, an estimation error, and an optimization error. Leveraging upon error decomposition techniques in statistical learning theory and non-convex optimization theory, we derive upper bounds for each of the three aforementioned errors and impose necessary constraints on DNN models. Moreover, we assess our theoretical results through a set of image de-noising and speech enhancement experiments. Our proposed upper bounds of MAE for DNN based vector-to-vector regression are corroborated by the experimental results and the upper bounds are valid with and without the "over-parametrization" technique.
|
2109.04066
|
Xinbei Ma
|
Xinbei Ma, Zhuosheng Zhang, Hai Zhao
|
Enhanced Speaker-aware Multi-party Multi-turn Dialogue Comprehension
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-party multi-turn dialogue comprehension brings unprecedented challenges
on handling the complicated scenarios from multiple speakers and criss-crossed
discourse relationship among speaker-aware utterances. Most existing methods
deal with dialogue contexts as plain texts and pay insufficient attention to
the crucial speaker-aware clues. In this work, we propose an enhanced
speaker-aware model with masking attention and heterogeneous graph networks to
comprehensively capture discourse clues from both sides of speaker property and
speaker-aware relationships. With such comprehensive speaker-aware modeling,
experimental results show that our speaker-aware model helps achieves
state-of-the-art performance on the benchmark dataset Molweni. Case analysis
shows that our model enhances the connections between utterances and their own
speakers and captures the speaker-aware discourse relations, which are critical
for dialogue modeling.
|
[
{
"created": "Thu, 9 Sep 2021 07:12:22 GMT",
"version": "v1"
}
] |
2021-09-10
|
[
[
"Ma",
"Xinbei",
""
],
[
"Zhang",
"Zhuosheng",
""
],
[
"Zhao",
"Hai",
""
]
] |
Multi-party multi-turn dialogue comprehension brings unprecedented challenges on handling the complicated scenarios from multiple speakers and criss-crossed discourse relationship among speaker-aware utterances. Most existing methods deal with dialogue contexts as plain texts and pay insufficient attention to the crucial speaker-aware clues. In this work, we propose an enhanced speaker-aware model with masking attention and heterogeneous graph networks to comprehensively capture discourse clues from both sides of speaker property and speaker-aware relationships. With such comprehensive speaker-aware modeling, experimental results show that our speaker-aware model helps achieves state-of-the-art performance on the benchmark dataset Molweni. Case analysis shows that our model enhances the connections between utterances and their own speakers and captures the speaker-aware discourse relations, which are critical for dialogue modeling.
|
0803.2220
|
Panagiotis Papadakos
|
Panagiotis Papadakos, Giorgos Vasiliadis, Yannis Theoharis, Nikos
Armenatzoglou, Stella Kopidaki, Yannis Marketakis, Manos Daskalakis, Kostas
Karamaroudis, Giorgos Linardakis, Giannis Makrydakis, Vangelis Papathanasiou,
Lefteris Sardis, Petros Tsialiamanis, Georgia Troullinou, Kostas Vandikas,
Dimitris Velegrakis and Yannis Tzitzikas
|
The Anatomy of Mitos Web Search Engine
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Engineering a Web search engine offering effective and efficient information
retrieval is a challenging task. This document presents our experiences from
designing and developing a Web search engine offering a wide spectrum of
functionalities and we report some interesting experimental results. A rather
peculiar design choice of the engine is that its index is based on a DBMS,
while some of the distinctive functionalities that are offered include advanced
Greek language stemming, real time result clustering, and advanced link
analysis techniques (also for spam page detection).
|
[
{
"created": "Fri, 14 Mar 2008 19:18:15 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Mar 2008 17:25:19 GMT",
"version": "v2"
}
] |
2008-12-18
|
[
[
"Papadakos",
"Panagiotis",
""
],
[
"Vasiliadis",
"Giorgos",
""
],
[
"Theoharis",
"Yannis",
""
],
[
"Armenatzoglou",
"Nikos",
""
],
[
"Kopidaki",
"Stella",
""
],
[
"Marketakis",
"Yannis",
""
],
[
"Daskalakis",
"Manos",
""
],
[
"Karamaroudis",
"Kostas",
""
],
[
"Linardakis",
"Giorgos",
""
],
[
"Makrydakis",
"Giannis",
""
],
[
"Papathanasiou",
"Vangelis",
""
],
[
"Sardis",
"Lefteris",
""
],
[
"Tsialiamanis",
"Petros",
""
],
[
"Troullinou",
"Georgia",
""
],
[
"Vandikas",
"Kostas",
""
],
[
"Velegrakis",
"Dimitris",
""
],
[
"Tzitzikas",
"Yannis",
""
]
] |
Engineering a Web search engine offering effective and efficient information retrieval is a challenging task. This document presents our experiences from designing and developing a Web search engine offering a wide spectrum of functionalities and we report some interesting experimental results. A rather peculiar design choice of the engine is that its index is based on a DBMS, while some of the distinctive functionalities that are offered include advanced Greek language stemming, real time result clustering, and advanced link analysis techniques (also for spam page detection).
|
1912.04078
|
Qiaoyun Wu
|
Qiaoyun Wu, Kai Xu, Jun Wang, Mingliang Xu, Xiaoxi Gong, Dinesh
Manocha
|
Reinforcement Learning-based Visual Navigation with
Information-Theoretic Regularization
|
corresponding author: Kai Xu (kevin.kai.xu@gmail.com) and Jun Wang
(wjun@nuaa.edu.cn), accepted by IEEE Robotics and Automation Letters
| null | null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
To enhance the cross-target and cross-scene generalization of target-driven
visual navigation based on deep reinforcement learning (RL), we introduce an
information-theoretic regularization term into the RL objective. The
regularization maximizes the mutual information between navigation actions and
visual observation transforms of an agent, thus promoting more informed
navigation decisions. This way, the agent models the action-observation
dynamics by learning a variational generative model. Based on the model, the
agent generates (imagines) the next observation from its current observation
and navigation target. This way, the agent learns to understand the causality
between navigation actions and the changes in its observations, which allows
the agent to predict the next action for navigation by comparing the current
and the imagined next observations. Cross-target and cross-scene evaluations on
the AI2-THOR framework show that our method attains at least a $10\%$
improvement of average success rate over some state-of-the-art models. We
further evaluate our model in two real-world settings: navigation in unseen
indoor scenes from a discrete Active Vision Dataset (AVD) and continuous
real-world environments with a TurtleBot.We demonstrate that our navigation
model is able to successfully achieve navigation tasks in these scenarios.
Videos and models can be found in the supplementary material.
|
[
{
"created": "Mon, 9 Dec 2019 14:27:21 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Apr 2020 02:31:44 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Aug 2020 14:20:05 GMT",
"version": "v3"
},
{
"created": "Mon, 2 Nov 2020 01:39:52 GMT",
"version": "v4"
},
{
"created": "Fri, 18 Dec 2020 00:32:18 GMT",
"version": "v5"
},
{
"created": "Mon, 10 Jan 2022 05:12:07 GMT",
"version": "v6"
},
{
"created": "Mon, 9 May 2022 09:02:44 GMT",
"version": "v7"
}
] |
2022-05-10
|
[
[
"Wu",
"Qiaoyun",
""
],
[
"Xu",
"Kai",
""
],
[
"Wang",
"Jun",
""
],
[
"Xu",
"Mingliang",
""
],
[
"Gong",
"Xiaoxi",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
To enhance the cross-target and cross-scene generalization of target-driven visual navigation based on deep reinforcement learning (RL), we introduce an information-theoretic regularization term into the RL objective. The regularization maximizes the mutual information between navigation actions and visual observation transforms of an agent, thus promoting more informed navigation decisions. This way, the agent models the action-observation dynamics by learning a variational generative model. Based on the model, the agent generates (imagines) the next observation from its current observation and navigation target. This way, the agent learns to understand the causality between navigation actions and the changes in its observations, which allows the agent to predict the next action for navigation by comparing the current and the imagined next observations. Cross-target and cross-scene evaluations on the AI2-THOR framework show that our method attains at least a $10\%$ improvement of average success rate over some state-of-the-art models. We further evaluate our model in two real-world settings: navigation in unseen indoor scenes from a discrete Active Vision Dataset (AVD) and continuous real-world environments with a TurtleBot.We demonstrate that our navigation model is able to successfully achieve navigation tasks in these scenarios. Videos and models can be found in the supplementary material.
|
2103.15493
|
Benjamin Marussig
|
A. Borkovi\'c, B. Marussig, and G. Radenkovi\'c
|
Geometrically exact static isogeometric analysis of arbitrarily curved
plane Bernoulli-Euler beam
| null |
Thin-Walled Structures, Volume 170, January 2022, 108539
|
10.1016/j.tws.2021.108539
| null |
cs.CE cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a geometrically exact nonlinear analysis of elastic in-plane beams
in the context of finite but small strain theory. The formulation utilizes the
full beam metric and obtains the complete analytic elastic constitutive model
by employing the exact relation between the reference and equidistant strains.
Thus, we account for the nonlinear strain distribution over the thickness of a
beam. In addition to the full analytical constitutive model, four simplified
ones are presented. Their comparison provides a thorough examination of the
influence of a beam's metric on the structural response. We show that the
appropriate formulation depends on the curviness of a beam at all
configurations. Furthermore, the nonlinear distribution of strain along the
thickness of strongly curved beams must be considered to obtain a complete and
accurate response.
|
[
{
"created": "Mon, 29 Mar 2021 10:51:20 GMT",
"version": "v1"
}
] |
2021-11-24
|
[
[
"Borković",
"A.",
""
],
[
"Marussig",
"B.",
""
],
[
"Radenković",
"G.",
""
]
] |
We present a geometrically exact nonlinear analysis of elastic in-plane beams in the context of finite but small strain theory. The formulation utilizes the full beam metric and obtains the complete analytic elastic constitutive model by employing the exact relation between the reference and equidistant strains. Thus, we account for the nonlinear strain distribution over the thickness of a beam. In addition to the full analytical constitutive model, four simplified ones are presented. Their comparison provides a thorough examination of the influence of a beam's metric on the structural response. We show that the appropriate formulation depends on the curviness of a beam at all configurations. Furthermore, the nonlinear distribution of strain along the thickness of strongly curved beams must be considered to obtain a complete and accurate response.
|
1808.08279
|
Navid Alemi Koohbanani
|
Navid Alemi Koohababni, Mostafa Jahanifar, Ali Gooya, and Nasir
Rajpoot
|
Nuclei Detection Using Mixture Density Networks
|
8 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nuclei detection is an important task in the histology domain as it is a main
step toward further analysis such as cell counting, cell segmentation, study of
cell connections, etc. This is a challenging task due to the complex texture of
histology image, variation in shape, and touching cells. To tackle these
hurdles, many approaches have been proposed in the literature where deep
learning methods stand on top in terms of performance. Hence, in this paper, we
propose a novel framework for nuclei detection based on Mixture Density
Networks (MDNs). These networks are suitable to map a single input to several
possible outputs and we utilize this property to detect multiple seeds in a
single image patch. A new modified form of a cost function is proposed for
training and handling patches with missing nuclei. The probability maps of the
nuclei in the individual patches are next combined to generate the final
image-wide result. The experimental results show the state-of-the-art
performance on complex colorectal adenocarcinoma dataset.
|
[
{
"created": "Wed, 22 Aug 2018 05:59:19 GMT",
"version": "v1"
}
] |
2018-08-28
|
[
[
"Koohababni",
"Navid Alemi",
""
],
[
"Jahanifar",
"Mostafa",
""
],
[
"Gooya",
"Ali",
""
],
[
"Rajpoot",
"Nasir",
""
]
] |
Nuclei detection is an important task in the histology domain as it is a main step toward further analysis such as cell counting, cell segmentation, study of cell connections, etc. This is a challenging task due to the complex texture of histology image, variation in shape, and touching cells. To tackle these hurdles, many approaches have been proposed in the literature where deep learning methods stand on top in terms of performance. Hence, in this paper, we propose a novel framework for nuclei detection based on Mixture Density Networks (MDNs). These networks are suitable to map a single input to several possible outputs and we utilize this property to detect multiple seeds in a single image patch. A new modified form of a cost function is proposed for training and handling patches with missing nuclei. The probability maps of the nuclei in the individual patches are next combined to generate the final image-wide result. The experimental results show the state-of-the-art performance on complex colorectal adenocarcinoma dataset.
|
0902.2674
|
Evira Mayordomo
|
Lance Fortnow, Jack H. Lutz, Elvira Mayordomo
|
Inseparability and Strong Hypotheses for Disjoint NP Pairs
| null | null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/3.0/
|
This paper investigates the existence of inseparable disjoint pairs of NP
languages and related strong hypotheses in computational complexity. Our main
theorem says that, if NP does not have measure 0 in EXP, then there exist
disjoint pairs of NP languages that are P-inseparable, in fact
TIME(2^(n^k))-inseparable. We also relate these conditions to strong hypotheses
concerning randomness and genericity of disjoint pairs.
|
[
{
"created": "Mon, 16 Feb 2009 12:27:54 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Sep 2009 11:25:44 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Jan 2010 17:04:19 GMT",
"version": "v3"
},
{
"created": "Wed, 3 Feb 2010 11:35:09 GMT",
"version": "v4"
}
] |
2010-02-03
|
[
[
"Fortnow",
"Lance",
""
],
[
"Lutz",
"Jack H.",
""
],
[
"Mayordomo",
"Elvira",
""
]
] |
This paper investigates the existence of inseparable disjoint pairs of NP languages and related strong hypotheses in computational complexity. Our main theorem says that, if NP does not have measure 0 in EXP, then there exist disjoint pairs of NP languages that are P-inseparable, in fact TIME(2^(n^k))-inseparable. We also relate these conditions to strong hypotheses concerning randomness and genericity of disjoint pairs.
|
2312.17025
|
Chen Qian
|
Chen Qian, Yufan Dang, Jiahao Li, Wei Liu, Zihao Xie, Yifei Wang,
Weize Chen, Cheng Yang, Xin Cong, Xiaoyin Che, Zhiyuan Liu, Maosong Sun
|
Experiential Co-Learning of Software-Developing Agents
|
Accepted to ACL 2024, https://github.com/OpenBMB/ChatDev
| null | null | null |
cs.CL cs.AI cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in large language models (LLMs) have brought significant
changes to various domains, especially through LLM-driven autonomous agents. A
representative scenario is in software development, where LLM agents
demonstrate efficient collaboration, task division, and assurance of software
quality, markedly reducing the need for manual involvement. However, these
agents frequently perform a variety of tasks independently, without benefiting
from past experiences, which leads to repeated mistakes and inefficient
attempts in multi-step task execution. To this end, we introduce Experiential
Co-Learning, a novel LLM-agent learning framework in which instructor and
assistant agents gather shortcut-oriented experiences from their historical
trajectories and use these past experiences for future task execution. The
extensive experiments demonstrate that the framework enables agents to tackle
unseen software-developing tasks more effectively. We anticipate that our
insights will guide LLM agents towards enhanced autonomy and contribute to
their evolutionary growth in cooperative learning. The code and data are
available at https://github.com/OpenBMB/ChatDev.
|
[
{
"created": "Thu, 28 Dec 2023 13:50:42 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Dec 2023 12:50:08 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jun 2024 13:39:20 GMT",
"version": "v3"
}
] |
2024-06-06
|
[
[
"Qian",
"Chen",
""
],
[
"Dang",
"Yufan",
""
],
[
"Li",
"Jiahao",
""
],
[
"Liu",
"Wei",
""
],
[
"Xie",
"Zihao",
""
],
[
"Wang",
"Yifei",
""
],
[
"Chen",
"Weize",
""
],
[
"Yang",
"Cheng",
""
],
[
"Cong",
"Xin",
""
],
[
"Che",
"Xiaoyin",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
] |
Recent advancements in large language models (LLMs) have brought significant changes to various domains, especially through LLM-driven autonomous agents. A representative scenario is in software development, where LLM agents demonstrate efficient collaboration, task division, and assurance of software quality, markedly reducing the need for manual involvement. However, these agents frequently perform a variety of tasks independently, without benefiting from past experiences, which leads to repeated mistakes and inefficient attempts in multi-step task execution. To this end, we introduce Experiential Co-Learning, a novel LLM-agent learning framework in which instructor and assistant agents gather shortcut-oriented experiences from their historical trajectories and use these past experiences for future task execution. The extensive experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively. We anticipate that our insights will guide LLM agents towards enhanced autonomy and contribute to their evolutionary growth in cooperative learning. The code and data are available at https://github.com/OpenBMB/ChatDev.
|
2406.07229
|
JinKyu Lee
|
JinKyu Lee, Jihie Kim
|
Improving Commonsense Bias Classification by Mitigating the Influence of
Demographic Terms
|
10 pages, 5 figures, conference presentation, supported by MSIT
(Korea) under ITRC program (IITP-2024-2020-0-01789) and AI Convergence
Innovation HR Development (IITP-2024-RS-2023-00254592)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding commonsense knowledge is crucial in the field of Natural
Language Processing (NLP). However, the presence of demographic terms in
commonsense knowledge poses a potential risk of compromising the performance of
NLP models. This study aims to investigate and propose methods for enhancing
the performance and effectiveness of a commonsense polarization classifier by
mitigating the influence of demographic terms. Three methods are introduced in
this paper: (1) hierarchical generalization of demographic terms (2)
threshold-based augmentation and (3) integration of hierarchical generalization
and threshold-based augmentation methods (IHTA). The first method involves
replacing demographic terms with more general ones based on a term hierarchy
ontology, aiming to mitigate the influence of specific terms. To address the
limited bias-related information, the second method measures the polarization
of demographic terms by comparing the changes in the model's predictions when
these terms are masked versus unmasked. This method augments commonsense
sentences containing terms with high polarization values by replacing their
predicates with synonyms generated by ChatGPT. The third method combines the
two approaches, starting with threshold-based augmentation followed by
hierarchical generalization. The experiments show that the first method
increases the accuracy over the baseline by 2.33%, and the second one by 0.96%
over standard augmentation methods. The IHTA techniques yielded an 8.82% and
9.96% higher accuracy than threshold-based and standard augmentation methods,
respectively.
|
[
{
"created": "Tue, 11 Jun 2024 13:09:16 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Lee",
"JinKyu",
""
],
[
"Kim",
"Jihie",
""
]
] |
Understanding commonsense knowledge is crucial in the field of Natural Language Processing (NLP). However, the presence of demographic terms in commonsense knowledge poses a potential risk of compromising the performance of NLP models. This study aims to investigate and propose methods for enhancing the performance and effectiveness of a commonsense polarization classifier by mitigating the influence of demographic terms. Three methods are introduced in this paper: (1) hierarchical generalization of demographic terms (2) threshold-based augmentation and (3) integration of hierarchical generalization and threshold-based augmentation methods (IHTA). The first method involves replacing demographic terms with more general ones based on a term hierarchy ontology, aiming to mitigate the influence of specific terms. To address the limited bias-related information, the second method measures the polarization of demographic terms by comparing the changes in the model's predictions when these terms are masked versus unmasked. This method augments commonsense sentences containing terms with high polarization values by replacing their predicates with synonyms generated by ChatGPT. The third method combines the two approaches, starting with threshold-based augmentation followed by hierarchical generalization. The experiments show that the first method increases the accuracy over the baseline by 2.33%, and the second one by 0.96% over standard augmentation methods. The IHTA techniques yielded an 8.82% and 9.96% higher accuracy than threshold-based and standard augmentation methods, respectively.
|
2308.16415
|
Kyuhong Shim
|
Kyuhong Shim, Jinkyu Lee, Simyung Chang, Kyuwoong Hwang
|
Knowledge Distillation from Non-streaming to Streaming ASR Encoder using
Auxiliary Non-streaming Layer
|
Accepted to Interspeech 2023
| null | null | null |
cs.CL eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Streaming automatic speech recognition (ASR) models are restricted from
accessing future context, which results in worse performance compared to the
non-streaming models. To improve the performance of streaming ASR, knowledge
distillation (KD) from the non-streaming to streaming model has been studied,
mainly focusing on aligning the output token probabilities. In this paper, we
propose a layer-to-layer KD from the teacher encoder to the student encoder. To
ensure that features are extracted using the same context, we insert auxiliary
non-streaming branches to the student and perform KD from the non-streaming
teacher layer to the non-streaming auxiliary layer. We design a special KD loss
that leverages the autoregressive predictive coding (APC) mechanism to
encourage the streaming model to predict unseen future contexts. Experimental
results show that the proposed method can significantly reduce the word error
rate compared to previous token probability distillation methods.
|
[
{
"created": "Thu, 31 Aug 2023 02:58:33 GMT",
"version": "v1"
}
] |
2023-09-01
|
[
[
"Shim",
"Kyuhong",
""
],
[
"Lee",
"Jinkyu",
""
],
[
"Chang",
"Simyung",
""
],
[
"Hwang",
"Kyuwoong",
""
]
] |
Streaming automatic speech recognition (ASR) models are restricted from accessing future context, which results in worse performance compared to the non-streaming models. To improve the performance of streaming ASR, knowledge distillation (KD) from the non-streaming to streaming model has been studied, mainly focusing on aligning the output token probabilities. In this paper, we propose a layer-to-layer KD from the teacher encoder to the student encoder. To ensure that features are extracted using the same context, we insert auxiliary non-streaming branches to the student and perform KD from the non-streaming teacher layer to the non-streaming auxiliary layer. We design a special KD loss that leverages the autoregressive predictive coding (APC) mechanism to encourage the streaming model to predict unseen future contexts. Experimental results show that the proposed method can significantly reduce the word error rate compared to previous token probability distillation methods.
|
2007.13135
|
Peng Gao
|
Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao,
Zuohui Fu, Gerard de Melo, Sen Su
|
Contrastive Visual-Linguistic Pretraining
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several multi-modality representation learning approaches such as LXMERT and
ViLBERT have been proposed recently. Such approaches can achieve superior
performance due to the high-level semantic information captured during
large-scale multimodal pretraining. However, as ViLBERT and LXMERT adopt visual
region regression and classification loss, they often suffer from domain gap
and noisy label problems, based on the visual features having been pretrained
on the Visual Genome dataset. To overcome these issues, we propose unbiased
Contrastive Visual-Linguistic Pretraining (CVLP), which constructs a visual
self-supervised loss built upon contrastive learning. We evaluate CVLP on
several down-stream tasks, including VQA, GQA and NLVR2 to validate the
superiority of contrastive learning on multi-modality representation learning.
Our code is available at: https://github.com/ArcherYunDong/CVLP-.
|
[
{
"created": "Sun, 26 Jul 2020 14:26:18 GMT",
"version": "v1"
}
] |
2020-07-28
|
[
[
"Shi",
"Lei",
""
],
[
"Shuang",
"Kai",
""
],
[
"Geng",
"Shijie",
""
],
[
"Su",
"Peng",
""
],
[
"Jiang",
"Zhengkai",
""
],
[
"Gao",
"Peng",
""
],
[
"Fu",
"Zuohui",
""
],
[
"de Melo",
"Gerard",
""
],
[
"Su",
"Sen",
""
]
] |
Several multi-modality representation learning approaches such as LXMERT and ViLBERT have been proposed recently. Such approaches can achieve superior performance due to the high-level semantic information captured during large-scale multimodal pretraining. However, as ViLBERT and LXMERT adopt visual region regression and classification loss, they often suffer from domain gap and noisy label problems, based on the visual features having been pretrained on the Visual Genome dataset. To overcome these issues, we propose unbiased Contrastive Visual-Linguistic Pretraining (CVLP), which constructs a visual self-supervised loss built upon contrastive learning. We evaluate CVLP on several down-stream tasks, including VQA, GQA and NLVR2 to validate the superiority of contrastive learning on multi-modality representation learning. Our code is available at: https://github.com/ArcherYunDong/CVLP-.
|
2103.14986
|
Ildar Batyrshin Z.
|
Ildar Batyrshin, Luis Alfonso Villa-Vargas, Marco Antonio
Ramirez-Salinas, Moises Salinas-Rosales, Nailya Kubysheva
|
Generating Negations of Probability Distributions
|
10 pages, 1 figure
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recently it was introduced a negation of a probability distribution. The need
for such negation arises when a knowledge-based system can use the terms like
NOT HIGH, where HIGH is represented by a probability distribution (pd). For
example, HIGH PROFIT or HIGH PRICE can be considered. The application of this
negation in Dempster-Shafer theory was considered in many works. Although
several negations of probability distributions have been proposed, it was not
clear how to construct other negations. In this paper, we consider negations of
probability distributions as point-by-point transformations of pd using
decreasing functions defined on [0,1] called negators. We propose the general
method of generation of negators and corresponding negations of pd, and study
their properties. We give a characterization of linear negators as a convex
combination of Yager and uniform negators.
|
[
{
"created": "Sat, 27 Mar 2021 20:24:10 GMT",
"version": "v1"
}
] |
2021-03-30
|
[
[
"Batyrshin",
"Ildar",
""
],
[
"Villa-Vargas",
"Luis Alfonso",
""
],
[
"Ramirez-Salinas",
"Marco Antonio",
""
],
[
"Salinas-Rosales",
"Moises",
""
],
[
"Kubysheva",
"Nailya",
""
]
] |
Recently it was introduced a negation of a probability distribution. The need for such negation arises when a knowledge-based system can use the terms like NOT HIGH, where HIGH is represented by a probability distribution (pd). For example, HIGH PROFIT or HIGH PRICE can be considered. The application of this negation in Dempster-Shafer theory was considered in many works. Although several negations of probability distributions have been proposed, it was not clear how to construct other negations. In this paper, we consider negations of probability distributions as point-by-point transformations of pd using decreasing functions defined on [0,1] called negators. We propose the general method of generation of negators and corresponding negations of pd, and study their properties. We give a characterization of linear negators as a convex combination of Yager and uniform negators.
|
1710.04347
|
Duckhwan Kim
|
Duckhwan Kim, Taesik Na, Sudhakar Yalamanchili, and Saibal
Mukhopadhyay
|
NeuroTrainer: An Intelligent Memory Module for Deep Learning Training
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents, NeuroTrainer, an intelligent memory module with
in-memory accelerators that forms the building block of a scalable architecture
for energy efficient training for deep neural networks. The proposed
architecture is based on integration of a homogeneous computing substrate
composed of multiple processing engines in the logic layer of a 3D memory
module. NeuroTrainer utilizes a programmable data flow based execution model to
optimize memory mapping and data re-use during different phases of training
operation. A programming model and supporting architecture utilizes the
flexible data flow to efficiently accelerate training of various types of DNNs.
The cycle level simulation and synthesized design in 15nm FinFET showspower
efficiency of 500 GFLOPS/W, and almost similar throughput for a wide range of
DNNs including convolutional, recurrent, multi-layer-perceptron, and mixed
(CNN+RNN) networks
|
[
{
"created": "Thu, 12 Oct 2017 02:56:37 GMT",
"version": "v1"
}
] |
2017-10-13
|
[
[
"Kim",
"Duckhwan",
""
],
[
"Na",
"Taesik",
""
],
[
"Yalamanchili",
"Sudhakar",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
] |
This paper presents, NeuroTrainer, an intelligent memory module with in-memory accelerators that forms the building block of a scalable architecture for energy efficient training for deep neural networks. The proposed architecture is based on integration of a homogeneous computing substrate composed of multiple processing engines in the logic layer of a 3D memory module. NeuroTrainer utilizes a programmable data flow based execution model to optimize memory mapping and data re-use during different phases of training operation. A programming model and supporting architecture utilizes the flexible data flow to efficiently accelerate training of various types of DNNs. The cycle level simulation and synthesized design in 15nm FinFET showspower efficiency of 500 GFLOPS/W, and almost similar throughput for a wide range of DNNs including convolutional, recurrent, multi-layer-perceptron, and mixed (CNN+RNN) networks
|
2405.01615
|
Chengqian Gao
|
Chengqian Gao, William de Vazelhes, Hualin Zhang, Bin Gu, Zhiqiang Xu
|
Hard-Thresholding Meets Evolution Strategies in Reinforcement Learning
|
16 pages, including proofs in the appendix
| null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolution Strategies (ES) have emerged as a competitive alternative for
model-free reinforcement learning, showcasing exemplary performance in tasks
like Mujoco and Atari. Notably, they shine in scenarios with imperfect reward
functions, making them invaluable for real-world applications where dense
reward signals may be elusive. Yet, an inherent assumption in ES, that all
input features are task-relevant, poses challenges, especially when confronted
with irrelevant features common in real-world problems. This work scrutinizes
this limitation, particularly focusing on the Natural Evolution Strategies
(NES) variant. We propose NESHT, a novel approach that integrates
Hard-Thresholding (HT) with NES to champion sparsity, ensuring only pertinent
features are employed. Backed by rigorous analysis and empirical tests, NESHT
demonstrates its promise in mitigating the pitfalls of irrelevant features and
shines in complex decision-making problems like noisy Mujoco and Atari tasks.
|
[
{
"created": "Thu, 2 May 2024 16:19:48 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Gao",
"Chengqian",
""
],
[
"de Vazelhes",
"William",
""
],
[
"Zhang",
"Hualin",
""
],
[
"Gu",
"Bin",
""
],
[
"Xu",
"Zhiqiang",
""
]
] |
Evolution Strategies (ES) have emerged as a competitive alternative for model-free reinforcement learning, showcasing exemplary performance in tasks like Mujoco and Atari. Notably, they shine in scenarios with imperfect reward functions, making them invaluable for real-world applications where dense reward signals may be elusive. Yet, an inherent assumption in ES, that all input features are task-relevant, poses challenges, especially when confronted with irrelevant features common in real-world problems. This work scrutinizes this limitation, particularly focusing on the Natural Evolution Strategies (NES) variant. We propose NESHT, a novel approach that integrates Hard-Thresholding (HT) with NES to champion sparsity, ensuring only pertinent features are employed. Backed by rigorous analysis and empirical tests, NESHT demonstrates its promise in mitigating the pitfalls of irrelevant features and shines in complex decision-making problems like noisy Mujoco and Atari tasks.
|
1203.3498
|
Enrique Munoz de Cote
|
Enrique Munoz de Cote, Archie C. Chapman, Adam M. Sykulski, Nicholas
R. Jennings
|
Automated Planning in Repeated Adversarial Games
|
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010)
| null | null |
UAI-P-2010-PG-376-383
|
cs.GT cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Game theory's prescriptive power typically relies on full rationality and/or
self-play interactions. In contrast, this work sets aside these fundamental
premises and focuses instead on heterogeneous autonomous interactions between
two or more agents. Specifically, we introduce a new and concise representation
for repeated adversarial (constant-sum) games that highlight the necessary
features that enable an automated planing agent to reason about how to score
above the game's Nash equilibrium, when facing heterogeneous adversaries. To
this end, we present TeamUP, a model-based RL algorithm designed for learning
and planning such an abstraction. In essence, it is somewhat similar to R-max
with a cleverly engineered reward shaping that treats exploration as an
adversarial optimization problem. In practice, it attempts to find an ally with
which to tacitly collude (in more than two-player games) and then collaborates
on a joint plan of actions that can consistently score a high utility in
adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament
to demonstrate the effectiveness of our approach, and find that TeamUP is the
best performing agent, demoting the Tournament's actual winning strategy into
second place. In our experimental analysis, we show hat our strategy
successfully and consistently builds collaborations with many different
heterogeneous (and sometimes very sophisticated) adversaries.
|
[
{
"created": "Thu, 15 Mar 2012 11:17:56 GMT",
"version": "v1"
}
] |
2012-03-19
|
[
[
"de Cote",
"Enrique Munoz",
""
],
[
"Chapman",
"Archie C.",
""
],
[
"Sykulski",
"Adam M.",
""
],
[
"Jennings",
"Nicholas R.",
""
]
] |
Game theory's prescriptive power typically relies on full rationality and/or self-play interactions. In contrast, this work sets aside these fundamental premises and focuses instead on heterogeneous autonomous interactions between two or more agents. Specifically, we introduce a new and concise representation for repeated adversarial (constant-sum) games that highlight the necessary features that enable an automated planing agent to reason about how to score above the game's Nash equilibrium, when facing heterogeneous adversaries. To this end, we present TeamUP, a model-based RL algorithm designed for learning and planning such an abstraction. In essence, it is somewhat similar to R-max with a cleverly engineered reward shaping that treats exploration as an adversarial optimization problem. In practice, it attempts to find an ally with which to tacitly collude (in more than two-player games) and then collaborates on a joint plan of actions that can consistently score a high utility in adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament to demonstrate the effectiveness of our approach, and find that TeamUP is the best performing agent, demoting the Tournament's actual winning strategy into second place. In our experimental analysis, we show hat our strategy successfully and consistently builds collaborations with many different heterogeneous (and sometimes very sophisticated) adversaries.
|
2303.06638
|
Thomas Dag\`es
|
Thomas Dag\`es, Michael Lindenbaum, Alfred M. Bruckstein
|
From Compass and Ruler to Convolution and Nonlinearity: On the
Surprising Difficulty of Understanding a Simple CNN Solving a Simple
Geometric Estimation Task
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks are omnipresent, but remain poorly understood. Their
increasing complexity and use in critical systems raises the important
challenge to full interpretability. We propose to address a simple well-posed
learning problem: estimating the radius of a centred pulse in a one-dimensional
signal or of a centred disk in two-dimensional images using a simple
convolutional neural network. Surprisingly, understanding what trained networks
have learned is difficult and, to some extent, counter-intuitive. However, an
in-depth theoretical analysis in the one-dimensional case allows us to
comprehend constraints due to the chosen architecture, the role of each filter
and of the nonlinear activation function, and every single value taken by the
weights of the model. Two fundamental concepts of neural networks arise: the
importance of invariance and of the shape of the nonlinear activation
functions.
|
[
{
"created": "Sun, 12 Mar 2023 11:30:49 GMT",
"version": "v1"
}
] |
2023-03-14
|
[
[
"Dagès",
"Thomas",
""
],
[
"Lindenbaum",
"Michael",
""
],
[
"Bruckstein",
"Alfred M.",
""
]
] |
Neural networks are omnipresent, but remain poorly understood. Their increasing complexity and use in critical systems raises the important challenge to full interpretability. We propose to address a simple well-posed learning problem: estimating the radius of a centred pulse in a one-dimensional signal or of a centred disk in two-dimensional images using a simple convolutional neural network. Surprisingly, understanding what trained networks have learned is difficult and, to some extent, counter-intuitive. However, an in-depth theoretical analysis in the one-dimensional case allows us to comprehend constraints due to the chosen architecture, the role of each filter and of the nonlinear activation function, and every single value taken by the weights of the model. Two fundamental concepts of neural networks arise: the importance of invariance and of the shape of the nonlinear activation functions.
|
2210.11720
|
Wangjie Jiang
|
Wangjie Jiang, Zhihao Ye, Zijing Ou, Ruihui Zhao, Jianguang Zheng, Yi
Liu, Siheng Li, Bang Liu, Yujiu Yang and Yefeng Zheng
|
MCSCSet: A Specialist-annotated Dataset for Medical-domain Chinese
Spelling Correction
|
The full version of CIKM 2022 accepted resource paper "MCSCSet: A
Specialist-annotated Dataset for Medical-domain Chinese Spelling Correction".
(https://dl.acm.org/doi/10.1145/3511808.3557636)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Chinese Spelling Correction (CSC) is gaining increasing attention due to its
promise of automatically detecting and correcting spelling errors in Chinese
texts. Despite its extensive use in many applications, like search engines and
optical character recognition systems, little has been explored in medical
scenarios in which complex and uncommon medical entities are easily misspelled.
Correcting the misspellings of medical entities is arguably more difficult than
those in the open domain due to its requirements of specificdomain knowledge.
In this work, we define the task of Medical-domain Chinese Spelling Correction
and propose MCSCSet, a large scale specialist-annotated dataset that contains
about 200k samples. In contrast to the existing open-domain CSC datasets,
MCSCSet involves: i) extensive real-world medical queries collected from
Tencent Yidian, ii) corresponding misspelled sentences manually annotated by
medical specialists. To ensure automated dataset curation, MCSCSet further
offers a medical confusion set consisting of the commonly misspelled characters
of given Chinese medical terms. This enables one to create the medical
misspelling dataset automatically. Extensive empirical studies have shown
significant performance gaps between the open-domain and medical-domain
spelling correction, highlighting the need to develop high-quality datasets
that allow for Chinese spelling correction in specific domains. Moreover, our
work benchmarks several representative Chinese spelling correction models,
establishing baselines for future work.
|
[
{
"created": "Fri, 21 Oct 2022 04:11:25 GMT",
"version": "v1"
}
] |
2022-10-24
|
[
[
"Jiang",
"Wangjie",
""
],
[
"Ye",
"Zhihao",
""
],
[
"Ou",
"Zijing",
""
],
[
"Zhao",
"Ruihui",
""
],
[
"Zheng",
"Jianguang",
""
],
[
"Liu",
"Yi",
""
],
[
"Li",
"Siheng",
""
],
[
"Liu",
"Bang",
""
],
[
"Yang",
"Yujiu",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
Chinese Spelling Correction (CSC) is gaining increasing attention due to its promise of automatically detecting and correcting spelling errors in Chinese texts. Despite its extensive use in many applications, like search engines and optical character recognition systems, little has been explored in medical scenarios in which complex and uncommon medical entities are easily misspelled. Correcting the misspellings of medical entities is arguably more difficult than those in the open domain due to its requirements of specificdomain knowledge. In this work, we define the task of Medical-domain Chinese Spelling Correction and propose MCSCSet, a large scale specialist-annotated dataset that contains about 200k samples. In contrast to the existing open-domain CSC datasets, MCSCSet involves: i) extensive real-world medical queries collected from Tencent Yidian, ii) corresponding misspelled sentences manually annotated by medical specialists. To ensure automated dataset curation, MCSCSet further offers a medical confusion set consisting of the commonly misspelled characters of given Chinese medical terms. This enables one to create the medical misspelling dataset automatically. Extensive empirical studies have shown significant performance gaps between the open-domain and medical-domain spelling correction, highlighting the need to develop high-quality datasets that allow for Chinese spelling correction in specific domains. Moreover, our work benchmarks several representative Chinese spelling correction models, establishing baselines for future work.
|
2108.08771
|
Hongkai Chen
|
Hongkai Chen, Zixin Luo, Jiahui Zhang, Lei Zhou, Xuyang Bai, Zeyu Hu,
Chiew-Lan Tai, Long Quan
|
Learning to Match Features with Seeded Graph Matching Network
|
Accepted by ICCV2021, code to be realeased at
https://github.com/vdvchen/SGMNet
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matching local features across images is a fundamental problem in computer
vision. Targeting towards high accuracy and efficiency, we propose Seeded Graph
Matching Network, a graph neural network with sparse structure to reduce
redundant connectivity and learn compact representation. The network consists
of 1) Seeding Module, which initializes the matching by generating a small set
of reliable matches as seeds. 2) Seeded Graph Neural Network, which utilizes
seed matches to pass messages within/across images and predicts assignment
costs. Three novel operations are proposed as basic elements for message
passing: 1) Attentional Pooling, which aggregates keypoint features within the
image to seed matches. 2) Seed Filtering, which enhances seed features and
exchanges messages across images. 3) Attentional Unpooling, which propagates
seed features back to original keypoints. Experiments show that our method
reduces computational and memory complexity significantly compared with typical
attention-based networks while competitive or higher performance is achieved.
|
[
{
"created": "Thu, 19 Aug 2021 16:25:23 GMT",
"version": "v1"
}
] |
2021-08-20
|
[
[
"Chen",
"Hongkai",
""
],
[
"Luo",
"Zixin",
""
],
[
"Zhang",
"Jiahui",
""
],
[
"Zhou",
"Lei",
""
],
[
"Bai",
"Xuyang",
""
],
[
"Hu",
"Zeyu",
""
],
[
"Tai",
"Chiew-Lan",
""
],
[
"Quan",
"Long",
""
]
] |
Matching local features across images is a fundamental problem in computer vision. Targeting towards high accuracy and efficiency, we propose Seeded Graph Matching Network, a graph neural network with sparse structure to reduce redundant connectivity and learn compact representation. The network consists of 1) Seeding Module, which initializes the matching by generating a small set of reliable matches as seeds. 2) Seeded Graph Neural Network, which utilizes seed matches to pass messages within/across images and predicts assignment costs. Three novel operations are proposed as basic elements for message passing: 1) Attentional Pooling, which aggregates keypoint features within the image to seed matches. 2) Seed Filtering, which enhances seed features and exchanges messages across images. 3) Attentional Unpooling, which propagates seed features back to original keypoints. Experiments show that our method reduces computational and memory complexity significantly compared with typical attention-based networks while competitive or higher performance is achieved.
|
2002.09723
|
Cristian Rusu
|
Cristian Rusu and Lorenzo Rosasco
|
Constructing fast approximate eigenspaces with application to the fast
graph Fourier transforms
| null | null |
10.1109/TSP.2021.3107629
| null |
cs.LG cs.NA eess.SP math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate numerically efficient approximations of eigenspaces associated
to symmetric and general matrices. The eigenspaces are factored into a fixed
number of fundamental components that can be efficiently manipulated (we
consider extended orthogonal Givens or scaling and shear transformations). The
number of these components controls the trade-off between approximation
accuracy and the computational complexity of projecting on the eigenspaces. We
write minimization problems for the single fundamental components and provide
closed-form solutions. Then we propose algorithms that iterative update all
these components until convergence. We show results on random matrices and an
application on the approximation of graph Fourier transforms for directed and
undirected graphs.
|
[
{
"created": "Sat, 22 Feb 2020 15:55:50 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Mar 2020 19:32:39 GMT",
"version": "v2"
},
{
"created": "Tue, 18 May 2021 20:32:41 GMT",
"version": "v3"
}
] |
2021-09-29
|
[
[
"Rusu",
"Cristian",
""
],
[
"Rosasco",
"Lorenzo",
""
]
] |
We investigate numerically efficient approximations of eigenspaces associated to symmetric and general matrices. The eigenspaces are factored into a fixed number of fundamental components that can be efficiently manipulated (we consider extended orthogonal Givens or scaling and shear transformations). The number of these components controls the trade-off between approximation accuracy and the computational complexity of projecting on the eigenspaces. We write minimization problems for the single fundamental components and provide closed-form solutions. Then we propose algorithms that iterative update all these components until convergence. We show results on random matrices and an application on the approximation of graph Fourier transforms for directed and undirected graphs.
|
1110.5450
|
Andreas Kolb
|
Roberto Cespi, Andreas Kolb, Marvin Lindner
|
Hand Tracking based on Hierarchical Clustering of Range Data
|
Technical Report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fast and robust hand segmentation and tracking is an essential basis for
gesture recognition and thus an important component for contact-less
human-computer interaction (HCI). Hand gesture recognition based on 2D video
data has been intensively investigated. However, in practical scenarios purely
intensity based approaches suffer from uncontrollable environmental conditions
like cluttered background colors. In this paper we present a real-time hand
segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras
and intensity data. The intensity and range information is fused into one pixel
value, representing its combined intensity-depth homogeneity. The scene is
hierarchically clustered using a GPU based parallel merging algorithm, allowing
a robust identification of both hands even for inhomogeneous backgrounds. After
the detection, both hands are tracked on the CPU. Our tracking algorithm can
cope with the situation that one hand is temporarily covered by the other hand.
|
[
{
"created": "Tue, 25 Oct 2011 09:24:25 GMT",
"version": "v1"
}
] |
2011-10-26
|
[
[
"Cespi",
"Roberto",
""
],
[
"Kolb",
"Andreas",
""
],
[
"Lindner",
"Marvin",
""
]
] |
Fast and robust hand segmentation and tracking is an essential basis for gesture recognition and thus an important component for contact-less human-computer interaction (HCI). Hand gesture recognition based on 2D video data has been intensively investigated. However, in practical scenarios purely intensity based approaches suffer from uncontrollable environmental conditions like cluttered background colors. In this paper we present a real-time hand segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras and intensity data. The intensity and range information is fused into one pixel value, representing its combined intensity-depth homogeneity. The scene is hierarchically clustered using a GPU based parallel merging algorithm, allowing a robust identification of both hands even for inhomogeneous backgrounds. After the detection, both hands are tracked on the CPU. Our tracking algorithm can cope with the situation that one hand is temporarily covered by the other hand.
|
1911.11288
|
Sergey Zakharov
|
Sergey Zakharov, Wadim Kehl, Arjun Bhargava, Adrien Gaidon
|
Autolabeling 3D Objects with Differentiable Rendering of SDF Shape
Priors
|
CVPR 2020 (Oral). 8 pages + supplementary material. The first two
authors contributed equally to this work
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an automatic annotation pipeline to recover 9D cuboids and 3D
shapes from pre-trained off-the-shelf 2D detectors and sparse LIDAR data. Our
autolabeling method solves an ill-posed inverse problem by considering learned
shape priors and optimizing geometric and physical parameters. To address this
challenging problem, we apply a novel differentiable shape renderer to signed
distance fields (SDF), leveraged together with normalized object coordinate
spaces (NOCS). Initially trained on synthetic data to predict shape and
coordinates, our method uses these predictions for projective and geometric
alignment over real samples. Moreover, we also propose a curriculum learning
strategy, iteratively retraining on samples of increasing difficulty in
subsequent self-improving annotation rounds. Our experiments on the KITTI3D
dataset show that we can recover a substantial amount of accurate cuboids, and
that these autolabels can be used to train 3D vehicle detectors with
state-of-the-art results.
|
[
{
"created": "Tue, 26 Nov 2019 00:11:49 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Apr 2020 15:44:47 GMT",
"version": "v2"
}
] |
2020-04-03
|
[
[
"Zakharov",
"Sergey",
""
],
[
"Kehl",
"Wadim",
""
],
[
"Bhargava",
"Arjun",
""
],
[
"Gaidon",
"Adrien",
""
]
] |
We present an automatic annotation pipeline to recover 9D cuboids and 3D shapes from pre-trained off-the-shelf 2D detectors and sparse LIDAR data. Our autolabeling method solves an ill-posed inverse problem by considering learned shape priors and optimizing geometric and physical parameters. To address this challenging problem, we apply a novel differentiable shape renderer to signed distance fields (SDF), leveraged together with normalized object coordinate spaces (NOCS). Initially trained on synthetic data to predict shape and coordinates, our method uses these predictions for projective and geometric alignment over real samples. Moreover, we also propose a curriculum learning strategy, iteratively retraining on samples of increasing difficulty in subsequent self-improving annotation rounds. Our experiments on the KITTI3D dataset show that we can recover a substantial amount of accurate cuboids, and that these autolabels can be used to train 3D vehicle detectors with state-of-the-art results.
|
2402.00152
|
Yahong Yang
|
Yahong Yang and Juncai He
|
Deeper or Wider: A Perspective from Optimal Generalization Error with
Sobolev Loss
|
arXiv admin note: text overlap with arXiv:2310.10766,
arXiv:2305.08466
| null | null | null |
cs.LG cs.NA math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constructing the architecture of a neural network is a challenging pursuit
for the machine learning community, and the dilemma of whether to go deeper or
wider remains a persistent question. This paper explores a comparison between
deeper neural networks (DeNNs) with a flexible number of layers and wider
neural networks (WeNNs) with limited hidden layers, focusing on their optimal
generalization error in Sobolev losses. Analytical investigations reveal that
the architecture of a neural network can be significantly influenced by various
factors, including the number of sample points, parameters within the neural
networks, and the regularity of the loss function. Specifically, a higher
number of parameters tends to favor WeNNs, while an increased number of sample
points and greater regularity in the loss function lean towards the adoption of
DeNNs. We ultimately apply this theory to address partial differential
equations using deep Ritz and physics-informed neural network (PINN) methods,
guiding the design of neural networks.
|
[
{
"created": "Wed, 31 Jan 2024 20:10:10 GMT",
"version": "v1"
},
{
"created": "Sun, 12 May 2024 13:47:30 GMT",
"version": "v2"
}
] |
2024-05-14
|
[
[
"Yang",
"Yahong",
""
],
[
"He",
"Juncai",
""
]
] |
Constructing the architecture of a neural network is a challenging pursuit for the machine learning community, and the dilemma of whether to go deeper or wider remains a persistent question. This paper explores a comparison between deeper neural networks (DeNNs) with a flexible number of layers and wider neural networks (WeNNs) with limited hidden layers, focusing on their optimal generalization error in Sobolev losses. Analytical investigations reveal that the architecture of a neural network can be significantly influenced by various factors, including the number of sample points, parameters within the neural networks, and the regularity of the loss function. Specifically, a higher number of parameters tends to favor WeNNs, while an increased number of sample points and greater regularity in the loss function lean towards the adoption of DeNNs. We ultimately apply this theory to address partial differential equations using deep Ritz and physics-informed neural network (PINN) methods, guiding the design of neural networks.
|
1003.5437
|
Secretary Aircc Journal
|
K.Prasanth (1), Dr.K.Duraiswamy (2), K.Jayasudha (3) and
Dr.C.Chandrasekar (4), ((1) K.S.Rangasamy College of Technology, India, (2)
K.S.Rangasamy College of Technology, India, (3) K.S.R College of Engineering,
India, (4) Periyar University, India)
|
Improved Packet Forwarding Approach in Vehicular Ad Hoc Networks Using
RDGR Algorithm
|
14 Pages, IJNGN Journal
|
International Journal of Next-Generation Networks 2.1 (2010) 64-77
|
10.5121/ijngn.2010.2106
| null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
VANETs (Vehicular Ad hoc Networks) are highly mobile wireless ad hoc networks
and will play an important role in public safety communications and commercial
applications. Routing of data in VANETs is a challenging task due to rapidly
changing topology and high speed mobility of vehicles. Position based routing
protocols are becoming popular due to advancement and availability of GPS
devices. One of the critical issues of VANETs are frequent path disruptions
caused by high speed mobility of vehicle that leads to broken links which
results in low throughput and high overhead . This paper argues the use of
information on vehicles' movement information (e.g., position, direction, speed
of vehicles) to predict a possible link-breakage event prior to its occurrence.
So in this paper we propose a Reliable Directional Greedy routing (RDGR), a
reliable position based routing approach which obtains position, speed and
direction of its neighboring nodes from GPS. This approach incorporates
potential score based strategy, which calculates link stability between
neighbor nodes in distributed fashion for reliable forwarding of data packet.
|
[
{
"created": "Mon, 29 Mar 2010 06:55:33 GMT",
"version": "v1"
}
] |
2010-07-15
|
[
[
"Prasanth",
"K.",
""
],
[
"Duraiswamy",
"Dr. K.",
""
],
[
"Jayasudha",
"K.",
""
],
[
"Chandrasekar",
"Dr. C.",
""
]
] |
VANETs (Vehicular Ad hoc Networks) are highly mobile wireless ad hoc networks and will play an important role in public safety communications and commercial applications. Routing of data in VANETs is a challenging task due to rapidly changing topology and high speed mobility of vehicles. Position based routing protocols are becoming popular due to advancement and availability of GPS devices. One of the critical issues of VANETs are frequent path disruptions caused by high speed mobility of vehicle that leads to broken links which results in low throughput and high overhead . This paper argues the use of information on vehicles' movement information (e.g., position, direction, speed of vehicles) to predict a possible link-breakage event prior to its occurrence. So in this paper we propose a Reliable Directional Greedy routing (RDGR), a reliable position based routing approach which obtains position, speed and direction of its neighboring nodes from GPS. This approach incorporates potential score based strategy, which calculates link stability between neighbor nodes in distributed fashion for reliable forwarding of data packet.
|
1901.05376
|
Faisal Qureshi
|
Tony Joseph, Konstantinos G. Derpanis, Faisal Z. Qureshi
|
Joint Spatial and Layer Attention for Convolutional Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel approach that learns to sequentially attend
to different Convolutional Neural Networks (CNN) layers (i.e., ``what'' feature
abstraction to attend to) and different spatial locations of the selected
feature map (i.e., ``where'') to perform the task at hand. Specifically, at
each Recurrent Neural Network (RNN) step, both a CNN layer and localized
spatial region within it are selected for further processing. We demonstrate
the effectiveness of this approach on two computer vision tasks: (i)
image-based six degree of freedom camera pose regression and (ii) indoor scene
classification. Empirically, we show that combining the ``what'' and ``where''
aspects of attention improves network performance on both tasks. We evaluate
our method on standard benchmarks for camera localization (Cambridge, 7-Scenes,
and TUM-LSI) and for scene classification (MIT-67 Indoor Scenes). For camera
localization our approach reduces the median error by 18.8\% for position and
8.2\% for orientation (averaged over all scenes), and for scene classification
it improves the mean accuracy by 3.4\% over previous methods.
|
[
{
"created": "Wed, 16 Jan 2019 16:32:31 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2019 11:38:07 GMT",
"version": "v2"
}
] |
2019-06-03
|
[
[
"Joseph",
"Tony",
""
],
[
"Derpanis",
"Konstantinos G.",
""
],
[
"Qureshi",
"Faisal Z.",
""
]
] |
In this paper, we propose a novel approach that learns to sequentially attend to different Convolutional Neural Networks (CNN) layers (i.e., ``what'' feature abstraction to attend to) and different spatial locations of the selected feature map (i.e., ``where'') to perform the task at hand. Specifically, at each Recurrent Neural Network (RNN) step, both a CNN layer and localized spatial region within it are selected for further processing. We demonstrate the effectiveness of this approach on two computer vision tasks: (i) image-based six degree of freedom camera pose regression and (ii) indoor scene classification. Empirically, we show that combining the ``what'' and ``where'' aspects of attention improves network performance on both tasks. We evaluate our method on standard benchmarks for camera localization (Cambridge, 7-Scenes, and TUM-LSI) and for scene classification (MIT-67 Indoor Scenes). For camera localization our approach reduces the median error by 18.8\% for position and 8.2\% for orientation (averaged over all scenes), and for scene classification it improves the mean accuracy by 3.4\% over previous methods.
|
2312.02405
|
Anssi Kanervisto
|
Stephanie Milani, Anssi Kanervisto, Karolis Ramanauskas, Sander
Schulhoff, Brandon Houghton, Rohin Shah
|
BEDD: The MineRL BASALT Evaluation and Demonstrations Dataset for
Training and Benchmarking Agents that Solve Fuzzy Tasks
|
NeurIPS 2023 Datasets and Benchmarks Oral. Dataset links are
available on Github: https://github.com/minerllabs/basalt-benchmark
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The MineRL BASALT competition has served to catalyze advances in learning
from human feedback through four hard-to-specify tasks in Minecraft, such as
create and photograph a waterfall. Given the completion of two years of BASALT
competitions, we offer to the community a formalized benchmark through the
BASALT Evaluation and Demonstrations Dataset (BEDD), which serves as a resource
for algorithm development and performance assessment. BEDD consists of a
collection of 26 million image-action pairs from nearly 14,000 videos of human
players completing the BASALT tasks in Minecraft. It also includes over 3,000
dense pairwise human evaluations of human and algorithmic agents. These
comparisons serve as a fixed, preliminary leaderboard for evaluating
newly-developed algorithms. To enable this comparison, we present a streamlined
codebase for benchmarking new algorithms against the leaderboard. In addition
to presenting these datasets, we conduct a detailed analysis of the data from
both datasets to guide algorithm development and evaluation. The released code
and data are available at https://github.com/minerllabs/basalt-benchmark .
|
[
{
"created": "Tue, 5 Dec 2023 00:29:44 GMT",
"version": "v1"
}
] |
2023-12-06
|
[
[
"Milani",
"Stephanie",
""
],
[
"Kanervisto",
"Anssi",
""
],
[
"Ramanauskas",
"Karolis",
""
],
[
"Schulhoff",
"Sander",
""
],
[
"Houghton",
"Brandon",
""
],
[
"Shah",
"Rohin",
""
]
] |
The MineRL BASALT competition has served to catalyze advances in learning from human feedback through four hard-to-specify tasks in Minecraft, such as create and photograph a waterfall. Given the completion of two years of BASALT competitions, we offer to the community a formalized benchmark through the BASALT Evaluation and Demonstrations Dataset (BEDD), which serves as a resource for algorithm development and performance assessment. BEDD consists of a collection of 26 million image-action pairs from nearly 14,000 videos of human players completing the BASALT tasks in Minecraft. It also includes over 3,000 dense pairwise human evaluations of human and algorithmic agents. These comparisons serve as a fixed, preliminary leaderboard for evaluating newly-developed algorithms. To enable this comparison, we present a streamlined codebase for benchmarking new algorithms against the leaderboard. In addition to presenting these datasets, we conduct a detailed analysis of the data from both datasets to guide algorithm development and evaluation. The released code and data are available at https://github.com/minerllabs/basalt-benchmark .
|
1712.00376
|
Alexios Balatsoukas-Stimming
|
Alexios Balatsoukas-Stimming, Tomasz Podzorny, Jan Uythoven
|
Polar Coding for the Large Hadron Collider: Challenges in Code
Concatenation
|
Presented at the 51st Asilomar Conference on Signals, Systems, and
Computers, November 2017
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a concatenated repetition-polar coding scheme that
is aimed at applications requiring highly unbalanced unequal bit-error
protection, such as the Beam Interlock System of the Large Hadron Collider at
CERN. Even though this concatenation scheme is simple, it reveals significant
challenges that may be encountered when designing a concatenated scheme that
uses a polar code as an inner code, such as error correlation and unusual
decision log-likelihood ratio distributions. We explain and analyze these
challenges and we propose two ways to overcome them.
|
[
{
"created": "Fri, 1 Dec 2017 15:48:35 GMT",
"version": "v1"
}
] |
2017-12-04
|
[
[
"Balatsoukas-Stimming",
"Alexios",
""
],
[
"Podzorny",
"Tomasz",
""
],
[
"Uythoven",
"Jan",
""
]
] |
In this work, we present a concatenated repetition-polar coding scheme that is aimed at applications requiring highly unbalanced unequal bit-error protection, such as the Beam Interlock System of the Large Hadron Collider at CERN. Even though this concatenation scheme is simple, it reveals significant challenges that may be encountered when designing a concatenated scheme that uses a polar code as an inner code, such as error correlation and unusual decision log-likelihood ratio distributions. We explain and analyze these challenges and we propose two ways to overcome them.
|
2211.02208
|
Adrian Tam
|
Aaron Yagnik and Adrian S.-W. Tam
|
Automated Logging Drone: A Computer Vision Drone Implementation
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In recent years, Artificial Intelligence (AI) and Computer Vision (CV) have
become the pinnacle of technology with new developments seemingly every day.
This technology along with more powerful drone technology have made autonomous
surveillance more sought after. Here an overview of the Automated Logging Drone
(ALD) project is presented along with examples of how this project can be used
with more refining and added features.
|
[
{
"created": "Fri, 4 Nov 2022 01:36:32 GMT",
"version": "v1"
}
] |
2022-11-07
|
[
[
"Yagnik",
"Aaron",
""
],
[
"Tam",
"Adrian S. -W.",
""
]
] |
In recent years, Artificial Intelligence (AI) and Computer Vision (CV) have become the pinnacle of technology with new developments seemingly every day. This technology along with more powerful drone technology have made autonomous surveillance more sought after. Here an overview of the Automated Logging Drone (ALD) project is presented along with examples of how this project can be used with more refining and added features.
|
cs/0407027
|
Atsushi Fujii
|
Atsushi Fujii, Katunobu Itou, Tomoyosi Akiba, Tetsuya Ishikawa
|
Unsupervised Topic Adaptation for Lecture Speech Retrieval
|
4 pages, Proceedings of the 8th International Conference on Spoken
Language Processing (to appear)
|
Proceedings of the 8th International Conference on Spoken Language
Processing (ICSLP 2004), pp.2957-2960, Oct. 2004
| null | null |
cs.CL
| null |
We are developing a cross-media information retrieval system, in which users
can view specific segments of lecture videos by submitting text queries. To
produce a text index, the audio track is extracted from a lecture video and a
transcription is generated by automatic speech recognition. In this paper, to
improve the quality of our retrieval system, we extensively investigate the
effects of adapting acoustic and language models on speech recognition. We
perform an MLLR-based method to adapt an acoustic model. To obtain a corpus for
language model adaptation, we use the textbook for a target lecture to search a
Web collection for the pages associated with the lecture topic. We show the
effectiveness of our method by means of experiments.
|
[
{
"created": "Sat, 10 Jul 2004 11:45:57 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Fujii",
"Atsushi",
""
],
[
"Itou",
"Katunobu",
""
],
[
"Akiba",
"Tomoyosi",
""
],
[
"Ishikawa",
"Tetsuya",
""
]
] |
We are developing a cross-media information retrieval system, in which users can view specific segments of lecture videos by submitting text queries. To produce a text index, the audio track is extracted from a lecture video and a transcription is generated by automatic speech recognition. In this paper, to improve the quality of our retrieval system, we extensively investigate the effects of adapting acoustic and language models on speech recognition. We perform an MLLR-based method to adapt an acoustic model. To obtain a corpus for language model adaptation, we use the textbook for a target lecture to search a Web collection for the pages associated with the lecture topic. We show the effectiveness of our method by means of experiments.
|
1710.08015
|
Chenwei Zhang
|
Chenwei Zhang, Nan Du, Wei Fan, Yaliang Li, Chun-Ta Lu, Philip S. Yu
|
Bringing Semantic Structures to User Intent Detection in Online Medical
Queries
|
10 pages, 2017 IEEE International Conference on Big Data (Big Data
2017)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet has revolutionized healthcare by offering medical information
ubiquitously to patients via web search. The healthcare status, complex medical
information needs of patients are expressed diversely and implicitly in their
medical text queries. Aiming to better capture a focused picture of user's
medical-related information search and shed insights on their healthcare
information access strategies, it is challenging yet rewarding to detect
structured user intentions from their diversely expressed medical text queries.
We introduce a graph-based formulation to explore structured concept
transitions for effective user intent detection in medical queries, where each
node represents a medical concept mention and each directed edge indicates a
medical concept transition. A deep model based on multi-task learning is
introduced to extract structured semantic transitions from user queries, where
the model extracts word-level medical concept mentions as well as
sentence-level concept transitions collectively. A customized graph-based
mutual transfer loss function is designed to impose explicit constraints and
further exploit the contribution of mentioning a medical concept word to the
implication of a semantic transition. We observe an 8% relative improvement in
AUC and 23% relative reduction in coverage error by comparing the proposed
model with the best baseline model for the concept transition inference task on
real-world medical text queries.
|
[
{
"created": "Sun, 22 Oct 2017 21:03:28 GMT",
"version": "v1"
}
] |
2017-10-24
|
[
[
"Zhang",
"Chenwei",
""
],
[
"Du",
"Nan",
""
],
[
"Fan",
"Wei",
""
],
[
"Li",
"Yaliang",
""
],
[
"Lu",
"Chun-Ta",
""
],
[
"Yu",
"Philip S.",
""
]
] |
The Internet has revolutionized healthcare by offering medical information ubiquitously to patients via web search. The healthcare status, complex medical information needs of patients are expressed diversely and implicitly in their medical text queries. Aiming to better capture a focused picture of user's medical-related information search and shed insights on their healthcare information access strategies, it is challenging yet rewarding to detect structured user intentions from their diversely expressed medical text queries. We introduce a graph-based formulation to explore structured concept transitions for effective user intent detection in medical queries, where each node represents a medical concept mention and each directed edge indicates a medical concept transition. A deep model based on multi-task learning is introduced to extract structured semantic transitions from user queries, where the model extracts word-level medical concept mentions as well as sentence-level concept transitions collectively. A customized graph-based mutual transfer loss function is designed to impose explicit constraints and further exploit the contribution of mentioning a medical concept word to the implication of a semantic transition. We observe an 8% relative improvement in AUC and 23% relative reduction in coverage error by comparing the proposed model with the best baseline model for the concept transition inference task on real-world medical text queries.
|
2204.05039
|
Shrestha Ghosh
|
Shrestha Ghosh, Simon Razniewski, Gerhard Weikum
|
Answering Count Queries with Explanatory Evidence
|
Version published at SIGIR 2022
| null |
10.1145/3477495.3531870
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
A challenging case in web search and question answering are count queries,
such as \textit{"number of songs by John Lennon"}. Prior methods merely answer
these with a single, and sometimes puzzling number or return a ranked list of
text snippets with different numbers. This paper proposes a methodology for
answering count queries with inference, contextualization and explanatory
evidence. Unlike previous systems, our method infers final answers from
multiple observations, supports semantic qualifiers for the counts, and
provides evidence by enumerating representative instances. Experiments with a
wide variety of queries show the benefits of our method. To promote further
research on this underexplored topic, we release an annotated dataset of 5k
queries with 200k relevant text spans.
|
[
{
"created": "Mon, 11 Apr 2022 12:20:13 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2022 09:46:27 GMT",
"version": "v2"
}
] |
2022-08-31
|
[
[
"Ghosh",
"Shrestha",
""
],
[
"Razniewski",
"Simon",
""
],
[
"Weikum",
"Gerhard",
""
]
] |
A challenging case in web search and question answering are count queries, such as \textit{"number of songs by John Lennon"}. Prior methods merely answer these with a single, and sometimes puzzling number or return a ranked list of text snippets with different numbers. This paper proposes a methodology for answering count queries with inference, contextualization and explanatory evidence. Unlike previous systems, our method infers final answers from multiple observations, supports semantic qualifiers for the counts, and provides evidence by enumerating representative instances. Experiments with a wide variety of queries show the benefits of our method. To promote further research on this underexplored topic, we release an annotated dataset of 5k queries with 200k relevant text spans.
|
1108.5472
|
Shan Zhou
|
Shan Zhou, Xinzhou Wu, Lei Ying
|
Distributed Power Control and Coding-Modulation Adaptation in Wireless
Networks using Annealed Gibbs Sampling
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In wireless networks, the transmission rate of a link is determined by
received signal strength, interference from simultaneous transmissions, and
available coding-modulation schemes. Rate allocation is a key problem in
wireless network design, but a very challenging problem because: (i) wireless
interference is global, i.e., a transmission interferes all other simultaneous
transmissions, and (ii) the rate-power relation is non-convex and
non-continuous, where the discontinuity is due to limited number of
coding-modulation choices in practical systems. In this paper, we propose a
distributed power control and coding-modulation adaptation algorithm using
annealed Gibbs sampling, which achieves throughput optimality in an arbitrary
network topology. We consider a realistic
Signal-to-Interference-and-Noise-Ratio (SINR) based interference model, and
assume continuous power space and finite rate options (coding-modulation
choices). Our algorithm first decomposes network-wide interference to local
interference by properly choosing a "neighborhood" for each transmitter and
bounding the interference from non-neighbor nodes. The power update policy is
then carefully designed to emulate a Gibbs sampler over a Markov chain with a
continuous state space. We further exploit the technique of simulated annealing
to speed up the convergence of the algorithm to the optimal power and
coding-modulation configuration. Finally, simulation results demonstrate the
superior performance of the proposed algorithm.
|
[
{
"created": "Sat, 27 Aug 2011 20:17:52 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Oct 2012 03:52:04 GMT",
"version": "v2"
}
] |
2012-10-24
|
[
[
"Zhou",
"Shan",
""
],
[
"Wu",
"Xinzhou",
""
],
[
"Ying",
"Lei",
""
]
] |
In wireless networks, the transmission rate of a link is determined by received signal strength, interference from simultaneous transmissions, and available coding-modulation schemes. Rate allocation is a key problem in wireless network design, but a very challenging problem because: (i) wireless interference is global, i.e., a transmission interferes all other simultaneous transmissions, and (ii) the rate-power relation is non-convex and non-continuous, where the discontinuity is due to limited number of coding-modulation choices in practical systems. In this paper, we propose a distributed power control and coding-modulation adaptation algorithm using annealed Gibbs sampling, which achieves throughput optimality in an arbitrary network topology. We consider a realistic Signal-to-Interference-and-Noise-Ratio (SINR) based interference model, and assume continuous power space and finite rate options (coding-modulation choices). Our algorithm first decomposes network-wide interference to local interference by properly choosing a "neighborhood" for each transmitter and bounding the interference from non-neighbor nodes. The power update policy is then carefully designed to emulate a Gibbs sampler over a Markov chain with a continuous state space. We further exploit the technique of simulated annealing to speed up the convergence of the algorithm to the optimal power and coding-modulation configuration. Finally, simulation results demonstrate the superior performance of the proposed algorithm.
|
1810.01719
|
Orfeas Stefanos Thyfronitis Litos
|
Aggelos Kiayias and Benjamin Livshits and Andr\'es Monteoliva Mosteiro
and Orfeas Stefanos Thyfronitis Litos
|
A Puff of Steem: Security Analysis of Decentralized Content Curation
|
15 pages main text, 35 pages in total, 6 figures, 6 algorithms.
Contains mathematical analysis and computer simulation
| null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Decentralized content curation is the process through which uploaded posts
are ranked and filtered based exclusively on users' feedback. Platforms such as
the blockchain-based Steemit employ this type of curation while providing
monetary incentives to promote the visibility of high quality posts according
to the perception of the participants. Despite the wide adoption of the
platform very little is known regarding its performance and resilience
characteristics. In this work, we provide a formal model for decentralized
content curation that identifies salient complexity and game-theoretic measures
of performance and resilience to selfish participants. Armed with our model, we
provide a first analysis of Steemit identifying the conditions under which the
system can be expected to correctly converge to curation while we demonstrate
its susceptibility to selfish participant behaviour. We validate our
theoretical results with system simulations in various scenarios.
|
[
{
"created": "Wed, 3 Oct 2018 12:50:38 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Jan 2019 16:59:06 GMT",
"version": "v2"
}
] |
2019-01-03
|
[
[
"Kiayias",
"Aggelos",
""
],
[
"Livshits",
"Benjamin",
""
],
[
"Mosteiro",
"Andrés Monteoliva",
""
],
[
"Litos",
"Orfeas Stefanos Thyfronitis",
""
]
] |
Decentralized content curation is the process through which uploaded posts are ranked and filtered based exclusively on users' feedback. Platforms such as the blockchain-based Steemit employ this type of curation while providing monetary incentives to promote the visibility of high quality posts according to the perception of the participants. Despite the wide adoption of the platform very little is known regarding its performance and resilience characteristics. In this work, we provide a formal model for decentralized content curation that identifies salient complexity and game-theoretic measures of performance and resilience to selfish participants. Armed with our model, we provide a first analysis of Steemit identifying the conditions under which the system can be expected to correctly converge to curation while we demonstrate its susceptibility to selfish participant behaviour. We validate our theoretical results with system simulations in various scenarios.
|
2003.06706
|
Rickard Br\"uel Gabrielsson
|
Rickard Br\"uel-Gabrielsson
|
Universal Function Approximation on Graphs
| null | null | null | null |
cs.DS cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we produce a framework for constructing universal function
approximators on graph isomorphism classes. We prove how this framework comes
with a collection of theoretically desirable properties and enables novel
analysis. We show how this allows us to achieve state-of-the-art performance on
four different well-known datasets in graph classification and separate classes
of graphs that other graph-learning methods cannot. Our approach is inspired by
persistent homology, dependency parsing for NLP, and multivalued functions. The
complexity of the underlying algorithm is O(#edges x #nodes) and code is
publicly available
(https://github.com/bruel-gabrielsson/universal-function-approximation-on-graphs).
|
[
{
"created": "Sat, 14 Mar 2020 21:12:33 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Sep 2020 09:06:02 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Oct 2020 07:58:11 GMT",
"version": "v3"
}
] |
2020-10-27
|
[
[
"Brüel-Gabrielsson",
"Rickard",
""
]
] |
In this work we produce a framework for constructing universal function approximators on graph isomorphism classes. We prove how this framework comes with a collection of theoretically desirable properties and enables novel analysis. We show how this allows us to achieve state-of-the-art performance on four different well-known datasets in graph classification and separate classes of graphs that other graph-learning methods cannot. Our approach is inspired by persistent homology, dependency parsing for NLP, and multivalued functions. The complexity of the underlying algorithm is O(#edges x #nodes) and code is publicly available (https://github.com/bruel-gabrielsson/universal-function-approximation-on-graphs).
|
2407.13368
|
Gertjan Burghouts
|
Gertjan Burghouts, Marianne Schaaphok, Michael van Bekkum, Wouter
Meijer, Fieke Hillerstr\"om, Jelle van Mil
|
Affordance Perception by a Knowledge-Guided Vision-Language Model with
Efficient Error Correction
|
15 pages
|
International Conference on Pattern Recognition and Artificial
Intelligence (ICPRAI) 2024
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile robot platforms will increasingly be tasked with activities that
involve grasping and manipulating objects in open world environments.
Affordance understanding provides a robot with means to realise its goals and
execute its tasks, e.g. to achieve autonomous navigation in unknown buildings
where it has to find doors and ways to open these. In order to get actionable
suggestions, robots need to be able to distinguish subtle differences between
objects, as they may result in different action sequences: doorknobs require
grasp and twist, while handlebars require grasp and push. In this paper, we
improve affordance perception for a robot in an open-world setting. Our
contribution is threefold: (1) We provide an affordance representation with
precise, actionable affordances; (2) We connect this knowledge base to a
foundational vision-language models (VLM) and prompt the VLM for a wider
variety of new and unseen objects; (3) We apply a human-in-the-loop for
corrections on the output of the VLM. The mix of affordance representation,
image detection and a human-in-the-loop is effective for a robot to search for
objects to achieve its goals. We have demonstrated this in a scenario of
finding various doors and the many different ways to open them.
|
[
{
"created": "Thu, 18 Jul 2024 10:24:22 GMT",
"version": "v1"
}
] |
2024-07-19
|
[
[
"Burghouts",
"Gertjan",
""
],
[
"Schaaphok",
"Marianne",
""
],
[
"van Bekkum",
"Michael",
""
],
[
"Meijer",
"Wouter",
""
],
[
"Hillerström",
"Fieke",
""
],
[
"van Mil",
"Jelle",
""
]
] |
Mobile robot platforms will increasingly be tasked with activities that involve grasping and manipulating objects in open world environments. Affordance understanding provides a robot with means to realise its goals and execute its tasks, e.g. to achieve autonomous navigation in unknown buildings where it has to find doors and ways to open these. In order to get actionable suggestions, robots need to be able to distinguish subtle differences between objects, as they may result in different action sequences: doorknobs require grasp and twist, while handlebars require grasp and push. In this paper, we improve affordance perception for a robot in an open-world setting. Our contribution is threefold: (1) We provide an affordance representation with precise, actionable affordances; (2) We connect this knowledge base to a foundational vision-language models (VLM) and prompt the VLM for a wider variety of new and unseen objects; (3) We apply a human-in-the-loop for corrections on the output of the VLM. The mix of affordance representation, image detection and a human-in-the-loop is effective for a robot to search for objects to achieve its goals. We have demonstrated this in a scenario of finding various doors and the many different ways to open them.
|
2408.07434
|
Christopher Herneth
|
Christopher Herneth, Junnan Li, Muhammad Hilman Fatoni, Amartya
Ganguly, and Sami Haddadin
|
Object Augmentation Algorithm: Computing virtual object motion and
object induced interaction wrench from optical markers
|
An open source implementation of the described algorithm is available
at
https://github.com/ChristopherHerneth/ObjectAugmentationAlgorithm/tree/main.
Accompanying video material may be found here https://youtu.be/8oz-awvyNRA.
The article was accepted at IROS 2024
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This study addresses the critical need for diverse and comprehensive data
focused on human arm joint torques while performing activities of daily living
(ADL). Previous studies have often overlooked the influence of objects on joint
torques during ADL, resulting in limited datasets for analysis. To address this
gap, we propose an Object Augmentation Algorithm (OAA) capable of augmenting
existing marker-based databases with virtual object motions and object-induced
joint torque estimations. The OAA consists of five phases: (1) computing hand
coordinate systems from optical markers, (2) characterising object movements
with virtual markers, (3) calculating object motions through inverse kinematics
(IK), (4) determining the wrench necessary for prescribed object motion using
inverse dynamics (ID), and (5) computing joint torques resulting from object
manipulation. The algorithm's accuracy is validated through trajectory tracking
and torque analysis on a 7+4 degree of freedom (DoF) robotic hand-arm system,
manipulating three unique objects. The results show that the OAA can accurately
and precisely estimate 6 DoF object motion and object-induced joint torques.
Correlations between computed and measured quantities were > 0.99 for object
trajectories and > 0.93 for joint torques. The OAA was further shown to be
robust to variations in the number and placement of input markers, which are
expected between databases. Differences between repeated experiments were minor
but significant (p < 0.05). The algorithm expands the scope of available data
and facilitates more comprehensive analyses of human-object interaction
dynamics.
|
[
{
"created": "Wed, 14 Aug 2024 10:09:00 GMT",
"version": "v1"
}
] |
2024-08-15
|
[
[
"Herneth",
"Christopher",
""
],
[
"Li",
"Junnan",
""
],
[
"Fatoni",
"Muhammad Hilman",
""
],
[
"Ganguly",
"Amartya",
""
],
[
"Haddadin",
"Sami",
""
]
] |
This study addresses the critical need for diverse and comprehensive data focused on human arm joint torques while performing activities of daily living (ADL). Previous studies have often overlooked the influence of objects on joint torques during ADL, resulting in limited datasets for analysis. To address this gap, we propose an Object Augmentation Algorithm (OAA) capable of augmenting existing marker-based databases with virtual object motions and object-induced joint torque estimations. The OAA consists of five phases: (1) computing hand coordinate systems from optical markers, (2) characterising object movements with virtual markers, (3) calculating object motions through inverse kinematics (IK), (4) determining the wrench necessary for prescribed object motion using inverse dynamics (ID), and (5) computing joint torques resulting from object manipulation. The algorithm's accuracy is validated through trajectory tracking and torque analysis on a 7+4 degree of freedom (DoF) robotic hand-arm system, manipulating three unique objects. The results show that the OAA can accurately and precisely estimate 6 DoF object motion and object-induced joint torques. Correlations between computed and measured quantities were > 0.99 for object trajectories and > 0.93 for joint torques. The OAA was further shown to be robust to variations in the number and placement of input markers, which are expected between databases. Differences between repeated experiments were minor but significant (p < 0.05). The algorithm expands the scope of available data and facilitates more comprehensive analyses of human-object interaction dynamics.
|
1104.4680
|
Prasad Raghavendra
|
Boaz Barak, Prasad Raghavendra, David Steurer
|
Rounding Semidefinite Programming Hierarchies via Global Correlation
|
30 pages
| null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show a new way to round vector solutions of semidefinite programming (SDP)
hierarchies into integral solutions, based on a connection between these
hierarchies and the spectrum of the input graph. We demonstrate the utility of
our method by providing a new SDP-hierarchy based algorithm for constraint
satisfaction problems with 2-variable constraints (2-CSP's).
More concretely, we show for every 2-CSP instance I a rounding algorithm for
r rounds of the Lasserre SDP hierarchy for I that obtains an integral solution
that is at most \eps worse than the relaxation's value (normalized to lie in
[0,1]), as long as r > k\cdot\rank_{\geq \theta}(\Ins)/\poly(\e) \;, where k is
the alphabet size of I, $\theta=\poly(\e/k)$, and $\rank_{\geq \theta}(\Ins)$
denotes the number of eigenvalues larger than $\theta$ in the normalized
adjacency matrix of the constraint graph of $\Ins$.
In the case that $\Ins$ is a \uniquegames instance, the threshold $\theta$ is
only a polynomial in $\e$, and is independent of the alphabet size. Also in
this case, we can give a non-trivial bound on the number of rounds for
\emph{every} instance. In particular our result yields an SDP-hierarchy based
algorithm that matches the performance of the recent subexponential algorithm
of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a
natural family of instances, thus further restricting the set of possible hard
instances for Khot's Unique Games Conjecture.
Our algorithm actually requires less than the $n^{O(r)}$ constraints
specified by the $r^{th}$ level of the Lasserre hierarchy, and in some cases
$r$ rounds of our program can be evaluated in time $2^{O(r)}\poly(n)$.
|
[
{
"created": "Mon, 25 Apr 2011 04:58:50 GMT",
"version": "v1"
}
] |
2011-04-26
|
[
[
"Barak",
"Boaz",
""
],
[
"Raghavendra",
"Prasad",
""
],
[
"Steurer",
"David",
""
]
] |
We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSP's). More concretely, we show for every 2-CSP instance I a rounding algorithm for r rounds of the Lasserre SDP hierarchy for I that obtains an integral solution that is at most \eps worse than the relaxation's value (normalized to lie in [0,1]), as long as r > k\cdot\rank_{\geq \theta}(\Ins)/\poly(\e) \;, where k is the alphabet size of I, $\theta=\poly(\e/k)$, and $\rank_{\geq \theta}(\Ins)$ denotes the number of eigenvalues larger than $\theta$ in the normalized adjacency matrix of the constraint graph of $\Ins$. In the case that $\Ins$ is a \uniquegames instance, the threshold $\theta$ is only a polynomial in $\e$, and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for \emph{every} instance. In particular our result yields an SDP-hierarchy based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires less than the $n^{O(r)}$ constraints specified by the $r^{th}$ level of the Lasserre hierarchy, and in some cases $r$ rounds of our program can be evaluated in time $2^{O(r)}\poly(n)$.
|
2205.02125
|
Yongsheng Bai
|
Yongsheng Bai, Bing Zha, Halil Sezen and Alper Yilmaz
|
Engineering deep learning methods on automatic detection of damage in
infrastructure due to extreme events
|
Thanks for the revivers' help for improving this paper. Structural
Health Monitoring (2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a few comprehensive experimental studies for automated
Structural Damage Detection (SDD) in extreme events using deep learning methods
for processing 2D images. In the first study, a 152-layer Residual network
(ResNet) is utilized to classify multiple classes in eight SDD tasks, which
include identification of scene levels, damage levels, material types, etc. The
proposed ResNet achieved high accuracy for each task while the positions of the
damage are not identifiable. In the second study, the existing ResNet and a
segmentation network (U-Net) are combined into a new pipeline, cascaded
networks, for categorizing and locating structural damage. The results show
that the accuracy of damage detection is significantly improved compared to
only using a segmentation network. In the third and fourth studies, end-to-end
networks are developed and tested as a new solution to directly detect cracks
and spalling in the image collections of recent large earthquakes. One of the
proposed networks can achieve an accuracy above 67.6% for all tested images at
various scales and resolutions, and shows its robustness for these human-free
detection tasks. As a preliminary field study, we applied the proposed method
to detect damage in a concrete structure that was tested to study its
progressive collapse performance. The experiments indicate that these solutions
for automatic detection of structural damage using deep learning methods are
feasible and promising. The training datasets and codes will be made available
for the public upon the publication of this paper.
|
[
{
"created": "Sun, 1 May 2022 19:55:56 GMT",
"version": "v1"
}
] |
2022-05-05
|
[
[
"Bai",
"Yongsheng",
""
],
[
"Zha",
"Bing",
""
],
[
"Sezen",
"Halil",
""
],
[
"Yilmaz",
"Alper",
""
]
] |
This paper presents a few comprehensive experimental studies for automated Structural Damage Detection (SDD) in extreme events using deep learning methods for processing 2D images. In the first study, a 152-layer Residual network (ResNet) is utilized to classify multiple classes in eight SDD tasks, which include identification of scene levels, damage levels, material types, etc. The proposed ResNet achieved high accuracy for each task while the positions of the damage are not identifiable. In the second study, the existing ResNet and a segmentation network (U-Net) are combined into a new pipeline, cascaded networks, for categorizing and locating structural damage. The results show that the accuracy of damage detection is significantly improved compared to only using a segmentation network. In the third and fourth studies, end-to-end networks are developed and tested as a new solution to directly detect cracks and spalling in the image collections of recent large earthquakes. One of the proposed networks can achieve an accuracy above 67.6% for all tested images at various scales and resolutions, and shows its robustness for these human-free detection tasks. As a preliminary field study, we applied the proposed method to detect damage in a concrete structure that was tested to study its progressive collapse performance. The experiments indicate that these solutions for automatic detection of structural damage using deep learning methods are feasible and promising. The training datasets and codes will be made available for the public upon the publication of this paper.
|
2310.04426
|
Jamal El-Ouahi
|
Jamal El-Ouahi
|
Research Funding in the Middle East and North Africa: Analyses of
Acknowledgments in Scientific Publications indexed in the Web of Science
(2008-2021)
|
34 pages, 7 figures, 8 tables
| null |
10.1007/s11192-024-04983-8
| null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Funding acknowledgments are important objects of study in the context of
science funding. This study uses a mixed-methods approach to analyze the
funding acknowledgments found in 2.3 million scientific publications published
between 2008 and 2021 by authors affiliated with research institutions located
in the Middle Eastern and North Africa (MENA). The aim is to identify the major
funders, assess their contribution to national scientific publications, and
gain insights into the funding mechanism in relation to collaboration and
publication. Publication data from the Web of Science is examined to provide
key insights about funding activities. Saudi Arabia and Qatar lead the region,
as about half of their publications include acknowledgments to funding sources.
Most MENA countries exhibit strong linkages with foreign agencies, mainly due
to a high level of international collaborations. The distinction between
domestic and international publications reveals some differences in terms of
funding structures. For instance, Turkey and Iran are dominated by one or two
major funders whereas a few other countries like Saudi Arabia showcase multiple
funders. Iran and Kuwait are examples of countries where research is mainly
funded by domestic funders. The government and academic sectors mainly fund
scientific research in MENA whereas the industry sector plays little or no role
in terms of research funding. Lastly, the qualitative analyses provide more
context into the complex funding mechanism. The findings of this study
contribute to a better understanding of the funding structure in MENA countries
and provide insights to funders and research managers to evaluate the funding
landscape.
|
[
{
"created": "Mon, 18 Sep 2023 07:29:52 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jan 2024 07:11:58 GMT",
"version": "v2"
},
{
"created": "Thu, 16 May 2024 04:45:06 GMT",
"version": "v3"
}
] |
2024-05-17
|
[
[
"El-Ouahi",
"Jamal",
""
]
] |
Funding acknowledgments are important objects of study in the context of science funding. This study uses a mixed-methods approach to analyze the funding acknowledgments found in 2.3 million scientific publications published between 2008 and 2021 by authors affiliated with research institutions located in the Middle Eastern and North Africa (MENA). The aim is to identify the major funders, assess their contribution to national scientific publications, and gain insights into the funding mechanism in relation to collaboration and publication. Publication data from the Web of Science is examined to provide key insights about funding activities. Saudi Arabia and Qatar lead the region, as about half of their publications include acknowledgments to funding sources. Most MENA countries exhibit strong linkages with foreign agencies, mainly due to a high level of international collaborations. The distinction between domestic and international publications reveals some differences in terms of funding structures. For instance, Turkey and Iran are dominated by one or two major funders whereas a few other countries like Saudi Arabia showcase multiple funders. Iran and Kuwait are examples of countries where research is mainly funded by domestic funders. The government and academic sectors mainly fund scientific research in MENA whereas the industry sector plays little or no role in terms of research funding. Lastly, the qualitative analyses provide more context into the complex funding mechanism. The findings of this study contribute to a better understanding of the funding structure in MENA countries and provide insights to funders and research managers to evaluate the funding landscape.
|
1406.5597
|
Anando Chatterjee
|
A. G. Chatterjee, M. K. Verma, and M. Chaudhuri
|
Transpose-free Fast Fourier Transform for Turbulence Simulation
| null | null | null | null |
cs.MS cs.CE cs.DS physics.comp-ph physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pseudo-spectral method is one of the most accurate techniques for simulating
turbulent flows. Fast Fourier transform (FFT) is an integral part of this
method. In this paper, we present a new procedure to compute FFT in which we
save operations during interprocess communications by avoiding transpose of the
array. As a result, our transpose-free FFT is 15\% to 20\% faster than FFTW.
|
[
{
"created": "Sat, 21 Jun 2014 11:19:59 GMT",
"version": "v1"
}
] |
2014-06-24
|
[
[
"Chatterjee",
"A. G.",
""
],
[
"Verma",
"M. K.",
""
],
[
"Chaudhuri",
"M.",
""
]
] |
Pseudo-spectral method is one of the most accurate techniques for simulating turbulent flows. Fast Fourier transform (FFT) is an integral part of this method. In this paper, we present a new procedure to compute FFT in which we save operations during interprocess communications by avoiding transpose of the array. As a result, our transpose-free FFT is 15\% to 20\% faster than FFTW.
|
1103.5254
|
Kevin Waugh
|
Kevin Waugh and Brian D. Ziebart and J. Andrew Bagnell
|
Computational Rationalization: The Inverse Equilibrium Problem
|
8 pages, 4 page appendix, ICML 2011
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling the purposeful behavior of imperfect agents from a small number of
observations is a challenging task. When restricted to the single-agent
decision-theoretic setting, inverse optimal control techniques assume that
observed behavior is an approximately optimal solution to an unknown decision
problem. These techniques learn a utility function that explains the example
behavior and can then be used to accurately predict or imitate future behavior
in similar observed or unobserved situations.
In this work, we consider similar tasks in competitive and cooperative
multi-agent domains. Here, unlike single-agent settings, a player cannot
myopically maximize its reward --- it must speculate on how the other agents
may act to influence the game's outcome. Employing the game-theoretic notion of
regret and the principle of maximum entropy, we introduce a technique for
predicting and generalizing behavior, as well as recovering a reward function
in these domains.
|
[
{
"created": "Sun, 27 Mar 2011 22:13:15 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Mar 2011 19:13:06 GMT",
"version": "v2"
},
{
"created": "Fri, 6 May 2011 20:41:14 GMT",
"version": "v3"
}
] |
2015-03-19
|
[
[
"Waugh",
"Kevin",
""
],
[
"Ziebart",
"Brian D.",
""
],
[
"Bagnell",
"J. Andrew",
""
]
] |
Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task. When restricted to the single-agent decision-theoretic setting, inverse optimal control techniques assume that observed behavior is an approximately optimal solution to an unknown decision problem. These techniques learn a utility function that explains the example behavior and can then be used to accurately predict or imitate future behavior in similar observed or unobserved situations. In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike single-agent settings, a player cannot myopically maximize its reward --- it must speculate on how the other agents may act to influence the game's outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior, as well as recovering a reward function in these domains.
|
1001.3734
|
T.R. Gopalakrishnan Nair
|
Muthu Ramachandran, T.R. Gopalakrsihnan Nair, R. Selvarani
|
Software Components for Web Services
|
6 pages, 4 figures
|
Journal of Research & Industry, Volume 1, Issue 1, pp 1-6, 2008
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Service-oriented computing has emerged as the new area to address software as
a service. This paper proposes a model for component based development for
service-oriented systems and have created best practice guidelines on software
component design.
|
[
{
"created": "Thu, 21 Jan 2010 06:56:10 GMT",
"version": "v1"
}
] |
2016-09-08
|
[
[
"Ramachandran",
"Muthu",
""
],
[
"Nair",
"T. R. Gopalakrsihnan",
""
],
[
"Selvarani",
"R.",
""
]
] |
Service-oriented computing has emerged as the new area to address software as a service. This paper proposes a model for component based development for service-oriented systems and have created best practice guidelines on software component design.
|
2111.14297
|
Panjian Huang
|
Panjian Huang, Xu Liu and Yongzhen Huang
|
Data Augmentation For Medical MR Image Using Generative Adversarial
Networks
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer-assisted diagnosis (CAD) based on deep learning has become a crucial
diagnostic technology in the medical industry, effectively improving diagnosis
accuracy. However, the scarcity of brain tumor Magnetic Resonance (MR) image
datasets causes the low performance of deep learning algorithms. The
distribution of transformed images generated by traditional data augmentation
(DA) intrinsically resembles the original ones, resulting in a limited
performance in terms of generalization ability. This work improves Progressive
Growing of GANs with a structural similarity loss function (PGGAN-SSIM) to
solve image blurriness problems and model collapse. We also explore other
GAN-based data augmentation to demonstrate the effectiveness of the proposed
model. Our results show that PGGAN-SSIM successfully generates 256x256
realistic brain tumor MR images which fill the real image distribution
uncovered by the original dataset. Furthermore, PGGAN-SSIM exceeds other
GAN-based methods, achieving promising performance improvement in Frechet
Inception Distance (FID) and Multi-scale Structural Similarity (MS-SSIM).
|
[
{
"created": "Mon, 29 Nov 2021 01:59:50 GMT",
"version": "v1"
}
] |
2021-11-30
|
[
[
"Huang",
"Panjian",
""
],
[
"Liu",
"Xu",
""
],
[
"Huang",
"Yongzhen",
""
]
] |
Computer-assisted diagnosis (CAD) based on deep learning has become a crucial diagnostic technology in the medical industry, effectively improving diagnosis accuracy. However, the scarcity of brain tumor Magnetic Resonance (MR) image datasets causes the low performance of deep learning algorithms. The distribution of transformed images generated by traditional data augmentation (DA) intrinsically resembles the original ones, resulting in a limited performance in terms of generalization ability. This work improves Progressive Growing of GANs with a structural similarity loss function (PGGAN-SSIM) to solve image blurriness problems and model collapse. We also explore other GAN-based data augmentation to demonstrate the effectiveness of the proposed model. Our results show that PGGAN-SSIM successfully generates 256x256 realistic brain tumor MR images which fill the real image distribution uncovered by the original dataset. Furthermore, PGGAN-SSIM exceeds other GAN-based methods, achieving promising performance improvement in Frechet Inception Distance (FID) and Multi-scale Structural Similarity (MS-SSIM).
|
2112.01155
|
Junghun Oh
|
Junghun Oh, Heewon Kim, Sungyong Baik, Cheeun Hong and Kyoung Mu Lee
|
Batch Normalization Tells You Which Filter is Important
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of filter pruning is to search for unimportant filters to remove in
order to make convolutional neural networks (CNNs) efficient without
sacrificing the performance in the process. The challenge lies in finding
information that can help determine how important or relevant each filter is
with respect to the final output of neural networks. In this work, we share our
observation that the batch normalization (BN) parameters of pre-trained CNNs
can be used to estimate the feature distribution of activation outputs, without
processing of training data. Upon observation, we propose a simple yet
effective filter pruning method by evaluating the importance of each filter
based on the BN parameters of pre-trained CNNs. The experimental results on
CIFAR-10 and ImageNet demonstrate that the proposed method can achieve
outstanding performance with and without fine-tuning in terms of the trade-off
between the accuracy drop and the reduction in computational complexity and
number of parameters of pruned networks.
|
[
{
"created": "Thu, 2 Dec 2021 12:04:59 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Apr 2022 09:22:50 GMT",
"version": "v2"
}
] |
2022-04-26
|
[
[
"Oh",
"Junghun",
""
],
[
"Kim",
"Heewon",
""
],
[
"Baik",
"Sungyong",
""
],
[
"Hong",
"Cheeun",
""
],
[
"Lee",
"Kyoung Mu",
""
]
] |
The goal of filter pruning is to search for unimportant filters to remove in order to make convolutional neural networks (CNNs) efficient without sacrificing the performance in the process. The challenge lies in finding information that can help determine how important or relevant each filter is with respect to the final output of neural networks. In this work, we share our observation that the batch normalization (BN) parameters of pre-trained CNNs can be used to estimate the feature distribution of activation outputs, without processing of training data. Upon observation, we propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs. The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance with and without fine-tuning in terms of the trade-off between the accuracy drop and the reduction in computational complexity and number of parameters of pruned networks.
|
2309.10505
|
Muah Kim
|
Muah Kim, Rick Fritschek, and Rafael F. Schaefer
|
Diffusion Models for Accurate Channel Distribution Generation
|
13 pages, 6 figures, preprint
| null | null | null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strong generative models can accurately learn channel distributions. This
could save recurring costs for physical measurements of the channel. Moreover,
the resulting differentiable channel model supports training neural encoders by
enabling gradient-based optimization. The initial approach in the literature
draws upon the modern advancements in image generation, utilizing generative
adversarial networks (GANs) or their enhanced variants to generate channel
distributions. In this paper, we address this channel approximation challenge
with diffusion models (DMs), which have demonstrated high sample quality and
mode coverage in image generation. In addition to testing the generative
performance of the channel distributions, we use an end-to-end (E2E)
coded-modulation framework underpinned by DMs and propose an efficient training
algorithm. Our simulations with various channel models show that a DM can
accurately learn channel distributions, enabling an E2E framework to achieve
near-optimal symbol error rates (SERs). Furthermore, we examine the trade-off
between mode coverage and sampling speed through skipped sampling using sliced
Wasserstein distance (SWD) and the E2E SER. We investigate the effect of noise
scheduling on this trade-off, demonstrating that with an appropriate choice of
parameters and techniques, sampling time can be significantly reduced with a
minor increase in SWD and SER. Finally, we show that the DM can generate a
correlated fading channel, whereas a strong GAN variant fails to learn the
covariance. This paper highlights the potential benefits of using DMs for
learning channel distributions, which could be further investigated for various
channels and advanced techniques of DMs.
|
[
{
"created": "Tue, 19 Sep 2023 10:35:54 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Sep 2023 14:45:03 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Jun 2024 21:30:35 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Jun 2024 04:01:00 GMT",
"version": "v4"
}
] |
2024-06-12
|
[
[
"Kim",
"Muah",
""
],
[
"Fritschek",
"Rick",
""
],
[
"Schaefer",
"Rafael F.",
""
]
] |
Strong generative models can accurately learn channel distributions. This could save recurring costs for physical measurements of the channel. Moreover, the resulting differentiable channel model supports training neural encoders by enabling gradient-based optimization. The initial approach in the literature draws upon the modern advancements in image generation, utilizing generative adversarial networks (GANs) or their enhanced variants to generate channel distributions. In this paper, we address this channel approximation challenge with diffusion models (DMs), which have demonstrated high sample quality and mode coverage in image generation. In addition to testing the generative performance of the channel distributions, we use an end-to-end (E2E) coded-modulation framework underpinned by DMs and propose an efficient training algorithm. Our simulations with various channel models show that a DM can accurately learn channel distributions, enabling an E2E framework to achieve near-optimal symbol error rates (SERs). Furthermore, we examine the trade-off between mode coverage and sampling speed through skipped sampling using sliced Wasserstein distance (SWD) and the E2E SER. We investigate the effect of noise scheduling on this trade-off, demonstrating that with an appropriate choice of parameters and techniques, sampling time can be significantly reduced with a minor increase in SWD and SER. Finally, we show that the DM can generate a correlated fading channel, whereas a strong GAN variant fails to learn the covariance. This paper highlights the potential benefits of using DMs for learning channel distributions, which could be further investigated for various channels and advanced techniques of DMs.
|
2208.03084
|
Federico Simonetta
|
Alessandro Maria Poir\`e, Federico Simonetta, Stavros Ntalampiras
|
Deep Feature Learning for Medical Acoustics
|
Published at ICANN 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The purpose of this paper is to compare different learnable frontends in
medical acoustics tasks. A framework has been implemented to classify human
respiratory sounds and heartbeats in two categories, i.e. healthy or affected
by pathologies. After obtaining two suitable datasets, we proceeded to classify
the sounds using two learnable state-of-art frontends -- LEAF and nnAudio --
plus a non-learnable baseline frontend, i.e. Mel-filterbanks. The computed
features are then fed into two different CNN models, namely VGG16 and
EfficientNet. The frontends are carefully benchmarked in terms of the number of
parameters, computational resources, and effectiveness.
This work demonstrates how the integration of learnable frontends in neural
audio classification systems may improve performance, especially in the field
of medical acoustics. However, the usage of such frameworks makes the needed
amount of data even larger. Consequently, they are useful if the amount of data
available for training is adequately large to assist the feature learning
process.
|
[
{
"created": "Fri, 5 Aug 2022 10:39:37 GMT",
"version": "v1"
}
] |
2022-08-08
|
[
[
"Poirè",
"Alessandro Maria",
""
],
[
"Simonetta",
"Federico",
""
],
[
"Ntalampiras",
"Stavros",
""
]
] |
The purpose of this paper is to compare different learnable frontends in medical acoustics tasks. A framework has been implemented to classify human respiratory sounds and heartbeats in two categories, i.e. healthy or affected by pathologies. After obtaining two suitable datasets, we proceeded to classify the sounds using two learnable state-of-art frontends -- LEAF and nnAudio -- plus a non-learnable baseline frontend, i.e. Mel-filterbanks. The computed features are then fed into two different CNN models, namely VGG16 and EfficientNet. The frontends are carefully benchmarked in terms of the number of parameters, computational resources, and effectiveness. This work demonstrates how the integration of learnable frontends in neural audio classification systems may improve performance, especially in the field of medical acoustics. However, the usage of such frameworks makes the needed amount of data even larger. Consequently, they are useful if the amount of data available for training is adequately large to assist the feature learning process.
|
2407.02877
|
Zhiqiang Wei
|
Zhiqiang Wei and Dongfang Xu and Shuangyang Li and Shenghui Song and
Derrick Wing Kwan Ng and Giuseppe Caire
|
Resource Allocation Design for Next-Generation Multiple Access: A
Tutorial Overview
|
69 pages, 10 figures, 5 tables
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Multiple access is the cornerstone technology for each generation of wireless
cellular networks and resource allocation design plays a crucial role in
multiple access. In this paper, we present a comprehensive tutorial overview
for junior researchers in this field, aiming to offer a foundational guide for
resource allocation design in the context of next-generation multiple access
(NGMA). Initially, we identify three types of channels in future wireless
cellular networks over which NGMA will be implemented, namely: natural
channels, reconfigurable channels, and functional channels. Natural channels
are traditional uplink and downlink communication channels; reconfigurable
channels are defined as channels that can be proactively reshaped via emerging
platforms or techniques, such as intelligent reflecting surface (IRS), unmanned
aerial vehicle (UAV), and movable/fluid antenna (M/FA); and functional channels
support not only communication but also other functionalities simultaneously,
with typical examples including integrated sensing and communication (ISAC) and
joint computing and communication (JCAC) channels. Then, we introduce NGMA
models applicable to these three types of channels that cover most of the
practical communication scenarios of future wireless communications.
Subsequently, we articulate the key optimization technical challenges inherent
in the resource allocation design for NGMA, categorizing them into
rate-oriented, power-oriented, and reliability-oriented resource allocation
designs. The corresponding optimization approaches for solving the formulated
resource allocation design problems are then presented. Finally, simulation
results are presented and discussed to elucidate the practical implications and
insights derived from resource allocation designs in NGMA.
|
[
{
"created": "Wed, 3 Jul 2024 07:45:39 GMT",
"version": "v1"
}
] |
2024-07-04
|
[
[
"Wei",
"Zhiqiang",
""
],
[
"Xu",
"Dongfang",
""
],
[
"Li",
"Shuangyang",
""
],
[
"Song",
"Shenghui",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
Multiple access is the cornerstone technology for each generation of wireless cellular networks and resource allocation design plays a crucial role in multiple access. In this paper, we present a comprehensive tutorial overview for junior researchers in this field, aiming to offer a foundational guide for resource allocation design in the context of next-generation multiple access (NGMA). Initially, we identify three types of channels in future wireless cellular networks over which NGMA will be implemented, namely: natural channels, reconfigurable channels, and functional channels. Natural channels are traditional uplink and downlink communication channels; reconfigurable channels are defined as channels that can be proactively reshaped via emerging platforms or techniques, such as intelligent reflecting surface (IRS), unmanned aerial vehicle (UAV), and movable/fluid antenna (M/FA); and functional channels support not only communication but also other functionalities simultaneously, with typical examples including integrated sensing and communication (ISAC) and joint computing and communication (JCAC) channels. Then, we introduce NGMA models applicable to these three types of channels that cover most of the practical communication scenarios of future wireless communications. Subsequently, we articulate the key optimization technical challenges inherent in the resource allocation design for NGMA, categorizing them into rate-oriented, power-oriented, and reliability-oriented resource allocation designs. The corresponding optimization approaches for solving the formulated resource allocation design problems are then presented. Finally, simulation results are presented and discussed to elucidate the practical implications and insights derived from resource allocation designs in NGMA.
|
2202.13033
|
Wei-Chang Yeh
|
WC Yeh, CL Huang, TY Hsu, Z Liu, SY Tan
|
A New BAT and PageRank algorithm for Propagation Probability in Social
Networks
| null | null | null | null |
cs.SI math.PR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Social networks have increasingly become important and popular in modern
times. Moreover, the influence of social networks plays a vital role in various
organizations including government organizations, academic research or
corporate organizations. Therefore, how to strategize the optimal propagation
strategy in social networks has also become more important. By increasing the
precision of evaluating the propagation probability of social network, it can
indirectly influence the investment of cost, manpower and time for information
propagation to achieve the best return. This study proposes a new algorithm,
which includes a scale-free network, Barabasi-Albert model,
Binary-Addition-Tree (BAT) algorithm, PageRank algorithm, personalized PageRank
algorithm and a new BAT algorithm, to calculate the propagation probability in
social networks. The results obtained after implementing the simulation
experiment of social network models show the studied model and the proposed
algorithm provide an effective method to increase the efficiency of information
propagation in social networks. In this way, the maximum propagation efficiency
is achieved with the minimum investment.
|
[
{
"created": "Sat, 26 Feb 2022 01:27:09 GMT",
"version": "v1"
}
] |
2022-03-01
|
[
[
"Yeh",
"WC",
""
],
[
"Huang",
"CL",
""
],
[
"Hsu",
"TY",
""
],
[
"Liu",
"Z",
""
],
[
"Tan",
"SY",
""
]
] |
Social networks have increasingly become important and popular in modern times. Moreover, the influence of social networks plays a vital role in various organizations including government organizations, academic research or corporate organizations. Therefore, how to strategize the optimal propagation strategy in social networks has also become more important. By increasing the precision of evaluating the propagation probability of social network, it can indirectly influence the investment of cost, manpower and time for information propagation to achieve the best return. This study proposes a new algorithm, which includes a scale-free network, Barabasi-Albert model, Binary-Addition-Tree (BAT) algorithm, PageRank algorithm, personalized PageRank algorithm and a new BAT algorithm, to calculate the propagation probability in social networks. The results obtained after implementing the simulation experiment of social network models show the studied model and the proposed algorithm provide an effective method to increase the efficiency of information propagation in social networks. In this way, the maximum propagation efficiency is achieved with the minimum investment.
|
1107.0018
|
A. Al-Ani
|
A. Al-Ani, M. Deriche
|
A New Technique for Combining Multiple Classifiers using The
Dempster-Shafer Theory of Evidence
| null |
Journal Of Artificial Intelligence Research, Volume 17, pages
333-361, 2002
|
10.1613/jair.1026
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new classifier combination technique based on the
Dempster-Shafer theory of evidence. The Dempster-Shafer theory of evidence is a
powerful method for combining measures of evidence from different classifiers.
However, since each of the available methods that estimates the evidence of
classifiers has its own limitations, we propose here a new implementation which
adapts to training data so that the overall mean square error is minimized. The
proposed technique is shown to outperform most available classifier combination
methods when tested on three different classification problems.
|
[
{
"created": "Thu, 30 Jun 2011 20:31:52 GMT",
"version": "v1"
}
] |
2011-07-04
|
[
[
"Al-Ani",
"A.",
""
],
[
"Deriche",
"M.",
""
]
] |
This paper presents a new classifier combination technique based on the Dempster-Shafer theory of evidence. The Dempster-Shafer theory of evidence is a powerful method for combining measures of evidence from different classifiers. However, since each of the available methods that estimates the evidence of classifiers has its own limitations, we propose here a new implementation which adapts to training data so that the overall mean square error is minimized. The proposed technique is shown to outperform most available classifier combination methods when tested on three different classification problems.
|
2403.05168
|
Hai Huang
|
Hai Huang, Yan Xia, Shengpeng Ji, Shulei Wang, Hanting Wang, Jieming
Zhu, Zhenhua Dong, Zhou Zhao
|
Unlocking the Potential of Multimodal Unified Discrete Representation
through Training-Free Codebook Optimization and Hierarchical Alignment
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in representation learning have demonstrated the significance
of multimodal alignment. The Dual Cross-modal Information Disentanglement
(DCID) model, utilizing a unified codebook, shows promising results in
achieving fine-grained representation and cross-modal generalization. However,
it is still hindered by equal treatment of all channels and neglect of minor
event information, resulting in interference from irrelevant channels and
limited performance in fine-grained tasks. Thus, in this work, We propose a
Training-free Optimization of Codebook (TOC) method to enhance model
performance by selecting important channels in the unified space without
retraining. Additionally, we introduce the Hierarchical Dual Cross-modal
Information Disentanglement (H-DCID) approach to extend information separation
and alignment to two levels, capturing more cross-modal details. The experiment
results demonstrate significant improvements across various downstream tasks,
with TOC contributing to an average improvement of 1.70% for DCID on four
tasks, and H-DCID surpassing DCID by an average of 3.64%. The combination of
TOC and H-DCID further enhances performance, exceeding DCID by 4.43%. These
findings highlight the effectiveness of our methods in facilitating robust and
nuanced cross-modal learning, opening avenues for future enhancements. The
source code and pre-trained models can be accessed at
https://github.com/haihuangcode/TOC_H-DCID.
|
[
{
"created": "Fri, 8 Mar 2024 09:16:47 GMT",
"version": "v1"
}
] |
2024-03-11
|
[
[
"Huang",
"Hai",
""
],
[
"Xia",
"Yan",
""
],
[
"Ji",
"Shengpeng",
""
],
[
"Wang",
"Shulei",
""
],
[
"Wang",
"Hanting",
""
],
[
"Zhu",
"Jieming",
""
],
[
"Dong",
"Zhenhua",
""
],
[
"Zhao",
"Zhou",
""
]
] |
Recent advances in representation learning have demonstrated the significance of multimodal alignment. The Dual Cross-modal Information Disentanglement (DCID) model, utilizing a unified codebook, shows promising results in achieving fine-grained representation and cross-modal generalization. However, it is still hindered by equal treatment of all channels and neglect of minor event information, resulting in interference from irrelevant channels and limited performance in fine-grained tasks. Thus, in this work, We propose a Training-free Optimization of Codebook (TOC) method to enhance model performance by selecting important channels in the unified space without retraining. Additionally, we introduce the Hierarchical Dual Cross-modal Information Disentanglement (H-DCID) approach to extend information separation and alignment to two levels, capturing more cross-modal details. The experiment results demonstrate significant improvements across various downstream tasks, with TOC contributing to an average improvement of 1.70% for DCID on four tasks, and H-DCID surpassing DCID by an average of 3.64%. The combination of TOC and H-DCID further enhances performance, exceeding DCID by 4.43%. These findings highlight the effectiveness of our methods in facilitating robust and nuanced cross-modal learning, opening avenues for future enhancements. The source code and pre-trained models can be accessed at https://github.com/haihuangcode/TOC_H-DCID.
|
2005.00723
|
Qiang Yu
|
Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay
Chen Tan
|
Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning
|
13 pages
| null |
10.1109/TCYB.2020.2984888
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spikes are the currency in central nervous systems for information
transmission and processing. They are also believed to play an essential role
in low-power consumption of the biological systems, whose efficiency attracts
increasing attentions to the field of neuromorphic computing. However,
efficient processing and learning of discrete spikes still remains as a
challenging problem. In this paper, we make our contributions towards this
direction. A simplified spiking neuron model is firstly introduced with effects
of both synaptic input and firing output on membrane potential being modeled
with an impulse function. An event-driven scheme is then presented to further
improve the processing efficiency. Based on the neuron model, we propose two
new multi-spike learning rules which demonstrate better performance over other
baselines on various tasks including association, classification, feature
detection. In addition to efficiency, our learning rules demonstrate a high
robustness against strong noise of different types. They can also be
generalized to different spike coding schemes for the classification task, and
notably single neuron is capable of solving multi-category classifications with
our learning rules. In the feature detection task, we re-examine the ability of
unsupervised STDP with its limitations being presented, and find a new
phenomenon of losing selectivity. In contrast, our proposed learning rules can
reliably solve the task over a wide range of conditions without specific
constraints being applied. Moreover, our rules can not only detect features but
also discriminate them. The improved performance of our methods would
contribute to neuromorphic computing as a preferable choice.
|
[
{
"created": "Sat, 2 May 2020 06:41:20 GMT",
"version": "v1"
}
] |
2020-05-05
|
[
[
"Yu",
"Qiang",
""
],
[
"Li",
"Shenglan",
""
],
[
"Tang",
"Huajin",
""
],
[
"Wang",
"Longbiao",
""
],
[
"Dang",
"Jianwu",
""
],
[
"Tan",
"Kay Chen",
""
]
] |
Spikes are the currency in central nervous systems for information transmission and processing. They are also believed to play an essential role in low-power consumption of the biological systems, whose efficiency attracts increasing attentions to the field of neuromorphic computing. However, efficient processing and learning of discrete spikes still remains as a challenging problem. In this paper, we make our contributions towards this direction. A simplified spiking neuron model is firstly introduced with effects of both synaptic input and firing output on membrane potential being modeled with an impulse function. An event-driven scheme is then presented to further improve the processing efficiency. Based on the neuron model, we propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks including association, classification, feature detection. In addition to efficiency, our learning rules demonstrate a high robustness against strong noise of different types. They can also be generalized to different spike coding schemes for the classification task, and notably single neuron is capable of solving multi-category classifications with our learning rules. In the feature detection task, we re-examine the ability of unsupervised STDP with its limitations being presented, and find a new phenomenon of losing selectivity. In contrast, our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied. Moreover, our rules can not only detect features but also discriminate them. The improved performance of our methods would contribute to neuromorphic computing as a preferable choice.
|
1403.4722
|
Abhik Banerjee
|
Abhik Banerjee, Abhirup Ghosh, Koustuvmoni Bharadwaj, Hemanta Saikia
|
Mouse Control using a Web Camera based on Colour Detection
|
10 pages, 6 figures, 1 flowchart, "Published with International
Journal of Computer Trends and Technology (IJCTT)"
|
International Journal of Computer Trends and Technology (IJCTT)
V9(1):15-20,March 2014.ISSN:2231-2803 Published by Seventh Sense Research
Group
|
10.14445/22312803/IJCTT-V9P104
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an approach for Human computer Interaction (HCI),
where we have tried to control the mouse cursor movement and click events of
the mouse using hand gestures. Hand gestures were acquired using a camera based
on colour detection technique. This method mainly focuses on the use of a Web
Camera to develop a virtual human computer interaction device in a cost
effective manner.
|
[
{
"created": "Wed, 19 Mar 2014 07:40:55 GMT",
"version": "v1"
}
] |
2014-03-20
|
[
[
"Banerjee",
"Abhik",
""
],
[
"Ghosh",
"Abhirup",
""
],
[
"Bharadwaj",
"Koustuvmoni",
""
],
[
"Saikia",
"Hemanta",
""
]
] |
In this paper we present an approach for Human computer Interaction (HCI), where we have tried to control the mouse cursor movement and click events of the mouse using hand gestures. Hand gestures were acquired using a camera based on colour detection technique. This method mainly focuses on the use of a Web Camera to develop a virtual human computer interaction device in a cost effective manner.
|
2408.01916
|
Yumeng Jin
|
Leilei Lin, Yumeng Jin, Yingming Zhou, Wenlong Chen, Chen Qian
|
MAO: A Framework for Process Model Generation with Multi-Agent
Orchestration
| null | null | null | null |
cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Process models are frequently used in software engineering to describe
business requirements, guide software testing and control system improvement.
However, traditional process modeling methods often require the participation
of numerous experts, which is expensive and time-consuming. Therefore, the
exploration of a more efficient and cost-effective automated modeling method
has emerged as a focal point in current research. This article explores a
framework for automatically generating process models with multi-agent
orchestration (MAO), aiming to enhance the efficiency of process modeling and
offer valuable insights for domain experts. Our framework MAO leverages large
language models as the cornerstone for multi-agent, employing an innovative
prompt strategy to ensure efficient collaboration among multi-agent.
Specifically, 1) generation. The first phase of MAO is to generate a slightly
rough process model from the text description; 2) refinement. The agents would
continuously refine the initial process model through multiple rounds of
dialogue; 3) reviewing. Large language models are prone to hallucination
phenomena among multi-turn dialogues, so the agents need to review and repair
semantic hallucinations in process models; 4) testing. The representation of
process models is diverse. Consequently, the agents utilize external tools to
test whether the generated process model contains format errors, namely format
hallucinations, and then adjust the process model to conform to the output
paradigm. The experiments demonstrate that the process models generated by our
framework outperform existing methods and surpass manual modeling by 89%, 61%,
52%, and 75% on four different datasets, respectively.
|
[
{
"created": "Sun, 4 Aug 2024 03:32:17 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Aug 2024 10:37:38 GMT",
"version": "v2"
}
] |
2024-08-08
|
[
[
"Lin",
"Leilei",
""
],
[
"Jin",
"Yumeng",
""
],
[
"Zhou",
"Yingming",
""
],
[
"Chen",
"Wenlong",
""
],
[
"Qian",
"Chen",
""
]
] |
Process models are frequently used in software engineering to describe business requirements, guide software testing and control system improvement. However, traditional process modeling methods often require the participation of numerous experts, which is expensive and time-consuming. Therefore, the exploration of a more efficient and cost-effective automated modeling method has emerged as a focal point in current research. This article explores a framework for automatically generating process models with multi-agent orchestration (MAO), aiming to enhance the efficiency of process modeling and offer valuable insights for domain experts. Our framework MAO leverages large language models as the cornerstone for multi-agent, employing an innovative prompt strategy to ensure efficient collaboration among multi-agent. Specifically, 1) generation. The first phase of MAO is to generate a slightly rough process model from the text description; 2) refinement. The agents would continuously refine the initial process model through multiple rounds of dialogue; 3) reviewing. Large language models are prone to hallucination phenomena among multi-turn dialogues, so the agents need to review and repair semantic hallucinations in process models; 4) testing. The representation of process models is diverse. Consequently, the agents utilize external tools to test whether the generated process model contains format errors, namely format hallucinations, and then adjust the process model to conform to the output paradigm. The experiments demonstrate that the process models generated by our framework outperform existing methods and surpass manual modeling by 89%, 61%, 52%, and 75% on four different datasets, respectively.
|
2310.02964
|
Zihan Liu
|
Zihan Liu, Ge Wang, Jiaqi Wang, Jiangbin Zheng, Stan Z. Li
|
Co-modeling the Sequential and Graphical Routes for Peptide
Representation Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Peptides are formed by the dehydration condensation of multiple amino acids.
The primary structure of a peptide can be represented either as an amino acid
sequence or as a molecular graph consisting of atoms and chemical bonds.
Previous studies have indicated that deep learning routes specific to
sequential and graphical peptide forms exhibit comparable performance on
downstream tasks. Despite the fact that these models learn representations of
the same modality of peptides, we find that they explain their predictions
differently. Considering sequential and graphical models as two experts making
inferences from different perspectives, we work on fusing expert knowledge to
enrich the learned representations for improving the discriminative
performance. To achieve this, we propose a peptide co-modeling method, RepCon,
which employs a contrastive learning-based framework to enhance the mutual
information of representations from decoupled sequential and graphical
end-to-end models. It considers representations from the sequential encoder and
the graphical encoder for the same peptide sample as a positive pair and learns
to enhance the consistency of representations between positive sample pairs and
to repel representations between negative pairs. Empirical studies of RepCon
and other co-modeling methods are conducted on open-source discriminative
datasets, including aggregation propensity, retention time, antimicrobial
peptide prediction, and family classification from Peptide Database. Our
results demonstrate the superiority of the co-modeling approach over
independent modeling, as well as the superiority of RepCon over other methods
under the co-modeling framework. In addition, the attribution on RepCon further
corroborates the validity of the approach at the level of model explanation.
|
[
{
"created": "Wed, 4 Oct 2023 16:58:25 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Oct 2023 12:42:25 GMT",
"version": "v2"
}
] |
2023-10-06
|
[
[
"Liu",
"Zihan",
""
],
[
"Wang",
"Ge",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Zheng",
"Jiangbin",
""
],
[
"Li",
"Stan Z.",
""
]
] |
Peptides are formed by the dehydration condensation of multiple amino acids. The primary structure of a peptide can be represented either as an amino acid sequence or as a molecular graph consisting of atoms and chemical bonds. Previous studies have indicated that deep learning routes specific to sequential and graphical peptide forms exhibit comparable performance on downstream tasks. Despite the fact that these models learn representations of the same modality of peptides, we find that they explain their predictions differently. Considering sequential and graphical models as two experts making inferences from different perspectives, we work on fusing expert knowledge to enrich the learned representations for improving the discriminative performance. To achieve this, we propose a peptide co-modeling method, RepCon, which employs a contrastive learning-based framework to enhance the mutual information of representations from decoupled sequential and graphical end-to-end models. It considers representations from the sequential encoder and the graphical encoder for the same peptide sample as a positive pair and learns to enhance the consistency of representations between positive sample pairs and to repel representations between negative pairs. Empirical studies of RepCon and other co-modeling methods are conducted on open-source discriminative datasets, including aggregation propensity, retention time, antimicrobial peptide prediction, and family classification from Peptide Database. Our results demonstrate the superiority of the co-modeling approach over independent modeling, as well as the superiority of RepCon over other methods under the co-modeling framework. In addition, the attribution on RepCon further corroborates the validity of the approach at the level of model explanation.
|
1505.00752
|
Asbj{\o}rn Br{\ae}ndeland
|
Asbj{\o}rn Br{\ae}ndeland
|
A family of greedy algorithms for finding maximum independent sets
|
4 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The greedy algorithm A iterates over a set of uniformly sized independent
sets of a given graph G and checks for each set S which non-neighbor of S, if
any, is best suited to be added to S, until no more suitable non-neighbors are
found for any of the sets. The algorithms receives as arguments the graph, the
heuristic used to evaluate the independent set candidates, and the initial
cardinality of the independent sets, and returns the final set of independent
sets.
|
[
{
"created": "Mon, 4 May 2015 18:56:55 GMT",
"version": "v1"
}
] |
2015-05-05
|
[
[
"Brændeland",
"Asbjørn",
""
]
] |
The greedy algorithm A iterates over a set of uniformly sized independent sets of a given graph G and checks for each set S which non-neighbor of S, if any, is best suited to be added to S, until no more suitable non-neighbors are found for any of the sets. The algorithms receives as arguments the graph, the heuristic used to evaluate the independent set candidates, and the initial cardinality of the independent sets, and returns the final set of independent sets.
|
2307.02729
|
Yuheng Zha
|
Yuheng Zha, Yichi Yang, Ruichen Li, Zhiting Hu
|
Text Alignment Is An Efficient Unified Model for Massive NLP Tasks
|
NeurIPS 2023 Camera Ready, Code available at
https://github.com/yuh-zha/Align
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs), typically designed as a function of next-word
prediction, have excelled across extensive NLP tasks. Despite the generality,
next-word prediction is often not an efficient formulation for many of the
tasks, demanding an extreme scale of model parameters (10s or 100s of billions)
and sometimes yielding suboptimal performance. In practice, it is often
desirable to build more efficient models -- despite being less versatile, they
still apply to a substantial subset of problems, delivering on par or even
superior performance with much smaller model sizes. In this paper, we propose
text alignment as an efficient unified model for a wide range of crucial tasks
involving text entailment, similarity, question answering (and answerability),
factual consistency, and so forth. Given a pair of texts, the model measures
the degree of alignment between their information. We instantiate an alignment
model (Align) through lightweight finetuning of RoBERTa (355M parameters) using
5.9M examples from 28 datasets. Despite its compact size, extensive experiments
show the model's efficiency and strong performance: (1) On over 20 datasets of
aforementioned diverse tasks, the model matches or surpasses FLAN-T5 models
that have around 2x or 10x more parameters; the single unified model also
outperforms task-specific models finetuned on individual datasets; (2) When
applied to evaluate factual consistency of language generation on 23 datasets,
our model improves over various baselines, including the much larger GPT-3.5
(ChatGPT) and sometimes even GPT-4; (3) The lightweight model can also serve as
an add-on component for LLMs such as GPT-3.5 in question answering tasks,
improving the average exact match (EM) score by 17.94 and F1 score by 15.05
through identifying unanswerable questions.
|
[
{
"created": "Thu, 6 Jul 2023 02:28:31 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Nov 2023 03:49:19 GMT",
"version": "v2"
}
] |
2023-11-03
|
[
[
"Zha",
"Yuheng",
""
],
[
"Yang",
"Yichi",
""
],
[
"Li",
"Ruichen",
""
],
[
"Hu",
"Zhiting",
""
]
] |
Large language models (LLMs), typically designed as a function of next-word prediction, have excelled across extensive NLP tasks. Despite the generality, next-word prediction is often not an efficient formulation for many of the tasks, demanding an extreme scale of model parameters (10s or 100s of billions) and sometimes yielding suboptimal performance. In practice, it is often desirable to build more efficient models -- despite being less versatile, they still apply to a substantial subset of problems, delivering on par or even superior performance with much smaller model sizes. In this paper, we propose text alignment as an efficient unified model for a wide range of crucial tasks involving text entailment, similarity, question answering (and answerability), factual consistency, and so forth. Given a pair of texts, the model measures the degree of alignment between their information. We instantiate an alignment model (Align) through lightweight finetuning of RoBERTa (355M parameters) using 5.9M examples from 28 datasets. Despite its compact size, extensive experiments show the model's efficiency and strong performance: (1) On over 20 datasets of aforementioned diverse tasks, the model matches or surpasses FLAN-T5 models that have around 2x or 10x more parameters; the single unified model also outperforms task-specific models finetuned on individual datasets; (2) When applied to evaluate factual consistency of language generation on 23 datasets, our model improves over various baselines, including the much larger GPT-3.5 (ChatGPT) and sometimes even GPT-4; (3) The lightweight model can also serve as an add-on component for LLMs such as GPT-3.5 in question answering tasks, improving the average exact match (EM) score by 17.94 and F1 score by 15.05 through identifying unanswerable questions.
|
2011.01624
|
Andreas van Cranenburgh
|
Andreas van Cranenburgh, Corina Koolen
|
Results of a Single Blind Literary Taste Test with Short Anonymized
Novel Fragments
|
Accepted for LaTeCH 2020 @ COLING
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
It is an open question to what extent perceptions of literary quality are
derived from text-intrinsic versus social factors. While supervised models can
predict literary quality ratings from textual factors quite successfully, as
shown in the Riddle of Literary Quality project (Koolen et al., 2020), this
does not prove that social factors are not important, nor can we assume that
readers make judgments on literary quality in the same way and based on the
same information as machine learning models. We report the results of a pilot
study to gauge the effect of textual features on literary ratings of
Dutch-language novels by participants in a controlled experiment with 48
participants. In an exploratory analysis, we compare the ratings to those from
the large reader survey of the Riddle in which social factors were not
excluded, and to machine learning predictions of those literary ratings. We
find moderate to strong correlations of questionnaire ratings with the survey
ratings, but the predictions are closer to the survey ratings. Code and data:
https://github.com/andreasvc/litquest
|
[
{
"created": "Tue, 3 Nov 2020 11:10:17 GMT",
"version": "v1"
}
] |
2020-11-04
|
[
[
"van Cranenburgh",
"Andreas",
""
],
[
"Koolen",
"Corina",
""
]
] |
It is an open question to what extent perceptions of literary quality are derived from text-intrinsic versus social factors. While supervised models can predict literary quality ratings from textual factors quite successfully, as shown in the Riddle of Literary Quality project (Koolen et al., 2020), this does not prove that social factors are not important, nor can we assume that readers make judgments on literary quality in the same way and based on the same information as machine learning models. We report the results of a pilot study to gauge the effect of textual features on literary ratings of Dutch-language novels by participants in a controlled experiment with 48 participants. In an exploratory analysis, we compare the ratings to those from the large reader survey of the Riddle in which social factors were not excluded, and to machine learning predictions of those literary ratings. We find moderate to strong correlations of questionnaire ratings with the survey ratings, but the predictions are closer to the survey ratings. Code and data: https://github.com/andreasvc/litquest
|
1807.11456
|
Stevan Tomic
|
Stevan Tomic, Federico Pecora, Alessandro Saffiotti
|
Norms, Institutions, and Robots
|
12 pages, 8 figures. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible
| null | null | null |
cs.AI cs.CY cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactions within human societies are usually regulated by social norms. If
robots are to be accepted into human society, it is essential that they are
aware of and capable of reasoning about social norms. In this paper, we focus
on how to represent social norms in societies with humans and robots, and how
artificial agents such as robots can reason about social norms in order to plan
appropriate behavior. We use the notion of institution as a way to formally
define and encapsulate norms, and we provide a formal framework for
institutions. Our framework borrows ideas from the field of multi-agent systems
to define abstract normative models, and ideas from the field of robotics to
define physical executions as state-space trajectories. By bridging the two in
a common model, our framework allows us to use the same abstract institution
across physical domains and agent types. We then make our framework
computational via a reduction to CSP and show experiments where this reduction
is used for norm verification, planning, and plan execution in a domain
including a mixture of humans and robots.
|
[
{
"created": "Mon, 30 Jul 2018 17:27:06 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Aug 2020 12:43:40 GMT",
"version": "v2"
}
] |
2020-08-21
|
[
[
"Tomic",
"Stevan",
""
],
[
"Pecora",
"Federico",
""
],
[
"Saffiotti",
"Alessandro",
""
]
] |
Interactions within human societies are usually regulated by social norms. If robots are to be accepted into human society, it is essential that they are aware of and capable of reasoning about social norms. In this paper, we focus on how to represent social norms in societies with humans and robots, and how artificial agents such as robots can reason about social norms in order to plan appropriate behavior. We use the notion of institution as a way to formally define and encapsulate norms, and we provide a formal framework for institutions. Our framework borrows ideas from the field of multi-agent systems to define abstract normative models, and ideas from the field of robotics to define physical executions as state-space trajectories. By bridging the two in a common model, our framework allows us to use the same abstract institution across physical domains and agent types. We then make our framework computational via a reduction to CSP and show experiments where this reduction is used for norm verification, planning, and plan execution in a domain including a mixture of humans and robots.
|
2201.03998
|
Tarik Taleb Dr.
|
O. El Marai and T. Taleb
|
Smooth and Low Latency Video Streaming for Autonomous Cars during
Handover
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Self-driving vehicles are expected to bring many benefits among which
enhancing traffic efficiency and relia-bility, and reducing fuel consumption
which would have a great economical and environmental impact. The success of
this technology heavily relies on the full situational awareness of its
surrounding entities. This is achievable only when everything is networked,
including vehicles, users and infrastructure, and exchange the sensed data
among the nearby objects to increase their awareness. Nevertheless, human
intervention is still needed in the loop anyway to deal with unseen situations
or compensate for inaccurate or improper vehicle's decisions. For such cases,
video feed, in addition to other data such as LIDAR, is considered essential to
provide humans with the real picture of what is hap-pening to eventually take
the right decision. However, if the video is not delivered in a timely
fashion,it becomes useless or likely produce catastrophic outcomes.
Additionally, any disruption in the streamed video, for instance during
handover operation while traversing inter-countries cross borders, is very
annoying to the user and possibly ause damages as well. In this article, we
start by describing two important use cases, namely Remote Driving and
Platooning, where the timely delivery of video is of extreme importance [1].
Thereafter, we detail our implemented solution to accommodate the
aforementioned use cases for self-driving vehicles. Through extensive
experiments in local and LTE networks, we show that our solution ensures a very
low end-to-end latency. Also, we show that our solution keeps the video outage
as low as possible during handover operation.
|
[
{
"created": "Wed, 5 Jan 2022 09:04:24 GMT",
"version": "v1"
}
] |
2022-01-12
|
[
[
"Marai",
"O. El",
""
],
[
"Taleb",
"T.",
""
]
] |
Self-driving vehicles are expected to bring many benefits among which enhancing traffic efficiency and relia-bility, and reducing fuel consumption which would have a great economical and environmental impact. The success of this technology heavily relies on the full situational awareness of its surrounding entities. This is achievable only when everything is networked, including vehicles, users and infrastructure, and exchange the sensed data among the nearby objects to increase their awareness. Nevertheless, human intervention is still needed in the loop anyway to deal with unseen situations or compensate for inaccurate or improper vehicle's decisions. For such cases, video feed, in addition to other data such as LIDAR, is considered essential to provide humans with the real picture of what is hap-pening to eventually take the right decision. However, if the video is not delivered in a timely fashion,it becomes useless or likely produce catastrophic outcomes. Additionally, any disruption in the streamed video, for instance during handover operation while traversing inter-countries cross borders, is very annoying to the user and possibly ause damages as well. In this article, we start by describing two important use cases, namely Remote Driving and Platooning, where the timely delivery of video is of extreme importance [1]. Thereafter, we detail our implemented solution to accommodate the aforementioned use cases for self-driving vehicles. Through extensive experiments in local and LTE networks, we show that our solution ensures a very low end-to-end latency. Also, we show that our solution keeps the video outage as low as possible during handover operation.
|
2202.03958
|
Xiaotong Li
|
Xiaotong Li, Yongxing Dai, Yixiao Ge, Jun Liu, Ying Shan, Ling-Yu Duan
|
Uncertainty Modeling for Out-of-Distribution Generalization
|
Accepted by ICLR 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though remarkable progress has been achieved in various vision tasks, deep
neural networks still suffer obvious performance degradation when tested in
out-of-distribution scenarios. We argue that the feature statistics (mean and
standard deviation), which carry the domain characteristics of the training
data, can be properly manipulated to improve the generalization ability of deep
learning models. Common methods often consider the feature statistics as
deterministic values measured from the learned features and do not explicitly
consider the uncertain statistics discrepancy caused by potential domain shifts
during testing. In this paper, we improve the network generalization ability by
modeling the uncertainty of domain shifts with synthesized feature statistics
during training. Specifically, we hypothesize that the feature statistic, after
considering the potential uncertainties, follows a multivariate Gaussian
distribution. Hence, each feature statistic is no longer a deterministic value,
but a probabilistic point with diverse distribution possibilities. With the
uncertain feature statistics, the models can be trained to alleviate the domain
perturbations and achieve better robustness against potential domain shifts.
Our method can be readily integrated into networks without additional
parameters. Extensive experiments demonstrate that our proposed method
consistently improves the network generalization ability on multiple vision
tasks, including image classification, semantic segmentation, and instance
retrieval. The code can be available at https://github.com/lixiaotong97/DSU.
|
[
{
"created": "Tue, 8 Feb 2022 16:09:12 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2022 03:10:41 GMT",
"version": "v2"
}
] |
2022-04-25
|
[
[
"Li",
"Xiaotong",
""
],
[
"Dai",
"Yongxing",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Liu",
"Jun",
""
],
[
"Shan",
"Ying",
""
],
[
"Duan",
"Ling-Yu",
""
]
] |
Though remarkable progress has been achieved in various vision tasks, deep neural networks still suffer obvious performance degradation when tested in out-of-distribution scenarios. We argue that the feature statistics (mean and standard deviation), which carry the domain characteristics of the training data, can be properly manipulated to improve the generalization ability of deep learning models. Common methods often consider the feature statistics as deterministic values measured from the learned features and do not explicitly consider the uncertain statistics discrepancy caused by potential domain shifts during testing. In this paper, we improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training. Specifically, we hypothesize that the feature statistic, after considering the potential uncertainties, follows a multivariate Gaussian distribution. Hence, each feature statistic is no longer a deterministic value, but a probabilistic point with diverse distribution possibilities. With the uncertain feature statistics, the models can be trained to alleviate the domain perturbations and achieve better robustness against potential domain shifts. Our method can be readily integrated into networks without additional parameters. Extensive experiments demonstrate that our proposed method consistently improves the network generalization ability on multiple vision tasks, including image classification, semantic segmentation, and instance retrieval. The code can be available at https://github.com/lixiaotong97/DSU.
|
2108.13389
|
Omkar Phadke
|
Omkar Phadke, Jayatika Sakhuja, Vivek Saraswat, Udayan Ganguly
|
Exploiting the Electrothermal Timescale in PrMnO3 RRAM for a compact,
clock-less neuron exhibiting biological spiking patterns
| null | null |
10.1088/1361-6641/ac24e8
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking Neural Networks (SNNs) are gaining widespread momentum in the field
of neuromorphic computing. These network systems integrated with neurons and
synapses provide computational efficiency by mimicking the human brain. It is
desired to incorporate the biological neuronal dynamics, including complex
spiking patterns which represent diverse brain activities within the neural
networks. Earlier hardware realization of neurons was (1) area intensive
because of large capacitors in the circuit design, (2) neuronal spiking
patterns were demonstrated with clocked neurons at the device level. To achieve
more realistic biological neuron spiking behavior, emerging memristive devices
are considered promising alternatives. In this paper, we propose, PrMnO3(PMO)
-RRAM device-based neuron. The voltage-controlled electrothermal timescales of
the compact PMO RRAM device replace the electrical timescales of charging a
large capacitor. The electrothermal timescale is used to implement an
integration block with multiple voltage-controlled timescales coupled with a
refractory block to generate biological neuronal dynamics. Here, first, a
Verilog-A implementation of the thermal device model is demonstrated, which
captures the current-temperature dynamics of the PMO device. Second, a driving
circuitry is designed to mimic different spiking patterns of cortical neurons,
including Intrinsic bursting (IB) and Chattering (CH). Third, a neuron circuit
model is simulated, which includes the PMO RRAM device model and the driving
circuitry to demonstrate the asynchronous neuron behavior. Finally, a
hardware-software hybrid analysis is done in which the PMO RRAM device is
experimentally characterized to mimic neuron spiking dynamics. The work
presents a realizable and more biologically comparable hardware-efficient
solution for large-scale SNNs.
|
[
{
"created": "Mon, 30 Aug 2021 17:17:44 GMT",
"version": "v1"
}
] |
2021-10-27
|
[
[
"Phadke",
"Omkar",
""
],
[
"Sakhuja",
"Jayatika",
""
],
[
"Saraswat",
"Vivek",
""
],
[
"Ganguly",
"Udayan",
""
]
] |
Spiking Neural Networks (SNNs) are gaining widespread momentum in the field of neuromorphic computing. These network systems integrated with neurons and synapses provide computational efficiency by mimicking the human brain. It is desired to incorporate the biological neuronal dynamics, including complex spiking patterns which represent diverse brain activities within the neural networks. Earlier hardware realization of neurons was (1) area intensive because of large capacitors in the circuit design, (2) neuronal spiking patterns were demonstrated with clocked neurons at the device level. To achieve more realistic biological neuron spiking behavior, emerging memristive devices are considered promising alternatives. In this paper, we propose, PrMnO3(PMO) -RRAM device-based neuron. The voltage-controlled electrothermal timescales of the compact PMO RRAM device replace the electrical timescales of charging a large capacitor. The electrothermal timescale is used to implement an integration block with multiple voltage-controlled timescales coupled with a refractory block to generate biological neuronal dynamics. Here, first, a Verilog-A implementation of the thermal device model is demonstrated, which captures the current-temperature dynamics of the PMO device. Second, a driving circuitry is designed to mimic different spiking patterns of cortical neurons, including Intrinsic bursting (IB) and Chattering (CH). Third, a neuron circuit model is simulated, which includes the PMO RRAM device model and the driving circuitry to demonstrate the asynchronous neuron behavior. Finally, a hardware-software hybrid analysis is done in which the PMO RRAM device is experimentally characterized to mimic neuron spiking dynamics. The work presents a realizable and more biologically comparable hardware-efficient solution for large-scale SNNs.
|
1607.08692
|
Rui Wang
|
Rui Wang, Hai Zhao, Sabine Ploux, Bao-Liang Lu, Masao Utiyama and
Eiichiro Sumita
|
A Novel Bilingual Word Embedding Method for Lexical Translation Using
Bilingual Sense Clique
|
under review by COLING-2016
| null |
10.1145/3203078
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the existing methods for bilingual word embedding only consider
shallow context or simple co-occurrence information. In this paper, we propose
a latent bilingual sense unit (Bilingual Sense Clique, BSC), which is derived
from a maximum complete sub-graph of pointwise mutual information based graph
over bilingual corpus. In this way, we treat source and target words equally
and a separated bilingual projection processing that have to be used in most
existing works is not necessary any more. Several dimension reduction methods
are evaluated to summarize the BSC-word relationship. The proposed method is
evaluated on bilingual lexicon translation tasks and empirical results show
that bilingual sense embedding methods outperform existing bilingual word
embedding methods.
|
[
{
"created": "Fri, 29 Jul 2016 06:28:32 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Aug 2016 06:58:04 GMT",
"version": "v2"
}
] |
2018-06-19
|
[
[
"Wang",
"Rui",
""
],
[
"Zhao",
"Hai",
""
],
[
"Ploux",
"Sabine",
""
],
[
"Lu",
"Bao-Liang",
""
],
[
"Utiyama",
"Masao",
""
],
[
"Sumita",
"Eiichiro",
""
]
] |
Most of the existing methods for bilingual word embedding only consider shallow context or simple co-occurrence information. In this paper, we propose a latent bilingual sense unit (Bilingual Sense Clique, BSC), which is derived from a maximum complete sub-graph of pointwise mutual information based graph over bilingual corpus. In this way, we treat source and target words equally and a separated bilingual projection processing that have to be used in most existing works is not necessary any more. Several dimension reduction methods are evaluated to summarize the BSC-word relationship. The proposed method is evaluated on bilingual lexicon translation tasks and empirical results show that bilingual sense embedding methods outperform existing bilingual word embedding methods.
|
2306.08888
|
Srivatsan Krishnan
|
Srivatsan Krishnan, Amir Yazdanbaksh, Shvetank Prakash, Jason Jabbour,
Ikechukwu Uchendu, Susobhan Ghosh, Behzad Boroujerdian, Daniel Richins,
Devashree Tripathy, Aleksandra Faust, Vijay Janapa Reddi
|
ArchGym: An Open-Source Gymnasium for Machine Learning Assisted
Architecture Design
|
International Symposium on Computer Architecture (ISCA 2023)
| null | null | null |
cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning is a prevalent approach to tame the complexity of design
space exploration for domain-specific architectures. Using ML for design space
exploration poses challenges. First, it's not straightforward to identify the
suitable algorithm from an increasing pool of ML methods. Second, assessing the
trade-offs between performance and sample efficiency across these methods is
inconclusive. Finally, lack of a holistic framework for fair, reproducible, and
objective comparison across these methods hinders progress of adopting ML-aided
architecture design space exploration and impedes creating repeatable
artifacts. To mitigate these challenges, we introduce ArchGym, an open-source
gym and easy-to-extend framework that connects diverse search algorithms to
architecture simulators. To demonstrate utility, we evaluate ArchGym across
multiple vanilla and domain-specific search algorithms in designing custom
memory controller, deep neural network accelerators, and custom SoC for AR/VR
workloads, encompassing over 21K experiments. Results suggest that with
unlimited samples, ML algorithms are equally favorable to meet user-defined
target specification if hyperparameters are tuned; no solution is necessarily
better than another (e.g., reinforcement learning vs. Bayesian methods). We
coin the term hyperparameter lottery to describe the chance for a search
algorithm to find an optimal design provided meticulously selected
hyperparameters. The ease of data collection and aggregation in ArchGym
facilitates research in ML-aided architecture design space exploration. As a
case study, we show this advantage by developing a proxy cost model with an
RMSE of 0.61% that offers a 2,000-fold reduction in simulation time. Code and
data for ArchGym is available at https://bit.ly/ArchGym.
|
[
{
"created": "Thu, 15 Jun 2023 06:41:23 GMT",
"version": "v1"
}
] |
2023-06-16
|
[
[
"Krishnan",
"Srivatsan",
""
],
[
"Yazdanbaksh",
"Amir",
""
],
[
"Prakash",
"Shvetank",
""
],
[
"Jabbour",
"Jason",
""
],
[
"Uchendu",
"Ikechukwu",
""
],
[
"Ghosh",
"Susobhan",
""
],
[
"Boroujerdian",
"Behzad",
""
],
[
"Richins",
"Daniel",
""
],
[
"Tripathy",
"Devashree",
""
],
[
"Faust",
"Aleksandra",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
Machine learning is a prevalent approach to tame the complexity of design space exploration for domain-specific architectures. Using ML for design space exploration poses challenges. First, it's not straightforward to identify the suitable algorithm from an increasing pool of ML methods. Second, assessing the trade-offs between performance and sample efficiency across these methods is inconclusive. Finally, lack of a holistic framework for fair, reproducible, and objective comparison across these methods hinders progress of adopting ML-aided architecture design space exploration and impedes creating repeatable artifacts. To mitigate these challenges, we introduce ArchGym, an open-source gym and easy-to-extend framework that connects diverse search algorithms to architecture simulators. To demonstrate utility, we evaluate ArchGym across multiple vanilla and domain-specific search algorithms in designing custom memory controller, deep neural network accelerators, and custom SoC for AR/VR workloads, encompassing over 21K experiments. Results suggest that with unlimited samples, ML algorithms are equally favorable to meet user-defined target specification if hyperparameters are tuned; no solution is necessarily better than another (e.g., reinforcement learning vs. Bayesian methods). We coin the term hyperparameter lottery to describe the chance for a search algorithm to find an optimal design provided meticulously selected hyperparameters. The ease of data collection and aggregation in ArchGym facilitates research in ML-aided architecture design space exploration. As a case study, we show this advantage by developing a proxy cost model with an RMSE of 0.61% that offers a 2,000-fold reduction in simulation time. Code and data for ArchGym is available at https://bit.ly/ArchGym.
|
2406.04745
|
Chao Qian
|
Yu-Chang Wu, Shen-Huan Lyu, Haopu Shang, Xiangyu Wang, Chao Qian
|
Confidence-aware Contrastive Learning for Selective Classification
|
Accepted by ICML 2024
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Selective classification enables models to make predictions only when they
are sufficiently confident, aiming to enhance safety and reliability, which is
important in high-stakes scenarios. Previous methods mainly use deep neural
networks and focus on modifying the architecture of classification layers to
enable the model to estimate the confidence of its prediction. This work
provides a generalization bound for selective classification, disclosing that
optimizing feature layers helps improve the performance of selective
classification. Inspired by this theory, we propose to explicitly improve the
selective classification model at the feature level for the first time, leading
to a novel Confidence-aware Contrastive Learning method for Selective
Classification, CCL-SC, which similarizes the features of homogeneous instances
and differentiates the features of heterogeneous instances, with the strength
controlled by the model's confidence. The experimental results on typical
datasets, i.e., CIFAR-10, CIFAR-100, CelebA, and ImageNet, show that CCL-SC
achieves significantly lower selective risk than state-of-the-art methods,
across almost all coverage degrees. Moreover, it can be combined with existing
methods to bring further improvement.
|
[
{
"created": "Fri, 7 Jun 2024 08:43:53 GMT",
"version": "v1"
}
] |
2024-06-10
|
[
[
"Wu",
"Yu-Chang",
""
],
[
"Lyu",
"Shen-Huan",
""
],
[
"Shang",
"Haopu",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Qian",
"Chao",
""
]
] |
Selective classification enables models to make predictions only when they are sufficiently confident, aiming to enhance safety and reliability, which is important in high-stakes scenarios. Previous methods mainly use deep neural networks and focus on modifying the architecture of classification layers to enable the model to estimate the confidence of its prediction. This work provides a generalization bound for selective classification, disclosing that optimizing feature layers helps improve the performance of selective classification. Inspired by this theory, we propose to explicitly improve the selective classification model at the feature level for the first time, leading to a novel Confidence-aware Contrastive Learning method for Selective Classification, CCL-SC, which similarizes the features of homogeneous instances and differentiates the features of heterogeneous instances, with the strength controlled by the model's confidence. The experimental results on typical datasets, i.e., CIFAR-10, CIFAR-100, CelebA, and ImageNet, show that CCL-SC achieves significantly lower selective risk than state-of-the-art methods, across almost all coverage degrees. Moreover, it can be combined with existing methods to bring further improvement.
|
2010.06216
|
Koar Marntirosian
|
Koar Marntirosian, Tom Schrijvers, Bruno C. d. S. Oliveira, Georgios
Karachalias
|
Resolution as Intersection Subtyping via Modus Ponens
|
43 pages, 20 figures; typos corrected, link to artifact added
| null |
10.1145/3428274
| null |
cs.PL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Resolution and subtyping are two common mechanisms in programming languages.
Resolution is used by features such as type classes or Scala-style implicits to
synthesize values automatically from contextual type information. Subtyping is
commonly used to automatically convert the type of a value into another
compatible type. So far the two mechanisms have been considered independently
of each other. This paper shows that, with a small extension, subtyping with
intersection types can subsume resolution. This has three main consequences.
Firstly, resolution does not need to be implemented as a separate mechanism.
Secondly, the interaction between resolution and subtyping becomes apparent.
Finally, the integration of resolution into subtyping enables first-class
(implicit) environments. The extension that recovers the power of resolution
via subtyping is the modus ponens rule of propositional logic. While it is
easily added to declarative subtyping, significant care needs to be taken to
retain desirable properties, such as transitivity and decidability of
algorithmic subtyping, and coherence. To materialize these ideas we develop
$\lambda_i^{\mathsf{MP}}$, a calculus that extends a iprevious calculus with
disjoint intersection types, and develop its metatheory in the Coq theorem
prover.
|
[
{
"created": "Tue, 13 Oct 2020 07:58:17 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Oct 2020 09:32:19 GMT",
"version": "v2"
}
] |
2020-10-19
|
[
[
"Marntirosian",
"Koar",
""
],
[
"Schrijvers",
"Tom",
""
],
[
"Oliveira",
"Bruno C. d. S.",
""
],
[
"Karachalias",
"Georgios",
""
]
] |
Resolution and subtyping are two common mechanisms in programming languages. Resolution is used by features such as type classes or Scala-style implicits to synthesize values automatically from contextual type information. Subtyping is commonly used to automatically convert the type of a value into another compatible type. So far the two mechanisms have been considered independently of each other. This paper shows that, with a small extension, subtyping with intersection types can subsume resolution. This has three main consequences. Firstly, resolution does not need to be implemented as a separate mechanism. Secondly, the interaction between resolution and subtyping becomes apparent. Finally, the integration of resolution into subtyping enables first-class (implicit) environments. The extension that recovers the power of resolution via subtyping is the modus ponens rule of propositional logic. While it is easily added to declarative subtyping, significant care needs to be taken to retain desirable properties, such as transitivity and decidability of algorithmic subtyping, and coherence. To materialize these ideas we develop $\lambda_i^{\mathsf{MP}}$, a calculus that extends a iprevious calculus with disjoint intersection types, and develop its metatheory in the Coq theorem prover.
|
2303.08120
|
Chenyang Lei
|
Chenyang Lei, Xuanchi Ren, Zhaoxiang Zhang, Qifeng Chen
|
Blind Video Deflickering by Neural Filtering with a Flawed Atlas
|
To appear in CVPR2023. Code:
github.com/ChenyangLEI/All-In-One-Deflicker Website:
chenyanglei.github.io/deflicker
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Many videos contain flickering artifacts. Common causes of flicker include
video processing algorithms, video generation algorithms, and capturing videos
under specific situations. Prior work usually requires specific guidance such
as the flickering frequency, manual annotations, or extra consistent videos to
remove the flicker. In this work, we propose a general flicker removal
framework that only receives a single flickering video as input without
additional guidance. Since it is blind to a specific flickering type or
guidance, we name this "blind deflickering." The core of our approach is
utilizing the neural atlas in cooperation with a neural filtering strategy. The
neural atlas is a unified representation for all frames in a video that
provides temporal consistency guidance but is flawed in many cases. To this
end, a neural network is trained to mimic a filter to learn the consistent
features (e.g., color, brightness) and avoid introducing the artifacts in the
atlas. To validate our method, we construct a dataset that contains diverse
real-world flickering videos. Extensive experiments show that our method
achieves satisfying deflickering performance and even outperforms baselines
that use extra guidance on a public benchmark.
|
[
{
"created": "Tue, 14 Mar 2023 17:52:29 GMT",
"version": "v1"
}
] |
2023-03-15
|
[
[
"Lei",
"Chenyang",
""
],
[
"Ren",
"Xuanchi",
""
],
[
"Zhang",
"Zhaoxiang",
""
],
[
"Chen",
"Qifeng",
""
]
] |
Many videos contain flickering artifacts. Common causes of flicker include video processing algorithms, video generation algorithms, and capturing videos under specific situations. Prior work usually requires specific guidance such as the flickering frequency, manual annotations, or extra consistent videos to remove the flicker. In this work, we propose a general flicker removal framework that only receives a single flickering video as input without additional guidance. Since it is blind to a specific flickering type or guidance, we name this "blind deflickering." The core of our approach is utilizing the neural atlas in cooperation with a neural filtering strategy. The neural atlas is a unified representation for all frames in a video that provides temporal consistency guidance but is flawed in many cases. To this end, a neural network is trained to mimic a filter to learn the consistent features (e.g., color, brightness) and avoid introducing the artifacts in the atlas. To validate our method, we construct a dataset that contains diverse real-world flickering videos. Extensive experiments show that our method achieves satisfying deflickering performance and even outperforms baselines that use extra guidance on a public benchmark.
|
1102.2794
|
Xinhua Wang
|
Xinhua Wang
|
Universal approximation using differentiators and application to
feedback control
| null | null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the problems of approximating uncertainties and
feedback control for a class of nonlinear systems without full-known states,
and two approximation methods are proposed: universal approximation using
integral-chain differentiator or extended observer. Comparing to the
approximations by fuzzy system and radial-based-function (RBF) neural networks,
the presented two methods can not only approximate universally the
uncertainties, but also estimate the unknown states. Moreover, the
integral-chain differentiator can restrain noises thoroughly. The theoretical
results are confirmed by computer simulations for feedback control.
|
[
{
"created": "Mon, 14 Feb 2011 15:15:28 GMT",
"version": "v1"
}
] |
2011-02-15
|
[
[
"Wang",
"Xinhua",
""
]
] |
In this paper, we consider the problems of approximating uncertainties and feedback control for a class of nonlinear systems without full-known states, and two approximation methods are proposed: universal approximation using integral-chain differentiator or extended observer. Comparing to the approximations by fuzzy system and radial-based-function (RBF) neural networks, the presented two methods can not only approximate universally the uncertainties, but also estimate the unknown states. Moreover, the integral-chain differentiator can restrain noises thoroughly. The theoretical results are confirmed by computer simulations for feedback control.
|
2112.04088
|
Xiaoge Deng
|
Xiaoge Deng, Dongsheng Li, Tao Sun and Xicheng Lu
|
Communication-Efficient Distributed Learning via Sparse and Adaptive
Stochastic Gradient
|
Accepted by IEEE Transactions on Big Data, 2024
| null |
10.1109/TBDATA.2024.3407510
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gradient-based optimization methods implemented on distributed computing
architectures are increasingly used to tackle large-scale machine learning
applications. A key bottleneck in such distributed systems is the high
communication overhead for exchanging information, such as stochastic
gradients, between workers. The inherent causes of this bottleneck are the
frequent communication rounds and the full model gradient transmission in every
round. In this study, we present SASG, a communication-efficient distributed
algorithm that enjoys the advantages of sparse communication and adaptive
aggregated stochastic gradients. By dynamically determining the workers who
need to communicate through an adaptive aggregation rule and sparsifying the
transmitted information, the SASG algorithm reduces both the overhead of
communication rounds and the number of communication bits in the distributed
system. For the theoretical analysis, we introduce an important auxiliary
variable and define a new Lyapunov function to prove that the
communication-efficient algorithm is convergent. The convergence result is
identical to the sublinear rate of stochastic gradient descent, and our result
also reveals that SASG scales well with the number of distributed workers.
Finally, experiments on training deep neural networks demonstrate that the
proposed algorithm can significantly reduce communication overhead compared to
previous methods.
|
[
{
"created": "Wed, 8 Dec 2021 02:55:28 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Apr 2022 03:47:42 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Aug 2022 14:38:01 GMT",
"version": "v3"
},
{
"created": "Wed, 29 May 2024 09:18:28 GMT",
"version": "v4"
},
{
"created": "Sun, 9 Jun 2024 11:47:03 GMT",
"version": "v5"
}
] |
2024-06-11
|
[
[
"Deng",
"Xiaoge",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Sun",
"Tao",
""
],
[
"Lu",
"Xicheng",
""
]
] |
Gradient-based optimization methods implemented on distributed computing architectures are increasingly used to tackle large-scale machine learning applications. A key bottleneck in such distributed systems is the high communication overhead for exchanging information, such as stochastic gradients, between workers. The inherent causes of this bottleneck are the frequent communication rounds and the full model gradient transmission in every round. In this study, we present SASG, a communication-efficient distributed algorithm that enjoys the advantages of sparse communication and adaptive aggregated stochastic gradients. By dynamically determining the workers who need to communicate through an adaptive aggregation rule and sparsifying the transmitted information, the SASG algorithm reduces both the overhead of communication rounds and the number of communication bits in the distributed system. For the theoretical analysis, we introduce an important auxiliary variable and define a new Lyapunov function to prove that the communication-efficient algorithm is convergent. The convergence result is identical to the sublinear rate of stochastic gradient descent, and our result also reveals that SASG scales well with the number of distributed workers. Finally, experiments on training deep neural networks demonstrate that the proposed algorithm can significantly reduce communication overhead compared to previous methods.
|
2305.17910
|
Safinah Ali
|
Safinah Ali, Vishesh Kumar, Cynthia Breazeal
|
AI Audit: A Card Game to Reflect on Everyday AI Systems
| null | null | null | null |
cs.CY cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An essential element of K-12 AI literacy is educating learners about the
ethical and societal implications of AI systems. Previous work in AI ethics
literacy have developed curriculum and classroom activities that engage
learners in reflecting on the ethical implications of AI systems and developing
responsible AI. There is little work in using game-based learning methods in AI
literacy. Games are known to be compelling media to teach children about
complex STEM concepts. In this work, we developed a competitive card game for
middle and high school students called "AI Audit" where they play as AI
start-up founders building novel AI-powered technology. Players can challenge
other players with potential harms of their technology or defend their own
businesses by features that mitigate these harms. The game mechanics reward
systems that are ethically developed or that take steps to mitigate potential
harms. In this paper, we present the game design, teacher resources for
classroom deployment and early playtesting results. We discuss our reflections
about using games as teaching tools for AI literacy in K-12 classrooms.
|
[
{
"created": "Mon, 29 May 2023 06:41:47 GMT",
"version": "v1"
}
] |
2023-05-30
|
[
[
"Ali",
"Safinah",
""
],
[
"Kumar",
"Vishesh",
""
],
[
"Breazeal",
"Cynthia",
""
]
] |
An essential element of K-12 AI literacy is educating learners about the ethical and societal implications of AI systems. Previous work in AI ethics literacy have developed curriculum and classroom activities that engage learners in reflecting on the ethical implications of AI systems and developing responsible AI. There is little work in using game-based learning methods in AI literacy. Games are known to be compelling media to teach children about complex STEM concepts. In this work, we developed a competitive card game for middle and high school students called "AI Audit" where they play as AI start-up founders building novel AI-powered technology. Players can challenge other players with potential harms of their technology or defend their own businesses by features that mitigate these harms. The game mechanics reward systems that are ethically developed or that take steps to mitigate potential harms. In this paper, we present the game design, teacher resources for classroom deployment and early playtesting results. We discuss our reflections about using games as teaching tools for AI literacy in K-12 classrooms.
|
1806.08554
|
Bei Chen
|
Yihong Chen, Bei Chen, Xuguang Duan, Jian-Guang Lou, Yue Wang, Wenwu
Zhu, Yong Cao
|
Learning-to-Ask: Knowledge Acquisition via 20 Questions
|
Accepted by KDD 2018
| null |
10.1145/3219819.3220047
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Almost all the knowledge empowered applications rely upon accurate knowledge,
which has to be either collected manually with high cost, or extracted
automatically with unignorable errors. In this paper, we study 20 Questions, an
online interactive game where each question-response pair corresponds to a fact
of the target entity, to acquire highly accurate knowledge effectively with
nearly zero labor cost. Knowledge acquisition via 20 Questions predominantly
presents two challenges to the intelligent agent playing games with human
players. The first one is to seek enough information and identify the target
entity with as few questions as possible, while the second one is to leverage
the remaining questioning opportunities to acquire valuable knowledge
effectively, both of which count on good questioning strategies. To address
these challenges, we propose the Learning-to-Ask (LA) framework, within which
the agent learns smart questioning strategies for information seeking and
knowledge acquisition by means of deep reinforcement learning and generalized
matrix factorization respectively. In addition, a Bayesian approach to
represent knowledge is adopted to ensure robustness to noisy user responses.
Simulating experiments on real data show that LA is able to equip the agent
with effective questioning strategies, which result in high winning rates and
rapid knowledge acquisition. Moreover, the questioning strategies for
information seeking and knowledge acquisition boost the performance of each
other, allowing the agent to start with a relatively small knowledge set and
quickly improve its knowledge base in the absence of constant human
supervision.
|
[
{
"created": "Fri, 22 Jun 2018 08:48:49 GMT",
"version": "v1"
}
] |
2018-06-25
|
[
[
"Chen",
"Yihong",
""
],
[
"Chen",
"Bei",
""
],
[
"Duan",
"Xuguang",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Wang",
"Yue",
""
],
[
"Zhu",
"Wenwu",
""
],
[
"Cao",
"Yong",
""
]
] |
Almost all the knowledge empowered applications rely upon accurate knowledge, which has to be either collected manually with high cost, or extracted automatically with unignorable errors. In this paper, we study 20 Questions, an online interactive game where each question-response pair corresponds to a fact of the target entity, to acquire highly accurate knowledge effectively with nearly zero labor cost. Knowledge acquisition via 20 Questions predominantly presents two challenges to the intelligent agent playing games with human players. The first one is to seek enough information and identify the target entity with as few questions as possible, while the second one is to leverage the remaining questioning opportunities to acquire valuable knowledge effectively, both of which count on good questioning strategies. To address these challenges, we propose the Learning-to-Ask (LA) framework, within which the agent learns smart questioning strategies for information seeking and knowledge acquisition by means of deep reinforcement learning and generalized matrix factorization respectively. In addition, a Bayesian approach to represent knowledge is adopted to ensure robustness to noisy user responses. Simulating experiments on real data show that LA is able to equip the agent with effective questioning strategies, which result in high winning rates and rapid knowledge acquisition. Moreover, the questioning strategies for information seeking and knowledge acquisition boost the performance of each other, allowing the agent to start with a relatively small knowledge set and quickly improve its knowledge base in the absence of constant human supervision.
|
2307.15555
|
Daniele Mari
|
Daniele Mari, Davide Salvi, Paolo Bestagini, and Simone Milani
|
All-for-One and One-For-All: Deep learning-based feature fusion for
Synthetic Speech Detection
|
Accepted at ECML-PKDD 2023 Workshop "Deep Learning and Multimedia
Forensics. Combating fake media and misinformation"
| null | null | null |
cs.SD cs.CL cs.CR eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in deep learning and computer vision have made the synthesis
and counterfeiting of multimedia content more accessible than ever, leading to
possible threats and dangers from malicious users. In the audio field, we are
witnessing the growth of speech deepfake generation techniques, which solicit
the development of synthetic speech detection algorithms to counter possible
mischievous uses such as frauds or identity thefts. In this paper, we consider
three different feature sets proposed in the literature for the synthetic
speech detection task and present a model that fuses them, achieving overall
better performances with respect to the state-of-the-art solutions. The system
was tested on different scenarios and datasets to prove its robustness to
anti-forensic attacks and its generalization capabilities.
|
[
{
"created": "Fri, 28 Jul 2023 13:50:25 GMT",
"version": "v1"
}
] |
2023-07-31
|
[
[
"Mari",
"Daniele",
""
],
[
"Salvi",
"Davide",
""
],
[
"Bestagini",
"Paolo",
""
],
[
"Milani",
"Simone",
""
]
] |
Recent advances in deep learning and computer vision have made the synthesis and counterfeiting of multimedia content more accessible than ever, leading to possible threats and dangers from malicious users. In the audio field, we are witnessing the growth of speech deepfake generation techniques, which solicit the development of synthetic speech detection algorithms to counter possible mischievous uses such as frauds or identity thefts. In this paper, we consider three different feature sets proposed in the literature for the synthetic speech detection task and present a model that fuses them, achieving overall better performances with respect to the state-of-the-art solutions. The system was tested on different scenarios and datasets to prove its robustness to anti-forensic attacks and its generalization capabilities.
|
2307.04442
|
Mohamed Amine Kerkouri
|
Aymen Sekhri, Marouane Tliba, Mohamed Amine Kerkouri, Yassine Nasser,
Aladine Chetouani, Alessandro Bruno, Rachid Jennane,
|
Automatic diagnosis of knee osteoarthritis severity using Swin
transformer
|
CBMI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic
pain and stiffness in the knee joint. Early detection and diagnosis are crucial
for successful clinical intervention and management to prevent severe
complications, such as loss of mobility. In this paper, we propose an automated
approach that employs the Swin Transformer to predict the severity of KOA. Our
model uses publicly available radiographic datasets with Kellgren and Lawrence
scores to enable early detection and severity assessment. To improve the
accuracy of our model, we employ a multi-prediction head architecture that
utilizes multi-layer perceptron classifiers. Additionally, we introduce a novel
training approach that reduces the data drift between multiple datasets to
ensure the generalization ability of the model. The results of our experiments
demonstrate the effectiveness and feasibility of our approach in predicting KOA
severity accurately.
|
[
{
"created": "Mon, 10 Jul 2023 09:49:30 GMT",
"version": "v1"
}
] |
2023-07-11
|
[
[
"Sekhri",
"Aymen",
""
],
[
"Tliba",
"Marouane",
""
],
[
"Kerkouri",
"Mohamed Amine",
""
],
[
"Nasser",
"Yassine",
""
],
[
"Chetouani",
"Aladine",
""
],
[
"Bruno",
"Alessandro",
""
],
[
"Jennane",
"Rachid",
""
]
] |
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint. Early detection and diagnosis are crucial for successful clinical intervention and management to prevent severe complications, such as loss of mobility. In this paper, we propose an automated approach that employs the Swin Transformer to predict the severity of KOA. Our model uses publicly available radiographic datasets with Kellgren and Lawrence scores to enable early detection and severity assessment. To improve the accuracy of our model, we employ a multi-prediction head architecture that utilizes multi-layer perceptron classifiers. Additionally, we introduce a novel training approach that reduces the data drift between multiple datasets to ensure the generalization ability of the model. The results of our experiments demonstrate the effectiveness and feasibility of our approach in predicting KOA severity accurately.
|
1312.2867
|
Chanabasayya Vastrad M
|
Doreswamy and Chanabasayya M. Vastrad
|
Study Of E-Smooth Support Vector Regression And Comparison With E-
Support Vector Regression And Potential Support Vector Machines For
Prediction For The Antitubercular Activity Of Oxazolines And Oxazoles
Derivatives
| null |
Published International Journal on Soft Computing, Artificial
Intelligence and Applications (IJSCAI), Vol.2, No.2, April 2013
|
10.5121/ijscai.2013.2204
| null |
cs.CE cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
A new smoothing method for solving ? -support vector regression (?-SVR),
tolerating a small error in fitting a given data sets nonlinearly is proposed
in this study. Which is a smooth unconstrained optimization reformulation of
the traditional linear programming associated with a ?-insensitive support
vector regression. We term this redeveloped problem as ?-smooth support vector
regression (?-SSVR). The performance and predictive ability of ?-SSVR are
investigated and compared with other methods such as LIBSVM (?-SVR) and P-SVM
methods. In the present study, two Oxazolines and Oxazoles molecular descriptor
data sets were evaluated. We demonstrate the merits of our algorithm in a
series of experiments. Primary experimental results illustrate that our
proposed approach improves the regression performance and the learning
efficiency. In both studied cases, the predictive ability of the ?- SSVR model
is comparable or superior to those obtained by LIBSVM and P-SVM. The results
indicate that ?-SSVR can be used as an alternative powerful modeling method for
regression studies. The experimental results show that the presented algorithm
?-SSVR, plays better precisely and effectively than LIBSVMand P-SVM in
predicting antitubercular activity.
|
[
{
"created": "Tue, 10 Dec 2013 16:44:56 GMT",
"version": "v1"
}
] |
2013-12-13
|
[
[
"Doreswamy",
"",
""
],
[
"Vastrad",
"Chanabasayya M.",
""
]
] |
A new smoothing method for solving ? -support vector regression (?-SVR), tolerating a small error in fitting a given data sets nonlinearly is proposed in this study. Which is a smooth unconstrained optimization reformulation of the traditional linear programming associated with a ?-insensitive support vector regression. We term this redeveloped problem as ?-smooth support vector regression (?-SSVR). The performance and predictive ability of ?-SSVR are investigated and compared with other methods such as LIBSVM (?-SVR) and P-SVM methods. In the present study, two Oxazolines and Oxazoles molecular descriptor data sets were evaluated. We demonstrate the merits of our algorithm in a series of experiments. Primary experimental results illustrate that our proposed approach improves the regression performance and the learning efficiency. In both studied cases, the predictive ability of the ?- SSVR model is comparable or superior to those obtained by LIBSVM and P-SVM. The results indicate that ?-SSVR can be used as an alternative powerful modeling method for regression studies. The experimental results show that the presented algorithm ?-SSVR, plays better precisely and effectively than LIBSVMand P-SVM in predicting antitubercular activity.
|
2303.05228
|
Luca Mariot
|
Luca Mariot and Luca Manzoni
|
A classification of S-boxes generated by Orthogonal Cellular Automata
|
22 pages. Extended version of "On the Linear Components Space of
S-boxes Generated by Orthogonal Cellular Automata" arXiv:2203.14365v,
presented at ACRI 2022. Currently under submission at Natural Computing
| null | null | null |
cs.CR cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the approaches published in the literature to construct S-boxes via
Cellular Automata (CA) work by either iterating a finite CA for several time
steps, or by a one-shot application of the global rule. The main characteristic
that brings together these works is that they employ a single CA rule to define
the vectorial Boolean function of the S-box. In this work, we explore a
different direction for the design of S-boxes that leverages on Orthogonal CA
(OCA), i.e. pairs of CA rules giving rise to orthogonal Latin squares. The
motivation stands on the facts that an OCA pair already defines a bijective
transformation, and moreover the orthogonality property of the resulting Latin
squares ensures a minimum amount of diffusion. We exhaustively enumerate all
S-boxes generated by OCA pairs of diameter $4 \le d \le 6$, and measure their
nonlinearity. Interestingly, we observe that for $d=4$ and $d=5$ all S-boxes
are linear, despite the underlying CA local rules being nonlinear. The smallest
nonlinear S-boxes emerges for $d=6$, but their nonlinearity is still too low to
be used in practice. Nonetheless, we unearth an interesting structure of linear
OCA S-boxes, proving that their Linear Components Space (LCS) is itself the
image of a linear CA, or equivalently a polynomial code. We finally classify
all linear OCA S-boxes in terms of their generator polynomials.
|
[
{
"created": "Thu, 9 Mar 2023 13:04:31 GMT",
"version": "v1"
}
] |
2023-03-10
|
[
[
"Mariot",
"Luca",
""
],
[
"Manzoni",
"Luca",
""
]
] |
Most of the approaches published in the literature to construct S-boxes via Cellular Automata (CA) work by either iterating a finite CA for several time steps, or by a one-shot application of the global rule. The main characteristic that brings together these works is that they employ a single CA rule to define the vectorial Boolean function of the S-box. In this work, we explore a different direction for the design of S-boxes that leverages on Orthogonal CA (OCA), i.e. pairs of CA rules giving rise to orthogonal Latin squares. The motivation stands on the facts that an OCA pair already defines a bijective transformation, and moreover the orthogonality property of the resulting Latin squares ensures a minimum amount of diffusion. We exhaustively enumerate all S-boxes generated by OCA pairs of diameter $4 \le d \le 6$, and measure their nonlinearity. Interestingly, we observe that for $d=4$ and $d=5$ all S-boxes are linear, despite the underlying CA local rules being nonlinear. The smallest nonlinear S-boxes emerges for $d=6$, but their nonlinearity is still too low to be used in practice. Nonetheless, we unearth an interesting structure of linear OCA S-boxes, proving that their Linear Components Space (LCS) is itself the image of a linear CA, or equivalently a polynomial code. We finally classify all linear OCA S-boxes in terms of their generator polynomials.
|
1405.1112
|
EPTCS
|
\'Etienne Andr\'e (Universit\'e Paris 13, France), Mohamed Mahdi
Benmoussa (Universit\'e Paris 13, France), Christine Choppy (Universit\'e
Paris 13, France)
|
Translating UML State Machines to Coloured Petri Nets Using Acceleo: A
Report
|
In Proceedings ESSS 2014, arXiv:1405.0554
|
EPTCS 150, 2014, pp. 1-7
|
10.4204/EPTCS.150.1
| null |
cs.SE cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
UML state machines are widely used to specify dynamic systems behaviours.
However its semantics is described informally, thus preventing the application
of model checking techniques that could guarantee the system safety. In a
former work, we proposed a formalisation of non-concurrent UML state machines
using coloured Petri nets, so as to allow for formal verification. In this
paper, we report our experience to implement this translation in an automated
manner using the model-to-text transformation tool Acceleo. Whereas Acceleo
provides interesting features that facilitated our translation process, it also
suffers from limitations uneasy to overcome.
|
[
{
"created": "Tue, 6 May 2014 00:53:22 GMT",
"version": "v1"
}
] |
2014-05-07
|
[
[
"André",
"Étienne",
"",
"Université Paris 13, France"
],
[
"Benmoussa",
"Mohamed Mahdi",
"",
"Université Paris 13, France"
],
[
"Choppy",
"Christine",
"",
"Université\n Paris 13, France"
]
] |
UML state machines are widely used to specify dynamic systems behaviours. However its semantics is described informally, thus preventing the application of model checking techniques that could guarantee the system safety. In a former work, we proposed a formalisation of non-concurrent UML state machines using coloured Petri nets, so as to allow for formal verification. In this paper, we report our experience to implement this translation in an automated manner using the model-to-text transformation tool Acceleo. Whereas Acceleo provides interesting features that facilitated our translation process, it also suffers from limitations uneasy to overcome.
|
2009.02694
|
Marco Di Renzo
|
Gabriele Gradoni and Marco Di Renzo
|
End-to-End Mutual Coupling Aware Communication Model for Reconfigurable
Intelligent Surfaces: An Electromagnetic-Compliant Approach Based on Mutual
Impedances
|
Submitted for journal publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surfaces (RISs) are an emerging technology for
application to wireless networks. We introduce a physics and electromagnetic
(EM) compliant communication model for analyzing and optimizing RIS-assisted
wireless systems. The proposed model has four main notable attributes: (i) it
is end-to-end, i.e., it is formulated in terms of an equivalent channel that
yields a one-to-one mapping between the voltages fed into the ports of a
transmitter and the voltages measured at the ports of a receiver; (ii) it is
EM-compliant, i.e., it accounts for the generation and propagation of the EM
fields; (iii) it is mutual coupling aware, i.e., it accounts for the mutual
coupling among the sub-wavelength unit cells of the RIS; and (iv) it is unit
cell aware, i.e., it accounts for the intertwinement between the amplitude and
phase response of the unit cells of the RIS.
|
[
{
"created": "Sun, 6 Sep 2020 10:01:48 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Dec 2020 09:48:16 GMT",
"version": "v2"
}
] |
2020-12-08
|
[
[
"Gradoni",
"Gabriele",
""
],
[
"Di Renzo",
"Marco",
""
]
] |
Reconfigurable intelligent surfaces (RISs) are an emerging technology for application to wireless networks. We introduce a physics and electromagnetic (EM) compliant communication model for analyzing and optimizing RIS-assisted wireless systems. The proposed model has four main notable attributes: (i) it is end-to-end, i.e., it is formulated in terms of an equivalent channel that yields a one-to-one mapping between the voltages fed into the ports of a transmitter and the voltages measured at the ports of a receiver; (ii) it is EM-compliant, i.e., it accounts for the generation and propagation of the EM fields; (iii) it is mutual coupling aware, i.e., it accounts for the mutual coupling among the sub-wavelength unit cells of the RIS; and (iv) it is unit cell aware, i.e., it accounts for the intertwinement between the amplitude and phase response of the unit cells of the RIS.
|
1806.09817
|
Tim Roughgarden
|
Tim Roughgarden
|
Beyond Worst-Case Analysis
|
To appear in Communications of the ACM
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the worst-case analysis of algorithms, the overall performance of an
algorithm is summarized by its worst performance on any input. This approach
has countless success stories, but there are also important computational
problems --- like linear programming, clustering, online caching, and neural
network training --- where the worst-case analysis framework does not provide
any helpful advice on how to solve the problem. This article covers a number of
modeling methods for going beyond worst-case analysis and articulating which
inputs are the most relevant.
|
[
{
"created": "Tue, 26 Jun 2018 07:15:56 GMT",
"version": "v1"
}
] |
2018-06-27
|
[
[
"Roughgarden",
"Tim",
""
]
] |
In the worst-case analysis of algorithms, the overall performance of an algorithm is summarized by its worst performance on any input. This approach has countless success stories, but there are also important computational problems --- like linear programming, clustering, online caching, and neural network training --- where the worst-case analysis framework does not provide any helpful advice on how to solve the problem. This article covers a number of modeling methods for going beyond worst-case analysis and articulating which inputs are the most relevant.
|
1212.3747
|
Su Hu
|
Su Hu and Yong Liang Guan and Guoan Bi and Shaoqian Li
|
Cluster-based Transform Domain Communication Systems for High Spectrum
Efficiency
|
15 pages, 9 figures, Accepted for publication in IET Communications
| null | null | null |
cs.NI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a cluster-based transform domain communication system
(TDCS) to improve spectrum efficiency. Unlike the utilities of clusters in
orthogonal frequency division multiplex (OFDM) systems, the cluster-based TDCS
framework divides entire unoccupied spectrum bins into $L$ clusters, where each
one represents a data steam independently, to achieve $L$ times of spectrum
efficiency compared to that of the traditional one. Among various schemes of
spectrum bin spacing and allocation, the TDCS with random allocation scheme
appears to be an ideal candidate to significantly improve spectrum efficiency
without seriously degrading power efficiency. In multipath fading channel, the
coded TDCS with random allocation scheme achieves robust BER performance due to
a large degree of frequency diversity. Furthermore, our study shows that the
smaller spectrum bin spacing should be configured for the cluster-based TDCS to
achieve higher spectrum efficiency and more robust BER performance.
|
[
{
"created": "Sun, 16 Dec 2012 03:25:05 GMT",
"version": "v1"
}
] |
2012-12-18
|
[
[
"Hu",
"Su",
""
],
[
"Guan",
"Yong Liang",
""
],
[
"Bi",
"Guoan",
""
],
[
"Li",
"Shaoqian",
""
]
] |
This paper presents a cluster-based transform domain communication system (TDCS) to improve spectrum efficiency. Unlike the utilities of clusters in orthogonal frequency division multiplex (OFDM) systems, the cluster-based TDCS framework divides entire unoccupied spectrum bins into $L$ clusters, where each one represents a data steam independently, to achieve $L$ times of spectrum efficiency compared to that of the traditional one. Among various schemes of spectrum bin spacing and allocation, the TDCS with random allocation scheme appears to be an ideal candidate to significantly improve spectrum efficiency without seriously degrading power efficiency. In multipath fading channel, the coded TDCS with random allocation scheme achieves robust BER performance due to a large degree of frequency diversity. Furthermore, our study shows that the smaller spectrum bin spacing should be configured for the cluster-based TDCS to achieve higher spectrum efficiency and more robust BER performance.
|
2308.09985
|
Hanzhuo Tan
|
Hanzhuo Tan, Chunpu Xu, Jing Li, Yuqun Zhang, Zeyang Fang, Zeyu Chen,
Baohua Lai
|
HICL: Hashtag-Driven In-Context Learning for Social Media Natural
Language Understanding
|
https://github.com/albertan017/HICL
|
10.1109/TNNLS.2024.3384987
|
10.1109/TNNLS.2024.3384987
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Natural language understanding (NLU) is integral to various social media
applications. However, existing NLU models rely heavily on context for semantic
learning, resulting in compromised performance when faced with short and noisy
social media content. To address this issue, we leverage in-context learning
(ICL), wherein language models learn to make inferences by conditioning on a
handful of demonstrations to enrich the context and propose a novel
hashtag-driven in-context learning (HICL) framework. Concretely, we pre-train a
model #Encoder, which employs #hashtags (user-annotated topic labels) to drive
BERT-based pre-training through contrastive learning. Our objective here is to
enable #Encoder to gain the ability to incorporate topic-related semantic
information, which allows it to retrieve topic-related posts to enrich contexts
and enhance social media NLU with noisy contexts. To further integrate the
retrieved context with the source text, we employ a gradient-based method to
identify trigger terms useful in fusing information from both sources. For
empirical studies, we collected 45M tweets to set up an in-context NLU
benchmark, and the experimental results on seven downstream tasks show that
HICL substantially advances the previous state-of-the-art results. Furthermore,
we conducted extensive analyzes and found that: (1) combining source input with
a top-retrieved post from #Encoder is more effective than using semantically
similar posts; (2) trigger words can largely benefit in merging context from
the source and retrieved posts.
|
[
{
"created": "Sat, 19 Aug 2023 11:31:45 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Tan",
"Hanzhuo",
""
],
[
"Xu",
"Chunpu",
""
],
[
"Li",
"Jing",
""
],
[
"Zhang",
"Yuqun",
""
],
[
"Fang",
"Zeyang",
""
],
[
"Chen",
"Zeyu",
""
],
[
"Lai",
"Baohua",
""
]
] |
Natural language understanding (NLU) is integral to various social media applications. However, existing NLU models rely heavily on context for semantic learning, resulting in compromised performance when faced with short and noisy social media content. To address this issue, we leverage in-context learning (ICL), wherein language models learn to make inferences by conditioning on a handful of demonstrations to enrich the context and propose a novel hashtag-driven in-context learning (HICL) framework. Concretely, we pre-train a model #Encoder, which employs #hashtags (user-annotated topic labels) to drive BERT-based pre-training through contrastive learning. Our objective here is to enable #Encoder to gain the ability to incorporate topic-related semantic information, which allows it to retrieve topic-related posts to enrich contexts and enhance social media NLU with noisy contexts. To further integrate the retrieved context with the source text, we employ a gradient-based method to identify trigger terms useful in fusing information from both sources. For empirical studies, we collected 45M tweets to set up an in-context NLU benchmark, and the experimental results on seven downstream tasks show that HICL substantially advances the previous state-of-the-art results. Furthermore, we conducted extensive analyzes and found that: (1) combining source input with a top-retrieved post from #Encoder is more effective than using semantically similar posts; (2) trigger words can largely benefit in merging context from the source and retrieved posts.
|
2309.06527
|
Alexey Zaytsev
|
Pavel Burnyshev, Elizaveta Kostenok, Alexey Zaytsev
|
Machine Translation Models Stand Strong in the Face of Adversarial
Attacks
| null |
AIST-2023
| null | null |
cs.CL cs.CR cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Adversarial attacks expose vulnerabilities of deep learning models by
introducing minor perturbations to the input, which lead to substantial
alterations in the output. Our research focuses on the impact of such
adversarial attacks on sequence-to-sequence (seq2seq) models, specifically
machine translation models. We introduce algorithms that incorporate basic text
perturbation heuristics and more advanced strategies, such as the
gradient-based attack, which utilizes a differentiable approximation of the
inherently non-differentiable translation metric. Through our investigation, we
provide evidence that machine translation models display robustness displayed
robustness against best performed known adversarial attacks, as the degree of
perturbation in the output is directly proportional to the perturbation in the
input. However, among underdogs, our attacks outperform alternatives, providing
the best relative performance. Another strong candidate is an attack based on
mixing of individual characters.
|
[
{
"created": "Sun, 10 Sep 2023 11:22:59 GMT",
"version": "v1"
}
] |
2023-09-14
|
[
[
"Burnyshev",
"Pavel",
""
],
[
"Kostenok",
"Elizaveta",
""
],
[
"Zaytsev",
"Alexey",
""
]
] |
Adversarial attacks expose vulnerabilities of deep learning models by introducing minor perturbations to the input, which lead to substantial alterations in the output. Our research focuses on the impact of such adversarial attacks on sequence-to-sequence (seq2seq) models, specifically machine translation models. We introduce algorithms that incorporate basic text perturbation heuristics and more advanced strategies, such as the gradient-based attack, which utilizes a differentiable approximation of the inherently non-differentiable translation metric. Through our investigation, we provide evidence that machine translation models display robustness displayed robustness against best performed known adversarial attacks, as the degree of perturbation in the output is directly proportional to the perturbation in the input. However, among underdogs, our attacks outperform alternatives, providing the best relative performance. Another strong candidate is an attack based on mixing of individual characters.
|
2311.15562
|
Chongyan Chen
|
Chongyan Chen, Mengchen Liu, Noel Codella, Yunsheng Li, Lu Yuan, Danna
Gurari
|
Fully Authentic Visual Question Answering Dataset from Online
Communities
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual Question Answering (VQA) entails answering questions about images. We
introduce the first VQA dataset in which all contents originate from an
authentic use case. Sourced from online question answering community forums, we
call it VQAonline. We characterize this dataset and how it relates to eight
mainstream VQA datasets. Observing that answers in our dataset tend to be much
longer (i.e., a mean of 173 words) and so incompatible with standard VQA
evaluation metrics, we instead utilize popular metrics for longer text
evaluation for evaluating six state-of-the-art VQA models on VQAonline and
report where they struggle most. Finally, we analyze which evaluation metrics
align best with human judgments. To facilitate future extensions, we
publicly-share the dataset at: https://vqaonline.github.io/.
|
[
{
"created": "Mon, 27 Nov 2023 06:19:00 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Dec 2023 14:18:39 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Mar 2024 03:50:36 GMT",
"version": "v3"
},
{
"created": "Wed, 17 Jul 2024 07:28:19 GMT",
"version": "v4"
}
] |
2024-07-18
|
[
[
"Chen",
"Chongyan",
""
],
[
"Liu",
"Mengchen",
""
],
[
"Codella",
"Noel",
""
],
[
"Li",
"Yunsheng",
""
],
[
"Yuan",
"Lu",
""
],
[
"Gurari",
"Danna",
""
]
] |
Visual Question Answering (VQA) entails answering questions about images. We introduce the first VQA dataset in which all contents originate from an authentic use case. Sourced from online question answering community forums, we call it VQAonline. We characterize this dataset and how it relates to eight mainstream VQA datasets. Observing that answers in our dataset tend to be much longer (i.e., a mean of 173 words) and so incompatible with standard VQA evaluation metrics, we instead utilize popular metrics for longer text evaluation for evaluating six state-of-the-art VQA models on VQAonline and report where they struggle most. Finally, we analyze which evaluation metrics align best with human judgments. To facilitate future extensions, we publicly-share the dataset at: https://vqaonline.github.io/.
|
1209.4532
|
Sachin Lakra
|
T.V. Prasad, Sachin Lakra, G. Ramakrishna
|
Applicability of Crisp and Fuzzy Logic in Intelligent Response
Generation
|
4 pages, 1 table
|
Published in proceedings of National Conference on Information,
Computational Technologies and e-Governance 2010, Alwar, Rajasthan, India,
19-20 November, 2010, pp. 137-139
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper discusses the merits and demerits of crisp logic and fuzzy logic
with respect to their applicability in intelligent response generation by a
human being and by a robot. Intelligent systems must have the capability of
taking decisions that are wise and handle situations intelligently. A direct
relationship exists between the level of perfection in handling a situation and
the level of completeness of the available knowledge or information or data
required to handle the situation. The paper concludes that the use of crisp
logic with complete knowledge leads to perfection in handling situations
whereas fuzzy logic can handle situations imperfectly only. However, in the
light of availability of incomplete knowledge fuzzy theory is more effective
but may be disadvantageous as compared to crisp logic.
|
[
{
"created": "Thu, 20 Sep 2012 14:00:06 GMT",
"version": "v1"
}
] |
2012-09-21
|
[
[
"Prasad",
"T. V.",
""
],
[
"Lakra",
"Sachin",
""
],
[
"Ramakrishna",
"G.",
""
]
] |
This paper discusses the merits and demerits of crisp logic and fuzzy logic with respect to their applicability in intelligent response generation by a human being and by a robot. Intelligent systems must have the capability of taking decisions that are wise and handle situations intelligently. A direct relationship exists between the level of perfection in handling a situation and the level of completeness of the available knowledge or information or data required to handle the situation. The paper concludes that the use of crisp logic with complete knowledge leads to perfection in handling situations whereas fuzzy logic can handle situations imperfectly only. However, in the light of availability of incomplete knowledge fuzzy theory is more effective but may be disadvantageous as compared to crisp logic.
|
2011.07355
|
Jamie Hayes
|
Jamie Hayes, Krishnamurthy (Dj) Dvijotham, Yutian Chen, Sander
Dieleman, Pushmeet Kohli, Norman Casagrande
|
Towards transformation-resilient provenance detection of digital media
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advancements in deep generative models have made it possible to synthesize
images, videos and audio signals that are difficult to distinguish from natural
signals, creating opportunities for potential abuse of these capabilities. This
motivates the problem of tracking the provenance of signals, i.e., being able
to determine the original source of a signal. Watermarking the signal at the
time of signal creation is a potential solution, but current techniques are
brittle and watermark detection mechanisms can easily be bypassed by applying
post-processing transformations (cropping images, shifting pitch in the audio
etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via
Adversarial Training), a framework for learning transformation-resilient
watermark detectors that are able to detect a watermark even after a signal has
been through several post-processing transformations. Our detection method can
be applied to domains with continuous data representations such as images,
videos or sound signals. Experiments on watermarking image and audio signals
show that our method can reliably detect the provenance of a signal, even if it
has been through several post-processing transformations, and improve upon
related work in this setting. Furthermore, we show that for specific kinds of
transformations (perturbations bounded in the L2 norm), we can even get formal
guarantees on the ability of our model to detect the watermark. We provide
qualitative examples of watermarked image and audio samples in
https://drive.google.com/open?id=1-yZ0WIGNu2Iez7UpXBjtjVgZu3jJjFga.
|
[
{
"created": "Sat, 14 Nov 2020 18:08:07 GMT",
"version": "v1"
}
] |
2020-11-17
|
[
[
"Hayes",
"Jamie",
"",
"Dj"
],
[
"Krishnamurthy",
"",
"",
"Dj"
],
[
"Dvijotham",
"",
""
],
[
"Chen",
"Yutian",
""
],
[
"Dieleman",
"Sander",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Casagrande",
"Norman",
""
]
] |
Advancements in deep generative models have made it possible to synthesize images, videos and audio signals that are difficult to distinguish from natural signals, creating opportunities for potential abuse of these capabilities. This motivates the problem of tracking the provenance of signals, i.e., being able to determine the original source of a signal. Watermarking the signal at the time of signal creation is a potential solution, but current techniques are brittle and watermark detection mechanisms can easily be bypassed by applying post-processing transformations (cropping images, shifting pitch in the audio etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations. Our detection method can be applied to domains with continuous data representations such as images, videos or sound signals. Experiments on watermarking image and audio signals show that our method can reliably detect the provenance of a signal, even if it has been through several post-processing transformations, and improve upon related work in this setting. Furthermore, we show that for specific kinds of transformations (perturbations bounded in the L2 norm), we can even get formal guarantees on the ability of our model to detect the watermark. We provide qualitative examples of watermarked image and audio samples in https://drive.google.com/open?id=1-yZ0WIGNu2Iez7UpXBjtjVgZu3jJjFga.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.