id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2311.13845 | Vivien Cabannes | Vivien Cabannes, Charles Arnal | Touring sampling with pushforward maps | 5 pages | ICASSP, 2024 | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | The number of sampling methods could be daunting for a practitioner looking
to cast powerful machine learning methods to their specific problem. This paper
takes a theoretical stance to review and organize many sampling approaches in
the ``generative modeling'' setting, where one wants to generate new data that
are similar to some training examples. By revealing links between existing
methods, it might prove useful to overcome some of the current challenges in
sampling with diffusion models, such as long inference time due to diffusion
simulation, or the lack of diversity in generated samples.
| [
{
"created": "Thu, 23 Nov 2023 08:23:43 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 18:17:40 GMT",
"version": "v2"
}
] | 2024-02-21 | [
[
"Cabannes",
"Vivien",
""
],
[
"Arnal",
"Charles",
""
]
] | The number of sampling methods can be daunting for a practitioner looking to apply powerful machine learning methods to their specific problem. This paper takes a theoretical stance to review and organize many sampling approaches in the ``generative modeling'' setting, where one wants to generate new data that are similar to some training examples. By revealing links between existing methods, it might prove useful to overcome some of the current challenges in sampling with diffusion models, such as long inference time due to diffusion simulation, or the lack of diversity in generated samples. |
1408.4797 | Bogdan Pasca | Adrian J. Chung, Kathryn Cobden, Mark Jervis, Martin Langhammer,
Bogdan Pasca | Tools and Techniques for Efficient High-Level System Design on FPGAs | Presented at First International Workshop on FPGAs for Software
Programmers (FSP 2014) (arXiv:1408.4423) | null | null | FSP/2014/01 | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order for FPGAs to be successful outside traditional markets, tools which
enable software programmers to achieve high levels of system performance while
abstracting away the FPGA-specific details are needed. DSPB Builder Advanced
(DSPBA) is one such tool. DSPBA provides model-based design environment using
Matlab's Simulink frontend that decouples the fully-algorithmic design
description from the details of FPGA system generation. DSPBA offers several
levels of debugging: from Simulink scopes to bit-accurate-simulation and silver
reference models. It also offers the most comprehensive set of fixed-point,
floating-point and signal-processing IPs available today. The combination of 7
floating-point precisions, fused-datapath support, custom operator support and
automated folding allows exploring the best tradeoffs between accuracy, size
and throughput. The DSPBA backend protects users from the details of
device-dependent operator mapping offering both efficiency and prompt support
for new devices and features such as the Arria10 floating-point cores. The
collection of features available in DSPBA allows both unexperienced and expert
users to efficiently migrate performance-crucial systems to the FPGA
architecture.
| [
{
"created": "Wed, 20 Aug 2014 16:08:57 GMT",
"version": "v1"
}
] | 2014-08-22 | [
[
"Chung",
"Adrian J.",
""
],
[
"Cobden",
"Kathryn",
""
],
[
"Jervis",
"Mark",
""
],
[
"Langhammer",
"Martin",
""
],
[
"Pasca",
"Bogdan",
""
]
] | In order for FPGAs to be successful outside traditional markets, tools which enable software programmers to achieve high levels of system performance while abstracting away the FPGA-specific details are needed. DSPB Builder Advanced (DSPBA) is one such tool. DSPBA provides a model-based design environment using Matlab's Simulink frontend that decouples the fully-algorithmic design description from the details of FPGA system generation. DSPBA offers several levels of debugging: from Simulink scopes to bit-accurate simulation and silver reference models. It also offers the most comprehensive set of fixed-point, floating-point and signal-processing IPs available today. The combination of 7 floating-point precisions, fused-datapath support, custom operator support and automated folding allows exploring the best tradeoffs between accuracy, size and throughput. The DSPBA backend protects users from the details of device-dependent operator mapping, offering both efficiency and prompt support for new devices and features such as the Arria10 floating-point cores. The collection of features available in DSPBA allows both inexperienced and expert users to efficiently migrate performance-critical systems to the FPGA architecture. |
1711.08585 | Mir Rayat Imtiaz Hossain | Mir Rayat Imtiaz Hossain, James J. Little | Exploiting temporal information for 3D pose estimation | null | null | 10.1007/978-3-030-01249-6_5 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we address the problem of 3D human pose estimation from a
sequence of 2D human poses. Although the recent success of deep networks has
led many state-of-the-art methods for 3D pose estimation to train deep networks
end-to-end to predict from images directly, the top-performing approaches have
shown the effectiveness of dividing the task of 3D pose estimation into two
steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from
images and then mapping them into 3D space. They also showed that a
low-dimensional representation like 2D locations of a set of joints can be
discriminative enough to estimate 3D pose with high accuracy. However,
estimation of 3D pose for individual frames leads to temporally incoherent
estimates due to independent error in each frame causing jitter. Therefore, in
this work we utilize the temporal information across a sequence of 2D joint
locations to estimate a sequence of 3D poses. We designed a
sequence-to-sequence network composed of layer-normalized LSTM units with
shortcut connections connecting the input to the output on the decoder side and
imposed temporal smoothness constraint during training. We found that the
knowledge of temporal consistency improves the best reported result on
Human3.6M dataset by approximately $12.2\%$ and helps our network to recover
temporally consistent 3D poses over a sequence of images even when the 2D pose
detector fails.
| [
{
"created": "Thu, 23 Nov 2017 06:20:51 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Apr 2018 04:49:22 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Apr 2018 13:54:34 GMT",
"version": "v3"
},
{
"created": "Wed, 12 Sep 2018 05:15:11 GMT",
"version": "v4"
}
] | 2018-10-23 | [
[
"Hossain",
"Mir Rayat Imtiaz",
""
],
[
"Little",
"James J.",
""
]
] | In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent errors in each frame, causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed a temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on the Human3.6M dataset by approximately $12.2\%$ and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails. |
1402.6516 | Greg Dubbin | Greg Dubbin and Phil Blunsom | Modelling the Lexicon in Unsupervised Part of Speech Induction | To be presented at the 14th Conference of the European Chapter of the
Association for Computational Linguistics | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically inducing the syntactic part-of-speech categories for words in
text is a fundamental task in Computational Linguistics. While the performance
of unsupervised tagging models has been slowly improving, current
state-of-the-art systems make the obviously incorrect assumption that all
tokens of a given word type must share a single part-of-speech tag. This
one-tag-per-type heuristic counters the tendency of Hidden Markov Model based
taggers to over generate tags for a given word type. However, it is clearly
incompatible with basic syntactic theory. In this paper we extend a
state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model
of the lexicon. In doing so we are able to incorporate a soft bias towards
inducing few tags per type. We develop a particle filter for drawing samples
from the posterior of our model and present empirical results that show that
our model is competitive with and faster than the state-of-the-art without
making any unrealistic restrictions.
| [
{
"created": "Wed, 26 Feb 2014 12:37:04 GMT",
"version": "v1"
}
] | 2014-02-27 | [
[
"Dubbin",
"Greg",
""
],
[
"Blunsom",
"Phil",
""
]
] | Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model-based taggers to overgenerate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results that show that our model is competitive with and faster than the state-of-the-art without making any unrealistic restrictions. |
1904.11908 | Mikaela Ngamboe | Ngambo\'e Mikaela, Berthier Paul, Ammari Nader, Dyrda Katia, Fernandez
Jos\'e | Risk Assessment of Cyber Attacks on Telemetry Enabled Cardiac
Implantable Electronic Devices (CIED) | 60 pages, 4 figures, 9 tables | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cardiac Implantable Electronic Devices (CIED) are fast becoming a fundamental
tool of advanced medical technology and a key instrument in saving lives.
Despite their importance, previous studies have shown that CIED are not
completely secure against cyber attacks and especially those who are exploiting
their Radio Frequency (RF) communication interfaces. Furthermore, the telemetry
capabilities and IP connectivity of the external devices interacting with the
CIED are creating other entry points that may be used by attackers. In this
paper, we carry out a realistic risk analysis of such attacks. This analysis is
composed of three parts. First, an actor-based analysis to determine the impact
of the attacks. Second, a scenario-based analysis to determine the probability
of occurrence of each threat. Finally, a combined analysis to determine which
attack outcomes (i.e. attack goals) are riskiest and to identify the
vulnerabilities that constitute the highest overall risk exposure. The
conducted study showed that the vulnerabilities associated with the RF
interface of CIED represent an acceptable risk. In contrast, the network and
internet connectivity of external devices represent an important potential
risk. The previously described findings suggest that the highest risk is
associated with external systems and not the CIED itself.
| [
{
"created": "Fri, 26 Apr 2019 15:50:24 GMT",
"version": "v1"
}
] | 2019-04-29 | [
[
"Mikaela",
"Ngamboé",
""
],
[
"Paul",
"Berthier",
""
],
[
"Nader",
"Ammari",
""
],
[
"Katia",
"Dyrda",
""
],
[
"José",
"Fernandez",
""
]
] | Cardiac Implantable Electronic Devices (CIED) are fast becoming a fundamental tool of advanced medical technology and a key instrument in saving lives. Despite their importance, previous studies have shown that CIED are not completely secure against cyber attacks, especially those exploiting their Radio Frequency (RF) communication interfaces. Furthermore, the telemetry capabilities and IP connectivity of the external devices interacting with the CIED are creating other entry points that may be used by attackers. In this paper, we carry out a realistic risk analysis of such attacks. This analysis is composed of three parts. First, an actor-based analysis to determine the impact of the attacks. Second, a scenario-based analysis to determine the probability of occurrence of each threat. Finally, a combined analysis to determine which attack outcomes (i.e. attack goals) are riskiest and to identify the vulnerabilities that constitute the highest overall risk exposure. The conducted study showed that the vulnerabilities associated with the RF interface of CIED represent an acceptable risk. In contrast, the network and internet connectivity of external devices represent an important potential risk. These findings suggest that the highest risk is associated with external systems and not the CIED itself. |
2105.07452 | Bai Li | Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, Frank Rudzicz | How is BERT surprised? Layerwise detection of linguistic anomalies | ACL 2021 (Long Paper) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer language models have shown remarkable ability in detecting when a
word is anomalous in context, but likelihood scores offer no information about
the cause of the anomaly. In this work, we use Gaussian models for density
estimation at intermediate layers of three language models (BERT, RoBERTa, and
XLNet), and evaluate our method on BLiMP, a grammaticality judgement benchmark.
In lower layers, surprisal is highly correlated to low token frequency, but
this correlation diminishes in upper layers. Next, we gather datasets of
morphosyntactic, semantic, and commonsense anomalies from psycholinguistic
studies; we find that the best performing model RoBERTa exhibits surprisal in
earlier layers when the anomaly is morphosyntactic than when it is semantic,
while commonsense anomalies do not exhibit surprisal at any intermediate layer.
These results suggest that language models employ separate mechanisms to detect
different types of linguistic anomalies.
| [
{
"created": "Sun, 16 May 2021 15:20:36 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Li",
"Bai",
""
],
[
"Zhu",
"Zining",
""
],
[
"Thomas",
"Guillaume",
""
],
[
"Xu",
"Yang",
""
],
[
"Rudzicz",
"Frank",
""
]
] | Transformer language models have shown remarkable ability in detecting when a word is anomalous in context, but likelihood scores offer no information about the cause of the anomaly. In this work, we use Gaussian models for density estimation at intermediate layers of three language models (BERT, RoBERTa, and XLNet), and evaluate our method on BLiMP, a grammaticality judgement benchmark. In lower layers, surprisal is highly correlated with low token frequency, but this correlation diminishes in upper layers. Next, we gather datasets of morphosyntactic, semantic, and commonsense anomalies from psycholinguistic studies; we find that the best-performing model, RoBERTa, exhibits surprisal in earlier layers when the anomaly is morphosyntactic than when it is semantic, while commonsense anomalies do not exhibit surprisal at any intermediate layer. These results suggest that language models employ separate mechanisms to detect different types of linguistic anomalies. |
2407.20141 | Zitong Yu | Jing Yang, Runping Xi, Yingxin Lai, Xun Lin, Zitong Yu | DDAP: Dual-Domain Anti-Personalization against Text-to-Image Diffusion
Models | Accepted by IJCB 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion-based personalized visual content generation technologies have
achieved significant breakthroughs, allowing for the creation of specific
objects by just learning from a few reference photos. However, when misused to
fabricate fake news or unsettling content targeting individuals, these
technologies could cause considerable societal harm. To address this problem,
current methods generate adversarial samples by adversarially maximizing the
training loss, thereby disrupting the output of any personalized generation
model trained with these samples. However, the existing methods fail to achieve
effective defense and maintain stealthiness, as they overlook the intrinsic
properties of diffusion models. In this paper, we introduce a novel Dual-Domain
Anti-Personalization framework (DDAP). Specifically, we have developed Spatial
Perturbation Learning (SPL) by exploiting the fixed and perturbation-sensitive
nature of the image encoder in personalized generation. Subsequently, we have
designed a Frequency Perturbation Learning (FPL) method that utilizes the
characteristics of diffusion models in the frequency domain. The SPL disrupts
the overall texture of the generated images, while the FPL focuses on image
details. By alternating between these two methods, we construct the DDAP
framework, effectively harnessing the strengths of both domains. To further
enhance the visual quality of the adversarial samples, we design a localization
module to accurately capture attentive areas while ensuring the effectiveness
of the attack and avoiding unnecessary disturbances in the background.
Extensive experiments on facial benchmarks have shown that the proposed DDAP
enhances the disruption of personalized generation models while also
maintaining high quality in adversarial samples, making it more effective in
protecting privacy in practical applications.
| [
{
"created": "Mon, 29 Jul 2024 16:11:21 GMT",
"version": "v1"
}
] | 2024-07-30 | [
[
"Yang",
"Jing",
""
],
[
"Xi",
"Runping",
""
],
[
"Lai",
"Yingxin",
""
],
[
"Lin",
"Xun",
""
],
[
"Yu",
"Zitong",
""
]
] | Diffusion-based personalized visual content generation technologies have achieved significant breakthroughs, allowing for the creation of specific objects by just learning from a few reference photos. However, when misused to fabricate fake news or unsettling content targeting individuals, these technologies could cause considerable societal harm. To address this problem, current methods generate adversarial samples by adversarially maximizing the training loss, thereby disrupting the output of any personalized generation model trained with these samples. However, the existing methods fail to achieve effective defense and maintain stealthiness, as they overlook the intrinsic properties of diffusion models. In this paper, we introduce a novel Dual-Domain Anti-Personalization framework (DDAP). Specifically, we have developed Spatial Perturbation Learning (SPL) by exploiting the fixed and perturbation-sensitive nature of the image encoder in personalized generation. Subsequently, we have designed a Frequency Perturbation Learning (FPL) method that utilizes the characteristics of diffusion models in the frequency domain. The SPL disrupts the overall texture of the generated images, while the FPL focuses on image details. By alternating between these two methods, we construct the DDAP framework, effectively harnessing the strengths of both domains. To further enhance the visual quality of the adversarial samples, we design a localization module to accurately capture attentive areas while ensuring the effectiveness of the attack and avoiding unnecessary disturbances in the background. Extensive experiments on facial benchmarks have shown that the proposed DDAP enhances the disruption of personalized generation models while also maintaining high quality in adversarial samples, making it more effective in protecting privacy in practical applications. |
1603.08026 | Raj Jain | Jianli Pan, Shan Zhi Chen, Raj Jain, Subharthi Paul | Energy Sensing and Monitoring Framework with an Integrated Communication
Backbone in the Energy Efficient Intelligent Buildings | null | Applied Mechanics and Materials (Volumes 303 - 306), pp.
1460-1464, February, 2013 | 10.4028/www.scientific.net/AMM.303-306.1460 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building environments are significant sources of global energy consumption.
To create energy efficient buildings, the first step is to sense and monitor
all the energy-consuming appliances in the buildings and record all the energy
consumption information. After that, appropriate energy saving policies can be
decided and the instructions can be sent to the control devices to apply the
energy saving adjustments. To do that, in-building two-way communication
networks are needed to connect all the sensors to collect information as well
as to send control instructions. However, most of the current devices are
provided by separate manufacturers and with separate network infrastructures
and so there is not much integration and interaction among different
subsystems. In this paper, we envision a new energy sensing and monitoring
framework with integrated communication backbone in the intelligent building
environments. Specifically, through comprehensive comparisons and
investigations, we study different candidate communicating media and protocols
like wireline, wireless, and power-line communications technologies that
potentially can be used in the intelligent buildings to realize the goals of
coordination, integration, and energy efficiency. Also, we propose an extension
"smart box" for integration of the devices before the maturity of the
standardization process. Cloud computing and smart phone technologies are also
introduced to realize the goals of improving energy efficiency and promote
global sustainability.
| [
{
"created": "Fri, 25 Mar 2016 20:41:09 GMT",
"version": "v1"
}
] | 2016-03-29 | [
[
"Pan",
"Jianli",
""
],
[
"Chen",
"Shan Zhi",
""
],
[
"Jain",
"Raj",
""
],
[
"Paul",
"Subharthi",
""
]
] | Building environments are significant sources of global energy consumption. To create energy-efficient buildings, the first step is to sense and monitor all the energy-consuming appliances in the buildings and record all the energy consumption information. After that, appropriate energy saving policies can be decided and the instructions can be sent to the control devices to apply the energy saving adjustments. To do that, in-building two-way communication networks are needed to connect all the sensors to collect information as well as to send control instructions. However, most of the current devices are provided by separate manufacturers with separate network infrastructures, and so there is not much integration and interaction among different subsystems. In this paper, we envision a new energy sensing and monitoring framework with an integrated communication backbone in intelligent building environments. Specifically, through comprehensive comparisons and investigations, we study different candidate communication media and protocols like wireline, wireless, and power-line communications technologies that potentially can be used in intelligent buildings to realize the goals of coordination, integration, and energy efficiency. Also, we propose an extension "smart box" for integration of the devices before the maturity of the standardization process. Cloud computing and smart phone technologies are also introduced to realize the goals of improving energy efficiency and promoting global sustainability. |
2207.12074 | Xinxing Chen | Xinxing Chen, Chuheng Chen, Yuxuan Wang, Bowen Yang, Teng Ma, Yuquan
Leng, Chenglong Fu | A Piecewise Monotonic Gait Phase Estimation Model for Controlling a
Powered Transfemoral Prosthesis in Various Locomotion Modes | null | in IEEE Robotics and Automation Letters, vol. 7, no. 4, pp.
9549-9556, Oct. 2022 | 10.1109/LRA.2022.3191945 | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Gait phase-based control is a trending research topic for walking-aid robots,
especially robotic lower-limb prostheses. Gait phase estimation is a challenge
for gait phase-based control. Previous researches used the integration or the
differential of the human's thigh angle to estimate the gait phase, but
accumulative measurement errors and noises can affect the estimation results.
In this paper, a more robust gait phase estimation method is proposed using a
unified form of piecewise monotonic gait phase-thigh angle models for various
locomotion modes. The gait phase is estimated from only the thigh angle, which
is a stable variable and avoids phase drifting. A Kalman filter-based smoother
is designed to further suppress the mutations of the estimated gait phase.
Based on the proposed gait phase estimation method, a gait phase-based joint
angle tracking controller is designed for a transfemoral prosthesis. The
proposed gait estimation method, the gait phase smoother, and the controller
are evaluated through offline analysis on walking data in various locomotion
modes. And the real-time performance of the gait phase-based controller is
validated in an experiment on the transfemoral prosthesis.
| [
{
"created": "Mon, 25 Jul 2022 11:47:12 GMT",
"version": "v1"
}
] | 2022-08-02 | [
[
"Chen",
"Xinxing",
""
],
[
"Chen",
"Chuheng",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Yang",
"Bowen",
""
],
[
"Ma",
"Teng",
""
],
[
"Leng",
"Yuquan",
""
],
[
"Fu",
"Chenglong",
""
]
] | Gait phase-based control is a trending research topic for walking-aid robots, especially robotic lower-limb prostheses. Gait phase estimation is a challenge for gait phase-based control. Previous studies used the integration or the differential of the human's thigh angle to estimate the gait phase, but accumulated measurement errors and noise can affect the estimation results. In this paper, a more robust gait phase estimation method is proposed using a unified form of piecewise monotonic gait phase-thigh angle models for various locomotion modes. The gait phase is estimated from only the thigh angle, which is a stable variable and avoids phase drifting. A Kalman filter-based smoother is designed to further suppress abrupt changes in the estimated gait phase. Based on the proposed gait phase estimation method, a gait phase-based joint angle tracking controller is designed for a transfemoral prosthesis. The proposed gait estimation method, the gait phase smoother, and the controller are evaluated through offline analysis on walking data in various locomotion modes, and the real-time performance of the gait phase-based controller is validated in an experiment on the transfemoral prosthesis. |
1907.03112 | Lena Shakurova | Lena Shakurova, Beata Nyari, Chao Li, Mihai Rotaru | Best Practices for Learning Domain-Specific Cross-Lingual Embeddings | Proceedings of the 4th Workshop on Representation Learning for NLP | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-lingual embeddings aim to represent words in multiple languages in a
shared vector space by capturing semantic similarities across languages. They
are a crucial component for scaling tasks to multiple languages by transferring
knowledge from languages with rich resources to low-resource languages. A
common approach to learning cross-lingual embeddings is to train monolingual
embeddings separately for each language and learn a linear projection from the
monolingual spaces into a shared space, where the mapping relies on a small
seed dictionary. While there are high-quality generic seed dictionaries and
pre-trained cross-lingual embeddings available for many language pairs, there
is little research on how they perform on specialised tasks. In this paper, we
investigate the best practices for constructing the seed dictionary for a
specific domain. We evaluate the embeddings on the sequence labelling task of
Curriculum Vitae parsing and show that the size of a bilingual dictionary, the
frequency of the dictionary words in the domain corpora and the source of data
(task-specific vs generic) influence the performance. We also show that the
less training data is available in the low-resource language, the more the
construction of the bilingual dictionary matters, and demonstrate that some of
the choices are crucial in the zero-shot transfer learning case.
| [
{
"created": "Sat, 6 Jul 2019 10:45:45 GMT",
"version": "v1"
}
] | 2019-07-10 | [
[
"Shakurova",
"Lena",
""
],
[
"Nyari",
"Beata",
""
],
[
"Li",
"Chao",
""
],
[
"Rotaru",
"Mihai",
""
]
] | Cross-lingual embeddings aim to represent words in multiple languages in a shared vector space by capturing semantic similarities across languages. They are a crucial component for scaling tasks to multiple languages by transferring knowledge from languages with rich resources to low-resource languages. A common approach to learning cross-lingual embeddings is to train monolingual embeddings separately for each language and learn a linear projection from the monolingual spaces into a shared space, where the mapping relies on a small seed dictionary. While there are high-quality generic seed dictionaries and pre-trained cross-lingual embeddings available for many language pairs, there is little research on how they perform on specialised tasks. In this paper, we investigate the best practices for constructing the seed dictionary for a specific domain. We evaluate the embeddings on the sequence labelling task of Curriculum Vitae parsing and show that the size of a bilingual dictionary, the frequency of the dictionary words in the domain corpora and the source of data (task-specific vs generic) influence the performance. We also show that the less training data is available in the low-resource language, the more the construction of the bilingual dictionary matters, and demonstrate that some of the choices are crucial in the zero-shot transfer learning case. |
1605.01776 | Mohammadhussein Rafieisakhaei | Mohammadhussein Rafieisakhaei, Suman Chakravorty and P.R. Kumar | Non-Gaussian SLAP: Simultaneous Localization and Planning Under
Non-Gaussian Uncertainty in Static and Dynamic Environments | 10 pages | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous Localization and Planning (SLAP) under process and measurement
uncertainties is a challenge. It involves solving a stochastic control problem
modeled as a Partially Observed Markov Decision Process (POMDP) in a general
framework. For a convex environment, we propose an optimization-based open-loop
optimal control problem coupled with receding horizon control strategy to plan
for high quality trajectories along which the uncertainty of the state
localization is reduced while the system reaches to a goal state with minimum
control effort. In a static environment with non-convex state constraints, the
optimization is modified by defining barrier functions to obtain collision-free
paths while maintaining the previous goals. By initializing the optimization
with trajectories in different homotopy classes and comparing the resultant
costs, we improve the quality of the solution in the presence of action and
measurement uncertainties. In dynamic environments with time-varying
constraints such as moving obstacles or banned areas, the approach is extended
to find collision-free trajectories. In this paper, the underlying spaces are
continuous, and beliefs are non-Gaussian. Without obstacles, the optimization
is a globally convex problem, while in the presence of obstacles it becomes
locally convex. We demonstrate the performance of the method on different
scenarios.
| [
{
"created": "Thu, 5 May 2016 22:14:35 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Aug 2016 01:14:00 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Aug 2016 18:24:44 GMT",
"version": "v3"
}
] | 2016-08-12 | [
[
"Rafieisakhaei",
"Mohammadhussein",
""
],
[
"Chakravorty",
"Suman",
""
],
[
"Kumar",
"P. R.",
""
]
] | Simultaneous Localization and Planning (SLAP) under process and measurement uncertainties is a challenge. It involves solving a stochastic control problem modeled as a Partially Observed Markov Decision Process (POMDP) in a general framework. For a convex environment, we propose an optimization-based open-loop optimal control problem coupled with receding horizon control strategy to plan for high quality trajectories along which the uncertainty of the state localization is reduced while the system reaches to a goal state with minimum control effort. In a static environment with non-convex state constraints, the optimization is modified by defining barrier functions to obtain collision-free paths while maintaining the previous goals. By initializing the optimization with trajectories in different homotopy classes and comparing the resultant costs, we improve the quality of the solution in the presence of action and measurement uncertainties. In dynamic environments with time-varying constraints such as moving obstacles or banned areas, the approach is extended to find collision-free trajectories. In this paper, the underlying spaces are continuous, and beliefs are non-Gaussian. Without obstacles, the optimization is a globally convex problem, while in the presence of obstacles it becomes locally convex. We demonstrate the performance of the method on different scenarios. |
2307.08614 | Khayyam Salehi | Mohammadsadegh Mohagheghi, Khayyam Salehi | Splitter Orderings for Probabilistic Bisimulation | null | null | null | null | cs.PF cs.LO cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model checking has been proposed as a formal verification approach for
analyzing computer-based and cyber-physical systems. The state space explosion
problem is the main obstacle for applying this approach for sophisticated
systems. Bisimulation minimization is a prominent method for reducing the
number of states in a labeled transition system and is used to alleviate the
challenges of the state space explosion problem. For systems with stochastic
behaviors, probabilistic bisimulation is used to reduce a given model to its
minimized equivalent one. In recent years, several techniques have been
proposed to reduce the time complexity of the iterative methods for computing
probabilistic bisimulation of stochastic systems with nondeterministic
behaviors. In this paper, we propose several techniques to accelerate iterative
processes to partition the state space of a given probabilistic model to its
bisimulation classes. The first technique applies two ordering heuristics for
choosing splitter blocks. The second technique uses hash tables to reduce the
running time and the average time complexity of the standard iterative method.
The proposed approaches are implemented and run on several conventional case
studies and reduce the running time by one order of magnitude on average.
| [
{
"created": "Mon, 17 Jul 2023 16:30:19 GMT",
"version": "v1"
}
] | 2023-07-18 | [
[
"Mohagheghi",
"Mohammadsadegh",
""
],
[
"Salehi",
"Khayyam",
""
]
] | Model checking has been proposed as a formal verification approach for analyzing computer-based and cyber-physical systems. The state space explosion problem is the main obstacle for applying this approach for sophisticated systems. Bisimulation minimization is a prominent method for reducing the number of states in a labeled transition system and is used to alleviate the challenges of the state space explosion problem. For systems with stochastic behaviors, probabilistic bisimulation is used to reduce a given model to its minimized equivalent one. In recent years, several techniques have been proposed to reduce the time complexity of the iterative methods for computing probabilistic bisimulation of stochastic systems with nondeterministic behaviors. In this paper, we propose several techniques to accelerate iterative processes to partition the state space of a given probabilistic model to its bisimulation classes. The first technique applies two ordering heuristics for choosing splitter blocks. The second technique uses hash tables to reduce the running time and the average time complexity of the standard iterative method. The proposed approaches are implemented and run on several conventional case studies and reduce the running time by one order of magnitude on average. |
1703.04367 | Stefan Jaax | Michael Blondin, Javier Esparza, Stefan Jaax, Philipp J. Meyer | Towards Efficient Verification of Population Protocols | 29 pages, 1 figure | null | 10.1145/3087801.3087816 | null | cs.LO cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Population protocols are a well established model of computation by
anonymous, identical finite state agents. A protocol is well-specified if from
every initial configuration, all fair executions reach a common consensus. The
central verification question for population protocols is the
well-specification problem: deciding if a given protocol is well-specified.
Esparza et al. have recently shown that this problem is decidable, but with
very high complexity: it is at least as hard as the Petri net reachability
problem, which is EXPSPACE-hard, and for which only algorithms of non-primitive
recursive complexity are currently known.
In this paper we introduce the class WS3 of well-specified strongly-silent
protocols and we prove that it is suitable for automatic verification. More
precisely, we show that WS3 has the same computational power as general
well-specified protocols, and captures standard protocols from the literature.
Moreover, we show that the membership problem for WS3 reduces to solving
boolean combinations of linear constraints over N. This allowed us to develop
the first software able to automatically prove well-specification for all of
the infinitely many possible inputs.
| [
{
"created": "Mon, 13 Mar 2017 13:03:47 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Aug 2017 09:31:00 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Jul 2018 10:44:35 GMT",
"version": "v3"
}
] | 2018-07-31 | [
[
"Blondin",
"Michael",
""
],
[
"Esparza",
"Javier",
""
],
[
"Jaax",
"Stefan",
""
],
[
"Meyer",
"Philipp J.",
""
]
] | Population protocols are a well established model of computation by anonymous, identical finite state agents. A protocol is well-specified if from every initial configuration, all fair executions reach a common consensus. The central verification question for population protocols is the well-specification problem: deciding if a given protocol is well-specified. Esparza et al. have recently shown that this problem is decidable, but with very high complexity: it is at least as hard as the Petri net reachability problem, which is EXPSPACE-hard, and for which only algorithms of non-primitive recursive complexity are currently known. In this paper we introduce the class WS3 of well-specified strongly-silent protocols and we prove that it is suitable for automatic verification. More precisely, we show that WS3 has the same computational power as general well-specified protocols, and captures standard protocols from the literature. Moreover, we show that the membership problem for WS3 reduces to solving boolean combinations of linear constraints over N. This allowed us to develop the first software able to automatically prove well-specification for all of the infinitely many possible inputs. |
1909.03839 | Haoyue Bai | Haoyue Bai, Song Wen, S.-H. Gary Chan | Crowd Counting on Images with Scale Variation and Isolated Clusters | Accepted at International Conference on Computer Vision (ICCV) 2019
Workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowd counting is to estimate the number of objects (e.g., people or
vehicles) in an image of unconstrained congested scenes. Designing a general
crowd counting algorithm applicable to a wide range of crowd images is
challenging, mainly due to the possibly large variation in object scales and
the presence of many isolated small clusters. Previous approaches based on
convolution operations with multi-branch architecture are effective for only
some narrow bands of scales and have not captured the long-range contextual
relationship due to isolated clustering. To address that, we propose SACANet, a
novel scale-adaptive long-range context-aware network for crowd counting.
SACANet consists of three major modules: the pyramid contextual module which
extracts long-range contextual information and enlarges the receptive field, a
scale-adaptive self-attention multi-branch module to attain high scale
sensitivity and detection accuracy of isolated clusters, and a hierarchical
fusion module to fuse multi-level self-attention features. With group
normalization, SACANet achieves better optimality in the training process. We
have conducted extensive experiments using the VisDrone2019 People dataset, the
VisDrone2019 Vehicle dataset, and some other challenging benchmarks. As
compared with the state-of-the-art methods, SACANet is shown to be effective,
especially for extremely crowded conditions with diverse scales and scattered
clusters, and achieves much lower MAE as compared with baselines.
| [
{
"created": "Mon, 9 Sep 2019 13:17:26 GMT",
"version": "v1"
}
] | 2019-09-10 | [
[
"Bai",
"Haoyue",
""
],
[
"Wen",
"Song",
""
],
[
"Chan",
"S. -H. Gary",
""
]
] | Crowd counting is to estimate the number of objects (e.g., people or vehicles) in an image of unconstrained congested scenes. Designing a general crowd counting algorithm applicable to a wide range of crowd images is challenging, mainly due to the possibly large variation in object scales and the presence of many isolated small clusters. Previous approaches based on convolution operations with multi-branch architecture are effective for only some narrow bands of scales and have not captured the long-range contextual relationship due to isolated clustering. To address that, we propose SACANet, a novel scale-adaptive long-range context-aware network for crowd counting. SACANet consists of three major modules: the pyramid contextual module which extracts long-range contextual information and enlarges the receptive field, a scale-adaptive self-attention multi-branch module to attain high scale sensitivity and detection accuracy of isolated clusters, and a hierarchical fusion module to fuse multi-level self-attention features. With group normalization, SACANet achieves better optimality in the training process. We have conducted extensive experiments using the VisDrone2019 People dataset, the VisDrone2019 Vehicle dataset, and some other challenging benchmarks. As compared with the state-of-the-art methods, SACANet is shown to be effective, especially for extremely crowded conditions with diverse scales and scattered clusters, and achieves much lower MAE as compared with baselines. |
2405.18710 | Joonhyung Lee | Joonhyung Lee, Jeongin Bae, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee | To FP8 and Back Again: Quantifying the Effects of Reducing Precision on
LLM Training Stability | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The massive computational costs associated with large language model (LLM)
pretraining have spurred great interest in reduced-precision floating-point
representations to accelerate the process. As a result, the BrainFloat16 (BF16)
precision has become the de facto standard for LLM training, with hardware
support included in recent accelerators. This trend has gone even further in
the latest processors, where FP8 has recently been introduced. However, prior
experience with FP16, which was found to be less stable than BF16, raises
concerns as to whether FP8, with even fewer bits than FP16, can be a
cost-effective option for LLM training. We argue that reduced-precision
training schemes must have similar training stability and hyperparameter
sensitivities to their higher-precision counterparts in order to be
cost-effective. However, we find that currently available methods for FP8
training are not robust enough to allow their use as economical replacements.
This prompts us to investigate the stability of reduced-precision LLM training
in terms of robustness across random seeds and learning rates. To this end, we
propose new evaluation techniques and a new metric for quantifying loss
landscape sharpness in autoregressive language models. By simulating
incremental bit reductions in floating-point representations, we analyze the
relationship between representational power and training stability with the
intent of aiding future research into the field.
| [
{
"created": "Wed, 29 May 2024 02:42:23 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Lee",
"Joonhyung",
""
],
[
"Bae",
"Jeongin",
""
],
[
"Kim",
"Byeongwook",
""
],
[
"Kwon",
"Se Jung",
""
],
[
"Lee",
"Dongsoo",
""
]
] | The massive computational costs associated with large language model (LLM) pretraining have spurred great interest in reduced-precision floating-point representations to accelerate the process. As a result, the BrainFloat16 (BF16) precision has become the de facto standard for LLM training, with hardware support included in recent accelerators. This trend has gone even further in the latest processors, where FP8 has recently been introduced. However, prior experience with FP16, which was found to be less stable than BF16, raises concerns as to whether FP8, with even fewer bits than FP16, can be a cost-effective option for LLM training. We argue that reduced-precision training schemes must have similar training stability and hyperparameter sensitivities to their higher-precision counterparts in order to be cost-effective. However, we find that currently available methods for FP8 training are not robust enough to allow their use as economical replacements. This prompts us to investigate the stability of reduced-precision LLM training in terms of robustness across random seeds and learning rates. To this end, we propose new evaluation techniques and a new metric for quantifying loss landscape sharpness in autoregressive language models. By simulating incremental bit reductions in floating-point representations, we analyze the relationship between representational power and training stability with the intent of aiding future research into the field. |
2407.01860 | Yuancheng Luo | Yuancheng Luo | Constant Directivity Loudspeaker Beamforming | Accepted at EUSIPCO 2024 | null | null | null | cs.SD eess.AS eess.SP | http://creativecommons.org/licenses/by/4.0/ | Loudspeaker array beamforming is a common signal processing technique for
acoustic directivity control and robust audio reproduction. Unlike their
microphone counterpart, loudspeaker constraints are often heterogeneous due to
arrayed transducers with varying operating ranges in frequency,
acoustic-electrical sensitivity, efficiency, and directivity. This work
proposes a frequency-regularization method for generalized Rayleigh quotient
directivity specifications and two novel beamformer designs that optimize for
maximum efficiency constant directivity (MECD) and maximum sensitivity constant
directivity (MSCD). We derive fast converging and analytic solutions from their
quadratic equality constrained quadratic program formulations. Experiments
optimize generalized directivity index constrained beamformer designs for a
full-band heterogeneous array.
| [
{
"created": "Tue, 2 Jul 2024 00:13:07 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jul 2024 23:49:53 GMT",
"version": "v2"
}
] | 2024-07-10 | [
[
"Luo",
"Yuancheng",
""
]
] | Loudspeaker array beamforming is a common signal processing technique for acoustic directivity control and robust audio reproduction. Unlike their microphone counterpart, loudspeaker constraints are often heterogeneous due to arrayed transducers with varying operating ranges in frequency, acoustic-electrical sensitivity, efficiency, and directivity. This work proposes a frequency-regularization method for generalized Rayleigh quotient directivity specifications and two novel beamformer designs that optimize for maximum efficiency constant directivity (MECD) and maximum sensitivity constant directivity (MSCD). We derive fast converging and analytic solutions from their quadratic equality constrained quadratic program formulations. Experiments optimize generalized directivity index constrained beamformer designs for a full-band heterogeneous array. |
2403.17130 | Mihaela Breaban | Radu-Andrei Rosu, Mihaela-Elena Breaban, Henri Luchian | Exploring the potential of prototype-based soft-labels data distillation
for imbalanced data classification | null | 24th International Symposium on Symbolic and Numeric Algorithms
for Scientific Computing (SYNASC), pp. 173-180, 2022. IEEE | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Dataset distillation aims at synthesizing a dataset by a small number of
artificially generated data items, which, when used as training data, reproduce
or approximate a machine learning (ML) model as if it were trained on the
entire original dataset. Consequently, data distillation methods are usually
tied to a specific ML algorithm. While recent literature deals mainly with
distillation of large collections of images in the context of neural network
models, tabular data distillation is much less represented and mainly focused
on a theoretical perspective. The current paper explores the potential of a
simple distillation technique previously proposed in the context of
Less-than-one shot learning. The main goal is to push further the performance
of prototype-based soft-labels distillation in terms of classification
accuracy, by integrating optimization steps in the distillation process. The
analysis is performed on real-world data sets with various degrees of
imbalance. Experimental studies trace the capability of the method to distill
the data, but also the opportunity to act as an augmentation method, i.e. to
generate new data that is able to increase model accuracy when used in
conjunction with - as opposed to instead of - the original data.
| [
{
"created": "Mon, 25 Mar 2024 19:15:19 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Rosu",
"Radu-Andrei",
""
],
[
"Breaban",
"Mihaela-Elena",
""
],
[
"Luchian",
"Henri",
""
]
] | Dataset distillation aims at synthesizing a dataset by a small number of artificially generated data items, which, when used as training data, reproduce or approximate a machine learning (ML) model as if it were trained on the entire original dataset. Consequently, data distillation methods are usually tied to a specific ML algorithm. While recent literature deals mainly with distillation of large collections of images in the context of neural network models, tabular data distillation is much less represented and mainly focused on a theoretical perspective. The current paper explores the potential of a simple distillation technique previously proposed in the context of Less-than-one shot learning. The main goal is to push further the performance of prototype-based soft-labels distillation in terms of classification accuracy, by integrating optimization steps in the distillation process. The analysis is performed on real-world data sets with various degrees of imbalance. Experimental studies trace the capability of the method to distill the data, but also the opportunity to act as an augmentation method, i.e. to generate new data that is able to increase model accuracy when used in conjunction with - as opposed to instead of - the original data. |
1409.1879 | Pengfei Chen | Pengfei Chen, Yong Qi, Di Hou and Jiankang Liu | Bio-inspired Mechanism and Model Exploration of Software Aging | 41 pages | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software systems situated in network environment may experience performance
degradation, availability decrease and even crash during long time running,
which is called software aging. This phenomenon has been studied for more than
15 years, but most of the literatures studied software as a black box, none of
them uncovered the fundamental and widely accepted mechanism of software aging
as far as we know. Through analyzing the characteristics between biological
aging and software aging, we find some interesting common points and bridge the
gap between these two seemingly unrelated phenomena. The free radical aging
theory in biological studies is also applicative to explore the mechanism and
model of software aging. This paper finds an equivalent concept named `software
free radical' in software aging to free radical in biological aging. In our
study, the accumulation of `software free radical' is a root cause of software
aging. Using the free radical modeling methodology in biological aging, we give
a model for describing the kinetic of software aging based on feedback loops.
Although this paper doesn't give enough theoretical proof of the modeling
method, the practical results show that the feedback loop model can describe
the kinetic of software aging precisely. To further validate the aging
mechanism, we propose several software rejuvenation strategies focusing on
cleaning the `software free radical'. The results show that software aging can
be mitigated effectively by strengthening negative feedback loop or weakening
positive feedback loop. This paper is the first try to answer the question `How
software ages' through interdisciplinary studies. Leveraging the conclusions in
this paper, people can design better software systems or keep their systems at
a high performance level during long time running.
| [
{
"created": "Wed, 27 Aug 2014 01:44:18 GMT",
"version": "v1"
}
] | 2016-11-04 | [
[
"Chen",
"Pengfei",
""
],
[
"Qi",
"Yong",
""
],
[
"Hou",
"Di",
""
],
[
"Liu",
"Jiankang",
""
]
] | Software systems situated in network environment may experience performance degradation, availability decrease and even crash during long time running, which is called software aging. This phenomenon has been studied for more than 15 years, but most of the literatures studied software as a black box, none of them uncovered the fundamental and widely accepted mechanism of software aging as far as we know. Through analyzing the characteristics between biological aging and software aging, we find some interesting common points and bridge the gap between these two seemingly unrelated phenomena. The free radical aging theory in biological studies is also applicative to explore the mechanism and model of software aging. This paper finds an equivalent concept named `software free radical' in software aging to free radical in biological aging. In our study, the accumulation of `software free radical' is a root cause of software aging. Using the free radical modeling methodology in biological aging, we give a model for describing the kinetic of software aging based on feedback loops. Although this paper doesn't give enough theoretical proof of the modeling method, the practical results show that the feedback loop model can describe the kinetic of software aging precisely. To further validate the aging mechanism, we propose several software rejuvenation strategies focusing on cleaning the `software free radical'. The results show that software aging can be mitigated effectively by strengthening negative feedback loop or weakening positive feedback loop. This paper is the first try to answer the question `How software ages' through interdisciplinary studies. Leveraging the conclusions in this paper, people can design better software systems or keep their systems at a high performance level during long time running. |
2208.08861 | Naruya Kondo | Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Yutaka Matsuo, Shixiang
Shane Gu, Yoichi Ochiai | Deep Billboards towards Lossless Real2Sim in Virtual Reality | SIGGRAPH 2022 Immersive Pavilion | null | 10.1145/3532834.3536210 | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An aspirational goal for virtual reality (VR) is to bring in a rich diversity
of real world objects losslessly. Existing VR applications often convert
objects into explicit 3D models with meshes or point clouds, which allow fast
interactive rendering but also severely limit its quality and the types of
supported objects, fundamentally upper-bounding the "realism" of VR. Inspired
by the classic "billboards" technique in gaming, we develop Deep Billboards
that model 3D objects implicitly using neural networks, where only 2D image is
rendered at a time based on the user's viewing direction. Our system,
connecting a commercial VR headset with a server running neural rendering,
allows real-time high-resolution simulation of detailed rigid objects, hairy
objects, actuated dynamic objects and more in an interactive VR world,
drastically narrowing the existing real-to-simulation (real2sim) gap.
Additionally, we augment Deep Billboards with physical interaction capability,
adapting classic billboards from screen-based games to immersive VR. At our
pavilion, the visitors can use our off-the-shelf setup for quickly capturing
their favorite objects, and within minutes, experience them in an immersive and
interactive VR world with minimal loss of reality. Our project page:
https://sites.google.com/view/deepbillboards/
| [
{
"created": "Mon, 8 Aug 2022 16:16:29 GMT",
"version": "v1"
}
] | 2022-08-19 | [
[
"Kondo",
"Naruya",
""
],
[
"Kuroki",
"So",
""
],
[
"Hyakuta",
"Ryosuke",
""
],
[
"Matsuo",
"Yutaka",
""
],
[
"Gu",
"Shixiang Shane",
""
],
[
"Ochiai",
"Yoichi",
""
]
] | An aspirational goal for virtual reality (VR) is to bring in a rich diversity of real world objects losslessly. Existing VR applications often convert objects into explicit 3D models with meshes or point clouds, which allow fast interactive rendering but also severely limit its quality and the types of supported objects, fundamentally upper-bounding the "realism" of VR. Inspired by the classic "billboards" technique in gaming, we develop Deep Billboards that model 3D objects implicitly using neural networks, where only 2D image is rendered at a time based on the user's viewing direction. Our system, connecting a commercial VR headset with a server running neural rendering, allows real-time high-resolution simulation of detailed rigid objects, hairy objects, actuated dynamic objects and more in an interactive VR world, drastically narrowing the existing real-to-simulation (real2sim) gap. Additionally, we augment Deep Billboards with physical interaction capability, adapting classic billboards from screen-based games to immersive VR. At our pavilion, the visitors can use our off-the-shelf setup for quickly capturing their favorite objects, and within minutes, experience them in an immersive and interactive VR world with minimal loss of reality. Our project page: https://sites.google.com/view/deepbillboards/ |
1904.04987 | Zhexiong Shang | Zhexiong Shang, Zhigang Shen | Vision-model-based Real-time Localization of Unmanned Aerial Vehicle for
Autonomous Structure Inspection under GPS-denied Environment | 8 pages, 5 figures, submitted to i3ce 2019 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | UAVs have been widely used in visual inspections of buildings, bridges and
other structures. In either outdoor autonomous or semi-autonomous flights
missions strong GPS signal is vital for UAV to locate its own positions.
However, strong GPS signal is not always available, and it can degrade or fully
loss underneath large structures or close to power lines, which can cause
serious control issues or even UAV crashes. Such limitations highly restricted
the applications of UAV as a routine inspection tool in various domains. In
this paper a vision-model-based real-time self-positioning method is proposed
to support autonomous aerial inspection without the need of GPS support.
Compared to other localization methods that requires additional onboard
sensors, the proposed method uses a single camera to continuously estimate the
inflight poses of UAV. Each step of the proposed method is discussed in detail,
and its performance is tested through an indoor test case.
| [
{
"created": "Wed, 10 Apr 2019 02:43:52 GMT",
"version": "v1"
}
] | 2019-04-11 | [
[
"Shang",
"Zhexiong",
""
],
[
"Shen",
"Zhigang",
""
]
] | UAVs have been widely used in visual inspections of buildings, bridges and other structures. In either outdoor autonomous or semi-autonomous flights missions strong GPS signal is vital for UAV to locate its own positions. However, strong GPS signal is not always available, and it can degrade or fully loss underneath large structures or close to power lines, which can cause serious control issues or even UAV crashes. Such limitations highly restricted the applications of UAV as a routine inspection tool in various domains. In this paper a vision-model-based real-time self-positioning method is proposed to support autonomous aerial inspection without the need of GPS support. Compared to other localization methods that requires additional onboard sensors, the proposed method uses a single camera to continuously estimate the inflight poses of UAV. Each step of the proposed method is discussed in detail, and its performance is tested through an indoor test case. |
2206.06360 | Kai Zhang | Kai Zhang and Nick Kolkin and Sai Bi and Fujun Luan and Zexiang Xu and
Eli Shechtman and Noah Snavely | ARF: Artistic Radiance Fields | Project page: https://www.cs.cornell.edu/projects/arf/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a method for transferring the artistic features of an arbitrary
style image to a 3D scene. Previous methods that perform 3D stylization on
point clouds or meshes are sensitive to geometric reconstruction errors for
complex real-world scenes. Instead, we propose to stylize the more robust
radiance field representation. We find that the commonly used Gram matrix-based
loss tends to produce blurry results without faithful brushstrokes, and
introduce a nearest neighbor-based loss that is highly effective at capturing
style details while maintaining multi-view consistency. We also propose a novel
deferred back-propagation method to enable optimization of memory-intensive
radiance fields using style losses defined on full-resolution rendered images.
Our extensive evaluation demonstrates that our method outperforms baselines by
generating artistic appearance that more closely resembles the style image.
Please check our project page for video results and open-source
implementations: https://www.cs.cornell.edu/projects/arf/ .
| [
{
"created": "Mon, 13 Jun 2022 17:55:31 GMT",
"version": "v1"
}
] | 2022-06-14 | [
[
"Zhang",
"Kai",
""
],
[
"Kolkin",
"Nick",
""
],
[
"Bi",
"Sai",
""
],
[
"Luan",
"Fujun",
""
],
[
"Xu",
"Zexiang",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Snavely",
"Noah",
""
]
] | We present a method for transferring the artistic features of an arbitrary style image to a 3D scene. Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors for complex real-world scenes. Instead, we propose to stylize the more robust radiance field representation. We find that the commonly used Gram matrix-based loss tends to produce blurry results without faithful brushstrokes, and introduce a nearest neighbor-based loss that is highly effective at capturing style details while maintaining multi-view consistency. We also propose a novel deferred back-propagation method to enable optimization of memory-intensive radiance fields using style losses defined on full-resolution rendered images. Our extensive evaluation demonstrates that our method outperforms baselines by generating artistic appearance that more closely resembles the style image. Please check our project page for video results and open-source implementations: https://www.cs.cornell.edu/projects/arf/ . |
1802.07956 | Borja Bovcon | Borja Bovcon, Rok Mandeljc, Janez Per\v{s}, Matej Kristan | Stereo obstacle detection for unmanned surface vehicles by IMU-assisted
semantic segmentation | 14 pages, 18 figures, new publicly available multi-modal obstacle
detection dataset | null | 10.1016/j.robot.2018.02.017 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new obstacle detection algorithm for unmanned surface vehicles (USVs) is
presented. A state-of-the-art graphical model for semantic segmentation is
extended to incorporate boat pitch and roll measurements from the on-board
inertial measurement unit (IMU), and a stereo verification algorithm that
consolidates tentative detections obtained from the segmentation is proposed.
The IMU readings are used to estimate the location of the horizon line in the
image, which automatically adjusts the priors in the probabilistic semantic
segmentation model. We derive the equations for projecting the horizon into
images, propose an efficient optimization algorithm for the extended graphical
model, and offer a practical IMU-camera-USV calibration procedure. Using a USV
equipped with multiple synchronized sensors, we captured a new challenging
multi-modal dataset, and annotated its images with water edge and obstacles.
Experimental results show that the proposed algorithm significantly outperforms
the state of the art, with nearly 30% improvement in water-edge detection
accuracy, an over 21% reduction of false positive rate, an almost 60% reduction
of false negative rate, and an over 65% increase of true positive rate, while
its Matlab implementation runs in real-time.
| [
{
"created": "Thu, 22 Feb 2018 09:52:43 GMT",
"version": "v1"
}
] | 2020-01-07 | [
[
"Bovcon",
"Borja",
""
],
[
"Mandeljc",
"Rok",
""
],
[
"Perš",
"Janez",
""
],
[
"Kristan",
"Matej",
""
]
] | A new obstacle detection algorithm for unmanned surface vehicles (USVs) is presented. A state-of-the-art graphical model for semantic segmentation is extended to incorporate boat pitch and roll measurements from the on-board inertial measurement unit (IMU), and a stereo verification algorithm that consolidates tentative detections obtained from the segmentation is proposed. The IMU readings are used to estimate the location of the horizon line in the image, which automatically adjusts the priors in the probabilistic semantic segmentation model. We derive the equations for projecting the horizon into images, propose an efficient optimization algorithm for the extended graphical model, and offer a practical IMU-camera-USV calibration procedure. Using a USV equipped with multiple synchronized sensors, we captured a new challenging multi-modal dataset, and annotated its images with water edge and obstacles. Experimental results show that the proposed algorithm significantly outperforms the state of the art, with nearly 30% improvement in water-edge detection accuracy, an over 21% reduction of false positive rate, an almost 60% reduction of false negative rate, and an over 65% increase of true positive rate, while its Matlab implementation runs in real-time. |
2106.06847 | Jiezhang Cao | Jiezhang Cao, Yawei Li, Kai Zhang, Luc Van Gool | Video Super-Resolution Transformer | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video super-resolution (VSR), with the aim to restore a high-resolution video
from its corresponding low-resolution version, is a spatial-temporal sequence
prediction problem. Recently, Transformer has been gaining popularity due to
its parallel computing ability for sequence-to-sequence modeling. Thus, it
seems to be straightforward to apply the vision Transformer to solve VSR.
However, the typical block design of Transformer with a fully connected
self-attention layer and a token-wise feed-forward layer does not fit well for
VSR due to the following two reasons. First, the fully connected self-attention
layer neglects to exploit the data locality because this layer relies on linear
layers to compute attention maps. Second, the token-wise feed-forward layer
lacks the feature alignment which is important for VSR since this layer
independently processes each of the input token embeddings without any
interaction among them. In this paper, we make the first attempt to adapt
Transformer for VSR. Specifically, to tackle the first issue, we present a
spatial-temporal convolutional self-attention layer with a theoretical
understanding to exploit the locality information. For the second issue, we
design a bidirectional optical flow-based feed-forward layer to discover the
correlations across different video frames and also align features. Extensive
experiments on several benchmark datasets demonstrate the effectiveness of our
proposed method. The code will be available at
https://github.com/caojiezhang/VSR-Transformer.
| [
{
"created": "Sat, 12 Jun 2021 20:00:32 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Mar 2023 09:39:43 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Jul 2023 15:30:58 GMT",
"version": "v3"
}
] | 2023-07-06 | [
[
"Cao",
"Jiezhang",
""
],
[
"Li",
"Yawei",
""
],
[
"Zhang",
"Kai",
""
],
[
"Van Gool",
"Luc",
""
]
] | Video super-resolution (VSR), with the aim to restore a high-resolution video from its corresponding low-resolution version, is a spatial-temporal sequence prediction problem. Recently, Transformer has been gaining popularity due to its parallel computing ability for sequence-to-sequence modeling. Thus, it seems to be straightforward to apply the vision Transformer to solve VSR. However, the typical block design of Transformer with a fully connected self-attention layer and a token-wise feed-forward layer does not fit well for VSR due to the following two reasons. First, the fully connected self-attention layer neglects to exploit the data locality because this layer relies on linear layers to compute attention maps. Second, the token-wise feed-forward layer lacks the feature alignment which is important for VSR since this layer independently processes each of the input token embeddings without any interaction among them. In this paper, we make the first attempt to adapt Transformer for VSR. Specifically, to tackle the first issue, we present a spatial-temporal convolutional self-attention layer with a theoretical understanding to exploit the locality information. For the second issue, we design a bidirectional optical flow-based feed-forward layer to discover the correlations across different video frames and also align features. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our proposed method. The code will be available at https://github.com/caojiezhang/VSR-Transformer. |
2305.09293 | Simon Kristoffersson Lind | Simon Kristoffersson Lind, Rudolph Triebel, Luigi Nardi, Volker
Krueger | Out-of-Distribution Detection for Adaptive Computer Vision | Published in Springer Lecture Notes for Computer Science Vol. 13886
as part of the conference proceedings for Scandinavian Conference on Image
Analysis 2023 | In: Gade, R., Felsberg, M., K\"am\"ar\"ainen, JK. (eds) Image
Analysis. SCIA 2023. Lecture Notes in Computer Science, vol 13886. Springer,
Cham | 10.1007/978-3-031-31438-4_21 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | It is well known that computer vision can be unreliable when faced with
previously unseen imaging conditions. This paper proposes a method to adapt
camera parameters according to a normalizing flow-based out-of-distribution
detector. A small-scale study is conducted which shows that adapting camera
parameters according to this out-of-distribution detector leads to an average
increase of 3 to 4 percentage points in mAP, mAR and F1 performance metrics of
a YOLOv4 object detector. As a secondary result, this paper also shows that it
is possible to train a normalizing flow model for out-of-distribution detection
on the COCO dataset, which is larger and more diverse than most benchmarks for
out-of-distribution detectors.
| [
{
"created": "Tue, 16 May 2023 09:01:42 GMT",
"version": "v1"
}
] | 2023-05-17 | [
[
"Lind",
"Simon Kristoffersson",
""
],
[
"Triebel",
"Rudolph",
""
],
[
"Nardi",
"Luigi",
""
],
[
"Krueger",
"Volker",
""
]
] | It is well known that computer vision can be unreliable when faced with previously unseen imaging conditions. This paper proposes a method to adapt camera parameters according to a normalizing flow-based out-of-distribution detector. A small-scale study is conducted which shows that adapting camera parameters according to this out-of-distribution detector leads to an average increase of 3 to 4 percentage points in mAP, mAR and F1 performance metrics of a YOLOv4 object detector. As a secondary result, this paper also shows that it is possible to train a normalizing flow model for out-of-distribution detection on the COCO dataset, which is larger and more diverse than most benchmarks for out-of-distribution detectors. |
2305.14009 | Sebastian Pineda Arango | Sebastian Pineda Arango, Josif Grabocka | Deep Pipeline Embeddings for AutoML | 9 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated Machine Learning (AutoML) is a promising direction for
democratizing AI by automatically deploying Machine Learning systems with
minimal human expertise. The core technical challenge behind AutoML is
optimizing the pipelines of Machine Learning systems (e.g. the choice of
preprocessing, augmentations, models, optimizers, etc.). Existing Pipeline
Optimization techniques fail to explore deep interactions between pipeline
stages/components. As a remedy, this paper proposes a novel neural architecture
that captures the deep interaction between the components of a Machine Learning
pipeline. We propose embedding pipelines into a latent representation through a
novel per-component encoder mechanism. To search for optimal pipelines, such
pipeline embeddings are used within deep-kernel Gaussian Process surrogates
inside a Bayesian Optimization setup. Furthermore, we meta-learn the parameters
of the pipeline embedding network using existing evaluations of pipelines on
diverse collections of related datasets (a.k.a. meta-datasets). Through
extensive experiments on three large-scale meta-datasets, we demonstrate that
pipeline embeddings yield state-of-the-art results in Pipeline Optimization.
| [
{
"created": "Tue, 23 May 2023 12:40:38 GMT",
"version": "v1"
},
{
"created": "Wed, 24 May 2023 19:29:19 GMT",
"version": "v2"
}
] | 2023-05-26 | [
[
"Arango",
"Sebastian Pineda",
""
],
[
"Grabocka",
"Josif",
""
]
] | Automated Machine Learning (AutoML) is a promising direction for democratizing AI by automatically deploying Machine Learning systems with minimal human expertise. The core technical challenge behind AutoML is optimizing the pipelines of Machine Learning systems (e.g. the choice of preprocessing, augmentations, models, optimizers, etc.). Existing Pipeline Optimization techniques fail to explore deep interactions between pipeline stages/components. As a remedy, this paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline. We propose embedding pipelines into a latent representation through a novel per-component encoder mechanism. To search for optimal pipelines, such pipeline embeddings are used within deep-kernel Gaussian Process surrogates inside a Bayesian Optimization setup. Furthermore, we meta-learn the parameters of the pipeline embedding network using existing evaluations of pipelines on diverse collections of related datasets (a.k.a. meta-datasets). Through extensive experiments on three large-scale meta-datasets, we demonstrate that pipeline embeddings yield state-of-the-art results in Pipeline Optimization. |
1506.00976 | Gautier Marti | Gautier Marti, Philippe Very and Philippe Donnat | Toward a generic representation of random variables for machine learning | submitted to Pattern Recognition Letters | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a pre-processing and a distance which improve the
performance of machine learning algorithms working on independent and
identically distributed stochastic processes. We introduce a novel
non-parametric approach to represent random variables which splits apart
dependency and distribution without losing any information. We also propound an
associated metric leveraging this representation and its statistical estimate.
Besides experiments on synthetic datasets, the benefits of our contribution are
illustrated through the example of clustering financial time series, for
instance prices from the credit default swaps market. Results are available on
the website www.datagrapple.com and an IPython Notebook tutorial is available
at www.datagrapple.com/Tech for reproducible research.
| [
{
"created": "Tue, 2 Jun 2015 17:58:48 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Sep 2015 19:23:30 GMT",
"version": "v2"
}
] | 2015-09-04 | [
[
"Marti",
"Gautier",
""
],
[
"Very",
"Philippe",
""
],
[
"Donnat",
"Philippe",
""
]
] | This paper presents a pre-processing and a distance which improve the performance of machine learning algorithms working on independent and identically distributed stochastic processes. We introduce a novel non-parametric approach to represent random variables which splits apart dependency and distribution without losing any information. We also propound an associated metric leveraging this representation and its statistical estimate. Besides experiments on synthetic datasets, the benefits of our contribution are illustrated through the example of clustering financial time series, for instance prices from the credit default swaps market. Results are available on the website www.datagrapple.com and an IPython Notebook tutorial is available at www.datagrapple.com/Tech for reproducible research. |
2403.10736 | Yuhan Zhao | Yuhan Zhao and Quanyan Zhu | Stackelberg Meta-Learning Based Shared Control for Assistive Driving | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shared control allows the human driver to collaborate with an assistive
driving system while retaining the ability to make decisions and take control
if necessary. However, human-vehicle teaming and planning are challenging due
to environmental uncertainties, the human's bounded rationality, and the
variability in human behaviors. An effective collaboration plan needs to learn
and adapt to these uncertainties. To this end, we develop a Stackelberg
meta-learning algorithm to create automated learning-based planning for shared
control. The Stackelberg games are used to capture the leader-follower
structure in the asymmetric interactions between the human driver and the
assistive driving system. The meta-learning algorithm generates a common
behavioral model, which is capable of fast adaptation using a small amount of
driving data to assist optimal decision-making. We use a case study of an
obstacle avoidance driving scenario to corroborate that the adapted human
behavioral model can successfully assist the human driver in reaching the
target destination. Besides, it saves driving time compared with a driver-only
scheme and is also robust to drivers' bounded rationality and errors.
| [
{
"created": "Fri, 15 Mar 2024 23:45:53 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Zhao",
"Yuhan",
""
],
[
"Zhu",
"Quanyan",
""
]
] | Shared control allows the human driver to collaborate with an assistive driving system while retaining the ability to make decisions and take control if necessary. However, human-vehicle teaming and planning are challenging due to environmental uncertainties, the human's bounded rationality, and the variability in human behaviors. An effective collaboration plan needs to learn and adapt to these uncertainties. To this end, we develop a Stackelberg meta-learning algorithm to create automated learning-based planning for shared control. The Stackelberg games are used to capture the leader-follower structure in the asymmetric interactions between the human driver and the assistive driving system. The meta-learning algorithm generates a common behavioral model, which is capable of fast adaptation using a small amount of driving data to assist optimal decision-making. We use a case study of an obstacle avoidance driving scenario to corroborate that the adapted human behavioral model can successfully assist the human driver in reaching the target destination. Besides, it saves driving time compared with a driver-only scheme and is also robust to drivers' bounded rationality and errors. |
1402.7198 | Venugopal K r | T Shiva Prakash, K B Raja, K R Venugopal, S S Iyengar, L M Patnaik | Two-Hop Routing with Traffic-Differentiation for QoS Guarantee in
Wireless Sensor Networks | 13 pages | International Journal of Information Processing, 7(3), 100-112,
2013 | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a Traffic-Differentiated Two-Hop Routing protocol for
Quality of Service (QoS) in Wireless Sensor Networks (WSNs). It targets WSN
applications having different types of data traffic with several priorities.
The protocol increases the Packet Reception Ratio (PRR) and reduces the
end-to-end delay while considering multi-queue priority policy, two-hop
neighborhood information, link reliability and power efficiency. The protocol
is modular and utilizes effective methods for estimating the link metrics.
Numerical results show that the proposed protocol is a feasible solution to
address QoS service differentiation for traffic with different priorities.
| [
{
"created": "Fri, 28 Feb 2014 10:45:59 GMT",
"version": "v1"
}
] | 2014-03-03 | [
[
"Prakash",
"T Shiva",
""
],
[
"Raja",
"K B",
""
],
[
"Venugopal",
"K R",
""
],
[
"Iyengar",
"S S",
""
],
[
"Patnaik",
"L M",
""
]
] | This paper proposes a Traffic-Differentiated Two-Hop Routing protocol for Quality of Service (QoS) in Wireless Sensor Networks (WSNs). It targets WSN applications having different types of data traffic with several priorities. The protocol increases the Packet Reception Ratio (PRR) and reduces the end-to-end delay while considering multi-queue priority policy, two-hop neighborhood information, link reliability and power efficiency. The protocol is modular and utilizes effective methods for estimating the link metrics. Numerical results show that the proposed protocol is a feasible solution to address QoS service differentiation for traffic with different priorities. |
2007.07632 | Yifei Shen | Yifei Shen, Yuanming Shi, Jun Zhang, Khaled B. Letaief | Graph Neural Networks for Scalable Radio Resource Management:
Architecture Design and Theoretical Analysis | Accepted by IEEE Journal on Selected Areas in Communications - Series
on Machine Learning for Communications and Networks | null | null | null | cs.IT cs.LG eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has recently emerged as a disruptive technology to solve
challenging radio resource management problems in wireless networks. However,
the neural network architectures adopted by existing works suffer from poor
scalability, generalization, and lack of interpretability. A long-standing
approach to improve scalability and generalization is to incorporate the
structures of the target task into the neural network architecture. In this
paper, we propose to apply graph neural networks (GNNs) to solve large-scale
radio resource management problems, supported by effective neural network
architecture design and theoretical analysis. Specifically, we first
demonstrate that radio resource management problems can be formulated as graph
optimization problems that enjoy a universal permutation equivariance property.
We then identify a class of neural networks, named \emph{message passing graph
neural networks} (MPGNNs). It is demonstrated that they not only satisfy the
permutation equivariance property, but also can generalize to large-scale
problems while enjoying a high computational efficiency. For interpretability
and theoretical guarantees, we prove the equivalence between MPGNNs and a class
of distributed optimization algorithms, which is then used to analyze the
performance and generalization of MPGNN-based methods. Extensive simulations,
with power control and beamforming as two examples, will demonstrate that the
proposed method, trained in an unsupervised manner with unlabeled samples,
matches or even outperforms classic optimization-based algorithms without
domain-specific knowledge. Remarkably, the proposed method is highly scalable
and can solve the beamforming problem in an interference channel with $1000$
transceiver pairs within $6$ milliseconds on a single GPU.
| [
{
"created": "Wed, 15 Jul 2020 11:43:32 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2020 06:30:34 GMT",
"version": "v2"
}
] | 2020-10-30 | [
[
"Shen",
"Yifei",
""
],
[
"Shi",
"Yuanming",
""
],
[
"Zhang",
"Jun",
""
],
[
"Letaief",
"Khaled B.",
""
]
] | Deep learning has recently emerged as a disruptive technology to solve challenging radio resource management problems in wireless networks. However, the neural network architectures adopted by existing works suffer from poor scalability, generalization, and lack of interpretability. A long-standing approach to improve scalability and generalization is to incorporate the structures of the target task into the neural network architecture. In this paper, we propose to apply graph neural networks (GNNs) to solve large-scale radio resource management problems, supported by effective neural network architecture design and theoretical analysis. Specifically, we first demonstrate that radio resource management problems can be formulated as graph optimization problems that enjoy a universal permutation equivariance property. We then identify a class of neural networks, named \emph{message passing graph neural networks} (MPGNNs). It is demonstrated that they not only satisfy the permutation equivariance property, but also can generalize to large-scale problems while enjoying a high computational efficiency. For interpretability and theoretical guarantees, we prove the equivalence between MPGNNs and a class of distributed optimization algorithms, which is then used to analyze the performance and generalization of MPGNN-based methods. Extensive simulations, with power control and beamforming as two examples, will demonstrate that the proposed method, trained in an unsupervised manner with unlabeled samples, matches or even outperforms classic optimization-based algorithms without domain-specific knowledge. Remarkably, the proposed method is highly scalable and can solve the beamforming problem in an interference channel with $1000$ transceiver pairs within $6$ milliseconds on a single GPU. |
2011.06104 | Arash Mohammadi | Elahe Rahimian, Soheil Zabihi, Amir Asif, Dario Farina, Seyed Farokh
Atashzar, and Arash Mohammadi | FS-HGR: Few-shot Learning for Hand Gesture Recognition via
ElectroMyography | null | null | null | null | cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | This work is motivated by the recent advances in Deep Neural Networks (DNNs)
and their widespread applications in human-machine interfaces. DNNs have been
recently used for detecting the intended hand gesture through processing of
surface electromyogram (sEMG) signals. The ultimate goal of these approaches is
to realize high-performance controllers for prosthetics. However, although DNNs
have shown superior accuracy to conventional methods when large amounts of
data are available for training, their performance substantially decreases when
data are limited. Collecting large datasets for training may be feasible in
research laboratories, but it is not a practical approach for real-life
applications. Therefore, there is an unmet need for the design of a modern
gesture detection technique that relies on minimal training data while
providing high accuracy. Here we propose an innovative and novel "Few-Shot
Learning" framework based on the formulation of meta-learning, referred to as
the FS-HGR, to address this need. Few-shot learning is a variant of domain
adaptation with the goal of inferring the required output based on just one or
a few training examples. More specifically, the proposed FS-HGR quickly
generalizes after seeing very few examples from each class. The proposed
approach led to 85.94% classification accuracy on new repetitions with few-shot
observation (5-way 5-shot), 81.29% accuracy on new subjects with few-shot
observation (5-way 5-shot), and 73.36% accuracy on new gestures with few-shot
observation (5-way 5-shot).
| [
{
"created": "Wed, 11 Nov 2020 22:33:31 GMT",
"version": "v1"
}
] | 2020-11-13 | [
[
"Rahimian",
"Elahe",
""
],
[
"Zabihi",
"Soheil",
""
],
[
"Asif",
"Amir",
""
],
[
"Farina",
"Dario",
""
],
[
"Atashzar",
"Seyed Farokh",
""
],
[
"Mohammadi",
"Arash",
""
]
] | This work is motivated by the recent advances in Deep Neural Networks (DNNs) and their widespread applications in human-machine interfaces. DNNs have been recently used for detecting the intended hand gesture through processing of surface electromyogram (sEMG) signals. The ultimate goal of these approaches is to realize high-performance controllers for prosthetics. However, although DNNs have shown superior accuracy to conventional methods when large amounts of data are available for training, their performance substantially decreases when data are limited. Collecting large datasets for training may be feasible in research laboratories, but it is not a practical approach for real-life applications. Therefore, there is an unmet need for the design of a modern gesture detection technique that relies on minimal training data while providing high accuracy. Here we propose an innovative and novel "Few-Shot Learning" framework based on the formulation of meta-learning, referred to as the FS-HGR, to address this need. Few-shot learning is a variant of domain adaptation with the goal of inferring the required output based on just one or a few training examples. More specifically, the proposed FS-HGR quickly generalizes after seeing very few examples from each class. The proposed approach led to 85.94% classification accuracy on new repetitions with few-shot observation (5-way 5-shot), 81.29% accuracy on new subjects with few-shot observation (5-way 5-shot), and 73.36% accuracy on new gestures with few-shot observation (5-way 5-shot). |
2304.09623 | Pranav Jeevan P | Chirag P, Mukta Wagle, Ravi Kant Gupta, Pranav Jeevan, Amit Sethi | CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for
Unsupervised Domain Adaptation | 10 pages, 4 figures | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | We propose a new technique called CHATTY: Coupled Holistic Adversarial
Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial
training is commonly used for learning domain-invariant representations by
reversing the gradients from a domain discriminator head to train the feature
extractor layers of a neural network. We propose significant modifications to
the adversarial head, its training objective, and the classifier head. With the
aim of reducing class confusion, we introduce a sub-network which displaces the
classifier outputs of the source and target domain samples in a learnable
manner. We control this movement using a novel transport loss that spreads
class clusters away from each other and makes it easier for the classifier to
find the decision boundaries for both the source and target domains. The
results of adding this new loss to a careful selection of previously proposed
losses lead to an improvement in UDA results compared to the previous
state-of-the-art methods on benchmark datasets. We show the importance of the
proposed loss term using ablation studies and visualization of the movement of
target domain samples in representation space.
| [
{
"created": "Wed, 19 Apr 2023 13:00:23 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2023 16:39:43 GMT",
"version": "v2"
}
] | 2023-04-21 | [
[
"P",
"Chirag",
""
],
[
"Wagle",
"Mukta",
""
],
[
"Gupta",
"Ravi Kant",
""
],
[
"Jeevan",
"Pranav",
""
],
[
"Sethi",
"Amit",
""
]
] | We propose a new technique called CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial training is commonly used for learning domain-invariant representations by reversing the gradients from a domain discriminator head to train the feature extractor layers of a neural network. We propose significant modifications to the adversarial head, its training objective, and the classifier head. With the aim of reducing class confusion, we introduce a sub-network which displaces the classifier outputs of the source and target domain samples in a learnable manner. We control this movement using a novel transport loss that spreads class clusters away from each other and makes it easier for the classifier to find the decision boundaries for both the source and target domains. The results of adding this new loss to a careful selection of previously proposed losses lead to an improvement in UDA results compared to the previous state-of-the-art methods on benchmark datasets. We show the importance of the proposed loss term using ablation studies and visualization of the movement of target domain samples in representation space. |
1912.10836 | Aykut \c{C}ay{\i}r | Aykut \c{C}ay{\i}r, U\u{g}ur \"Unal and Hasan Da\u{g} | Random CapsNet Forest Model for Imbalanced Malware Type Classification
Task | 30 pages, 10 figures, typos are corrected, references are added | null | null | null | cs.CR cs.CV cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The behavior of a malware varies with respect to malware types. Therefore,
knowing the type of a malware affects the strategies of system protection
software. Many malware type classification models empowered by machine and
deep learning achieve superior accuracies in predicting malware types. Machine
learning based models need to do heavy feature engineering, and feature
engineering dominantly affects the performance of models. On the other hand,
deep learning based models require less feature engineering than machine
learning based models. However, traditional deep learning architectures and
components cause very complex and data-sensitive models. The capsule network
architecture minimizes this complexity and data sensitivity, unlike classical
convolutional neural network architectures. This paper proposes an ensemble
capsule network model based on the bootstrap aggregating technique. The
proposed method is tested on two malware datasets whose state-of-the-art
results are well known.
| [
{
"created": "Fri, 20 Dec 2019 06:40:40 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Mar 2020 13:51:31 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Mar 2020 19:56:53 GMT",
"version": "v3"
},
{
"created": "Sun, 23 Aug 2020 20:21:04 GMT",
"version": "v4"
}
] | 2020-08-25 | [
[
"Çayır",
"Aykut",
""
],
[
"Ünal",
"Uğur",
""
],
[
"Dağ",
"Hasan",
""
]
] | The behavior of a malware varies with respect to its type. Therefore, knowing the type of a malware affects the strategies of system protection software. Many malware type classification models empowered by machine and deep learning achieve superior accuracies in predicting malware types. Machine learning based models need heavy feature engineering, and feature engineering dominantly affects the performance of such models. On the other hand, deep learning based models require less feature engineering than machine learning based models. However, traditional deep learning architectures and components lead to very complex and data-sensitive models. The capsule network architecture minimizes this complexity and data sensitivity, unlike classical convolutional neural network architectures. This paper proposes an ensemble capsule network model based on the bootstrap aggregating technique. The proposed method is tested on two malware datasets whose state-of-the-art results are well known. |
2406.06720 | Mazen Al Borno | Matteo A. Coscia and Mazen Al Borno | Vibrotactile versus Visual Stimulation in Learning the Piano | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Vibrotactile stimulation has been explored to accelerate the acquisition of
motor skills involving finger movements (Gemicioglu et al. 22, Markow et al.
2010, Seim et al. 17). This study evaluates the effectiveness of vibrotactile
stimulation compared to visual feedback in learning a 14-note one-handed tune
on the piano. In the experiment, 14 subjects with no prior piano experience
were exposed to both vibrotactile and visual stimulation to determine which was
more effective. Subjects were randomized 1:1 in a group that first receives
vibrotactile stimulation, then visual stimulation or in a group that first
receives visual stimulation, then vibrotactile stimulation. Effectiveness was
measured by evaluating the timing error and accuracy. Results from our study
indicated that the timing error for vibrotactile stimulation was 12.1% (SD
6.0%), while the equivalent for visual stimulation was 22.3% (SD 10.3%). The
accuracy for vibrotactile stimulation was 69.2% (SD 27.2%), while the
equivalent for visual stimulation was 91.3% (SD 13.5%). It was observed that
vibrotactile stimulation was generally more effective at minimizing the timing
error at which the notes were hit compared to visual stimulation, and no
statistically significant differences were found in accuracy.
| [
{
"created": "Mon, 10 Jun 2024 18:31:12 GMT",
"version": "v1"
}
] | 2024-06-12 | [
[
"Coscia",
"Matteo A.",
""
],
[
"Borno",
"Mazen Al",
""
]
] | Vibrotactile stimulation has been explored to accelerate the acquisition of motor skills involving finger movements (Gemicioglu et al. 22, Markow et al. 2010, Seim et al. 17). This study evaluates the effectiveness of vibrotactile stimulation compared to visual feedback in learning a 14-note one-handed tune on the piano. In the experiment, 14 subjects with no prior piano experience were exposed to both vibrotactile and visual stimulation to determine which was more effective. Subjects were randomized 1:1 into a group that first received vibrotactile stimulation, then visual stimulation, or into a group that first received visual stimulation, then vibrotactile stimulation. Effectiveness was measured by evaluating the timing error and accuracy. Results from our study indicated that the timing error for vibrotactile stimulation was 12.1% (SD 6.0%), while the equivalent for visual stimulation was 22.3% (SD 10.3%). The accuracy for vibrotactile stimulation was 69.2% (SD 27.2%), while the equivalent for visual stimulation was 91.3% (SD 13.5%). It was observed that vibrotactile stimulation was generally more effective at minimizing the timing error at which the notes were hit compared to visual stimulation, and no statistically significant differences were found in accuracy. |
1908.01039 | Chloe Hsu | Chloe Ching-Yun Hsu, Michaela Hardt, Moritz Hardt | Linear Dynamics: Clustering without identification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear dynamical systems are a fundamental and powerful parametric model
class. However, identifying the parameters of a linear dynamical system is a
venerable task, permitting provably efficient solutions only in special cases.
This work shows that the eigenspectrum of unknown linear dynamics can be
identified without full system identification. We analyze a computationally
efficient and provably convergent algorithm to estimate the eigenvalues of the
state-transition matrix in a linear dynamical system.
When applied to time series clustering, our algorithm can efficiently cluster
multi-dimensional time series with temporal offsets and varying lengths, under
the assumption that the time series are generated from linear dynamical
systems. Evaluating our algorithm on both synthetic data and real
electrocardiogram (ECG) signals, we see improvements in clustering quality over
existing baselines.
| [
{
"created": "Fri, 2 Aug 2019 20:15:56 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Sep 2019 13:46:48 GMT",
"version": "v2"
},
{
"created": "Sat, 29 Feb 2020 08:51:25 GMT",
"version": "v3"
}
] | 2020-03-03 | [
[
"Hsu",
"Chloe Ching-Yun",
""
],
[
"Hardt",
"Michaela",
""
],
[
"Hardt",
"Moritz",
""
]
] | Linear dynamical systems are a fundamental and powerful parametric model class. However, identifying the parameters of a linear dynamical system is a venerable task, permitting provably efficient solutions only in special cases. This work shows that the eigenspectrum of unknown linear dynamics can be identified without full system identification. We analyze a computationally efficient and provably convergent algorithm to estimate the eigenvalues of the state-transition matrix in a linear dynamical system. When applied to time series clustering, our algorithm can efficiently cluster multi-dimensional time series with temporal offsets and varying lengths, under the assumption that the time series are generated from linear dynamical systems. Evaluating our algorithm on both synthetic data and real electrocardiogram (ECG) signals, we see improvements in clustering quality over existing baselines. |
2112.10591 | Franck Davoine | Vincent Brebion and Julien Moreau and Franck Davoine | Real-Time Optical Flow for Vehicular Perception with Low- and
High-Resolution Event Cameras | 13 pages, journal paper | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Event cameras capture changes of illumination in the observed scene rather
than accumulating light to create images. Thus, they allow for applications
under high-speed motion and complex lighting conditions, where traditional
frame-based sensors show their limits with blur and over- or underexposed
pixels. Thanks to these unique properties, they nowadays represent a highly
attractive sensor for ITS-related applications. Event-based optical flow (EBOF)
has been studied following the rise in popularity of these neuromorphic
cameras. The recent arrival of high-definition neuromorphic sensors, however,
challenges the existing approaches, because of the increased resolution of the
event pixel array and a much higher throughput. To address these points,
we propose an optimized framework for computing optical flow in real-time with
both low- and high-resolution event cameras. We formulate a novel dense
representation for the sparse event flow, in the form of the "inverse
exponential distance surface". It serves as an interim frame, designed for the
use of proven, state-of-the-art frame-based optical flow computation methods.
We evaluate our approach on both low- and high-resolution driving sequences,
and show that it often achieves better results than the current state of the
art, while also reaching higher frame rates: 250 Hz at 346 x 260 pixels and 77 Hz
at 1280 x 720 pixels.
| [
{
"created": "Mon, 20 Dec 2021 15:09:20 GMT",
"version": "v1"
}
] | 2021-12-21 | [
[
"Brebion",
"Vincent",
""
],
[
"Moreau",
"Julien",
""
],
[
"Davoine",
"Franck",
""
]
] | Event cameras capture changes of illumination in the observed scene rather than accumulating light to create images. Thus, they allow for applications under high-speed motion and complex lighting conditions, where traditional frame-based sensors show their limits with blur and over- or underexposed pixels. Thanks to these unique properties, they nowadays represent a highly attractive sensor for ITS-related applications. Event-based optical flow (EBOF) has been studied following the rise in popularity of these neuromorphic cameras. The recent arrival of high-definition neuromorphic sensors, however, challenges the existing approaches, because of the increased resolution of the event pixel array and a much higher throughput. To address these points, we propose an optimized framework for computing optical flow in real-time with both low- and high-resolution event cameras. We formulate a novel dense representation for the sparse event flow, in the form of the "inverse exponential distance surface". It serves as an interim frame, designed for the use of proven, state-of-the-art frame-based optical flow computation methods. We evaluate our approach on both low- and high-resolution driving sequences, and show that it often achieves better results than the current state of the art, while also reaching higher frame rates: 250 Hz at 346 x 260 pixels and 77 Hz at 1280 x 720 pixels. |
2209.08725 | Ka-Hei Hui | Ka-Hei Hui, Ruihui Li, Jingyu Hu, Chi-Wing Fu | Neural Wavelet-domain Diffusion for 3D Shape Generation | null | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a new approach for 3D shape generation, enabling direct
generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse
and detail coefficient volumes to implicitly represent 3D shapes via truncated
signed distance functions and multi-scale biorthogonal wavelets, and formulate
a pair of neural networks: a generator based on the diffusion model to produce
diverse shapes in the form of coarse coefficient volumes; and a detail
predictor to further produce compatible detail coefficient volumes for
enriching the generated shapes with fine structures and details. Both
quantitative and qualitative experimental results manifest the superiority of
our approach in generating diverse and high-quality shapes with complex
topology and structures, clean surfaces, and fine details, exceeding the 3D
generation capabilities of the state-of-the-art models.
| [
{
"created": "Mon, 19 Sep 2022 02:51:48 GMT",
"version": "v1"
}
] | 2022-09-20 | [
[
"Hui",
"Ka-Hei",
""
],
[
"Li",
"Ruihui",
""
],
[
"Hu",
"Jingyu",
""
],
[
"Fu",
"Chi-Wing",
""
]
] | This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets, and formulate a pair of neural networks: a generator based on the diffusion model to produce diverse shapes in the form of coarse coefficient volumes; and a detail predictor to further produce compatible detail coefficient volumes for enriching the generated shapes with fine structures and details. Both quantitative and qualitative experimental results manifest the superiority of our approach in generating diverse and high-quality shapes with complex topology and structures, clean surfaces, and fine details, exceeding the 3D generation capabilities of the state-of-the-art models. |
2201.12914 | Stephany Rajeh | Stephany Rajeh and Marinette Savonnet and Eric Leclercq and Hocine
Cherifi | Investigating Centrality Measures in Social Networks with Community
Structure | Accepted in The International Conference on Complex Networks and
their Applications (2020) | null | 10.1007/978-3-030-65347-7_18 | null | cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Centrality measures are crucial in quantifying the influence of the members
of a social network. Although there has been a great deal of work dealing with
this issue, the vast majority of classical centrality measures are agnostic of
the community structure characterizing many social networks. Recent works have
developed community-aware centrality measures that exploit features of the
community structure information encountered in most real-world complex
networks. In this paper, we investigate the interactions between 5 popular
classical centrality measures and 5 community-aware centrality measures using 8
real-world online networks. Correlation as well as similarity measures between
both types of centrality measures are computed. Results show that
community-aware centrality measures can be divided into two groups. The first
group, which includes Bridging centrality, Community Hub-Bridge and
Participation Coefficient, provides distinctive node information as compared to
classical centrality. This behavior is consistent across the networks. The
second group which includes Community-based Mediator and Number of Neighboring
Communities is characterized by more mixed results that vary across networks.
| [
{
"created": "Sun, 30 Jan 2022 21:06:33 GMT",
"version": "v1"
}
] | 2022-02-01 | [
[
"Rajeh",
"Stephany",
""
],
[
"Savonnet",
"Marinette",
""
],
[
"Leclercq",
"Eric",
""
],
[
"Cherifi",
"Hocine",
""
]
] | Centrality measures are crucial in quantifying the influence of the members of a social network. Although there has been a great deal of work dealing with this issue, the vast majority of classical centrality measures are agnostic of the community structure characterizing many social networks. Recent works have developed community-aware centrality measures that exploit features of the community structure information encountered in most real-world complex networks. In this paper, we investigate the interactions between 5 popular classical centrality measures and 5 community-aware centrality measures using 8 real-world online networks. Correlation as well as similarity measures between both types of centrality measures are computed. Results show that community-aware centrality measures can be divided into two groups. The first group, which includes Bridging centrality, Community Hub-Bridge and Participation Coefficient, provides distinctive node information as compared to classical centrality. This behavior is consistent across the networks. The second group which includes Community-based Mediator and Number of Neighboring Communities is characterized by more mixed results that vary across networks. |
2310.17416 | Satheesh Kumar Perepu Dr | Kaushik Dey, Satheesh K. Perepu and Abir Das | Goals are Enough: Inducing AdHoc cooperation among unseen Multi-Agent
systems in IMFs | Accepted for publication in IEEE CCNC 2024 conference | null | null | null | cs.AI cs.MA | http://creativecommons.org/licenses/by/4.0/ | Intent-based management will play a critical role in achieving customers'
expectations in the next-generation mobile networks. Traditional methods cannot
perform efficient resource management since they tend to handle each
expectation independently. Existing approaches, e.g., based on multi-agent
reinforcement learning (MARL) allocate resources in an efficient fashion when
there are conflicting expectations on the network slice. However, in reality,
systems are often far more complex to be addressed by a standalone MARL
formulation. Often there exists a hierarchical structure of intent fulfilment
where multiple pre-trained, self-interested agents may need to be further
orchestrated by a supervisor or controller agent. Such agents may arrive in the
system adhoc, which then needs to be orchestrated along with other available
agents. Retraining the whole system every time is often infeasible given the
associated time and cost. Given the challenges, such adhoc coordination of
pre-trained systems could be achieved through an intelligent supervisor agent
which incentivizes pre-trained RL/MARL agents through sets of dynamic contracts
(goals or bonuses) and encourages them to act as a cohesive unit towards
fulfilling a global expectation. Some approaches use a rule-based supervisor
agent and deploy the hierarchical constituent agents sequentially, based on
human-coded rules.
In the current work, we propose a framework whereby pre-trained agents can be
orchestrated in parallel leveraging an AI-based supervisor agent. For this, we
propose to use Adhoc-Teaming approaches which assign optimal goals to the MARL
agents and incentivize them to exhibit certain desired behaviours. Results on
the network emulator show that the proposed approach results in faster and
improved fulfilment of expectations when compared to rule-based approaches and
even generalizes to changes in environments.
| [
{
"created": "Thu, 26 Oct 2023 14:21:36 GMT",
"version": "v1"
}
] | 2023-10-27 | [
[
"Dey",
"Kaushik",
""
],
[
"Perepu",
"Satheesh K.",
""
],
[
"Das",
"Abir",
""
]
] | Intent-based management will play a critical role in achieving customers' expectations in the next-generation mobile networks. Traditional methods cannot perform efficient resource management since they tend to handle each expectation independently. Existing approaches, e.g., based on multi-agent reinforcement learning (MARL) allocate resources in an efficient fashion when there are conflicting expectations on the network slice. However, in reality, systems are often far more complex to be addressed by a standalone MARL formulation. Often there exists a hierarchical structure of intent fulfilment where multiple pre-trained, self-interested agents may need to be further orchestrated by a supervisor or controller agent. Such agents may arrive in the system adhoc, which then needs to be orchestrated along with other available agents. Retraining the whole system every time is often infeasible given the associated time and cost. Given the challenges, such adhoc coordination of pre-trained systems could be achieved through an intelligent supervisor agent which incentivizes pre-trained RL/MARL agents through sets of dynamic contracts (goals or bonuses) and encourages them to act as a cohesive unit towards fulfilling a global expectation. Some approaches use a rule-based supervisor agent and deploy the hierarchical constituent agents sequentially, based on human-coded rules. In the current work, we propose a framework whereby pre-trained agents can be orchestrated in parallel leveraging an AI-based supervisor agent. For this, we propose to use Adhoc-Teaming approaches which assign optimal goals to the MARL agents and incentivize them to exhibit certain desired behaviours. Results on the network emulator show that the proposed approach results in faster and improved fulfilment of expectations when compared to rule-based approaches and even generalizes to changes in environments. |
1504.07962 | Shoaib Ehsan | Shoaib Ehsan, Adrian F. Clark and Klaus D. McDonald-Maier | Hardware based Scale- and Rotation-Invariant Feature Extraction: A
Retrospective Analysis and Future Directions | ICCEE 2009 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer Vision techniques represent a class of algorithms that are highly
computation and data intensive in nature. Generally, performance of these
algorithms in terms of execution speed on desktop computers is far from
real-time. Since real-time performance is desirable in many applications,
special-purpose hardware is required in most cases to achieve this goal. Scale-
and rotation-invariant local feature extraction is a low level computer vision
task with very high computational complexity. The state-of-the-art algorithms
that currently exist in this domain, like SIFT and SURF, suffer from slow
execution speeds and at best can only achieve rates of 2-3 Hz on modern desktop
computers. Hardware-based scale- and rotation-invariant local feature
extraction is an emerging trend enabling real-time performance for these
computationally complex algorithms. This paper takes a retrospective look at
the advances made so far in this field, discusses the hardware design
strategies employed and results achieved, identifies current research gaps and
suggests future research directions.
| [
{
"created": "Wed, 29 Apr 2015 18:52:37 GMT",
"version": "v1"
}
] | 2015-04-30 | [
[
"Ehsan",
"Shoaib",
""
],
[
"Clark",
"Adrian F.",
""
],
[
"McDonald-Maier",
"Klaus D.",
""
]
] | Computer Vision techniques represent a class of algorithms that are highly computation and data intensive in nature. Generally, performance of these algorithms in terms of execution speed on desktop computers is far from real-time. Since real-time performance is desirable in many applications, special-purpose hardware is required in most cases to achieve this goal. Scale- and rotation-invariant local feature extraction is a low level computer vision task with very high computational complexity. The state-of-the-art algorithms that currently exist in this domain, like SIFT and SURF, suffer from slow execution speeds and at best can only achieve rates of 2-3 Hz on modern desktop computers. Hardware-based scale- and rotation-invariant local feature extraction is an emerging trend enabling real-time performance for these computationally complex algorithms. This paper takes a retrospective look at the advances made so far in this field, discusses the hardware design strategies employed and results achieved, identifies current research gaps and suggests future research directions. |
2309.09668 | Bowen Yin | Bowen Yin, Xuying Zhang, Zhongyu Li, Li Liu, Ming-Ming Cheng, Qibin
Hou | DFormer: Rethinking RGBD Representation Learning for Semantic
Segmentation | Accepted by ICLR 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present DFormer, a novel RGB-D pretraining framework to learn transferable
representations for RGB-D segmentation tasks. DFormer has two new key
innovations: 1) Unlike previous works that encode RGB-D information with RGB
pretrained backbones, we pretrain the backbone using image-depth pairs from
ImageNet-1K, and hence the DFormer is endowed with the capacity to encode RGB-D
representations; 2) DFormer comprises a sequence of RGB-D blocks, which are
tailored for encoding both RGB and depth information through a novel building
block design. DFormer avoids the mismatched encoding of the 3D geometry
relationships in depth maps by RGB pretrained backbones, an issue that is
widespread in existing methods but has not been resolved. We finetune the pretrained DFormer
on two popular RGB-D tasks, i.e., RGB-D semantic segmentation and RGB-D salient
object detection, with a lightweight decoder head. Experimental results show
that our DFormer achieves new state-of-the-art performance on these two tasks
with less than half of the computational cost of the current best methods on
two RGB-D semantic segmentation datasets and five RGB-D salient object
detection datasets. Our code is available at:
https://github.com/VCIP-RGBD/DFormer.
| [
{
"created": "Mon, 18 Sep 2023 11:09:11 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Feb 2024 11:07:29 GMT",
"version": "v2"
}
] | 2024-02-08 | [
[
"Yin",
"Bowen",
""
],
[
"Zhang",
"Xuying",
""
],
[
"Li",
"Zhongyu",
""
],
[
"Liu",
"Li",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Hou",
"Qibin",
""
]
] | We present DFormer, a novel RGB-D pretraining framework to learn transferable representations for RGB-D segmentation tasks. DFormer has two new key innovations: 1) Unlike previous works that encode RGB-D information with RGB pretrained backbones, we pretrain the backbone using image-depth pairs from ImageNet-1K, and hence the DFormer is endowed with the capacity to encode RGB-D representations; 2) DFormer comprises a sequence of RGB-D blocks, which are tailored for encoding both RGB and depth information through a novel building block design. DFormer avoids the mismatched encoding of the 3D geometry relationships in depth maps by RGB pretrained backbones, an issue that is widespread in existing methods but has not been resolved. We finetune the pretrained DFormer on two popular RGB-D tasks, i.e., RGB-D semantic segmentation and RGB-D salient object detection, with a lightweight decoder head. Experimental results show that our DFormer achieves new state-of-the-art performance on these two tasks with less than half of the computational cost of the current best methods on two RGB-D semantic segmentation datasets and five RGB-D salient object detection datasets. Our code is available at: https://github.com/VCIP-RGBD/DFormer. |
1703.09923 | Deyu Meng | Zilu Ma and Shiqi Liu and Deyu Meng | On Convergence Property of Implicit Self-paced Objective | 9 pages, 0 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-paced learning (SPL) is a new methodology that simulates the learning
principle of humans/animals to start learning easier aspects of a learning
task, and then gradually take more complex examples into training. This
newly emerging learning regime has been empirically substantiated to be effective
in various computer vision and pattern recognition tasks. Recently, it has been
proved that the SPL regime has a close relationship to an implicit self-paced
objective function. While this implicit objective could provide helpful
insights into the effectiveness, and especially the robustness, of the SPL
paradigm, there are still no theoretical results strictly proved to
verify such a relationship. To address this issue, in this paper, we provide some
convergence results on this implicit objective of SPL. Specifically, we prove
that the learning process of SPL always converges to critical points of this
implicit objective under some mild conditions. This result verifies the
intrinsic relationship between SPL and this implicit objective, and makes the
previous robustness analysis on SPL complete and theoretically rational.
| [
{
"created": "Wed, 29 Mar 2017 07:53:43 GMT",
"version": "v1"
}
] | 2017-03-30 | [
[
"Ma",
"Zilu",
""
],
[
"Liu",
"Shiqi",
""
],
[
"Meng",
"Deyu",
""
]
] | Self-paced learning (SPL) is a new methodology that simulates the learning principle of humans/animals to start learning easier aspects of a learning task, and then gradually take more complex examples into training. This newly emerging learning regime has been empirically substantiated to be effective in various computer vision and pattern recognition tasks. Recently, it has been proved that the SPL regime has a close relationship to an implicit self-paced objective function. While this implicit objective could provide helpful insights into the effectiveness, and especially the robustness, of the SPL paradigm, there are still no theoretical results strictly proved to verify such a relationship. To address this issue, in this paper, we provide some convergence results on this implicit objective of SPL. Specifically, we prove that the learning process of SPL always converges to critical points of this implicit objective under some mild conditions. This result verifies the intrinsic relationship between SPL and this implicit objective, and makes the previous robustness analysis on SPL complete and theoretically rational. |
2302.04731 | Tomasz Steifer | Valentino Delle Rose, Alexander Kozachinskiy, Cristobal Rojas, Tomasz
Steifer | Find a witness or shatter: the landscape of computable PAC learning | 12 pages, 1 figure (corrected version) | null | null | null | cs.CC cs.LG math.LO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contributes to the study of CPAC learnability -- a computable
version of PAC learning -- by solving three open questions from recent papers.
Firstly, we prove that every improperly CPAC learnable class is contained in a
class which is properly CPAC learnable with polynomial sample complexity. This
confirms a conjecture by Agarwal et al (COLT 2021). Secondly, we show that
there exists a decidable class of hypotheses which is properly CPAC learnable,
but only with uncomputably fast growing sample complexity. This solves a
question from Sterkenburg (COLT 2022). Finally, we construct a decidable class
of finite Littlestone dimension which is not improperly CPAC learnable,
strengthening a recent result of Sterkenburg (2022) and answering a question
posed by Hasrati and Ben-David (ALT 2023). Together with previous work, our
results provide a complete landscape for the learnability problem in the CPAC
setting.
| [
{
"created": "Mon, 6 Feb 2023 02:52:36 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Feb 2023 18:14:31 GMT",
"version": "v2"
}
] | 2023-02-24 | [
[
"Rose",
"Valentino Delle",
""
],
[
"Kozachinskiy",
"Alexander",
""
],
[
"Rojas",
"Cristobal",
""
],
[
"Steifer",
"Tomasz",
""
]
] | This paper contributes to the study of CPAC learnability -- a computable version of PAC learning -- by solving three open questions from recent papers. Firstly, we prove that every improperly CPAC learnable class is contained in a class which is properly CPAC learnable with polynomial sample complexity. This confirms a conjecture by Agarwal et al (COLT 2021). Secondly, we show that there exists a decidable class of hypotheses which is properly CPAC learnable, but only with uncomputably fast growing sample complexity. This solves a question from Sterkenburg (COLT 2022). Finally, we construct a decidable class of finite Littlestone dimension which is not improperly CPAC learnable, strengthening a recent result of Sterkenburg (2022) and answering a question posed by Hasrati and Ben-David (ALT 2023). Together with previous work, our results provide a complete landscape for the learnability problem in the CPAC setting. |
2109.09373 | Ke Wang | Ke Wang, Hengyi Fei and Petar Kormushev | Fast Online Optimization for Terrain-Blind Bipedal Robot Walking with a
Decoupled Actuated SLIP Model | 8 pages, 8 figures, submitted to ICRA 2022 | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present a highly reactive controller which enables bipedal robots to
blindly walk over various kinds of uneven terrains while resisting pushes. The
high-level motion planner performs fast online optimization for footstep locations
and Center of Mass (CoM) height using the decoupled actuated Spring Loaded
Inverted Pendulum (aSLIP) model. The decoupled aSLIP model simplifies the
original aSLIP with Linear Inverted Pendulum (LIP) dynamics in horizontal
states and spring dynamics in the vertical state. The motion planning can be
formulated as a discrete-time Model Predictive Control (MPC) and solved at a
frequency of 1 kHz. The output of the motion planner using a reduced-order
model is fed into an inverse-dynamics based whole body controller for execution
on the robot. A key result of this controller is that the foot of the robot is
compliant, which further extends the robot's ability to be robust to unobserved
terrain changes. We evaluate our method in simulation with the bipedal robot
SLIDER. Results show the robot can blindly walk over various uneven terrains
including slopes, wave fields and stairs. It can also resist pushes while
walking on uneven terrain.
| [
{
"created": "Mon, 20 Sep 2021 08:49:20 GMT",
"version": "v1"
}
] | 2021-09-21 | [
[
"Wang",
"Ke",
""
],
[
"Fei",
"Hengyi",
""
],
[
"Kormushev",
"Petar",
""
]
] | We present a highly reactive controller which enables bipedal robots to blindly walk over various kinds of uneven terrains while resisting pushes. The high level motion planner does fast online optimization for footstep locations and Center of Mass (CoM) height using the decoupled actuated Spring Loaded Inverted Pendulum (aSLIP) model. The decoupled aSLIP model simplifies the original aSLIP with Linear Inverted Pendulum (LIP) dynamics in horizontal states and spring dynamics in the vertical state. The motion planning can be formulated as a discrete-time Model Predictive Control (MPC) and solved at a frequency of 1 kHz. The output of the motion planner using a reduced-order model is fed into an inverse-dynamics based whole body controller for execution on the robot. A key result of this controller is that the foot of the robot is compliant, which further extends the robot's ability to be robust to unobserved terrain changes. We evaluate our method in simulation with the bipedal robot SLIDER. Results show the robot can blindly walk over various uneven terrains including slopes, wave fields and stairs. It can also resist pushes while walking on uneven terrain. |
2110.10153 | Nicholas Gray | Nicholas Gray and Marco De Angelis and Scott Ferson | The Creation of Puffin, the Automatic Uncertainty Compiler | 21 Pages, 10 Figures | null | 10.1016/j.ijar.2023.108951 | null | cs.MS stat.CO | http://creativecommons.org/licenses/by-sa/4.0/ | An uncertainty compiler is a tool that automatically translates original
computer source code lacking explicit uncertainty analysis into code containing
appropriate uncertainty representations and uncertainty propagation algorithms.
We have developed a prototype uncertainty compiler along with an associated
object-oriented uncertainty language in the form of a stand-alone Python
library. It handles the specifications of input uncertainties and inserts calls
to intrusive uncertainty quantification algorithms in the library. The
uncertainty compiler can apply intrusive uncertainty propagation methods to
codes or parts of codes and therefore more comprehensively and flexibly address
both epistemic and aleatory uncertainties.
| [
{
"created": "Tue, 19 Oct 2021 10:28:35 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Oct 2021 09:40:02 GMT",
"version": "v2"
}
] | 2023-06-05 | [
[
"Gray",
"Nicholas",
""
],
[
"De Angelis",
"Marco",
""
],
[
"Ferson",
"Scott",
""
]
] | An uncertainty compiler is a tool that automatically translates original computer source code lacking explicit uncertainty analysis into code containing appropriate uncertainty representations and uncertainty propagation algorithms. We have developed a prototype uncertainty compiler along with an associated object-oriented uncertainty language in the form of a stand-alone Python library. It handles the specifications of input uncertainties and inserts calls to intrusive uncertainty quantification algorithms in the library. The uncertainty compiler can apply intrusive uncertainty propagation methods to codes or parts of codes and therefore more comprehensively and flexibly address both epistemic and aleatory uncertainties. |
2311.03250 | Zilin Xiao | Zilin Xiao, Ming Gong, Jie Wu, Xingyao Zhang, Linjun Shou, Jian Pei,
Daxin Jiang | Instructed Language Models with Retrievers Are Powerful Entity Linkers | Accepted to EMNLP 2023 Main | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Generative approaches powered by large language models (LLMs) have
demonstrated emergent abilities in tasks that require complex reasoning
abilities. Yet the generative nature still makes the generated content suffer
from hallucinations, thus unsuitable for entity-centric tasks like entity
linking (EL) requiring precise entity predictions over a large knowledge base.
We present Instructed Generative Entity Linker (INSGENEL), the first approach
that enables causal language models to perform entity linking over knowledge
bases. Several methods to equip language models with EL capability were
proposed in this work, including (i) a sequence-to-sequence training EL
objective with instruction-tuning, (ii) a novel generative EL framework based
on a light-weight potential mention retriever that frees the model from heavy
and non-parallelizable decoding, achieving 4$\times$ speedup without compromise
on linking metrics. INSGENEL outperforms previous generative alternatives with
+6.8 F1 points gain on average, also with a huge advantage in training data
efficiency and training compute consumption. In addition, our skillfully
engineered in-context learning (ICL) framework for EL still lags behind
INSGENEL significantly, reaffirming that the EL task remains a persistent
hurdle for general LLMs.
| [
{
"created": "Mon, 6 Nov 2023 16:38:51 GMT",
"version": "v1"
}
] | 2023-11-07 | [
[
"Xiao",
"Zilin",
""
],
[
"Gong",
"Ming",
""
],
[
"Wu",
"Jie",
""
],
[
"Zhang",
"Xingyao",
""
],
[
"Shou",
"Linjun",
""
],
[
"Pei",
"Jian",
""
],
[
"Jiang",
"Daxin",
""
]
] | Generative approaches powered by large language models (LLMs) have demonstrated emergent abilities in tasks that require complex reasoning abilities. Yet the generative nature still makes the generated content suffer from hallucinations, thus unsuitable for entity-centric tasks like entity linking (EL) requiring precise entity predictions over a large knowledge base. We present Instructed Generative Entity Linker (INSGENEL), the first approach that enables causal language models to perform entity linking over knowledge bases. Several methods to equip language models with EL capability were proposed in this work, including (i) a sequence-to-sequence training EL objective with instruction-tuning, (ii) a novel generative EL framework based on a light-weight potential mention retriever that frees the model from heavy and non-parallelizable decoding, achieving 4$\times$ speedup without compromise on linking metrics. INSGENEL outperforms previous generative alternatives with +6.8 F1 points gain on average, also with a huge advantage in training data efficiency and training compute consumption. In addition, our skillfully engineered in-context learning (ICL) framework for EL still lags behind INSGENEL significantly, reaffirming that the EL task remains a persistent hurdle for general LLMs. |
1706.08427 | Sebastian U. Stich | Sebastian U. Stich, Anant Raj, Martin Jaggi | Approximate Steepest Coordinate Descent | appearing at ICML 2017 | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new selection rule for the coordinate selection in coordinate
descent methods for huge-scale optimization. The efficiency of this novel
scheme is provably better than the efficiency of uniformly random selection,
and can reach the efficiency of steepest coordinate descent (SCD), enabling an
acceleration of a factor of up to $n$, the number of coordinates. In many
practical applications, our scheme can be implemented at no extra cost and
computational efficiency very close to the faster uniform selection. Numerical
experiments with Lasso and Ridge regression show promising improvements, in
line with our theoretical guarantees.
| [
{
"created": "Mon, 26 Jun 2017 15:07:02 GMT",
"version": "v1"
}
] | 2017-06-27 | [
[
"Stich",
"Sebastian U.",
""
],
[
"Raj",
"Anant",
""
],
[
"Jaggi",
"Martin",
""
]
] | We propose a new selection rule for the coordinate selection in coordinate descent methods for huge-scale optimization. The efficiency of this novel scheme is provably better than the efficiency of uniformly random selection, and can reach the efficiency of steepest coordinate descent (SCD), enabling an acceleration of a factor of up to $n$, the number of coordinates. In many practical applications, our scheme can be implemented at no extra cost and computational efficiency very close to the faster uniform selection. Numerical experiments with Lasso and Ridge regression show promising improvements, in line with our theoretical guarantees. |
1210.5128 | Yu Wang | Yu Wang, Weikang Qian, Shuchang Zhang and Bo Yuan | A Novel Learning Algorithm for Bayesian Network and Its Efficient
Implementation on GPU | null | null | null | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational inference of causal relationships underlying complex networks,
such as gene-regulatory pathways, is NP-complete due to its combinatorial
nature when permuting all possible interactions. Markov chain Monte Carlo
(MCMC) has been introduced to sample only part of the combinations while still
guaranteeing convergence and traversability, which therefore becomes widely
used. However, MCMC is not able to perform efficiently enough for networks that
have more than 15~20 nodes because of the computational complexity. In this
paper, we use general purpose processor (GPP) and general purpose graphics
processing unit (GPGPU) to implement and accelerate a novel Bayesian network
learning algorithm. With a hash-table-based memory-saving strategy and a novel
task assigning strategy, we achieve a 10-fold acceleration per iteration
compared to using a serial GPP. Specifically, we use a greedy method to search for the best
graph from a given order. We incorporate a prior component in the current
scoring function, which further facilitates the searching. Overall, we are able
to apply this system to networks with more than 60 nodes, allowing inferences
and modeling of bigger and more complex networks than current methods.
| [
{
"created": "Thu, 18 Oct 2012 14:02:12 GMT",
"version": "v1"
}
] | 2012-10-19 | [
[
"Wang",
"Yu",
""
],
[
"Qian",
"Weikang",
""
],
[
"Zhang",
"Shuchang",
""
],
[
"Yuan",
"Bo",
""
]
] | Computational inference of causal relationships underlying complex networks, such as gene-regulatory pathways, is NP-complete due to its combinatorial nature when permuting all possible interactions. Markov chain Monte Carlo (MCMC) has been introduced to sample only part of the combinations while still guaranteeing convergence and traversability, which therefore becomes widely used. However, MCMC is not able to perform efficiently enough for networks that have more than 15~20 nodes because of the computational complexity. In this paper, we use general purpose processor (GPP) and general purpose graphics processing unit (GPGPU) to implement and accelerate a novel Bayesian network learning algorithm. With a hash-table-based memory-saving strategy and a novel task assigning strategy, we achieve a 10-fold acceleration per iteration compared to using a serial GPP. Specifically, we use a greedy method to search for the best graph from a given order. We incorporate a prior component in the current scoring function, which further facilitates the searching. Overall, we are able to apply this system to networks with more than 60 nodes, allowing inferences and modeling of bigger and more complex networks than current methods. |
2108.03146 | Juan Cante | Daniel Yago, Juan Cante, Oriol Lloberas-Valls, Javier Oliver | Topology Optimization Methods for 3D Structural Problems: A Comparative
Study | null | Archives of Computational Methods in Engineering, 2021 | 10.1007/s11831-021-09626-2 | null | cs.CE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The work provides an exhaustive comparison of some representative families of
topology optimization methods for 3D structural optimization, such as the Solid
Isotropic Material with Penalization (SIMP), the Level-set, the Bidirectional
Evolutionary Structural Optimization (BESO), and the Variational Topology
Optimization (VARTOP) methods. The main differences and similarities of these
approaches are then highlighted from an algorithmic standpoint. The comparison
is carried out via the study of a set of numerical benchmark cases using
industrial-like fine-discretization meshes (around 1 million finite elements),
and Matlab as the common computational platform, to ensure fair comparisons.
Then, the results obtained for every benchmark case with the different methods
are compared in terms of computational cost, topology quality, achieved minimum
value of the objective function, and robustness of the computations
(convergence in objective function and topology). Finally, some quantitative
and qualitative results are presented, from which an attempt is made to rank
the methods in terms of their relative performance.
| [
{
"created": "Fri, 6 Aug 2021 14:40:13 GMT",
"version": "v1"
}
] | 2021-08-09 | [
[
"Yago",
"Daniel",
""
],
[
"Cante",
"Juan",
""
],
[
"Lloberas-Valls",
"Oriol",
""
],
[
"Oliver",
"Javier",
""
]
] | The work provides an exhaustive comparison of some representative families of topology optimization methods for 3D structural optimization, such as the Solid Isotropic Material with Penalization (SIMP), the Level-set, the Bidirectional Evolutionary Structural Optimization (BESO), and the Variational Topology Optimization (VARTOP) methods. The main differences and similarities of these approaches are then highlighted from an algorithmic standpoint. The comparison is carried out via the study of a set of numerical benchmark cases using industrial-like fine-discretization meshes (around 1 million finite elements), and Matlab as the common computational platform, to ensure fair comparisons. Then, the results obtained for every benchmark case with the different methods are compared in terms of computational cost, topology quality, achieved minimum value of the objective function, and robustness of the computations (convergence in objective function and topology). Finally, some quantitative and qualitative results are presented, from which an attempt is made to rank the methods in terms of their relative performance. |
2310.04944 | Zheng Zhang | Yuntong Hu, Zheng Zhang, Liang Zhao | Beyond Text: A Deep Dive into Large Language Models' Ability on
Understanding Graph Data | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have achieved impressive performance on many
natural language processing tasks. However, their capabilities on
graph-structured data remain relatively unexplored. In this paper, we conduct a
series of experiments benchmarking leading LLMs on diverse graph prediction
tasks spanning node, edge, and graph levels. We aim to assess whether LLMs can
effectively process graph data and leverage topological structures to enhance
performance, compared to specialized graph neural networks. Through varied
prompt formatting and task/dataset selection, we analyze how well LLMs can
interpret and utilize graph structures. By comparing LLMs' performance with
specialized graph models, we offer insights into the strengths and limitations
of employing LLMs for graph analytics. Our findings provide insights into LLMs'
capabilities and suggest avenues for further exploration in applying them to
graph analytics.
| [
{
"created": "Sat, 7 Oct 2023 23:25:22 GMT",
"version": "v1"
}
] | 2023-10-10 | [
[
"Hu",
"Yuntong",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Zhao",
"Liang",
""
]
] | Large language models (LLMs) have achieved impressive performance on many natural language processing tasks. However, their capabilities on graph-structured data remain relatively unexplored. In this paper, we conduct a series of experiments benchmarking leading LLMs on diverse graph prediction tasks spanning node, edge, and graph levels. We aim to assess whether LLMs can effectively process graph data and leverage topological structures to enhance performance, compared to specialized graph neural networks. Through varied prompt formatting and task/dataset selection, we analyze how well LLMs can interpret and utilize graph structures. By comparing LLMs' performance with specialized graph models, we offer insights into the strengths and limitations of employing LLMs for graph analytics. Our findings provide insights into LLMs' capabilities and suggest avenues for further exploration in applying them to graph analytics. |
1602.01870 | Ido Tal | Eren Sasoglu, Ido Tal | Polar Coding for Processes with Memory | Submitted to IEEE Transactions on Information Theory | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study polar coding for stochastic processes with memory. For example, a
process may be defined by the joint distribution of the input and output of a
channel. The memory may be present in the channel, the input, or both. We show
that $\psi$-mixing processes polarize under the standard Ar\i{}kan transform,
under a mild condition. We further show that the rate of polarization of the
\emph{low-entropy} synthetic channels is roughly $O(2^{-\sqrt{N}})$, where $N$
is the blocklength. That is, essentially the same rate as in the memoryless
case.
| [
{
"created": "Thu, 4 Feb 2016 22:18:42 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jan 2017 09:01:53 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Aug 2018 09:55:10 GMT",
"version": "v3"
}
] | 2018-08-16 | [
[
"Sasoglu",
"Eren",
""
],
[
"Tal",
"Ido",
""
]
] | We study polar coding for stochastic processes with memory. For example, a process may be defined by the joint distribution of the input and output of a channel. The memory may be present in the channel, the input, or both. We show that $\psi$-mixing processes polarize under the standard Ar\i{}kan transform, under a mild condition. We further show that the rate of polarization of the \emph{low-entropy} synthetic channels is roughly $O(2^{-\sqrt{N}})$, where $N$ is the blocklength. That is, essentially the same rate as in the memoryless case. |
1605.00982 | Peter Dugan Dr | Peter J. Dugan, Christopher W. Clark, Yann Andr\'e LeCun, Sofie M. Van
Parijs | Phase 4: DCL System Using Deep Learning Approaches for Land-Based or
Ship-Based Real-Time Recognition and Localization of Marine Mammals -
Distributed Processing and Big Data Applications | National Oceanic Partnership Program (NOPP) sponsored by ONR and NFWF | null | null | N000141210585 | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the animal bioacoustics community at large is collecting huge amounts
of acoustic data at an unprecedented pace, processing these data is
problematic. Currently in bioacoustics, there is no effective way to achieve
high performance computing using commercial off the shelf (COTS) or government
off the shelf (GOTS) tools. Although several advances have been made in the
open source and commercial software community, these offerings either support
specific applications that do not integrate well with data formats in
bioacoustics or they are too general. Furthermore, complex algorithms that use
deep learning strategies require special considerations, such as very large
libraries of exemplars (whale sounds) readily available for algorithm training
and testing. Detection-classification for passive acoustics is a data-mining
strategy and our goals are aligned with best practices that appeal to the
general data mining and machine learning communities where the problem of
processing large data is common. Therefore, the objective of this work is to
advance the state-of-the art for data-mining large passive acoustic datasets as
they pertain to bioacoustics. With this basic deficiency recognized at the
forefront, portions of the grant were dedicated to fostering deep-learning by
way of international competitions (kaggle.com) meant to attract deep-learning
solutions. The focus of this early work was targeted to make significant
progress in addressing big data systems and advanced algorithms over the
duration of the grant from 2012 to 2015. This early work provided simultaneous
advances in systems-algorithms research while supporting various collaborations
and projects.
| [
{
"created": "Tue, 3 May 2016 16:54:07 GMT",
"version": "v1"
},
{
"created": "Thu, 5 May 2016 18:35:16 GMT",
"version": "v2"
}
] | 2016-05-06 | [
[
"Dugan",
"Peter J.",
""
],
[
"Clark",
"Christopher W.",
""
],
[
"LeCun",
"Yann André",
""
],
[
"Van Parijs",
"Sofie M.",
""
]
] | While the animal bioacoustics community at large is collecting huge amounts of acoustic data at an unprecedented pace, processing these data is problematic. Currently in bioacoustics, there is no effective way to achieve high performance computing using commercial off the shelf (COTS) or government off the shelf (GOTS) tools. Although several advances have been made in the open source and commercial software community, these offerings either support specific applications that do not integrate well with data formats in bioacoustics or they are too general. Furthermore, complex algorithms that use deep learning strategies require special considerations, such as very large libraries of exemplars (whale sounds) readily available for algorithm training and testing. Detection-classification for passive acoustics is a data-mining strategy and our goals are aligned with best practices that appeal to the general data mining and machine learning communities where the problem of processing large data is common. Therefore, the objective of this work is to advance the state-of-the art for data-mining large passive acoustic datasets as they pertain to bioacoustics. With this basic deficiency recognized at the forefront, portions of the grant were dedicated to fostering deep-learning by way of international competitions (kaggle.com) meant to attract deep-learning solutions. The focus of this early work was targeted to make significant progress in addressing big data systems and advanced algorithms over the duration of the grant from 2012 to 2015. This early work provided simultaneous advances in systems-algorithms research while supporting various collaborations and projects. |
1111.3153 | Elsa Tolone | Kyriaki Ioannidou (LTTL), Elsa Tolone (LIGM, FaMAF) | Construction du lexique LGLex \`a partir des tables du Lexique-Grammaire
des verbes du grec moderne | 30\`eme Colloque international sur le Lexique et la Grammaire
(LGC'11), Nicosie : Chypre (2011) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we summarize the work done on the resources of Modern Greek on
the Lexicon-Grammar of verbs. We detail the definitional features of each
table, and all changes made to the names of features to make them consistent.
Through the development of the table of classes, including all the features, we
have considered the conversion of tables into a syntactic lexicon: LGLex. The
lexicon, in plain text format or XML, is generated by the LGExtract tool
(Constant & Tolone, 2010). This format is directly usable in applications of
Natural Language Processing (NLP).
| [
{
"created": "Mon, 14 Nov 2011 09:34:59 GMT",
"version": "v1"
}
] | 2011-11-15 | [
[
"Ioannidou",
"Kyriaki",
"",
"LTTL"
],
[
"Tolone",
"Elsa",
"",
"LIGM, FaMAF"
]
] | In this paper, we summarize the work done on the resources of Modern Greek on the Lexicon-Grammar of verbs. We detail the definitional features of each table, and all changes made to the names of features to make them consistent. Through the development of the table of classes, including all the features, we have considered the conversion of tables into a syntactic lexicon: LGLex. The lexicon, in plain text format or XML, is generated by the LGExtract tool (Constant & Tolone, 2010). This format is directly usable in applications of Natural Language Processing (NLP). |
2404.00994 | Bernhard Egger | Maximilian Weiherer, Andreea Dogaru, Shreya Kapoor, Hannah Schieber,
Bernhard Egger | AMOR: Ambiguous Authorship Order | SIGBOVIK '24 submission | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | As we all know, writing scientific papers together with our beloved
colleagues is a truly remarkable experience (partially): endless discussions
about the same useless paragraph over and over again, followed by long days and
long nights -- both at the same time. What a wonderful ride it is! What a
beautiful life we have. But wait, there's one tiny little problem that utterly
shatters the peace, turning even renowned scientists into bloodthirsty
monsters: author order. The reason is that, contrary to widespread opinion,
it's not the font size that matters, but the way things are ordered. Of course,
this is a fairly well-known fact among scientists all across the planet (and
beyond) and explains clearly why we regularly have to read about yet another
escalated paper submission in local police reports.
In this paper, we take an important step backwards to tackle this issue by
solving the so-called author ordering problem (AOP) once and for all.
Specifically, we propose AMOR, a system that replaces silly constructs like
co-first or co-middle authorship with a simple yet easy probabilistic approach
based on random shuffling of the author list at viewing time. In addition to
AOP, we also solve the ambiguous author ordering citation problem (AAOCP) on
the fly. Stop author violence, be human.
| [
{
"created": "Mon, 1 Apr 2024 08:44:11 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Weiherer",
"Maximilian",
""
],
[
"Dogaru",
"Andreea",
""
],
[
"Kapoor",
"Shreya",
""
],
[
"Schieber",
"Hannah",
""
],
[
"Egger",
"Bernhard",
""
]
] | As we all know, writing scientific papers together with our beloved colleagues is a truly remarkable experience (partially): endless discussions about the same useless paragraph over and over again, followed by long days and long nights -- both at the same time. What a wonderful ride it is! What a beautiful life we have. But wait, there's one tiny little problem that utterly shatters the peace, turning even renowned scientists into bloodthirsty monsters: author order. The reason is that, contrary to widespread opinion, it's not the font size that matters, but the way things are ordered. Of course, this is a fairly well-known fact among scientists all across the planet (and beyond) and explains clearly why we regularly have to read about yet another escalated paper submission in local police reports. In this paper, we take an important step backwards to tackle this issue by solving the so-called author ordering problem (AOP) once and for all. Specifically, we propose AMOR, a system that replaces silly constructs like co-first or co-middle authorship with a simple yet easy probabilistic approach based on random shuffling of the author list at viewing time. In addition to AOP, we also solve the ambiguous author ordering citation problem (AAOCP) on the fly. Stop author violence, be human. |
1601.06180 | Robert Peharz | Robert Peharz, Robert Gens, Franz Pernkopf, Pedro Domingos | On the Latent Variable Interpretation in Sum-Product Networks | Revised version, accepted for publication in IEEE Transactions on
Pattern Analysis and Machine Intelligence (TPAMI). Shortened and revised
Section 4: Thanks to our reviewers, pointing out that Theorem 2 holds for
selective SPNs. Added paragraph in Section 2.1, relating sizes of
original/augmented SPNs. Fixed typos, rephrased sentences, revised references | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the central themes in Sum-Product networks (SPNs) is the
interpretation of sum nodes as marginalized latent variables (LVs). This
interpretation yields an increased syntactic or semantic structure, allows the
application of the EM algorithm and to efficiently perform MPE inference. In
literature, the LV interpretation was justified by explicitly introducing the
indicator variables corresponding to the LVs' states. However, as pointed out
in this paper, this approach is in conflict with the completeness condition in
SPNs and does not fully specify the probabilistic model. We propose a remedy
for this problem by modifying the original approach for introducing the LVs,
which we call SPN augmentation. We discuss conditional independencies in
augmented SPNs, formally establish the probabilistic interpretation of the
sum-weights and give an interpretation of augmented SPNs as Bayesian networks.
Based on these results, we find a sound derivation of the EM algorithm for
SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in literature
was never proven to be correct. We show that this is indeed a correct
algorithm, when applied to selective SPNs, and in particular when applied to
augmented SPNs. Our theoretical results are confirmed in experiments on
synthetic data and 103 real-world datasets.
| [
{
"created": "Fri, 22 Jan 2016 21:40:33 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Oct 2016 07:54:35 GMT",
"version": "v2"
}
] | 2016-10-31 | [
[
"Peharz",
"Robert",
""
],
[
"Gens",
"Robert",
""
],
[
"Pernkopf",
"Franz",
""
],
[
"Domingos",
"Pedro",
""
]
] | One of the central themes in Sum-Product networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields an increased syntactic or semantic structure, allows the application of the EM algorithm and to efficiently perform MPE inference. In literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in literature was never proven to be correct. We show that this is indeed a correct algorithm, when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets. |
2302.08055 | Chenjiu Wang | Chenjiu Wang, Ke He, Ruiqi Fan, Xiaonan Wang, Yang Kong, Wei Wang,
Qinfen Hao | CXL over Ethernet: A Novel FPGA-based Memory Disaggregation Design in
Data Centers | null | null | null | null | cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Memory resources in data centers generally suffer from low utilization and
lack of dynamics. Memory disaggregation solves these problems by decoupling CPU
and memory, which currently includes approaches based on RDMA or
interconnection protocols such as Compute Express Link (CXL). However, the
RDMA-based approach involves code refactoring and higher latency. The CXL-based
approach supports native memory semantics and overcomes the shortcomings of
RDMA, but is limited within rack level. In addition, memory pooling and sharing
based on CXL products are currently in the process of early exploration and
still take time to be available in the future. In this paper, we propose the
CXL over Ethernet approach that the host processor can access the remote memory
with memory semantics through Ethernet. Our approach can support native memory
load/store access and extends the physical range to cross server and rack
levels by taking advantage of CXL and RDMA technologies. We prototype our
approach with one server and two FPGA boards with 100 Gbps network and measure
the memory access latency. Furthermore, we optimize the memory access path by
using data cache and congestion control algorithm in the critical path to
further lower access latency. The evaluation results show that the average
latency for the server to access remote memory is 1.97 {\mu}s, which is about
37% lower than the baseline latency in the industry. The latency can be further
reduced to 415 ns with cache block and hit access on FPGA.
| [
{
"created": "Thu, 16 Feb 2023 03:36:04 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Feb 2023 08:16:57 GMT",
"version": "v2"
}
] | 2023-02-23 | [
[
"Wang",
"Chenjiu",
""
],
[
"He",
"Ke",
""
],
[
"Fan",
"Ruiqi",
""
],
[
"Wang",
"Xiaonan",
""
],
[
"Kong",
"Yang",
""
],
[
"Wang",
"Wei",
""
],
[
"Hao",
"Qinfen",
""
]
] | Memory resources in data centers generally suffer from low utilization and lack of dynamics. Memory disaggregation solves these problems by decoupling CPU and memory, which currently includes approaches based on RDMA or interconnection protocols such as Compute Express Link (CXL). However, the RDMA-based approach involves code refactoring and higher latency. The CXL-based approach supports native memory semantics and overcomes the shortcomings of RDMA, but is limited within rack level. In addition, memory pooling and sharing based on CXL products are currently in the process of early exploration and will still take time to become available. In this paper, we propose the CXL over Ethernet approach, in which the host processor can access remote memory with memory semantics through Ethernet. Our approach can support native memory load/store access and extends the physical range to cross server and rack levels by taking advantage of CXL and RDMA technologies. We prototype our approach with one server and two FPGA boards connected by a 100 Gbps network and measure the memory access latency. Furthermore, we optimize the memory access path by using a data cache and a congestion control algorithm in the critical path to further lower access latency. The evaluation results show that the average latency for the server to access remote memory is 1.97 {\mu}s, which is about 37% lower than the baseline latency in the industry. The latency can be further reduced to 415 ns with cache block and hit access on FPGA. |
1908.01738 | Dragos-Adrian (Adi) Seredinschi PhD | Rachid Guerraoui and Petr Kuznetsov and Matteo Monti and Matej
Pavlovic and Dragos-Adrian Seredinschi and Yann Vonlanthen | Scalable Byzantine Reliable Broadcast (Extended Version) | This is an extended version of a conference article, appearing (best
paper award) in the proceedings of the 33rd International Symposium on
Distributed Computing (DISC 2019), October 14--18, 2019, Budapest, Hungary | null | 10.4230/LIPIcs.DISC.2019.22 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Byzantine reliable broadcast is a powerful primitive that allows a set of
processes to agree on a message from a designated sender, even if some
processes (including the sender) are Byzantine. Existing broadcast protocols
for this setting scale poorly, as they typically build on quorum systems with
strong intersection guarantees, which results in linear per-process
communication and computation complexity.
We generalize the Byzantine reliable broadcast abstraction to the
probabilistic setting, allowing each of its properties to be violated with a
fixed, arbitrarily small probability. We leverage these relaxed guarantees in a
protocol where we replace quorums with stochastic samples. Compared to quorums,
samples are significantly smaller in size, leading to a more scalable design.
We obtain the first Byzantine reliable broadcast protocol with logarithmic
per-process communication and computation complexity.
We conduct a complete and thorough analysis of our protocol, deriving bounds
on the probability of each of its properties being compromised. During our
analysis, we introduce a novel general technique we call adversary decorators.
Adversary decorators allow us to make claims about the optimal strategy of the
Byzantine adversary without having to make any additional assumptions. We also
introduce Threshold Contagion, a model of message propagation through a system
with Byzantine processes. To the best of our knowledge, this is the first
formal analysis of a probabilistic broadcast protocol in the Byzantine fault
model. We show numerically that practically negligible failure probabilities
can be achieved with realistic security parameters.
| [
{
"created": "Mon, 5 Aug 2019 17:30:00 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Feb 2020 20:36:21 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Feb 2020 19:50:03 GMT",
"version": "v3"
}
] | 2020-02-21 | [
[
"Guerraoui",
"Rachid",
""
],
[
"Kuznetsov",
"Petr",
""
],
[
"Monti",
"Matteo",
""
],
[
"Pavlovic",
"Matej",
""
],
[
"Seredinschi",
"Dragos-Adrian",
""
],
[
"Vonlanthen",
"Yann",
""
]
] | Byzantine reliable broadcast is a powerful primitive that allows a set of processes to agree on a message from a designated sender, even if some processes (including the sender) are Byzantine. Existing broadcast protocols for this setting scale poorly, as they typically build on quorum systems with strong intersection guarantees, which results in linear per-process communication and computation complexity. We generalize the Byzantine reliable broadcast abstraction to the probabilistic setting, allowing each of its properties to be violated with a fixed, arbitrarily small probability. We leverage these relaxed guarantees in a protocol where we replace quorums with stochastic samples. Compared to quorums, samples are significantly smaller in size, leading to a more scalable design. We obtain the first Byzantine reliable broadcast protocol with logarithmic per-process communication and computation complexity. We conduct a complete and thorough analysis of our protocol, deriving bounds on the probability of each of its properties being compromised. During our analysis, we introduce a novel general technique we call adversary decorators. Adversary decorators allow us to make claims about the optimal strategy of the Byzantine adversary without having to make any additional assumptions. We also introduce Threshold Contagion, a model of message propagation through a system with Byzantine processes. To the best of our knowledge, this is the first formal analysis of a probabilistic broadcast protocol in the Byzantine fault model. We show numerically that practically negligible failure probabilities can be achieved with realistic security parameters. |
1604.05964 | Sandeep Dhavane | Sandeep Babasaheb Dhavane, Mohammed Zafar Ali Khan | Cloud Cognitive Radio HetNets with Limited Feedback | equation no. 16 17 and 18 have flaws, result of which final outage
derivation is not converging | null | null | null | cs.IT cs.NI math.IT | http://creativecommons.org/licenses/by/4.0/ | In this paper we propose a cloud-based interweave cognitive radio HetNet
which combines the gain of cloud-based radio, namely increased rate for
cell-edge users, with the better spectral efficiency of cognitive radio.
Simulation results for limited feedback show approximately a 100% increase in
rate for primary cell-edge users and a 300% increase for secondary cell-edge
users, with the same outage, in the cloud-based network over a conventional
cognitive radio network.
| [
{
"created": "Wed, 20 Apr 2016 13:59:55 GMT",
"version": "v1"
},
{
"created": "Thu, 19 May 2016 15:39:40 GMT",
"version": "v2"
}
] | 2016-05-20 | [
[
"Dhavane",
"Sandeep Babasaheb",
""
],
[
"Khan",
"Mohammed Zafar Ali",
""
]
] | In this paper we propose a cloud-based interweave cognitive radio HetNet which combines the gain of cloud-based radio, namely increased rate for cell-edge users, with the better spectral efficiency of cognitive radio. Simulation results for limited feedback show approximately a 100% increase in rate for primary cell-edge users and a 300% increase for secondary cell-edge users, with the same outage, in the cloud-based network over a conventional cognitive radio network. |
1501.00680 | Ricardo Monge | Osvaldo Skliar and Ricardo E. Monge and Sherry Gapper | A New Method for Signal and Image Analysis: The Square Wave Method | null | null | null | null | cs.NA cs.CV math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A brief review is provided of the use of the Square Wave Method (SWM) in the
field of signal and image analysis and it is specified how results thus
obtained are expressed using the Square Wave Transform (SWT), in the frequency
domain. To illustrate the new approach introduced in this field, the results of
two cases are analyzed: a) a sequence of samples (that is, measured values) of
an electromyographic recording; and b) the classic image of Lenna.
| [
{
"created": "Sun, 4 Jan 2015 14:35:58 GMT",
"version": "v1"
}
] | 2015-01-06 | [
[
"Skliar",
"Osvaldo",
""
],
[
"Monge",
"Ricardo E.",
""
],
[
"Gapper",
"Sherry",
""
]
] | A brief review is provided of the use of the Square Wave Method (SWM) in the field of signal and image analysis and it is specified how results thus obtained are expressed using the Square Wave Transform (SWT), in the frequency domain. To illustrate the new approach introduced in this field, the results of two cases are analyzed: a) a sequence of samples (that is, measured values) of an electromyographic recording; and b) the classic image of Lenna. |
0901.3754 | Vahab Mirrokni | Eyal Even-dar, Yishay Mansour, Vahab Mirrokni, S. Muthukrishnan, Uri
Nadav | Bid Optimization in Broad-Match Ad auctions | World Wide Web Conference (WWW09), 10 pages, 2 figures | null | null | null | cs.GT cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ad auctions in sponsored search support ``broad match'' that allows an
advertiser to target a large number of queries while bidding only on a limited
number. While giving more expressiveness to advertisers, this feature makes it
challenging to optimize bids to maximize their returns: choosing to bid on a
query as a broad match because it provides high profit results in one bidding
for related queries which may yield low or even negative profits.
We abstract and study the complexity of the {\em bid optimization problem}
which is to determine an advertiser's bids on a subset of keywords (possibly
using broad match) so that her profit is maximized. In the query language model
when the advertiser is allowed to bid on all queries as broad match, we present
a linear programming (LP)-based polynomial-time algorithm that gets the
optimal profit. In the model in which an advertiser can only bid on keywords,
i.e., a subset of keywords as an exact or broad match, we show that this problem
is not approximable within any reasonable approximation factor unless P=NP. To
deal with this hardness result, we present a constant-factor approximation when
the optimal profit significantly exceeds the cost. This algorithm is based on
rounding a natural LP formulation of the problem. Finally, we study a budgeted
variant of the problem, and show that in the query language model, one can find
two budget constrained ad campaigns in polynomial time that implement the
optimal bidding strategy. Our results are the first to address bid optimization
under the broad match feature which is common in ad auctions.
| [
{
"created": "Fri, 23 Jan 2009 19:09:01 GMT",
"version": "v1"
}
] | 2009-01-26 | [
[
"Even-dar",
"Eyal",
""
],
[
"Mansour",
"Yishay",
""
],
[
"Mirrokni",
"Vahab",
""
],
[
"Muthukrishnan",
"S.",
""
],
[
"Nadav",
"Uri",
""
]
] | Ad auctions in sponsored search support ``broad match'' that allows an advertiser to target a large number of queries while bidding only on a limited number. While giving more expressiveness to advertisers, this feature makes it challenging to optimize bids to maximize their returns: choosing to bid on a query as a broad match because it provides high profit results in one bidding for related queries which may yield low or even negative profits. We abstract and study the complexity of the {\em bid optimization problem} which is to determine an advertiser's bids on a subset of keywords (possibly using broad match) so that her profit is maximized. In the query language model when the advertiser is allowed to bid on all queries as broad match, we present a linear programming (LP)-based polynomial-time algorithm that gets the optimal profit. In the model in which an advertiser can only bid on keywords, i.e., a subset of keywords as an exact or broad match, we show that this problem is not approximable within any reasonable approximation factor unless P=NP. To deal with this hardness result, we present a constant-factor approximation when the optimal profit significantly exceeds the cost. This algorithm is based on rounding a natural LP formulation of the problem. Finally, we study a budgeted variant of the problem, and show that in the query language model, one can find two budget constrained ad campaigns in polynomial time that implement the optimal bidding strategy. Our results are the first to address bid optimization under the broad match feature which is common in ad auctions. |
2002.11437 | Alexandros Hollender | Aris Filos-Ratsikas, Alexandros Hollender, Katerina Sotiraki, Manolis
Zampetakis | Consensus-Halving: Does It Ever Get Easier? | Journal version. Preliminary version appeared at EC '20 | SIAM Journal on Computing, 52(2):412-451 (2023) | 10.1137/20M1387493 | null | cs.CC cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the $\varepsilon$-Consensus-Halving problem, a fundamental problem in fair
division, there are $n$ agents with valuations over the interval $[0,1]$, and
the goal is to divide the interval into pieces and assign a label "$+$" or
"$-$" to each piece, such that every agent values the total amount of "$+$" and
the total amount of "$-$" almost equally. The problem was recently proven by
Filos-Ratsikas and Goldberg [2019] to be the first "natural" complete problem
for the computational class PPA, answering a decade-old open question.
In this paper, we examine the extent to which the problem becomes easy to
solve, if one restricts the class of valuation functions. To this end, we
provide the following contributions. First, we obtain a strengthening of the
PPA-hardness result of [Filos-Ratsikas and Goldberg, 2019], to the case when
agents have piecewise uniform valuations with only two blocks. We obtain this
result via a new reduction, which is in fact conceptually much simpler than the
corresponding one in [Filos-Ratsikas and Goldberg, 2019]. Then, we consider the
case of single-block (uniform) valuations and provide a parameterized
polynomial time algorithm for solving $\varepsilon$-Consensus-Halving for any
$\varepsilon$, as well as a polynomial-time algorithm for $\varepsilon=1/2$.
Finally, an important application of our new techniques is the first hardness
result for a generalization of Consensus-Halving, the Consensus-$1/k$-Division
problem [Simmons and Su, 2003]. In particular, we prove that
$\varepsilon$-Consensus-$1/3$-Division is PPAD-hard.
| [
{
"created": "Wed, 26 Feb 2020 12:43:56 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Feb 2020 16:24:58 GMT",
"version": "v2"
},
{
"created": "Fri, 31 Jul 2020 22:43:26 GMT",
"version": "v3"
},
{
"created": "Mon, 24 Apr 2023 19:10:35 GMT",
"version": "v4"
}
] | 2023-04-26 | [
[
"Filos-Ratsikas",
"Aris",
""
],
[
"Hollender",
"Alexandros",
""
],
[
"Sotiraki",
"Katerina",
""
],
[
"Zampetakis",
"Manolis",
""
]
] | In the $\varepsilon$-Consensus-Halving problem, a fundamental problem in fair division, there are $n$ agents with valuations over the interval $[0,1]$, and the goal is to divide the interval into pieces and assign a label "$+$" or "$-$" to each piece, such that every agent values the total amount of "$+$" and the total amount of "$-$" almost equally. The problem was recently proven by Filos-Ratsikas and Goldberg [2019] to be the first "natural" complete problem for the computational class PPA, answering a decade-old open question. In this paper, we examine the extent to which the problem becomes easy to solve, if one restricts the class of valuation functions. To this end, we provide the following contributions. First, we obtain a strengthening of the PPA-hardness result of [Filos-Ratsikas and Goldberg, 2019], to the case when agents have piecewise uniform valuations with only two blocks. We obtain this result via a new reduction, which is in fact conceptually much simpler than the corresponding one in [Filos-Ratsikas and Goldberg, 2019]. Then, we consider the case of single-block (uniform) valuations and provide a parameterized polynomial time algorithm for solving $\varepsilon$-Consensus-Halving for any $\varepsilon$, as well as a polynomial-time algorithm for $\varepsilon=1/2$. Finally, an important application of our new techniques is the first hardness result for a generalization of Consensus-Halving, the Consensus-$1/k$-Division problem [Simmons and Su, 2003]. In particular, we prove that $\varepsilon$-Consensus-$1/3$-Division is PPAD-hard. |
1704.02516 | Santhosh Kumar Ramakrishnan | Santhosh K. Ramakrishnan, Ambar Pal, Gaurav Sharma and Anurag Mittal | An Empirical Evaluation of Visual Question Answering for Novel Objects | 11 pages, 4 figures, accepted in CVPR 2017 (poster) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of answering questions about images in the harder
setting, where the test questions and corresponding images contain novel
objects, which were not queried about in the training data. Such a setting is
inevitable in the real world: owing to the heavy-tailed distribution of the
visual categories, there would be some objects which would not be annotated in
the train set. We show that the performance of two popular existing methods
drops significantly (up to 28%) when evaluated on novel objects compared to
known objects. We
propose methods which use large existing external corpora of (i) unlabeled
text, i.e. books, and (ii) images tagged with classes, to achieve novel object
based visual question answering. We do systematic empirical studies, for both
an oracle case where the novel objects are known textually, as well as a fully
automatic case without any explicit knowledge of the novel objects, but with
the minimal assumption that the novel objects are semantically related to the
existing objects in training. The proposed methods for novel object based
visual question answering are modular and can potentially be used with many
visual question answering architectures. We show consistent improvements with
the two popular architectures and give qualitative analysis of the cases where
the model does well and of those where it fails to bring improvements.
| [
{
"created": "Sat, 8 Apr 2017 17:51:46 GMT",
"version": "v1"
}
] | 2017-04-11 | [
[
"Ramakrishnan",
"Santhosh K.",
""
],
[
"Pal",
"Ambar",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Mittal",
"Anurag",
""
]
] | We study the problem of answering questions about images in the harder setting, where the test questions and corresponding images contain novel objects, which were not queried about in the training data. Such a setting is inevitable in the real world: owing to the heavy-tailed distribution of the visual categories, there would be some objects which would not be annotated in the train set. We show that the performance of two popular existing methods drops significantly (up to 28%) when evaluated on novel objects compared to known objects. We propose methods which use large existing external corpora of (i) unlabeled text, i.e. books, and (ii) images tagged with classes, to achieve novel object based visual question answering. We do systematic empirical studies, for both an oracle case where the novel objects are known textually, as well as a fully automatic case without any explicit knowledge of the novel objects, but with the minimal assumption that the novel objects are semantically related to the existing objects in training. The proposed methods for novel object based visual question answering are modular and can potentially be used with many visual question answering architectures. We show consistent improvements with the two popular architectures and give qualitative analysis of the cases where the model does well and of those where it fails to bring improvements. |
2301.00592 | Chiyu Zhang | Chiyu Zhang, Jun Yang, Zaiyan Dai, Peng Cao | Edge Enhanced Image Style Transfer via Transformers | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, arbitrary image style transfer has attracted more and more
attention. Given a pair of content and style images, the goal is a stylized
image that retains the content of the former while capturing the style patterns
of the latter. However, it is difficult to maintain a good trade-off between
the content details and the style features. When stylizing the image with
sufficient style patterns, the content details may be damaged and sometimes the
objects in the images cannot be distinguished clearly. For this reason, we
present a new transformer-based method named STT for image style transfer and
an edge loss which can noticeably enhance the content details, avoiding blurred
results caused by excessive rendering of style features. Qualitative and
quantitative experiments demonstrate that STT achieves comparable performance
to state-of-the-art image style transfer methods while alleviating the content
leak problem.
| [
{
"created": "Mon, 2 Jan 2023 10:39:31 GMT",
"version": "v1"
}
] | 2023-01-03 | [
[
"Zhang",
"Chiyu",
""
],
[
"Yang",
"Jun",
""
],
[
"Dai",
"Zaiyan",
""
],
[
"Cao",
"Peng",
""
]
] | In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain a good trade-off between the content details and the style features. When stylizing the image with sufficient style patterns, the content details may be damaged and sometimes the objects in the images cannot be distinguished clearly. For this reason, we present a new transformer-based method named STT for image style transfer and an edge loss which can noticeably enhance the content details, avoiding blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves comparable performance to state-of-the-art image style transfer methods while alleviating the content leak problem. |
1212.5692 | Vivek Nigam | Nick Benton, Martin Hofmann, Vivek Nigam | Abstract Effects and Proof-Relevant Logical Relations | null | null | null | null | cs.PL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel variant of logical relations that maps types not merely
to partial equivalence relations on values, as is commonly done, but rather to
a proof-relevant generalisation thereof, namely setoids. The objects of a
setoid establish that values inhabit semantic types, whilst its morphisms are
understood as proofs of semantic equivalence. The transition to proof-relevance
solves two well-known problems caused by the use of existential quantification
over future worlds in traditional Kripke logical relations: failure of
admissibility, and spurious functional dependencies. We illustrate the novel
format with two applications: a direct-style validation of Pitts and Stark's
equivalences for "new" and a denotational semantics for a region-based effect
system that supports type abstraction in the sense that only externally visible
effects need to be tracked; non-observable internal modifications, such as the
reorganisation of a search tree or lazy initialisation, can count as `pure' or
`read only'. This `fictional purity' allows clients of a module soundly to
validate more effect-based program equivalences than would be possible with
traditional effect systems.
| [
{
"created": "Sat, 22 Dec 2012 13:54:02 GMT",
"version": "v1"
}
] | 2012-12-27 | [
[
"Benton",
"Nick",
""
],
[
"Hofmann",
"Martin",
""
],
[
"Nigam",
"Vivek",
""
]
] | We introduce a novel variant of logical relations that maps types not merely to partial equivalence relations on values, as is commonly done, but rather to a proof-relevant generalisation thereof, namely setoids. The objects of a setoid establish that values inhabit semantic types, whilst its morphisms are understood as proofs of semantic equivalence. The transition to proof-relevance solves two well-known problems caused by the use of existential quantification over future worlds in traditional Kripke logical relations: failure of admissibility, and spurious functional dependencies. We illustrate the novel format with two applications: a direct-style validation of Pitts and Stark's equivalences for "new" and a denotational semantics for a region-based effect system that supports type abstraction in the sense that only externally visible effects need to be tracked; non-observable internal modifications, such as the reorganisation of a search tree or lazy initialisation, can count as `pure' or `read only'. This `fictional purity' allows clients of a module soundly to validate more effect-based program equivalences than would be possible with traditional effect systems. |
2105.06820 | Charalambos Poullis | Farhan Rahman Wasee, Alen Joy, Charalambos Poullis | Predicting Surface Reflectance Properties of Outdoor Scenes Under
Unknown Natural Illumination | null | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Estimating and modelling the appearance of an object under outdoor
illumination conditions is a complex process. Although there have been several
studies on illumination estimation and relighting, very few of them focus on
estimating the reflectance properties of outdoor objects and scenes. This paper
addresses this problem and proposes a complete framework to predict surface
reflectance properties of outdoor scenes under unknown natural illumination.
Uniquely, we recast the problem into its two constituent components involving
the BRDF incoming light and outgoing view directions: (i) surface points'
radiance captured in the images, and outgoing view directions are aggregated
and encoded into reflectance maps, and (ii) a neural network trained on
reflectance maps of renders of a unit sphere under arbitrary light directions
infers a low-parameter reflection model representing the reflectance properties
at each surface in the scene. Our model is based on a combination of
phenomenological and physics-based scattering models and can relight the scenes
from novel viewpoints. We present experiments that show that rendering with the
predicted reflectance properties results in a visually similar appearance to
using textures that cannot otherwise be disentangled from the reflectance
properties.
| [
{
"created": "Fri, 14 May 2021 13:31:47 GMT",
"version": "v1"
}
] | 2021-05-17 | [
[
"Wasee",
"Farhan Rahman",
""
],
[
"Joy",
"Alen",
""
],
[
"Poullis",
"Charalambos",
""
]
] | Estimating and modelling the appearance of an object under outdoor illumination conditions is a complex process. Although there have been several studies on illumination estimation and relighting, very few of them focus on estimating the reflectance properties of outdoor objects and scenes. This paper addresses this problem and proposes a complete framework to predict surface reflectance properties of outdoor scenes under unknown natural illumination. Uniquely, we recast the problem into its two constituent components involving the BRDF incoming light and outgoing view directions: (i) surface points' radiance captured in the images, and outgoing view directions are aggregated and encoded into reflectance maps, and (ii) a neural network trained on reflectance maps of renders of a unit sphere under arbitrary light directions infers a low-parameter reflection model representing the reflectance properties at each surface in the scene. Our model is based on a combination of phenomenological and physics-based scattering models and can relight the scenes from novel viewpoints. We present experiments that show that rendering with the predicted reflectance properties results in a visually similar appearance to using textures that cannot otherwise be disentangled from the reflectance properties. |
2202.02765 | Naman Agarwal | Julian Zimmert, Naman Agarwal, Satyen Kale | Pushing the Efficiency-Regret Pareto Frontier for Online Learning of
Portfolios and Quantum States | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We revisit the classical online portfolio selection problem. It is widely
assumed that a trade-off between computational complexity and regret is
unavoidable, with Cover's Universal Portfolios algorithm, SOFT-BAYES and
ADA-BARRONS currently constituting its state-of-the-art Pareto frontier. In
this paper, we present the first efficient algorithm, BISONS, that obtains
polylogarithmic regret with memory and per-step running time requirements that
are polynomial in the dimension, displacing ADA-BARRONS from the Pareto
frontier. Additionally, we resolve a COLT 2020 open problem by showing that a
certain Follow-The-Regularized-Leader algorithm with log-barrier regularization
suffers an exponentially larger dependence on the dimension than previously
conjectured. Thus, we rule out this algorithm as a candidate for the Pareto
frontier. We also extend our algorithm and analysis to a more general problem
than online portfolio selection, viz. online learning of quantum states with
log loss. This algorithm, called SCHRODINGER'S BISONS, is the first efficient
algorithm with polylogarithmic regret for this more general problem.
| [
{
"created": "Sun, 6 Feb 2022 12:15:55 GMT",
"version": "v1"
}
] | 2022-02-08 | [
[
"Zimmert",
"Julian",
""
],
[
"Agarwal",
"Naman",
""
],
[
"Kale",
"Satyen",
""
]
] | We revisit the classical online portfolio selection problem. It is widely assumed that a trade-off between computational complexity and regret is unavoidable, with Cover's Universal Portfolios algorithm, SOFT-BAYES and ADA-BARRONS currently constituting its state-of-the-art Pareto frontier. In this paper, we present the first efficient algorithm, BISONS, that obtains polylogarithmic regret with memory and per-step running time requirements that are polynomial in the dimension, displacing ADA-BARRONS from the Pareto frontier. Additionally, we resolve a COLT 2020 open problem by showing that a certain Follow-The-Regularized-Leader algorithm with log-barrier regularization suffers an exponentially larger dependence on the dimension than previously conjectured. Thus, we rule out this algorithm as a candidate for the Pareto frontier. We also extend our algorithm and analysis to a more general problem than online portfolio selection, viz. online learning of quantum states with log loss. This algorithm, called SCHRODINGER'S BISONS, is the first efficient algorithm with polylogarithmic regret for this more general problem. |
cs/0410042 | Patrick C. McGuire | H. Ritter, J.J. Steil, C. Noelker, F. Roethling, P.C. McGuire | Neural Architectures for Robot Intelligence | 37 pages, 17 figures | Reviews in the Neurosciences, vol. 14, no. 1-2, pp. 121-143 (2003) | null | null | cs.RO cs.CV cs.HC cs.LG cs.NE q-bio.NC | null | We argue that the direct experimental approaches to elucidate the
architecture of higher brains may benefit from insights gained from exploring
the possibilities and limits of artificial control architectures for robot
systems. We present some of our recent work that has been motivated by that
view and that is centered around the study of various aspects of hand actions
since these are intimately linked with many higher cognitive abilities. As
examples, we report on the development of a modular system for the recognition
of continuous hand postures based on neural nets, the use of vision and tactile
sensing for guiding prehensile movements of a multifingered hand, and the
recognition and use of hand gestures for robot teaching.
Regarding the issue of learning, we propose to view real-world learning from
the perspective of data mining and to focus more strongly on the imitation of
observed actions instead of purely reinforcement-based exploration. As a
concrete example of such an effort we report on the status of an ongoing
project in our lab in which a robot equipped with an attention system with a
neurally inspired architecture is taught actions by using hand gestures in
conjunction with speech commands. We point out some of the lessons learnt from
this system, and discuss how systems of this kind can contribute to the study
of issues at the junction between natural and artificial cognitive systems.
| [
{
"created": "Mon, 18 Oct 2004 10:50:28 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ritter",
"H.",
""
],
[
"Steil",
"J. J.",
""
],
[
"Noelker",
"C.",
""
],
[
"Roethling",
"F.",
""
],
[
"McGuire",
"P. C.",
""
]
] | We argue that the direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our lab in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems. |
1505.06427 | Lantian Li Mr. | Lantian Li and Dong Wang and Zhiyong Zhang and Thomas Fang Zheng | Deep Speaker Vectors for Semi Text-independent Speaker Verification | null | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research shows that deep neural networks (DNNs) can be used to extract
deep speaker vectors (d-vectors) that preserve speaker characteristics and can
be used in speaker verification. This new method has been tested on
text-dependent speaker verification tasks, and improvement was reported when
combined with the conventional i-vector method.
This paper extends the d-vector approach to semi text-independent speaker
verification tasks, i.e., the text of the speech is in a limited set of short
phrases. We explore various settings of the DNN structure used for d-vector
extraction, and present a phone-dependent training which employs the posterior
features obtained from an ASR system. The experimental results show that it is
possible to apply d-vectors on semi text-independent speaker recognition, and
the phone-dependent training improves system performance.
| [
{
"created": "Sun, 24 May 2015 11:22:40 GMT",
"version": "v1"
}
] | 2015-05-26 | [
[
"Li",
"Lantian",
""
],
[
"Wang",
"Dong",
""
],
[
"Zhang",
"Zhiyong",
""
],
[
"Zheng",
"Thomas Fang",
""
]
] | Recent research shows that deep neural networks (DNNs) can be used to extract deep speaker vectors (d-vectors) that preserve speaker characteristics and can be used in speaker verification. This new method has been tested on text-dependent speaker verification tasks, and improvement was reported when combined with the conventional i-vector method. This paper extends the d-vector approach to semi text-independent speaker verification tasks, i.e., the text of the speech is in a limited set of short phrases. We explore various settings of the DNN structure used for d-vector extraction, and present a phone-dependent training which employs the posterior features obtained from an ASR system. The experimental results show that it is possible to apply d-vectors on semi text-independent speaker recognition, and the phone-dependent training improves system performance. |
2008.06877 | Meysam Asgari-Chenaghlu | Meysam Asgari-Chenaghlu, Mohammad-Reza Feizi-Derakhshi, Leili
farzinvash, Mohammad-Ali Balafar, Cina Motamed | TopicBERT: A Transformer transfer learning based memory-graph approach
for multimodal streaming social media topic detection | null | null | 10.1016/j.chaos.2021.111274 | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real time nature of social networks with bursty short messages and their
respective large data scale spread among vast variety of topics are research
interest of many researchers. These properties of social networks which are
known as 5'Vs of big data has led to many unique and enlightenment algorithms
and techniques applied to large social networking datasets and data streams.
Many of these researches are based on detection and tracking of hot topics and
trending social media events that help revealing many unanswered questions.
These algorithms and in some cases software products mostly rely on the nature
of the language itself. Although, other techniques such as unsupervised data
mining methods are language independent but many requirements for a
comprehensive solution are not met. Many research issues such as noisy
sentences that adverse grammar and new online user invented words are
challenging maintenance of a good social network topic detection and tracking
methodology; The semantic relationship between words and in most cases,
synonyms are also ignored by many of these researches. In this research, we use
Transformers combined with an incremental community detection algorithm.
Transformer in one hand, provides the semantic relation between words in
different contexts. On the other hand, the proposed graph mining technique
enhances the resulting topics with aid of simple structural rules. Named entity
recognition from multimodal data, image and text, labels the named entities
with entity type and the extracted topics are tuned using them. All operations
of proposed system has been applied with big social data perspective under
NoSQL technologies. In order to present a working and systematic solution, we
combined MongoDB with Neo4j as two major database systems of our work. The
proposed system shows higher precision and recall compared to other methods in
three different datasets.
| [
{
"created": "Sun, 16 Aug 2020 10:39:50 GMT",
"version": "v1"
}
] | 2021-08-05 | [
[
"Asgari-Chenaghlu",
"Meysam",
""
],
[
"Feizi-Derakhshi",
"Mohammad-Reza",
""
],
[
"farzinvash",
"Leili",
""
],
[
"Balafar",
"Mohammad-Ali",
""
],
[
"Motamed",
"Cina",
""
]
] | Real time nature of social networks with bursty short messages and their respective large data scale spread among vast variety of topics are research interest of many researchers. These properties of social networks which are known as 5'Vs of big data has led to many unique and enlightenment algorithms and techniques applied to large social networking datasets and data streams. Many of these researches are based on detection and tracking of hot topics and trending social media events that help revealing many unanswered questions. These algorithms and in some cases software products mostly rely on the nature of the language itself. Although, other techniques such as unsupervised data mining methods are language independent but many requirements for a comprehensive solution are not met. Many research issues such as noisy sentences that adverse grammar and new online user invented words are challenging maintenance of a good social network topic detection and tracking methodology; The semantic relationship between words and in most cases, synonyms are also ignored by many of these researches. In this research, we use Transformers combined with an incremental community detection algorithm. Transformer in one hand, provides the semantic relation between words in different contexts. On the other hand, the proposed graph mining technique enhances the resulting topics with aid of simple structural rules. Named entity recognition from multimodal data, image and text, labels the named entities with entity type and the extracted topics are tuned using them. All operations of proposed system has been applied with big social data perspective under NoSQL technologies. In order to present a working and systematic solution, we combined MongoDB with Neo4j as two major database systems of our work. The proposed system shows higher precision and recall compared to other methods in three different datasets. |
2403.12402 | Yifan Peng | Yifan Peng, Ilia Kulikov, Yilin Yang, Sravya Popuri, Hui Lu, Changhan
Wang, Hongyu Gong | An Empirical Study of Speech Language Models for Prompt-Conditioned
Speech Synthesis | null | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech language models (LMs) are promising for high-quality speech synthesis
through in-context learning. A typical speech LM takes discrete semantic units
as content and a short utterance as prompt, and synthesizes speech which
preserves the content's semantics but mimics the prompt's style. However, there
is no systematic understanding on how the synthesized audio is controlled by
the prompt and content. In this work, we conduct an empirical study of the
widely used autoregressive (AR) and non-autoregressive (NAR) speech LMs and
provide insights into the prompt design and content semantic units. Our
analysis reveals that heterogeneous and nonstationary prompts hurt the audio
quality in contrast to the previous finding that longer prompts always lead to
better synthesis. Moreover, we find that the speaker style of the synthesized
audio is also affected by the content in addition to the prompt. We further
show that semantic units carry rich acoustic information such as pitch, tempo,
volume and speech emphasis, which might be leaked from the content to the
synthesized audio.
| [
{
"created": "Tue, 19 Mar 2024 03:22:28 GMT",
"version": "v1"
}
] | 2024-03-20 | [
[
"Peng",
"Yifan",
""
],
[
"Kulikov",
"Ilia",
""
],
[
"Yang",
"Yilin",
""
],
[
"Popuri",
"Sravya",
""
],
[
"Lu",
"Hui",
""
],
[
"Wang",
"Changhan",
""
],
[
"Gong",
"Hongyu",
""
]
] | Speech language models (LMs) are promising for high-quality speech synthesis through in-context learning. A typical speech LM takes discrete semantic units as content and a short utterance as prompt, and synthesizes speech which preserves the content's semantics but mimics the prompt's style. However, there is no systematic understanding on how the synthesized audio is controlled by the prompt and content. In this work, we conduct an empirical study of the widely used autoregressive (AR) and non-autoregressive (NAR) speech LMs and provide insights into the prompt design and content semantic units. Our analysis reveals that heterogeneous and nonstationary prompts hurt the audio quality in contrast to the previous finding that longer prompts always lead to better synthesis. Moreover, we find that the speaker style of the synthesized audio is also affected by the content in addition to the prompt. We further show that semantic units carry rich acoustic information such as pitch, tempo, volume and speech emphasis, which might be leaked from the content to the synthesized audio. |
1809.00060 | Ga Wu | Yu Qing Zhou, Ga Wu, Scott Sanner, Putra Manggala | Aesthetic Features for Personalized Photo Recommendation | In Proceedings of the Late-Breaking Results track part of the Twelfth
ACM Conference on Recommender Systems, Vancouver, BC, Canada, October 6,
2018, 2 pages | null | null | null | cs.IR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many photography websites such as Flickr, 500px, Unsplash, and Adobe Behance
are used by amateur and professional photography enthusiasts. Unlike
content-based image search, such users of photography websites are not just
looking for photos with certain content, but more generally for photos with a
certain photographic "aesthetic". In this context, we explore personalized
photo recommendation and propose two aesthetic feature extraction methods based
on (i) color space and (ii) deep style transfer embeddings. Using a dataset
from 500px, we evaluate how these features can be best leveraged by
collaborative filtering methods and show that (ii) provides a significant boost
in photo recommendation performance.
| [
{
"created": "Fri, 31 Aug 2018 20:57:26 GMT",
"version": "v1"
}
] | 2018-09-05 | [
[
"Zhou",
"Yu Qing",
""
],
[
"Wu",
"Ga",
""
],
[
"Sanner",
"Scott",
""
],
[
"Manggala",
"Putra",
""
]
] | Many photography websites such as Flickr, 500px, Unsplash, and Adobe Behance are used by amateur and professional photography enthusiasts. Unlike content-based image search, such users of photography websites are not just looking for photos with certain content, but more generally for photos with a certain photographic "aesthetic". In this context, we explore personalized photo recommendation and propose two aesthetic feature extraction methods based on (i) color space and (ii) deep style transfer embeddings. Using a dataset from 500px, we evaluate how these features can be best leveraged by collaborative filtering methods and show that (ii) provides a significant boost in photo recommendation performance. |
2101.08197 | Rafael Ferreira | Rafael Ferreira, Mariana Leite, David Semedo and Joao Magalhaes | Open-Domain Conversational Search Assistant with Transformers | null | null | null | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-domain conversational search assistants aim at answering user questions
about open topics in a conversational manner. In this paper we show how the
Transformer architecture achieves state-of-the-art results in key IR tasks,
leveraging the creation of conversational assistants that engage in open-domain
conversational search with single, yet informative, answers. In particular, we
propose an open-domain abstractive conversational search agent pipeline to
address two major challenges: first, conversation context-aware search and
second, abstractive search-answers generation. To address the first challenge,
the conversation context is modeled with a query rewriting method that unfolds
the context of the conversation up to a specific moment to search for the
correct answers. These answers are then passed to a Transformer-based re-ranker
to further improve retrieval performance. The second challenge, is tackled with
recent Abstractive Transformer architectures to generate a digest of the top
most relevant passages. Experiments show that Transformers deliver a solid
performance across all tasks in conversational search, outperforming the best
TREC CAsT 2019 baseline.
| [
{
"created": "Wed, 20 Jan 2021 16:02:15 GMT",
"version": "v1"
}
] | 2021-01-21 | [
[
"Ferreira",
"Rafael",
""
],
[
"Leite",
"Mariana",
""
],
[
"Semedo",
"David",
""
],
[
"Magalhaes",
"Joao",
""
]
] | Open-domain conversational search assistants aim at answering user questions about open topics in a conversational manner. In this paper we show how the Transformer architecture achieves state-of-the-art results in key IR tasks, leveraging the creation of conversational assistants that engage in open-domain conversational search with single, yet informative, answers. In particular, we propose an open-domain abstractive conversational search agent pipeline to address two major challenges: first, conversation context-aware search and second, abstractive search-answers generation. To address the first challenge, the conversation context is modeled with a query rewriting method that unfolds the context of the conversation up to a specific moment to search for the correct answers. These answers are then passed to a Transformer-based re-ranker to further improve retrieval performance. The second challenge, is tackled with recent Abstractive Transformer architectures to generate a digest of the top most relevant passages. Experiments show that Transformers deliver a solid performance across all tasks in conversational search, outperforming the best TREC CAsT 2019 baseline. |
2404.06797 | Alma Ghafari | Soheil Behnezhad, Moses Charikar, Vincent Cohen-Addad, Alma Ghafari,
Weiyun Ma | Fully Dynamic Correlation Clustering: Breaking 3-Approximation | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the classic correlation clustering in the dynamic setting. Given $n$
objects and a complete labeling of the object-pairs as either similar or
dissimilar, the goal is to partition the objects into arbitrarily many clusters
while minimizing disagreements with the labels. In the dynamic setting, an
update consists of a flip of a label of an edge. In a breakthrough result,
[BDHSS, FOCS'19] showed how to maintain a 3-approximation with polylogarithmic
update time by providing a dynamic implementation of the Pivot algorithm of
[ACN, STOC'05]. Since then, it has been a major open problem to determine
whether the 3-approximation barrier can be broken in the fully dynamic setting.
In this paper, we resolve this problem. Our algorithm, Modified Pivot, locally
improves the output of Pivot by moving some vertices to other existing clusters
or new singleton clusters. We present an analysis showing that this
modification does indeed improve the approximation to below 3. We also show
that its output can be maintained in polylogarithmic time per update.
| [
{
"created": "Wed, 10 Apr 2024 07:36:34 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2024 18:35:12 GMT",
"version": "v2"
}
] | 2024-04-15 | [
[
"Behnezhad",
"Soheil",
""
],
[
"Charikar",
"Moses",
""
],
[
"Cohen-Addad",
"Vincent",
""
],
[
"Ghafari",
"Alma",
""
],
[
"Ma",
"Weiyun",
""
]
] | We study the classic correlation clustering in the dynamic setting. Given $n$ objects and a complete labeling of the object-pairs as either similar or dissimilar, the goal is to partition the objects into arbitrarily many clusters while minimizing disagreements with the labels. In the dynamic setting, an update consists of a flip of a label of an edge. In a breakthrough result, [BDHSS, FOCS'19] showed how to maintain a 3-approximation with polylogarithmic update time by providing a dynamic implementation of the Pivot algorithm of [ACN, STOC'05]. Since then, it has been a major open problem to determine whether the 3-approximation barrier can be broken in the fully dynamic setting. In this paper, we resolve this problem. Our algorithm, Modified Pivot, locally improves the output of Pivot by moving some vertices to other existing clusters or new singleton clusters. We present an analysis showing that this modification does indeed improve the approximation to below 3. We also show that its output can be maintained in polylogarithmic time per update. |
2401.02038 | Yiheng Liu | Yiheng Liu, Hao He, Tianle Han, Xu Zhang, Mengyuan Liu, Jiaming Tian,
Yutong Zhang, Jiaqi Wang, Xiaohui Gao, Tianyang Zhong, Yi Pan, Shaochen Xu,
Zihao Wu, Zhengliang Liu, Xin Zhang, Shu Zhang, Xintao Hu, Tuo Zhang, Ning
Qiang, Tianming Liu, Bao Ge | Understanding LLMs: A Comprehensive Overview from Training to Inference | 30 pages,6 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The introduction of ChatGPT has led to a significant increase in the
utilization of Large Language Models (LLMs) for addressing downstream tasks.
There's an increasing focus on cost-efficient training and deployment within
this context. Low-cost training and deployment of LLMs represent the future
development trend. This paper reviews the evolution of large language model
training techniques and inference deployment technologies aligned with this
emerging trend. The discussion on training includes various aspects, including
data preprocessing, training architecture, pre-training tasks, parallel
training, and relevant content related to model fine-tuning. On the inference
side, the paper covers topics such as model compression, parallel computation,
memory scheduling, and structural optimization. It also explores LLMs'
utilization and provides insights into their future development.
| [
{
"created": "Thu, 4 Jan 2024 02:43:57 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jan 2024 03:32:08 GMT",
"version": "v2"
}
] | 2024-01-09 | [
[
"Liu",
"Yiheng",
""
],
[
"He",
"Hao",
""
],
[
"Han",
"Tianle",
""
],
[
"Zhang",
"Xu",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Tian",
"Jiaming",
""
],
[
"Zhang",
"Yutong",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Gao",
"Xiaohui",
""
],
[
"Zhong",
"Tianyang",
""
],
[
"Pan",
"Yi",
""
],
[
"Xu",
"Shaochen",
""
],
[
"Wu",
"Zihao",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Zhang",
"Xin",
""
],
[
"Zhang",
"Shu",
""
],
[
"Hu",
"Xintao",
""
],
[
"Zhang",
"Tuo",
""
],
[
"Qiang",
"Ning",
""
],
[
"Liu",
"Tianming",
""
],
[
"Ge",
"Bao",
""
]
] | The introduction of ChatGPT has led to a significant increase in the utilization of Large Language Models (LLMs) for addressing downstream tasks. There's an increasing focus on cost-efficient training and deployment within this context. Low-cost training and deployment of LLMs represent the future development trend. This paper reviews the evolution of large language model training techniques and inference deployment technologies aligned with this emerging trend. The discussion on training includes various aspects, including data preprocessing, training architecture, pre-training tasks, parallel training, and relevant content related to model fine-tuning. On the inference side, the paper covers topics such as model compression, parallel computation, memory scheduling, and structural optimization. It also explores LLMs' utilization and provides insights into their future development. |
1809.07221 | Hazel Murray | Hazel Murray and David Malone | Exploring the Impact of Password Dataset Distribution on Guessing | null | 2018 16th Annual Conference on Privacy, Security and Trust (PST) | 10.1109/PST.2018.8514194 | null | cs.CR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leaks from password datasets are a regular occurrence. An organization may
defend a leak with reassurances that just a small subset of passwords were
taken. In this paper we show that the leak of a relatively small number of
text-based passwords from an organizations' stored dataset can lead to a
further large collection of users being compromised. Taking a sample of
passwords from a given dataset of passwords we exploit the knowledge we gain of
the distribution to guess other samples from the same dataset. We show
theoretically and empirically that the distribution of passwords in the sample
follows the same distribution as the passwords in the whole dataset. We propose
a function that measures the ability of one distribution to estimate another.
Leveraging this we show that a sample of passwords leaked from a given dataset,
will compromise the remaining passwords in that dataset better than a sample
leaked from another source.
| [
{
"created": "Wed, 19 Sep 2018 14:35:57 GMT",
"version": "v1"
}
] | 2021-03-29 | [
[
"Murray",
"Hazel",
""
],
[
"Malone",
"David",
""
]
] | Leaks from password datasets are a regular occurrence. An organization may defend a leak with reassurances that just a small subset of passwords were taken. In this paper we show that the leak of a relatively small number of text-based passwords from an organizations' stored dataset can lead to a further large collection of users being compromised. Taking a sample of passwords from a given dataset of passwords we exploit the knowledge we gain of the distribution to guess other samples from the same dataset. We show theoretically and empirically that the distribution of passwords in the sample follows the same distribution as the passwords in the whole dataset. We propose a function that measures the ability of one distribution to estimate another. Leveraging this we show that a sample of passwords leaked from a given dataset, will compromise the remaining passwords in that dataset better than a sample leaked from another source. |
2110.13465 | Ruiteng Zhang | Ruiteng Zhang, Jianguo Wei, Wenhuan Lu, Lin Zhang, Yantao Ji, Junhai
Xu, Xugang Lu | CS-Rep: Making Speaker Verification Networks Embracing
Re-parameterization | Accepted by ICASSP 2022 | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic speaker verification (ASV) systems, which determine whether two
speeches are from the same speaker, mainly focus on verification accuracy while
ignoring inference speed. However, in real applications, both inference speed
and verification accuracy are essential. This study proposes cross-sequential
re-parameterization (CS-Rep), a novel topology re-parameterization strategy for
multi-type networks, to increase the inference speed and verification accuracy
of models. CS-Rep solves the problem that existing re-parameterization methods
are unsuitable for typical ASV backbones. When a model applies CS-Rep, the
training-period network utilizes a multi-branch topology to capture speaker
information, whereas the inference-period model converts to a time-delay neural
network (TDNN)-like plain backbone with stacked TDNN layers to achieve the fast
inference speed. Based on CS-Rep, an improved TDNN with friendly test and
deployment called Rep-TDNN is proposed. Compared with the state-of-the-art
model ECAPA-TDNN, which is highly recognized in the industry, Rep-TDNN
increases the actual inference speed by about 50% and reduces the EER by 10%.
The code will be released.
| [
{
"created": "Tue, 26 Oct 2021 08:00:03 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Apr 2022 02:20:53 GMT",
"version": "v2"
}
] | 2022-04-05 | [
[
"Zhang",
"Ruiteng",
""
],
[
"Wei",
"Jianguo",
""
],
[
"Lu",
"Wenhuan",
""
],
[
"Zhang",
"Lin",
""
],
[
"Ji",
"Yantao",
""
],
[
"Xu",
"Junhai",
""
],
[
"Lu",
"Xugang",
""
]
] | Automatic speaker verification (ASV) systems, which determine whether two speeches are from the same speaker, mainly focus on verification accuracy while ignoring inference speed. However, in real applications, both inference speed and verification accuracy are essential. This study proposes cross-sequential re-parameterization (CS-Rep), a novel topology re-parameterization strategy for multi-type networks, to increase the inference speed and verification accuracy of models. CS-Rep solves the problem that existing re-parameterization methods are unsuitable for typical ASV backbones. When a model applies CS-Rep, the training-period network utilizes a multi-branch topology to capture speaker information, whereas the inference-period model converts to a time-delay neural network (TDNN)-like plain backbone with stacked TDNN layers to achieve the fast inference speed. Based on CS-Rep, an improved TDNN with friendly test and deployment called Rep-TDNN is proposed. Compared with the state-of-the-art model ECAPA-TDNN, which is highly recognized in the industry, Rep-TDNN increases the actual inference speed by about 50% and reduces the EER by 10%. The code will be released. |
2103.11467 | Daniyal Amir Awan | Daniyal Amir Awan, Renato L.G. Cavalcante, Slawomir Stanczak | Robust Cell-Load Learning with a Small Sample Set | Published in IEEE Transactions on Signal Processing ( Volume: 68) | IEEE Transactions on Signal Processing, Volume 68, 2020 | null | null | cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | Learning of the cell-load in radio access networks (RANs) has to be performed
within a short time period. Therefore, we propose a learning framework that is
robust against uncertainties resulting from the need for learning based on a
relatively small training sample set. To this end, we incorporate prior
knowledge about the cell-load in the learning framework. For example, an
inherent property of the cell-load is that it is monotonic in downlink (data)
rates. To obtain additional prior knowledge we first study the feasible rate
region, i.e., the set of all vectors of user rates that can be supported by the
network. We prove that the feasible rate region is compact. Moreover, we show
the existence of a Lipschitz function that maps feasible rate vectors to
cell-load vectors. With these results in hand, we present a learning technique
that guarantees a minimum approximation error in the worst-case scenario by
using prior knowledge and a small training sample set. Simulations in the
network simulator NS3 demonstrate that the proposed method exhibits better
robustness and accuracy than standard multivariate learning techniques,
especially for small training sample sets.
| [
{
"created": "Sun, 21 Mar 2021 19:17:01 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Awan",
"Daniyal Amir",
""
],
[
"Cavalcante",
"Renato L. G.",
""
],
[
"Stanczak",
"Slawomir",
""
]
] | Learning of the cell-load in radio access networks (RANs) has to be performed within a short time period. Therefore, we propose a learning framework that is robust against uncertainties resulting from the need for learning based on a relatively small training sample set. To this end, we incorporate prior knowledge about the cell-load in the learning framework. For example, an inherent property of the cell-load is that it is monotonic in downlink (data) rates. To obtain additional prior knowledge we first study the feasible rate region, i.e., the set of all vectors of user rates that can be supported by the network. We prove that the feasible rate region is compact. Moreover, we show the existence of a Lipschitz function that maps feasible rate vectors to cell-load vectors. With these results in hand, we present a learning technique that guarantees a minimum approximation error in the worst-case scenario by using prior knowledge and a small training sample set. Simulations in the network simulator NS3 demonstrate that the proposed method exhibits better robustness and accuracy than standard multivariate learning techniques, especially for small training sample sets. |
1904.10772 | Cedric Scheerlinck | Cedric Scheerlinck, Henri Rebecq, Timo Stoffregen, Nick Barnes, Robert
Mahony, Davide Scaramuzza | CED: Color Event Camera Dataset | Conference on Computer Vision and Pattern Recognition Workshops | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras are novel, bio-inspired visual sensors, whose pixels output
asynchronous and independent timestamped spikes at local intensity changes,
called 'events'. Event cameras offer advantages over conventional frame-based
cameras in terms of latency, high dynamic range (HDR) and temporal resolution.
Until recently, event cameras have been limited to outputting events in the
intensity channel, however, recent advances have resulted in the development of
color event cameras, such as the Color-DAVIS346. In this work, we present and
release the first Color Event Camera Dataset (CED), containing 50 minutes of
footage with both color frames and events. CED features a wide variety of
indoor and outdoor scenes, which we hope will help drive forward event-based
vision research. We also present an extension of the event camera simulator
ESIM that enables simulation of color events. Finally, we present an evaluation
of three state-of-the-art image reconstruction methods that can be used to
convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to
visualise the event stream, and for use in downstream vision applications.
| [
{
"created": "Wed, 24 Apr 2019 12:42:12 GMT",
"version": "v1"
}
] | 2019-04-25 | [
[
"Scheerlinck",
"Cedric",
""
],
[
"Rebecq",
"Henri",
""
],
[
"Stoffregen",
"Timo",
""
],
[
"Barnes",
"Nick",
""
],
[
"Mahony",
"Robert",
""
],
[
"Scaramuzza",
"Davide",
""
]
] | Event cameras are novel, bio-inspired visual sensors, whose pixels output asynchronous and independent timestamped spikes at local intensity changes, called 'events'. Event cameras offer advantages over conventional frame-based cameras in terms of latency, high dynamic range (HDR) and temporal resolution. Until recently, event cameras have been limited to outputting events in the intensity channel, however, recent advances have resulted in the development of color event cameras, such as the Color-DAVIS346. In this work, we present and release the first Color Event Camera Dataset (CED), containing 50 minutes of footage with both color frames and events. CED features a wide variety of indoor and outdoor scenes, which we hope will help drive forward event-based vision research. We also present an extension of the event camera simulator ESIM that enables simulation of color events. Finally, we present an evaluation of three state-of-the-art image reconstruction methods that can be used to convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to visualise the event stream, and for use in downstream vision applications. |
2406.18242 | Dongqi Fan | Dongqi Fan, Junhao Zhang, Liang Chang | ConStyle v2: A Strong Prompter for All-in-One Image Restoration | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces ConStyle v2, a strong plug-and-play prompter designed
to output clean visual prompts and assist U-Net Image Restoration models in
handling multiple degradations. The joint training process of IRConStyle, an
Image Restoration framework consisting of ConStyle and a general restoration
network, is divided into two stages: first, pre-training ConStyle alone, and
then freezing its weights to guide the training of the general restoration
network. Three improvements are proposed in the pre-training stage to train
ConStyle: unsupervised pre-training, adding a pretext task (i.e.
classification), and adopting knowledge distillation. Without bells and
whistles, we can get ConStyle v2, a strong prompter for all-in-one Image
Restoration, in less than two GPU days and doesn't require any fine-tuning.
Extensive experiments on Restormer (transformer-based), NAFNet (CNN-based),
MAXIM-1S (MLP-based), and a vanilla CNN network demonstrate that ConStyle v2
can enhance any U-Net style Image Restoration models to all-in-one Image
Restoration models. Furthermore, models guided by the well-trained ConStyle v2
exhibit superior performance in some specific degradation compared to ConStyle.
| [
{
"created": "Wed, 26 Jun 2024 10:46:44 GMT",
"version": "v1"
}
] | 2024-06-27 | [
[
"Fan",
"Dongqi",
""
],
[
"Zhang",
"Junhao",
""
],
[
"Chang",
"Liang",
""
]
] | This paper introduces ConStyle v2, a strong plug-and-play prompter designed to output clean visual prompts and assist U-Net Image Restoration models in handling multiple degradations. The joint training process of IRConStyle, an Image Restoration framework consisting of ConStyle and a general restoration network, is divided into two stages: first, pre-training ConStyle alone, and then freezing its weights to guide the training of the general restoration network. Three improvements are proposed in the pre-training stage to train ConStyle: unsupervised pre-training, adding a pretext task (i.e. classification), and adopting knowledge distillation. Without bells and whistles, we can get ConStyle v2, a strong prompter for all-in-one Image Restoration, in less than two GPU days and doesn't require any fine-tuning. Extensive experiments on Restormer (transformer-based), NAFNet (CNN-based), MAXIM-1S (MLP-based), and a vanilla CNN network demonstrate that ConStyle v2 can enhance any U-Net style Image Restoration models to all-in-one Image Restoration models. Furthermore, models guided by the well-trained ConStyle v2 exhibit superior performance in some specific degradation compared to ConStyle. |
2002.12478 | Qingsong Wen | Qingsong Wen, Liang Sun, Fan Yang, Xiaomin Song, Jingkun Gao, Xue
Wang, Huan Xu | Time Series Data Augmentation for Deep Learning: A Survey | Accepted by the 30th International Joint Conference on Artificial
Intelligence (IJCAI 2021); Selected by Paper Digest into Most Influential
IJCAI Papers (Version: 2022-02), Rank 1st (Link:
https://www.paperdigest.org/2022/02/most-influential-ijcai-papers-2022-02/) | null | 10.24963/ijcai.2021/631 | null | cs.LG eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning performs remarkably well on many time series analysis tasks
recently. The superior performance of deep neural networks relies heavily on a
large number of training data to avoid overfitting. However, the labeled data
of many real-world time series applications may be limited such as
classification in medical time series and anomaly detection in AIOps. As an
effective way to enhance the size and quality of the training data, data
augmentation is crucial to the successful application of deep learning models
on time series data. In this paper, we systematically review different data
augmentation methods for time series. We propose a taxonomy for the reviewed
methods, and then provide a structured review for these methods by highlighting
their strengths and limitations. We also empirically compare different data
augmentation methods for different tasks including time series classification,
anomaly detection, and forecasting. Finally, we discuss and highlight five
future directions to provide useful research guidance.
| [
{
"created": "Thu, 27 Feb 2020 23:38:11 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Mar 2021 03:40:17 GMT",
"version": "v2"
},
{
"created": "Sat, 18 Sep 2021 01:11:36 GMT",
"version": "v3"
},
{
"created": "Thu, 31 Mar 2022 18:22:00 GMT",
"version": "v4"
}
] | 2022-04-04 | [
[
"Wen",
"Qingsong",
""
],
[
"Sun",
"Liang",
""
],
[
"Yang",
"Fan",
""
],
[
"Song",
"Xiaomin",
""
],
[
"Gao",
"Jingkun",
""
],
[
"Wang",
"Xue",
""
],
[
"Xu",
"Huan",
""
]
] | Deep learning performs remarkably well on many time series analysis tasks recently. The superior performance of deep neural networks relies heavily on a large number of training data to avoid overfitting. However, the labeled data of many real-world time series applications may be limited such as classification in medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning models on time series data. In this paper, we systematically review different data augmentation methods for time series. We propose a taxonomy for the reviewed methods, and then provide a structured review for these methods by highlighting their strengths and limitations. We also empirically compare different data augmentation methods for different tasks including time series classification, anomaly detection, and forecasting. Finally, we discuss and highlight five future directions to provide useful research guidance. |
2404.18934 | Michelle Greene | Michelle R. Greene, Benjamin J. Balas, Mark D. Lescroart, Paul R.
MacNeilage, Jennifer A. Hart, Kamran Binaee, Peter A. Hausamann, Ronald
Mezile, Bharath Shankar, Christian B. Sinnott, Kaylie Capurro, Savannah
Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed,
Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W. Shinkle,
Wentao Si, Brian Szekely, Joaquin M. Torres, and Eliana Weissmann | The Visual Experience Dataset: Over 200 Recorded Hours of Integrated Eye
Movement, Odometry, and Egocentric Video | 40 pages, 1 table, 9 figures | null | null | null | cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | We introduce the Visual Experience Dataset (VEDB), a compilation of over 240
hours of egocentric video combined with gaze- and head-tracking data that
offers an unprecedented view of the visual world as experienced by human
observers. The dataset consists of 717 sessions, recorded by 58 observers
ranging from 6-49 years old. This paper outlines the data collection,
processing, and labeling protocols undertaken to ensure a representative sample
and discusses the potential sources of error or bias within the dataset. The
VEDB's potential applications are vast, including improving gaze tracking
methodologies, assessing spatiotemporal image statistics, and refining deep
neural networks for scene and activity recognition. The VEDB is accessible
through established open science platforms and is intended to be a living
dataset with plans for expansion and community contributions. It is released
with an emphasis on ethical considerations, such as participant privacy and the
mitigation of potential biases. By providing a dataset grounded in real-world
experiences and accompanied by extensive metadata and supporting code, the
authors invite the research community to utilize and contribute to the VEDB,
facilitating a richer understanding of visual perception and behavior in
naturalistic settings.
| [
{
"created": "Thu, 15 Feb 2024 10:34:28 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2024 16:01:14 GMT",
"version": "v2"
}
] | 2024-08-14 | [
[
"Greene",
"Michelle R.",
""
],
[
"Balas",
"Benjamin J.",
""
],
[
"Lescroart",
"Mark D.",
""
],
[
"MacNeilage",
"Paul R.",
""
],
[
"Hart",
"Jennifer A.",
""
],
[
"Binaee",
"Kamran",
""
],
[
"Hausamann",
"Peter A.",
""
],
[
"Mezile",
"Ronald",
""
],
[
"Shankar",
"Bharath",
""
],
[
"Sinnott",
"Christian B.",
""
],
[
"Capurro",
"Kaylie",
""
],
[
"Halow",
"Savannah",
""
],
[
"Howe",
"Hunter",
""
],
[
"Josyula",
"Mariam",
""
],
[
"Li",
"Annie",
""
],
[
"Mieses",
"Abraham",
""
],
[
"Mohamed",
"Amina",
""
],
[
"Nudnou",
"Ilya",
""
],
[
"Parkhill",
"Ezra",
""
],
[
"Riley",
"Peter",
""
],
[
"Schmidt",
"Brett",
""
],
[
"Shinkle",
"Matthew W.",
""
],
[
"Si",
"Wentao",
""
],
[
"Szekely",
"Brian",
""
],
[
"Torres",
"Joaquin M.",
""
],
[
"Weissmann",
"Eliana",
""
]
] | We introduce the Visual Experience Dataset (VEDB), a compilation of over 240 hours of egocentric video combined with gaze- and head-tracking data that offers an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 58 observers ranging from 6-49 years old. This paper outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to utilize and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings. |
1808.06915 | Cor-Paul Bezemer | Cor-Paul Bezemer, Simon Eismann, Vincenzo Ferme, Johannes Grohmann,
Robert Heinrich, Pooyan Jamshidi, Weiyi Shang, Andr\'e van Hoorn, Monica
Villaviencio, J\"urgen Walter, Felix Willnecker | How is Performance Addressed in DevOps? A Survey on Industrial Practices | This research was conducted by the SPEC RG DevOps Performance Working
Group (https://research.spec.org/devopswg) | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DevOps is a modern software engineering paradigm that is gaining widespread
adoption in industry. The goal of DevOps is to bring software changes into
production with a high frequency and fast feedback cycles. This conflicts with
software quality assurance activities, particularly with respect to
performance. For instance, performance evaluation activities -- such as load
testing -- require a considerable amount of time to get statistically
significant results.
We conducted an industrial survey to get insights into how performance is
addressed in industrial DevOps settings. In particular, we were interested in
the frequency of executing performance evaluations, the tools being used, the
granularity of the obtained performance data, and the use of model-based
techniques. The survey responses, which come from a wide variety of
participants from different industry sectors, indicate that the complexity of
performance engineering approaches and tools is a barrier for wide-spread
adoption of performance analysis in DevOps. The implication of our results is
that performance analysis tools need to have a short learning curve, and should
be easy to integrate into the DevOps pipeline.
| [
{
"created": "Tue, 21 Aug 2018 14:22:28 GMT",
"version": "v1"
}
] | 2018-08-22 | [
[
"Bezemer",
"Cor-Paul",
""
],
[
"Eismann",
"Simon",
""
],
[
"Ferme",
"Vincenzo",
""
],
[
"Grohmann",
"Johannes",
""
],
[
"Heinrich",
"Robert",
""
],
[
"Jamshidi",
"Pooyan",
""
],
[
"Shang",
"Weiyi",
""
],
[
"van Hoorn",
"André",
""
],
[
"Villaviencio",
"Monica",
""
],
[
"Walter",
"Jürgen",
""
],
[
"Willnecker",
"Felix",
""
]
] | DevOps is a modern software engineering paradigm that is gaining widespread adoption in industry. The goal of DevOps is to bring software changes into production with a high frequency and fast feedback cycles. This conflicts with software quality assurance activities, particularly with respect to performance. For instance, performance evaluation activities -- such as load testing -- require a considerable amount of time to get statistically significant results. We conducted an industrial survey to get insights into how performance is addressed in industrial DevOps settings. In particular, we were interested in the frequency of executing performance evaluations, the tools being used, the granularity of the obtained performance data, and the use of model-based techniques. The survey responses, which come from a wide variety of participants from different industry sectors, indicate that the complexity of performance engineering approaches and tools is a barrier for wide-spread adoption of performance analysis in DevOps. The implication of our results is that performance analysis tools need to have a short learning curve, and should be easy to integrate into the DevOps pipeline. |
1405.1102 | Joel Tropp | Joel A. Tropp | Convex recovery of a structured signal from independent random linear
measurements | 18 pages, 1 figure. To appear in "Sampling Theory, a Renaissance."
v2: minor corrections. v3: updated citations and increased emphasis on
Mendelson's contributions | null | null | null | cs.IT math.IT math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This chapter develops a theoretical analysis of the convex programming method
for recovering a structured signal from independent random linear measurements.
This technique delivers bounds for the sampling complexity that are similar
with recent results for standard Gaussian measurements, but the argument
applies to a much wider class of measurement ensembles. To demonstrate the
power of this approach, the paper presents a short analysis of phase retrieval
by trace-norm minimization. The key technical tool is a framework, due to
Mendelson and coauthors, for bounding a nonnegative empirical process.
| [
{
"created": "Mon, 5 May 2014 23:11:04 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Sep 2014 00:06:07 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Dec 2014 21:42:34 GMT",
"version": "v3"
}
] | 2014-12-05 | [
[
"Tropp",
"Joel A.",
""
]
] | This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements. This technique delivers bounds for the sampling complexity that are similar with recent results for standard Gaussian measurements, but the argument applies to a much wider class of measurement ensembles. To demonstrate the power of this approach, the paper presents a short analysis of phase retrieval by trace-norm minimization. The key technical tool is a framework, due to Mendelson and coauthors, for bounding a nonnegative empirical process. |
2405.19785 | Nicol\`o Botteghi | Nicol\`o Botteghi, Paolo Motta, Andrea Manzoni, Paolo Zunino, Mengwu
Guo | Recurrent Deep Kernel Learning of Dynamical Systems | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital twins require computationally-efficient reduced-order models (ROMs)
that can accurately describe complex dynamics of physical assets. However,
constructing ROMs from noisy high-dimensional data is challenging. In this
work, we propose a data-driven, non-intrusive method that utilizes stochastic
variational deep kernel learning (SVDKL) to discover low-dimensional latent
spaces from data and a recurrent version of SVDKL for representing and
predicting the evolution of latent dynamics. The proposed method is
demonstrated with two challenging examples -- a double pendulum and a
reaction-diffusion system. Results show that our framework is capable of (i)
denoising and reconstructing measurements, (ii) learning compact
representations of system states, (iii) predicting system evolution in
low-dimensional latent spaces, and (iv) quantifying modeling uncertainties.
| [
{
"created": "Thu, 30 May 2024 07:49:02 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Botteghi",
"Nicolò",
""
],
[
"Motta",
"Paolo",
""
],
[
"Manzoni",
"Andrea",
""
],
[
"Zunino",
"Paolo",
""
],
[
"Guo",
"Mengwu",
""
]
] | Digital twins require computationally-efficient reduced-order models (ROMs) that can accurately describe complex dynamics of physical assets. However, constructing ROMs from noisy high-dimensional data is challenging. In this work, we propose a data-driven, non-intrusive method that utilizes stochastic variational deep kernel learning (SVDKL) to discover low-dimensional latent spaces from data and a recurrent version of SVDKL for representing and predicting the evolution of latent dynamics. The proposed method is demonstrated with two challenging examples -- a double pendulum and a reaction-diffusion system. Results show that our framework is capable of (i) denoising and reconstructing measurements, (ii) learning compact representations of system states, (iii) predicting system evolution in low-dimensional latent spaces, and (iv) quantifying modeling uncertainties. |
2404.01402 | Zixi Wang | Zixi Wang, Zeyi Liu, Nicolas Ouporov, Shuran Song | ContactHandover: Contact-Guided Robot-to-Human Object Handover | Project website:
https://clairezixiwang.github.io/ContactHandover.github.io/ | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Robot-to-human object handover is an important step in many human robot
collaboration tasks. A successful handover requires the robot to maintain a
stable grasp on the object while making sure the human receives the object in a
natural and easy-to-use manner. We propose ContactHandover, a robot to human
handover system that consists of two phases: a contact-guided grasping phase
and an object delivery phase. During the grasping phase, ContactHandover
predicts both 6-DoF robot grasp poses and a 3D affordance map of human contact
points on the object. The robot grasp poses are reranked by penalizing those
that block human contact points, and the robot executes the highest ranking
grasp. During the delivery phase, the robot end effector pose is computed by
maximizing human contact points close to the human while minimizing the human
arm joint torques and displacements. We evaluate our system on 27 diverse
household objects and show that our system achieves better visibility and
reachability of human contacts to the receiver compared to several baselines.
More results can be found on
https://clairezixiwang.github.io/ContactHandover.github.io
| [
{
"created": "Mon, 1 Apr 2024 18:12:09 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Wang",
"Zixi",
""
],
[
"Liu",
"Zeyi",
""
],
[
"Ouporov",
"Nicolas",
""
],
[
"Song",
"Shuran",
""
]
] | Robot-to-human object handover is an important step in many human robot collaboration tasks. A successful handover requires the robot to maintain a stable grasp on the object while making sure the human receives the object in a natural and easy-to-use manner. We propose ContactHandover, a robot to human handover system that consists of two phases: a contact-guided grasping phase and an object delivery phase. During the grasping phase, ContactHandover predicts both 6-DoF robot grasp poses and a 3D affordance map of human contact points on the object. The robot grasp poses are reranked by penalizing those that block human contact points, and the robot executes the highest ranking grasp. During the delivery phase, the robot end effector pose is computed by maximizing human contact points close to the human while minimizing the human arm joint torques and displacements. We evaluate our system on 27 diverse household objects and show that our system achieves better visibility and reachability of human contacts to the receiver compared to several baselines. More results can be found on https://clairezixiwang.github.io/ContactHandover.github.io |
0911.2948 | RadhaKrishna Ganti | Radha Krishna Ganti and Martin Haenggi | Spatial Analysis of Opportunistic Downlink Relaying in a Two-Hop
Cellular System | Submitted to IEEE Transactions on Communications | null | null | null | cs.IT cs.NI math.IT stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a two-hop cellular system in which the mobile nodes help the base
station by relaying information to the dead spots. While two-hop cellular
schemes have been analyzed previously, the distribution of the node locations
has not been explicitly taken into account. In this paper, we model the node
locations of the base stations and the mobile stations as a point process on
the plane and then analyze the performance of two different two-hop schemes in
the downlink. In one scheme the node nearest to the destination that has
decoded information from the base station in the first hop is used as the
relay. In the second scheme the node with the best channel to the relay that
received information in the first hop acts as a relay. In both these schemes we
obtain the success probability of the two hop scheme, accounting for the
interference from all other cells. We use tools from stochastic geometry and
point process theory to analyze the two hop schemes. Besides the results
obtained a main contribution of the paper is to introduce a mathematical
framework that can be used to analyze arbitrary relaying schemes. Some of the
main contributions of this paper are the analytical techniques introduced for
the inclusion of the spatial locations of the nodes into the mathematical
analysis.
| [
{
"created": "Mon, 16 Nov 2009 04:09:54 GMT",
"version": "v1"
}
] | 2009-11-19 | [
[
"Ganti",
"Radha Krishna",
""
],
[
"Haenggi",
"Martin",
""
]
] | We consider a two-hop cellular system in which the mobile nodes help the base station by relaying information to the dead spots. While two-hop cellular schemes have been analyzed previously, the distribution of the node locations has not been explicitly taken into account. In this paper, we model the node locations of the base stations and the mobile stations as a point process on the plane and then analyze the performance of two different two-hop schemes in the downlink. In one scheme the node nearest to the destination that has decoded information from the base station in the first hop is used as the relay. In the second scheme the node with the best channel to the relay that received information in the first hop acts as a relay. In both these schemes we obtain the success probability of the two hop scheme, accounting for the interference from all other cells. We use tools from stochastic geometry and point process theory to analyze the two hop schemes. Besides the results obtained a main contribution of the paper is to introduce a mathematical framework that can be used to analyze arbitrary relaying schemes. Some of the main contributions of this paper are the analytical techniques introduced for the inclusion of the spatial locations of the nodes into the mathematical analysis. |
1808.08070 | Uwe Krien | Simon Hilpert, Cord Kaldemeyer, Uwe Krien, Stefan G\"unther, Clemens
Wingenbach, Guido Plessmann | The Open Energy Modelling Framework (oemof) - A new approach to
facilitate open science in energy system modelling | null | Energy Strategy Reviews, Volume 22, 2018, Pages 16-25 | 10.1016/j.esr.2018.07.001 | null | cs.CE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Energy system models have become indispensable tools for planning future
energy systems by providing insights into different development trajectories.
However, sustainable systems with high shares of renewable energy are
characterized by growing cross-sectoral interdependencies and decentralized
structures. To capture important properties of increasingly complex energy
systems, sophisticated and flexible modelling tools are needed. At the same
time, open science is becoming increasingly important in energy system
modelling. This paper presents the Open Energy Modelling Framework (oemof) as a
novel approach to energy system modelling, representation and analysis. The
framework provides a toolbox to construct comprehensive energy system models
and has been published open source under a free licence. Through collaborative
development based on open processes, the framework supports a maximum level of
participation, transparency and open science principles in energy system
modelling. Based on a generic graph-based description of energy systems, it is
well-suited to flexibly model complex cross-sectoral systems and incorporate
various modelling approaches. This makes the framework a multi-purpose
modelling environment for modelling and analyzing different systems at scales
ranging from urban to transnational.
| [
{
"created": "Fri, 24 Aug 2018 10:06:44 GMT",
"version": "v1"
}
] | 2018-08-27 | [
[
"Hilpert",
"Simon",
""
],
[
"Kaldemeyer",
"Cord",
""
],
[
"Krien",
"Uwe",
""
],
[
"Günther",
"Stefan",
""
],
[
"Wingenbach",
"Clemens",
""
],
[
"Plessmann",
"Guido",
""
]
] | Energy system models have become indispensable tools for planning future energy systems by providing insights into different development trajectories. However, sustainable systems with high shares of renewable energy are characterized by growing cross-sectoral interdependencies and decentralized structures. To capture important properties of increasingly complex energy systems, sophisticated and flexible modelling tools are needed. At the same time, open science is becoming increasingly important in energy system modelling. This paper presents the Open Energy Modelling Framework (oemof) as a novel approach to energy system modelling, representation and analysis. The framework provides a toolbox to construct comprehensive energy system models and has been published open source under a free licence. Through collaborative development based on open processes, the framework supports a maximum level of participation, transparency and open science principles in energy system modelling. Based on a generic graph-based description of energy systems, it is well-suited to flexibly model complex cross-sectoral systems and incorporate various modelling approaches. This makes the framework a multi-purpose modelling environment for modelling and analyzing different systems at scales ranging from urban to transnational. |
2008.08999 | Qian Zheng | Qian Zheng, Weikai Wu, Hanting Pan, Niloy Mitra, Daniel Cohen-Or, Hui
Huang | Object Properties Inferring from and Transfer for Human Interaction
Motions | null | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans regularly interact with their surrounding objects. Such interactions
often result in strongly correlated motion between humans and the interacting
objects. We thus ask: "Is it possible to infer object properties from skeletal
motion alone, even without seeing the interacting object itself?" In this
paper, we present a fine-grained action recognition method that learns to infer
such latent object properties from human interaction motion alone. This
inference allows us to disentangle the motion from the object property and
transfer object properties to a given motion. We collected a large number of
videos and 3D skeletal motions of the performing actors using an inertial
motion capture device. We analyze similar actions and learn subtle differences
among them to reveal latent properties of the interacting objects. In
particular, we learn to identify the interacting object, by estimating its
weight, or its fragility or delicacy. Our results clearly demonstrate that the
interaction motions and interacting objects are highly correlated and indeed
relative object latent properties can be inferred from the 3D skeleton
sequences alone, leading to new synthesis possibilities for human interaction
motions. Dataset will be available at http://vcc.szu.edu.cn/research/2020/IT.
| [
{
"created": "Thu, 20 Aug 2020 14:36:34 GMT",
"version": "v1"
}
] | 2020-08-21 | [
[
"Zheng",
"Qian",
""
],
[
"Wu",
"Weikai",
""
],
[
"Pan",
"Hanting",
""
],
[
"Mitra",
"Niloy",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Huang",
"Hui",
""
]
] | Humans regularly interact with their surrounding objects. Such interactions often result in strongly correlated motion between humans and the interacting objects. We thus ask: "Is it possible to infer object properties from skeletal motion alone, even without seeing the interacting object itself?" In this paper, we present a fine-grained action recognition method that learns to infer such latent object properties from human interaction motion alone. This inference allows us to disentangle the motion from the object property and transfer object properties to a given motion. We collected a large number of videos and 3D skeletal motions of the performing actors using an inertial motion capture device. We analyze similar actions and learn subtle differences among them to reveal latent properties of the interacting objects. In particular, we learn to identify the interacting object, by estimating its weight, or its fragility or delicacy. Our results clearly demonstrate that the interaction motions and interacting objects are highly correlated and indeed relative object latent properties can be inferred from the 3D skeleton sequences alone, leading to new synthesis possibilities for human interaction motions. Dataset will be available at http://vcc.szu.edu.cn/research/2020/IT. |
2405.17935 | Changle Qu | Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang,
Dawei Yin, Jun Xu, Ji-Rong Wen | Tool Learning with Large Language Models: A Survey | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recently, tool learning with large language models (LLMs) has emerged as a
promising paradigm for augmenting the capabilities of LLMs to tackle highly
complex problems. Despite growing attention and rapid advancements in this
field, the existing literature remains fragmented and lacks systematic
organization, posing barriers to entry for newcomers. This gap motivates us to
conduct a comprehensive survey of existing works on tool learning with LLMs. In
this survey, we focus on reviewing existing literature from the two primary
aspects (1) why tool learning is beneficial and (2) how tool learning is
implemented, enabling a comprehensive understanding of tool learning with LLMs.
We first explore the "why" by reviewing both the benefits of tool integration
and the inherent benefits of the tool learning paradigm from six specific
aspects. In terms of "how", we systematically review the literature according
to a taxonomy of four key stages in the tool learning workflow: task planning,
tool selection, tool calling, and response generation. Additionally, we provide
a detailed summary of existing benchmarks and evaluation methods, categorizing
them according to their relevance to different stages. Finally, we discuss
current challenges and outline potential future directions, aiming to inspire
both researchers and industrial developers to further explore this emerging and
promising area. We also maintain a GitHub repository to continually keep track
of the relevant papers and resources in this rising area at
\url{https://github.com/quchangle1/LLM-Tool-Survey}.
| [
{
"created": "Tue, 28 May 2024 08:01:26 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 11:01:10 GMT",
"version": "v2"
}
] | 2024-05-31 | [
[
"Qu",
"Changle",
""
],
[
"Dai",
"Sunhao",
""
],
[
"Wei",
"Xiaochi",
""
],
[
"Cai",
"Hengyi",
""
],
[
"Wang",
"Shuaiqiang",
""
],
[
"Yin",
"Dawei",
""
],
[
"Xu",
"Jun",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | Recently, tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems. Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization, posing barriers to entry for newcomers. This gap motivates us to conduct a comprehensive survey of existing works on tool learning with LLMs. In this survey, we focus on reviewing existing literature from the two primary aspects (1) why tool learning is beneficial and (2) how tool learning is implemented, enabling a comprehensive understanding of tool learning with LLMs. We first explore the "why" by reviewing both the benefits of tool integration and the inherent benefits of the tool learning paradigm from six specific aspects. In terms of "how", we systematically review the literature according to a taxonomy of four key stages in the tool learning workflow: task planning, tool selection, tool calling, and response generation. Additionally, we provide a detailed summary of existing benchmarks and evaluation methods, categorizing them according to their relevance to different stages. Finally, we discuss current challenges and outline potential future directions, aiming to inspire both researchers and industrial developers to further explore this emerging and promising area. We also maintain a GitHub repository to continually keep track of the relevant papers and resources in this rising area at \url{https://github.com/quchangle1/LLM-Tool-Survey}. |
1310.0524 | Reshad Patuck | Reshad Patuck and Julio Hernandez-Castro | Steganography using the Extensible Messaging and Presence Protocol
(XMPP) | 13 pages, 3 figures, 2 tables | null | null | null | cs.MM cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present here the first work to propose different mechanisms for hiding
data in the Extensible Messaging and Presence Protocol (XMPP). This is a very
popular instant messaging protocol used by many messaging platforms such as
Google Talk, Cisco, LiveJournal and many others. Our paper describes how to
send a secret message from one XMPP client to another, without raising the
suspicion of any intermediaries. The methods described primarily focus on using
the underlying protocol as a means for steganography, unlike other related
works that try to hide data in the content of instant messages. In doing so, we
provide a more robust means of data hiding and additionally offer some
preliminary analysis of its general security, in particular against
entropic-based steganalysis.
| [
{
"created": "Fri, 27 Sep 2013 12:00:29 GMT",
"version": "v1"
}
] | 2013-10-03 | [
[
"Patuck",
"Reshad",
""
],
[
"Hernandez-Castro",
"Julio",
""
]
] | We present here the first work to propose different mechanisms for hiding data in the Extensible Messaging and Presence Protocol (XMPP). This is a very popular instant messaging protocol used by many messaging platforms such as Google Talk, Cisco, LiveJournal and many others. Our paper describes how to send a secret message from one XMPP client to another, without raising the suspicion of any intermediaries. The methods described primarily focus on using the underlying protocol as a means for steganography, unlike other related works that try to hide data in the content of instant messages. In doing so, we provide a more robust means of data hiding and additionally offer some preliminary analysis of its general security, in particular against entropic-based steganalysis. |
2407.15376 | Yang Xu | Yang Xu, Yifan Feng, and Yu Jiang | Structure-Aware Residual-Center Representation for Self-Supervised
Open-Set 3D Cross-Modal Retrieval | ICME 2024 | null | null | null | cs.MM | http://creativecommons.org/licenses/by-sa/4.0/ | Existing methods of 3D cross-modal retrieval heavily lean on category
distribution priors within the training set, which diminishes their efficacy
when tasked with unseen categories under open-set environments. To tackle this
problem, we propose the Structure-Aware Residual-Center Representation (SRCR)
framework for self-supervised open-set 3D cross-modal retrieval. To address the
center deviation due to category distribution differences, we utilize the
Residual-Center Embedding (RCE) for each object by nested auto-encoders, rather
than directly mapping them to the modality or category centers. Besides, we
perform the Hierarchical Structure Learning (HSL) approach to leverage the
high-order correlations among objects for generalization, by constructing a
heterogeneous hypergraph structure based on hierarchical inter-modality,
intra-object, and implicit-category correlations. Extensive experiments and
ablation studies on four benchmarks demonstrate the superiority of our proposed
framework compared to state-of-the-art methods.
| [
{
"created": "Mon, 22 Jul 2024 04:56:13 GMT",
"version": "v1"
}
] | 2024-07-23 | [
[
"Xu",
"Yang",
""
],
[
"Feng",
"Yifan",
""
],
[
"Jiang",
"Yu",
""
]
] | Existing methods of 3D cross-modal retrieval heavily lean on category distribution priors within the training set, which diminishes their efficacy when tasked with unseen categories under open-set environments. To tackle this problem, we propose the Structure-Aware Residual-Center Representation (SRCR) framework for self-supervised open-set 3D cross-modal retrieval. To address the center deviation due to category distribution differences, we utilize the Residual-Center Embedding (RCE) for each object by nested auto-encoders, rather than directly mapping them to the modality or category centers. Besides, we perform the Hierarchical Structure Learning (HSL) approach to leverage the high-order correlations among objects for generalization, by constructing a heterogeneous hypergraph structure based on hierarchical inter-modality, intra-object, and implicit-category correlations. Extensive experiments and ablation studies on four benchmarks demonstrate the superiority of our proposed framework compared to state-of-the-art methods. |
2302.13275 | Wei Yu | Wei Yu, Kuiyuan Yang, Yalong Bai, Hongxun Yao, Yong Rui | Learning cross space mapping via DNN using large scale click-through
logs | Accepted by IEEE Transactions on Multimedia 2015 | IEEE TRANSACTIONS ON MULTIMEDIA, VOL.17, NO.11, pp.2000-2007,
NOVEMBER 2015 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The gap between low-level visual signals and high-level semantics has been
progressively bridged by continuous development of deep neural network (DNN).
With recent progress of DNN, almost all image classification tasks have
achieved new records of accuracy. To extend the ability of DNN to image
retrieval tasks, we proposed a unified DNN model for image-query similarity
calculation by simultaneously modeling image and query in one network. The
unified DNN is named the cross space mapping (CSM) model, which contains two
parts, a convolutional part and a query-embedding part. The image and query are
mapped to a common vector space via these two parts respectively, and
image-query similarity is naturally defined as an inner product of their
mappings in the space. To ensure good generalization ability of the DNN, we
learn weights of the DNN from a large number of click-through logs which
consists of 23 million clicked image-query pairs between 1 million images and
11.7 million queries. Both the qualitative results and quantitative results on
an image retrieval evaluation task with 1000 queries demonstrate the
superiority of the proposed method.
| [
{
"created": "Sun, 26 Feb 2023 09:00:35 GMT",
"version": "v1"
}
] | 2023-02-28 | [
[
"Yu",
"Wei",
""
],
[
"Yang",
"Kuiyuan",
""
],
[
"Bai",
"Yalong",
""
],
[
"Yao",
"Hongxun",
""
],
[
"Rui",
"Yong",
""
]
] | The gap between low-level visual signals and high-level semantics has been progressively bridged by continuous development of deep neural network (DNN). With recent progress of DNN, almost all image classification tasks have achieved new records of accuracy. To extend the ability of DNN to image retrieval tasks, we proposed a unified DNN model for image-query similarity calculation by simultaneously modeling image and query in one network. The unified DNN is named the cross space mapping (CSM) model, which contains two parts, a convolutional part and a query-embedding part. The image and query are mapped to a common vector space via these two parts respectively, and image-query similarity is naturally defined as an inner product of their mappings in the space. To ensure good generalization ability of the DNN, we learn weights of the DNN from a large number of click-through logs which consists of 23 million clicked image-query pairs between 1 million images and 11.7 million queries. Both the qualitative results and quantitative results on an image retrieval evaluation task with 1000 queries demonstrate the superiority of the proposed method. |
2210.14162 | Tsunehiko Tanaka | Tsunehiko Tanaka, Daiki Kimura, Michiaki Tatsubori | Commonsense Knowledge from Scene Graphs for Textual Environments | AAAI-22 Workshop on Reinforcement Learning in Games | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-based games are becoming commonly used in reinforcement learning as
real-world simulation environments. They are usually imperfect information
games, and their interactions are only in the textual modality. To challenge
these games, it is effective to complement the missing information by providing
knowledge outside the game, such as human common sense. However, such knowledge
has only been available from textual information in previous works. In this
paper, we investigate the advantage of employing commonsense reasoning obtained
from visual datasets such as scene graph datasets. In general, images convey
more comprehensive information compared with text for humans. This property
enables to extract commonsense relationship knowledge more useful for acting
effectively in a game. We compare the statistics of spatial relationships
available in Visual Genome (a scene graph dataset) and ConceptNet (a text-based
knowledge) to analyze the effectiveness of introducing scene graph datasets. We
also conducted experiments on a text-based game task that requires commonsense
reasoning. Our experimental results demonstrated that our proposed methods have
higher and competitive performance than existing state-of-the-art methods.
| [
{
"created": "Wed, 19 Oct 2022 03:09:17 GMT",
"version": "v1"
}
] | 2022-10-26 | [
[
"Tanaka",
"Tsunehiko",
""
],
[
"Kimura",
"Daiki",
""
],
[
"Tatsubori",
"Michiaki",
""
]
] | Text-based games are becoming commonly used in reinforcement learning as real-world simulation environments. They are usually imperfect information games, and their interactions are only in the textual modality. To challenge these games, it is effective to complement the missing information by providing knowledge outside the game, such as human common sense. However, such knowledge has only been available from textual information in previous works. In this paper, we investigate the advantage of employing commonsense reasoning obtained from visual datasets such as scene graph datasets. In general, images convey more comprehensive information compared with text for humans. This property enables to extract commonsense relationship knowledge more useful for acting effectively in a game. We compare the statistics of spatial relationships available in Visual Genome (a scene graph dataset) and ConceptNet (a text-based knowledge) to analyze the effectiveness of introducing scene graph datasets. We also conducted experiments on a text-based game task that requires commonsense reasoning. Our experimental results demonstrated that our proposed methods have higher and competitive performance than existing state-of-the-art methods. |
1905.11669 | Weicheng Li Mr | Weicheng Li, Rui Wang, Zhongzhi Luan, Di Huang, Zidong Du, Yunji Chen
and Depei Qian | CompactNet: Platform-Aware Automatic Optimization for Convolutional
Neural Networks | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved
great progress in many real-life applications. Meanwhile, due to the complex
model structures against strict latency and memory restriction, the
implementation of CNN models on the resource-limited platforms is becoming more
challenging. This work proposes a solution, called CompactNet\footnote{Project
URL: \url{https://github.com/CompactNet/CompactNet}}, which automatically
optimizes a pre-trained CNN model on a specific resource-limited platform given
a specific target of inference speedup. Guided by a simulator of the target
platform, CompactNet progressively trims a pre-trained network by removing
certain redundant filters until the target speedup is reached and generates an
optimal platform-specific model while maintaining the accuracy. We evaluate our
work on two platforms of a mobile ARM CPU and a machine learning accelerator
NPU (Cambricon-1A ISA) on a Huawei Mate10 smartphone. For the state-of-the-art
slim CNN model made for the embedded platform, MobileNetV2, CompactNet achieves
up to a 1.8x kernel computation speedup with equal or even higher accuracy for
image classification tasks on the Cifar-10 dataset.
| [
{
"created": "Tue, 28 May 2019 08:24:58 GMT",
"version": "v1"
}
] | 2019-05-29 | [
[
"Li",
"Weicheng",
""
],
[
"Wang",
"Rui",
""
],
[
"Luan",
"Zhongzhi",
""
],
[
"Huang",
"Di",
""
],
[
"Du",
"Zidong",
""
],
[
"Chen",
"Yunji",
""
],
[
"Qian",
"Depei",
""
]
] | Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications. Meanwhile, due to the complex model structures against strict latency and memory restriction, the implementation of CNN models on the resource-limited platforms is becoming more challenging. This work proposes a solution, called CompactNet\footnote{Project URL: \url{https://github.com/CompactNet/CompactNet}}, which automatically optimizes a pre-trained CNN model on a specific resource-limited platform given a specific target of inference speedup. Guided by a simulator of the target platform, CompactNet progressively trims a pre-trained network by removing certain redundant filters until the target speedup is reached and generates an optimal platform-specific model while maintaining the accuracy. We evaluate our work on two platforms of a mobile ARM CPU and a machine learning accelerator NPU (Cambricon-1A ISA) on a Huawei Mate10 smartphone. For the state-of-the-art slim CNN model made for the embedded platform, MobileNetV2, CompactNet achieves up to a 1.8x kernel computation speedup with equal or even higher accuracy for image classification tasks on the Cifar-10 dataset. |
2110.10765 | Patrick J. Fasano | Brandon Cook and Patrick J. Fasano and Pieter Maris and Chao Yang and
Dossay Oryspayev | Accelerating quantum many-body configuration interaction with directives | 22 pages, 7 figures, 11 code listings, WACCPD@SC21 | null | 10.1007/978-3-030-97759-7_6 | null | cs.DC cs.CE cs.MS cs.PF nucl-th | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Many-Fermion Dynamics-nuclear, or MFDn, is a configuration interaction (CI)
code for nuclear structure calculations. It is a platform-independent Fortran
90 code using a hybrid MPI+X programming model. For CPU platforms the
application has a robust and optimized OpenMP implementation for shared memory
parallelism. As part of the NESAP application readiness program for NERSC's
latest Perlmutter system, MFDn has been updated to take advantage of
accelerators. The current mainline GPU port is based on OpenACC. In this work
we describe some of the key challenges of creating an efficient GPU
implementation. Additionally, we compare the support of OpenMP and OpenACC on
AMD and NVIDIA GPUs.
| [
{
"created": "Wed, 20 Oct 2021 20:17:18 GMT",
"version": "v1"
}
] | 2022-05-17 | [
[
"Cook",
"Brandon",
""
],
[
"Fasano",
"Patrick J.",
""
],
[
"Maris",
"Pieter",
""
],
[
"Yang",
"Chao",
""
],
[
"Oryspayev",
"Dossay",
""
]
] | Many-Fermion Dynamics-nuclear, or MFDn, is a configuration interaction (CI) code for nuclear structure calculations. It is a platform-independent Fortran 90 code using a hybrid MPI+X programming model. For CPU platforms the application has a robust and optimized OpenMP implementation for shared memory parallelism. As part of the NESAP application readiness program for NERSC's latest Perlmutter system, MFDn has been updated to take advantage of accelerators. The current mainline GPU port is based on OpenACC. In this work we describe some of the key challenges of creating an efficient GPU implementation. Additionally, we compare the support of OpenMP and OpenACC on AMD and NVIDIA GPUs. |
2112.15060 | Jiayuan Chen | Jiayuan Chen and Boyu Zhang and Yinfei Xu and Meng Wang | TextRGNN: Residual Graph Neural Networks for Text Classification | The content of this paper will appear as part of another paper | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, text classification model based on graph neural network (GNN) has
attracted more and more attention. Most of these models adopt a similar network
paradigm, that is, using pre-training node embedding initialization and
two-layer graph convolution. In this work, we propose TextRGNN, an improved GNN
structure that introduces residual connection to deepen the convolution network
depth. Our structure can obtain a wider node receptive field and effectively
suppress the over-smoothing of node features. In addition, we integrate the
probabilistic language model into the initialization of graph node embedding,
so that the non-graph semantic information can be better extracted. The
experimental results show that our model is general and efficient. It can
significantly improve the classification accuracy whether in corpus level or
text level, and achieve SOTA performance on a wide range of text classification
datasets.
| [
{
"created": "Thu, 30 Dec 2021 13:48:58 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Jan 2023 11:33:31 GMT",
"version": "v2"
}
] | 2023-01-26 | [
[
"Chen",
"Jiayuan",
""
],
[
"Zhang",
"Boyu",
""
],
[
"Xu",
"Yinfei",
""
],
[
"Wang",
"Meng",
""
]
] | Recently, text classification model based on graph neural network (GNN) has attracted more and more attention. Most of these models adopt a similar network paradigm, that is, using pre-training node embedding initialization and two-layer graph convolution. In this work, we propose TextRGNN, an improved GNN structure that introduces residual connection to deepen the convolution network depth. Our structure can obtain a wider node receptive field and effectively suppress the over-smoothing of node features. In addition, we integrate the probabilistic language model into the initialization of graph node embedding, so that the non-graph semantic information can be better extracted. The experimental results show that our model is general and efficient. It can significantly improve the classification accuracy whether in corpus level or text level, and achieve SOTA performance on a wide range of text classification datasets. |
2106.11791 | Soujanya Poria | Navonil Majumder, Deepanway Ghosal, Devamanyu Hazarika, Alexander
Gelbukh, Rada Mihalcea, Soujanya Poria | Exemplars-guided Empathetic Response Generation Controlled by the
Elements of Human Communication | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The majority of existing methods for empathetic response generation rely on
the emotion of the context to generate empathetic responses. However, empathy
is much more than generating responses with an appropriate emotion. It also
often entails subtle expressions of understanding and personal resonance with
the situation of the other interlocutor. Unfortunately, such qualities are
difficult to quantify and the datasets lack the relevant annotations. To
address this issue, in this paper we propose an approach that relies on
exemplars to cue the generative model on fine stylistic properties that signal
empathy to the interlocutor. To this end, we employ dense passage retrieval to
extract relevant exemplary responses from the training set. Three elements of
human communication -- emotional presence, interpretation, and exploration, and
sentiment are additionally introduced using synthetic labels to guide the
generation towards empathy. The human evaluation is also extended by these
elements of human communication. We empirically show that these approaches
yield significant improvements in empathetic response quality in terms of both
automated and human-evaluated metrics. The implementation is available at
https://github.com/declare-lab/exemplary-empathy.
| [
{
"created": "Tue, 22 Jun 2021 14:02:33 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Aug 2021 10:00:39 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Aug 2021 00:53:39 GMT",
"version": "v3"
}
] | 2021-08-06 | [
[
"Majumder",
"Navonil",
""
],
[
"Ghosal",
"Deepanway",
""
],
[
"Hazarika",
"Devamanyu",
""
],
[
"Gelbukh",
"Alexander",
""
],
[
"Mihalcea",
"Rada",
""
],
[
"Poria",
"Soujanya",
""
]
] | The majority of existing methods for empathetic response generation rely on the emotion of the context to generate empathetic responses. However, empathy is much more than generating responses with an appropriate emotion. It also often entails subtle expressions of understanding and personal resonance with the situation of the other interlocutor. Unfortunately, such qualities are difficult to quantify and the datasets lack the relevant annotations. To address this issue, in this paper we propose an approach that relies on exemplars to cue the generative model on fine stylistic properties that signal empathy to the interlocutor. To this end, we employ dense passage retrieval to extract relevant exemplary responses from the training set. Three elements of human communication -- emotional presence, interpretation, and exploration, and sentiment are additionally introduced using synthetic labels to guide the generation towards empathy. The human evaluation is also extended by these elements of human communication. We empirically show that these approaches yield significant improvements in empathetic response quality in terms of both automated and human-evaluated metrics. The implementation is available at https://github.com/declare-lab/exemplary-empathy. |
2405.14594 | Yuni Susanti | Yuni Susanti | Data Augmentation Techniques for Process Extraction from Scientific
Publications | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present data augmentation techniques for process extraction tasks in
scientific publications. We cast the process extraction task as a sequence
labeling task where we identify all the entities in a sentence and label them
according to their process-specific roles. The proposed method attempts to
create meaningful augmented sentences by utilizing (1) process-specific
information from the original sentence, (2) role label similarity, and (3)
sentence similarity. We demonstrate that the proposed methods substantially
improve the performance of the process extraction model trained on chemistry
domain datasets, up to 12.3 points improvement in performance accuracy
(F-score). The proposed methods could potentially reduce overfitting as well,
especially when training on small datasets or in a low-resource setting such as
in chemistry and other scientific domains.
| [
{
"created": "Thu, 23 May 2024 14:09:02 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Susanti",
"Yuni",
""
]
] | We present data augmentation techniques for process extraction tasks in scientific publications. We cast the process extraction task as a sequence labeling task where we identify all the entities in a sentence and label them according to their process-specific roles. The proposed method attempts to create meaningful augmented sentences by utilizing (1) process-specific information from the original sentence, (2) role label similarity, and (3) sentence similarity. We demonstrate that the proposed methods substantially improve the performance of the process extraction model trained on chemistry domain datasets, up to 12.3 points improvement in performance accuracy (F-score). The proposed methods could potentially reduce overfitting as well, especially when training on small datasets or in a low-resource setting such as in chemistry and other scientific domains. |
1703.00538 | Jinying Chen | Jinying Chen and Hong Yu | Unsupervised Ensemble Ranking of Terms in Electronic Health Record Notes
Based on Their Importance to Patients | null | null | 10.1016/j.jbi.2017.02.016 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Electronic health record (EHR) notes contain abundant medical
jargon that can be difficult for patients to comprehend. One way to help
patients is to reduce information overload and help them focus on medical terms
that matter most to them.
Objective: The aim of this work was to develop FIT (Finding Important Terms
for patients), an unsupervised natural language processing (NLP) system that
ranks medical terms in EHR notes based on their importance to patients.
Methods: We built FIT on a new unsupervised ensemble ranking model derived
from the biased random walk algorithm to combine heterogeneous information
resources for ranking candidate terms from each EHR note. Specifically, FIT
integrates four single views for term importance: patient use of medical
concepts, document-level term salience, word-occurrence based term relatedness,
and topic coherence. It also incorporates partial information of term
importance as conveyed by terms' unfamiliarity levels and semantic types. We
evaluated FIT on 90 expert-annotated EHR notes and compared it with three
benchmark unsupervised ensemble ranking methods.
Results: FIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR
notes to identify important terms. When including term identification, the
performance of FIT for identifying important terms from EHR notes was 0.813
AUC-ROC. It outperformed the three ensemble rankers for most metrics. Its
performance is relatively insensitive to its parameter.
Conclusions: FIT can automatically identify EHR terms important to patients
and may help develop personalized interventions to improve quality of care. By
using unsupervised learning as well as a robust and flexible framework for
information fusion, FIT can be readily applied to other domains and
applications.
| [
{
"created": "Wed, 1 Mar 2017 22:37:02 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Mar 2017 21:34:10 GMT",
"version": "v2"
}
] | 2017-03-28 | [
[
"Chen",
"Jinying",
""
],
[
"Yu",
"Hong",
""
]
] | Background: Electronic health record (EHR) notes contain abundant medical jargon that can be difficult for patients to comprehend. One way to help patients is to reduce information overload and help them focus on medical terms that matter most to them. Objective: The aim of this work was to develop FIT (Finding Important Terms for patients), an unsupervised natural language processing (NLP) system that ranks medical terms in EHR notes based on their importance to patients. Methods: We built FIT on a new unsupervised ensemble ranking model derived from the biased random walk algorithm to combine heterogeneous information resources for ranking candidate terms from each EHR note. Specifically, FIT integrates four single views for term importance: patient use of medical concepts, document-level term salience, word-occurrence based term relatedness, and topic coherence. It also incorporates partial information of term importance as conveyed by terms' unfamiliarity levels and semantic types. We evaluated FIT on 90 expert-annotated EHR notes and compared it with three benchmark unsupervised ensemble ranking methods. Results: FIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FIT for identifying important terms from EHR notes was 0.813 AUC-ROC. It outperformed the three ensemble rankers for most metrics. Its performance is relatively insensitive to its parameter. Conclusions: FIT can automatically identify EHR terms important to patients and may help develop personalized interventions to improve quality of care. By using unsupervised learning as well as a robust and flexible framework for information fusion, FIT can be readily applied to other domains and applications. |
2406.17074 | George Drettakis | Panagiotis Papantonakis, Georgios Kopanas, Bernhard Kerbl, Alexandre
Lanvin, George Drettakis | Reducing the Memory Footprint of 3D Gaussian Splatting | Project website: https://repo-sam.inria.fr/fungraph/reduced_3dgs/ | Proceedings of the ACM on Computer Graphics and Interactive
Techniques, Volume 7, Issue 1 Article No.: 16, Pages 1 - 17, 2024 | 10.1145/3651282 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D Gaussian splatting provides excellent visual quality for novel view
synthesis, with fast training and real-time rendering; unfortunately, the
memory requirements of this method for storing and transmission are
unreasonably high. We first analyze the reasons for this, identifying three
main areas where storage can be reduced: the number of 3D Gaussian primitives
used to represent a scene, the number of coefficients for the spherical
harmonics used to represent directional radiance, and the precision required to
store Gaussian primitive attributes. We present a solution to each of these
issues. First, we propose an efficient, resolution-aware primitive pruning
approach, reducing the primitive count by half. Second, we introduce an
adaptive adjustment method to choose the number of coefficients used to
represent directional radiance for each Gaussian primitive, and finally a
codebook-based quantization method, together with a half-float representation
for further memory reduction. Taken together, these three components result in
a 27x reduction in overall size on disk on the standard datasets we tested,
along with a 1.7x speedup in rendering speed. We demonstrate our method on
standard datasets and show how our solution results in significantly reduced
download times when using the method on a mobile device.
| [
{
"created": "Mon, 24 Jun 2024 19:01:44 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Papantonakis",
"Panagiotis",
""
],
[
"Kopanas",
"Georgios",
""
],
[
"Kerbl",
"Bernhard",
""
],
[
"Lanvin",
"Alexandre",
""
],
[
"Drettakis",
"George",
""
]
] | 3D Gaussian splatting provides excellent visual quality for novel view synthesis, with fast training and real-time rendering; unfortunately, the memory requirements of this method for storing and transmission are unreasonably high. We first analyze the reasons for this, identifying three main areas where storage can be reduced: the number of 3D Gaussian primitives used to represent a scene, the number of coefficients for the spherical harmonics used to represent directional radiance, and the precision required to store Gaussian primitive attributes. We present a solution to each of these issues. First, we propose an efficient, resolution-aware primitive pruning approach, reducing the primitive count by half. Second, we introduce an adaptive adjustment method to choose the number of coefficients used to represent directional radiance for each Gaussian primitive, and finally a codebook-based quantization method, together with a half-float representation for further memory reduction. Taken together, these three components result in a 27x reduction in overall size on disk on the standard datasets we tested, along with a 1.7x speedup in rendering speed. We demonstrate our method on standard datasets and show how our solution results in significantly reduced download times when using the method on a mobile device. |
2403.15697 | Anoop Jain | Pushkal Purohit and Anoop Jain | Passivity-based Attack Identification and Mitigation with
Event-triggered Observer Feedback and Switching Controller | null | null | null | null | cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of output consensus in linear passive
multi-agent systems under a False Data Injection (FDI) attack, considering the
unavailability of complete state information. Our formulation relies on an
event-based cryptographic authentication scheme for sensor integrity and
considers FDI attacks at the actuator end, inspired by their practical nature
and usages. For secure consensus, we propose (i) a passivity-based approach for
detecting FDI attacks on the system and (ii) a Zeno-free event-triggered
observer-based switching controller, which switches between the normal and the
defense modes following an attack detection. We show that the closed-loop
system achieves practical consensus under the controller's action in the
defense mode. Simulation examples are provided to support the theoretical
findings.
| [
{
"created": "Sat, 23 Mar 2024 03:19:42 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Purohit",
"Pushkal",
""
],
[
"Jain",
"Anoop",
""
]
] | This paper addresses the problem of output consensus in linear passive multi-agent systems under a False Data Injection (FDI) attack, considering the unavailability of complete state information. Our formulation relies on an event-based cryptographic authentication scheme for sensor integrity and considers FDI attacks at the actuator end, inspired by their practical nature and usages. For secure consensus, we propose (i) a passivity-based approach for detecting FDI attacks on the system and (ii) a Zeno-free event-triggered observer-based switching controller, which switches between the normal and the defense modes following an attack detection. We show that the closed-loop system achieves practical consensus under the controller's action in the defense mode. Simulation examples are provided to support the theoretical findings. |