Dataset schema (arXiv paper metadata): id, submitter, authors, title, comments, journal-ref, doi, report-no, categories, license, orig_abstract, versions, update_date, authors_parsed, abstract. The orig_abstract and abstract columns are identical in the rows below, so each record shows the abstract once.
id: 2305.05738
submitter: Chia-Hao Li
authors: Chia-Hao Li and Niraj K. Jha
title: DOCTOR: A Multi-Disease Detection Continual Learning Framework Based on Wearable Medical Sensors
comments: 39 pages, 14 figures. This work has been submitted to the ACM for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.HC eess.SP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Modern advances in machine learning (ML) and wearable medical sensors (WMSs) in edge devices have enabled ML-driven disease detection for smart healthcare. Conventional ML-driven detection methods customize an individual model for each disease and its corresponding WMS data. However, such methods lack adaptability to distribution shifts and new classification classes, and they must be rearchitected and retrained from scratch for each new disease. Moreover, installing multiple ML models on an edge device consumes excessive memory, drains the battery faster, and complicates the detection process. To address these challenges, we propose DOCTOR, a multi-disease detection continual learning (CL) framework based on WMSs. It employs a multi-headed deep neural network (DNN) and a replay-style CL algorithm. The CL algorithm enables the framework to continually learn new missions in which different data distributions, classification classes, and disease detection tasks are introduced sequentially. It counteracts catastrophic forgetting with a data preservation method and a synthetic data generation (SDG) module. The data preservation method retains the most informative subset of real training data from previous missions for exemplar replay. The SDG module models the probability distribution of the real training data and generates synthetic data for generative replay while preserving data privacy. The multi-headed DNN enables DOCTOR to detect multiple diseases simultaneously from user WMS data. We demonstrate DOCTOR's efficacy in maintaining high disease classification accuracy with a single DNN model in various CL experiments. In complex scenarios, DOCTOR achieves 1.43 times better average test accuracy, 1.25 times better F1-score, and 0.41 higher backward transfer than the naive fine-tuning framework, with a small model size of less than 350 KB.
versions: v1 (Tue, 9 May 2023 19:33:17 GMT), v2 (Tue, 8 Aug 2023 20:23:37 GMT), v3 (Thu, 12 Oct 2023 20:45:38 GMT), v4 (Wed, 6 Mar 2024 20:56:06 GMT), v5 (Wed, 19 Jun 2024 01:06:15 GMT)
update_date: 2024-06-21
authors_parsed: [["Li", "Chia-Hao", ""], ["Jha", "Niraj K.", ""]]

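The exemplar-replay idea described in the abstract above can be sketched generically. This is a minimal illustration of replay with a reservoir buffer, not DOCTOR's actual data-preservation algorithm; `replay_training`, `train_step`, and all parameter values are hypothetical names chosen for the sketch:

```python
import random

def replay_training(missions, train_step, buffer_size=100, replay_ratio=0.5):
    """Generic exemplar replay: keep a bounded subset of past-mission samples
    and mix them into each new mission's training batch to counter
    catastrophic forgetting."""
    buffer, seen = [], 0
    for data in missions:
        # mix current-mission data with replayed exemplars from past missions
        k = min(len(buffer), int(replay_ratio * len(data)))
        train_step(list(data) + random.sample(buffer, k))
        # reservoir sampling keeps the buffer a uniform subset of all data seen
        for x in data:
            seen += 1
            if len(buffer) < buffer_size:
                buffer.append(x)
            elif random.randrange(seen) < buffer_size:
                buffer[random.randrange(buffer_size)] = x
    return buffer
```

In DOCTOR the preserved subset is chosen by informativeness rather than uniformly, and a generative module supplies additional synthetic replay data.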
id: 2105.13777
submitter: Minghui Yang
authors: Qiuyue Liu, Shiyuan Qiang, Minghui Yang, and Keqin Feng
title: Linear Complexity of Binary Interleaved Sequences of Period 4n
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Binary periodic sequences with good autocorrelation properties have many applications in communication. In past decades, many families of such binary sequences have been constructed. For cryptographic applications, such binary sequences are also required to have large linear complexity. Tang and Ding \cite{X. Tang} presented a method to construct a series of binary sequences with period $4n$ having optimal autocorrelation. Such sequences are interleaved from two arbitrary binary sequences with period $n\equiv 3\pmod 4$ and ideal autocorrelation. In this paper, we present a general formula for the linear complexity of such interleaved sequences. In particular, we show that the linear complexity of such sequences with period $4n$ is at most $2n+2$. By interleaving several types of known binary sequences with ideal autocorrelation ($m$-sequences, Legendre, twin-prime, and Hall's sequences), we obtain many families of such sequences attaining the maximum linear complexity $2n+2$, which answers a problem raised by N. Li and X. Tang \cite{N. Li}. Finally, in the conclusion we show that the 2-adic complexity of all such interleaved sequences is easily seen to reach the maximum value $\log_{2}(2^{4n}-1)$.
versions: v1 (Fri, 28 May 2021 12:30:09 GMT)
update_date: 2021-05-31
authors_parsed: [["Liu", "Qiuyue", ""], ["Qiang", "Shiyuan", ""], ["Yang", "Minghui", ""], ["Feng", "Keqin", ""]]

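Interleaving and linear complexity can be illustrated with the standard Berlekamp-Massey algorithm over GF(2). For concreteness the example below interleaves arbitrary cyclic shifts of one m-sequence; this is not the exact Tang-Ding shift construction, just the generic column-wise interleaving it builds on:

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence over GF(2) (standard BM)."""
    n = len(s)
    c, b = [0] * n, [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the current recurrence's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n):
                if b[j] and j + i - m < n:
                    c[j + i - m] ^= 1
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def interleave(seqs):
    """Merge k period-n sequences column-wise into one period-k*n sequence."""
    return [seqs[j][t] for t in range(len(seqs[0])) for j in range(len(seqs))]

# m-sequence of period 7 generated by x^3 + x + 1; its linear complexity is 3
ms = [1, 0, 0, 1, 0, 1, 1]
u = interleave([ms, ms[1:] + ms[:1], ms[2:] + ms[:2], ms[3:] + ms[:3]])
```

With the Tang-Ding shifts and an ideal-autocorrelation base sequence, the paper shows the resulting period-$4n$ sequence has linear complexity at most $2n+2$.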
id: 2309.10280
submitter: Forsad Al Hossain
authors: Forsad Al Hossain, Tanjid Hasan Tonmoy, Andrew A. Lover, George A. Corey, Mohammad Arif Ul Alam, Tauhidur Rahman
title: Crowdotic: A Privacy-Preserving Hospital Waiting Room Crowd Density Estimation with Non-speech Audio
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SD cs.CR cs.LG eess.AS
license: http://creativecommons.org/licenses/by/4.0/
abstract: Privacy-preserving crowd density analysis finds application across a wide range of scenarios, substantially enhancing smart building operation and management while upholding privacy expectations in various spaces. We propose a non-speech audio-based approach for crowd analytics, leveraging a transformer-based model. Our results demonstrate that non-speech audio alone can be used to conduct such analysis with remarkable accuracy. To the best of our knowledge, this is the first time non-speech audio signals have been proposed for predicting occupancy. To accomplish this, we deployed our sensor-based platform in the waiting room of a large hospital, with IRB approval, over a period of several months to capture non-speech audio and thermal images for the training and evaluation of our models. The proposed non-speech-based approach outperformed the thermal camera-based model and all other baselines. In addition to demonstrating superior performance without utilizing speech audio, we conduct further analysis using differential privacy techniques to provide additional privacy guarantees. Overall, our work demonstrates the viability of employing non-speech audio data for accurate occupancy estimation, while also ensuring the exclusion of speech-related content and providing robust privacy protections through differential privacy guarantees.
versions: v1 (Tue, 19 Sep 2023 03:08:20 GMT), v2 (Wed, 20 Sep 2023 23:45:05 GMT)
update_date: 2023-09-22
authors_parsed: [["Hossain", "Forsad Al", ""], ["Tonmoy", "Tanjid Hasan", ""], ["Lover", "Andrew A.", ""], ["Corey", "George A.", ""], ["Alam", "Mohammad Arif Ul", ""], ["Rahman", "Tauhidur", ""]]

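The differential-privacy step mentioned above can be sketched with the generic Laplace mechanism for releasing noisy counts; the epsilon and sensitivity values here are illustrative assumptions, not the paper's configuration:

```python
import math
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release an occupancy count with Laplace noise of scale
    sensitivity/epsilon (the generic Laplace mechanism)."""
    scale = sensitivity / epsilon
    # inverse-CDF sample from Laplace(0, scale): u uniform on (-0.5, 0.5)
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon adds more noise and hence a stronger privacy guarantee; averaged over many releases, the noise cancels out.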
id: 2304.04276
submitter: Yehonatan Fridman
authors: Yehonatan Fridman, Guy Tamir, Gal Oren
title: Portability and Scalability of OpenMP Offloading on State-of-the-art Accelerators
comments: 13 pages, 5 figures, 5 tables
journal-ref: null
doi: null
report-no: null
categories: cs.DC
license: http://creativecommons.org/licenses/by/4.0/
abstract: Over the last decade, most of the increase in computing power has been gained by advances in accelerated many-core architectures, mainly in the form of GPGPUs. While accelerators achieve phenomenal performance in various computing tasks, their utilization requires code adaptations and transformations. Thus, OpenMP, the most common standard for multi-threading in scientific computing applications, has offered offloading capabilities between hosts (CPUs) and accelerators since v4.0, with increasing support in the successive v4.5, v5.0, v5.1, and the latest v5.2 versions. Recently, two state-of-the-art GPUs -- the Intel Ponte Vecchio Max 1100 and the NVIDIA A100 -- were released to the market, with the oneAPI and NVHPC compilers for offloading, respectively. In this work, we present early performance results of OpenMP offloading capabilities to these devices while specifically analyzing the portability of advanced directives (using SOLLVE's OMPVV test suite) and the scalability of the hardware in a representative scientific mini-app (the LULESH benchmark). Our results show that the coverage of version 4.5 is nearly complete in both the latest NVHPC and oneAPI tools. However, we observed a lack of support in versions 5.0, 5.1, and 5.2, which is particularly noticeable when using NVHPC. From the performance perspective, we found that the PVC1100 and A100 are relatively comparable on the LULESH benchmark. While the A100 is slightly better due to faster memory bandwidth, the PVC1100 reaches the next problem size ($400^3$) scalably due to its larger memory.
versions: v1 (Sun, 9 Apr 2023 16:40:20 GMT), v2 (Sun, 14 May 2023 04:47:18 GMT)
update_date: 2023-05-16
authors_parsed: [["Fridman", "Yehonatan", ""], ["Tamir", "Guy", ""], ["Oren", "Gal", ""]]

id: 2303.16113
submitter: Lili Chen
authors: Lili Chen, Jingge Zhu, Jamie Evans
title: Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes
comments: Published in 2023 IEEE International Conference on Communications Workshops (ICC Workshops)
journal-ref: ICC Workshops (2023) 277-282
doi: 10.1109/ICCWorkshops57953.2023.10283511
report-no: null
categories: cs.NI cs.IT cs.LG eess.SP math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Due to mutual interference between users, power allocation problems in wireless networks are often non-convex and computationally challenging. Graph neural networks (GNNs) have recently emerged as a promising approach to tackling these problems, one that exploits the underlying topology of wireless networks. In this paper, we propose a novel graph representation method for wireless networks that include full-duplex (FD) nodes. We then design a corresponding FD Graph Neural Network (F-GNN) with the aim of allocating transmit powers to maximise the network throughput. Our results show that F-GNN achieves state-of-the-art performance with significantly less computation time. Moreover, F-GNN offers an excellent trade-off between performance and complexity compared to classical approaches. We further refine this trade-off by introducing a distance-based threshold for the inclusion or exclusion of edges in the network. We show that an appropriately chosen threshold reduces the required training time by roughly 20% with a relatively minor loss in performance.
versions: v1 (Mon, 27 Mar 2023 10:59:09 GMT), v2 (Mon, 8 Jan 2024 03:07:28 GMT)
update_date: 2024-01-09
authors_parsed: [["Chen", "Lili", ""], ["Zhu", "Jingge", ""], ["Evans", "Jamie", ""]]

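The distance-based edge threshold can be sketched as a simple graph-construction rule: keep an edge only when two nodes are close enough for their mutual interference to matter. `build_edges` and the distance interpretation are illustrative assumptions, not the paper's exact representation:

```python
import math

def build_edges(positions, threshold):
    """Directed edge (i, j) is kept only when nodes i and j are within
    `threshold` distance; distant pairs interfere weakly and are dropped,
    shrinking the graph the GNN must process."""
    return [(i, j)
            for i in range(len(positions))
            for j in range(len(positions))
            if i != j and math.dist(positions[i], positions[j]) <= threshold]
```

A sparser graph means fewer message-passing operations per GNN layer, which is the source of the reported training-time reduction.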
id: 2105.10223
submitter: Andre Rodrigues
authors: André Rodrigues, André Santos, Kyle Montague, Hugo Nicolau and Tiago Guerreiro
title: WildKey: A Privacy-Aware Keyboard Toolkit for Data Collection In-The-Wild
comments: 12 pages, 3 figures
journal-ref: null
doi: null
report-no: null
categories: cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Touch data, and in particular text-entry data, has mostly been collected in the laboratory, under controlled conditions. While touch and text-entry data have consistently shown their potential for monitoring and detecting a variety of conditions and impairments, their deployment in the wild remains a challenge. In this paper, we present WildKey, an Android keyboard toolkit that allows for the usable deployment of in-the-wild user studies. WildKey is able to analyze text-entry behaviors through implicit and explicit text-entry data collection while ensuring user privacy. We detail each of WildKey's components and features, all of the metrics collected, and discuss the steps taken to ensure user privacy and promote compliance.
versions: v1 (Fri, 21 May 2021 09:22:09 GMT)
update_date: 2021-05-24
authors_parsed: [["Rodrigues", "André", ""], ["Santos", "André", ""], ["Montague", "Kyle", ""], ["Nicolau", "Hugo", ""], ["Guerreiro", "Tiago", ""]]

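As a concrete example of the kind of metric such a text-entry toolkit collects, here is the standard words-per-minute computation. WPM is a common text-entry metric under the one-word-equals-five-characters convention; the source does not list WildKey's exact formulas, so this is only illustrative:

```python
def words_per_minute(transcribed, seconds):
    """Text-entry throughput using the standard convention that one
    'word' is 5 characters (including spaces)."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)
```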
id: 2407.05446
submitter: Urvashi Kishnani
authors: Urvashi Kishnani, Isabella Cardenas, Jailene Castillo, Rosalyn Conry, Lukas Rodwin, Rika Ruiz, Matthew Walther and Sanchari Das
title: Towards Perceived Security, Perceived Privacy, and the Universal Design of E-Payment Applications
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.HC cs.CR
license: http://creativecommons.org/licenses/by/4.0/
abstract: With the growth of digital monetary transactions and cashless payments, encouraged by the COVID-19 pandemic, use of e-payment applications is on the rise. It is thus imperative to understand and evaluate the current posture of e-payment applications from three major user-facing angles: security, privacy, and usability. To this end, we created a high-fidelity prototype of an e-payment application that encompassed the features we wanted to test with users. We then conducted a pilot study in which we recruited 12 participants who tested our prototype. We find that both security and privacy are important for users of e-payment applications. Additionally, some participants perceive the strength of security and privacy based on the usability of the application. We conclude with recommendations, such as applying universal design principles to e-payment applications.
versions: v1 (Sun, 7 Jul 2024 17:15:09 GMT)
update_date: 2024-07-09
authors_parsed: [["Kishnani", "Urvashi", ""], ["Cardenas", "Isabella", ""], ["Castillo", "Jailene", ""], ["Conry", "Rosalyn", ""], ["Rodwin", "Lukas", ""], ["Ruiz", "Rika", ""], ["Walther", "Matthew", ""], ["Das", "Sanchari", ""]]

id: 2304.09748
submitter: Kangyeol Kim
authors: Kangyeol Kim, Sunghyun Park, Junsoo Lee, Jaegul Choo
title: Reference-based Image Composition with Sketch via Structure-aware Diffusion Model
comments: 7 pages; Code URL: https://github.com/kangyeolk/Paint-by-Sketch
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Recent remarkable improvements in large-scale text-to-image generative models have shown promising results in generating high-fidelity images. To further enhance editability and enable fine-grained generation, we introduce a multi-input-conditioned image composition model that incorporates a sketch as a novel modality, alongside a reference image. Thanks to the edge-level controllability of sketches, our method enables a user to edit or complete an image sub-part with a desired structure (i.e., sketch) and content (i.e., reference image). Our framework fine-tunes a pre-trained diffusion model to complete missing regions using the reference image while maintaining sketch guidance. Albeit simple, this opens wide opportunities to fulfill user needs for obtaining desired images. Through extensive experiments, we demonstrate that our proposed method offers unique use cases for image manipulation, enabling user-driven modifications of arbitrary scenes.
versions: v1 (Fri, 31 Mar 2023 06:12:58 GMT)
update_date: 2023-04-20
authors_parsed: [["Kim", "Kangyeol", ""], ["Park", "Sunghyun", ""], ["Lee", "Junsoo", ""], ["Choo", "Jaegul", ""]]

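The region-completion step common to such editing pipelines can be sketched as a per-pixel composite: the masked (sketch-edited) region comes from the model's output and everything else is kept from the original. This is a generic illustration on flat pixel lists, not the paper's diffusion sampler:

```python
def composite(original, generated, mask):
    """Blend per pixel: mask = 1 takes the generated content,
    mask = 0 keeps the original pixel."""
    return [m * g + (1 - m) * o
            for o, g, m in zip(original, generated, mask)]
```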
id: 2407.17756
submitter: Ananna Biswas
authors: Ananna Biswas, Hongyu An
title: Preliminary Results of Neuromorphic Controller Design and a Parkinson's Disease Dataset Building for Closed-Loop Deep Brain Stimulation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NE q-bio.NC
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Parkinson's Disease (PD) afflicts millions of individuals globally. Emerging as a promising brain rehabilitation therapy for PD, Closed-loop Deep Brain Stimulation (CL-DBS) aims to alleviate motor symptoms. The CL-DBS system comprises an implanted battery-powered medical device in the chest that sends stimulation signals to the brains of patients. These electrical stimulation signals are delivered to targeted brain regions via electrodes, with the magnitude of stimuli adjustable. However, current CL-DBS systems utilize energy-inefficient approaches, including reinforcement learning, fuzzy inference, and field-programmable gate arrays (FPGAs), among others. These approaches make the traditional CL-DBS system impractical for implanted and wearable medical devices. This research proposes a novel neuromorphic approach that builds upon Leaky Integrate-and-Fire (LIF) neuron controllers to adjust the magnitude of DBS electric signals according to the various severities of PD patients. Our neuromorphic controllers, an on-off LIF controller and a dual LIF controller, reduced the power consumption of CL-DBS systems by 19% and 56%, respectively, while the suppression efficiency increased by 4.7% and 6.77%. Additionally, to address the data scarcity of PD symptoms, we built PD datasets that include the raw neural activities from the subthalamic nucleus at beta oscillations, which are typical physiological biomarkers for PD.
versions: v1 (Thu, 25 Jul 2024 04:10:15 GMT)
update_date: 2024-07-26
authors_parsed: [["Biswas", "Ananna", ""], ["An", "Hongyu", ""]]

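The core dynamics of an LIF neuron controller can be sketched as a discrete-time simulation: the membrane potential leaks toward the input current and a spike is emitted when it crosses threshold. Parameter values here are illustrative, not the paper's:

```python
def simulate_lif(inputs, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron (Euler step):
    dv/dt = (i_in - v) / tau; spike and reset when v >= v_th."""
    v, spikes = 0.0, []
    for i_in in inputs:
        v += (dt / tau) * (i_in - v)    # leaky integration toward the input
        if v >= v_th:
            spikes.append(1)
            v = v_reset                 # reset after firing
        else:
            spikes.append(0)
    return spikes
```

In a controller configuration, the spike rate would be mapped to the stimulation magnitude; spiking only on threshold crossings is what makes the neuromorphic approach energy-efficient.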
id: 2104.07960
submitter: Roman Seidel
authors: Roman Seidel, André Apitzsch, Gangolf Hirtz
title: OmniFlow: Human Omnidirectional Optical Flow
comments: CVPRW 2021: The Second OmniCV Workshop: Omnidirectional Computer Vision in Research and Industry
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Optical flow is the motion of a pixel between at least two consecutive video frames and can be estimated through an end-to-end trainable convolutional neural network. To this end, large training datasets are required to improve the accuracy of optical flow estimation. Our paper presents OmniFlow: a new synthetic omnidirectional human optical flow dataset. Based on a rendering engine, we create a naturalistic 3D indoor environment with textured rooms, characters, actions, objects, illumination, and motion blur, where all components of the environment are shuffled during the data capturing process. The simulation outputs rendered images of household activities and the corresponding forward and backward optical flow. To verify the data for training volumetric correspondence networks for optical flow estimation, we train on different subsets of the data and test on OmniFlow with and without test-time augmentation. In total, we generated 23,653 image pairs with corresponding forward and backward optical flow. Our dataset can be downloaded from: https://mytuc.org/byfs
versions: v1 (Fri, 16 Apr 2021 08:25:20 GMT)
update_date: 2021-04-19
authors_parsed: [["Seidel", "Roman", ""], ["Apitzsch", "André", ""], ["Hirtz", "Gangolf", ""]]

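Flow predictions against a ground-truth dataset like OmniFlow are typically scored with average endpoint error; a minimal sketch (the paper does not state its exact evaluation code, so this is the generic metric):

```python
import math

def endpoint_error(flow_pred, flow_gt):
    """Average endpoint error (EPE): mean Euclidean distance between
    predicted and ground-truth per-pixel flow vectors (u, v)."""
    errs = [math.hypot(pu - gu, pv - gv)
            for (pu, pv), (gu, gv) in zip(flow_pred, flow_gt)]
    return sum(errs) / len(errs)
```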
id: 2010.07494
submitter: Zhiyuan Xu
authors: Zhiyuan Xu, Kun Wu, Zhengping Che, Jian Tang, Jieping Ye
title: Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: While Deep Reinforcement Learning (DRL) has emerged as a promising approach to many complex tasks, it remains challenging to train a single DRL agent that is capable of undertaking multiple different continuous control tasks. In this paper, we present a Knowledge Transfer based Multi-task Deep Reinforcement Learning framework (KTM-DRL) for continuous control, which enables a single DRL agent to achieve expert-level performance in multiple different tasks by learning from task-specific teachers. In KTM-DRL, the multi-task agent first leverages an offline knowledge transfer algorithm designed particularly for the actor-critic architecture to quickly learn a control policy from the experience of task-specific teachers, and then it employs an online learning algorithm to further improve itself by learning from new online transition samples under the guidance of those teachers. We perform a comprehensive empirical study with two commonly-used benchmarks in the MuJoCo continuous control task suite. The experimental results demonstrate the effectiveness of KTM-DRL and its knowledge transfer and online learning algorithms, as well as its superiority over the state-of-the-art by a large margin.
versions: v1 (Thu, 15 Oct 2020 03:26:47 GMT), v2 (Fri, 16 Oct 2020 14:34:32 GMT)
update_date: 2020-10-19
authors_parsed: [["Xu", "Zhiyuan", ""], ["Wu", "Kun", ""], ["Che", "Zhengping", ""], ["Tang", "Jian", ""], ["Ye", "Jieping", ""]]

|
2206.00484
|
Pierre Schumacher
|
Pierre Schumacher, Daniel H\"aufle, Dieter B\"uchler, Syn Schmitt,
Georg Martius
|
DEP-RL: Embodied Exploration for Reinforcement Learning in Overactuated
and Musculoskeletal Systems
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Muscle-actuated organisms are capable of learning an unparalleled diversity
of dexterous movements despite their vast amount of muscles. Reinforcement
learning (RL) on large musculoskeletal models, however, has not been able to
show similar performance. We conjecture that ineffective exploration in large
overactuated action spaces is a key problem. This is supported by the finding
that common exploration noise strategies are inadequate in synthetic examples
of overactuated systems. We identify differential extrinsic plasticity (DEP), a
method from the domain of self-organization, as being able to induce
state-space covering exploration within seconds of interaction. By integrating
DEP into RL, we achieve fast learning of reaching and locomotion in
musculoskeletal systems, outperforming current approaches in all considered
tasks in sample efficiency and robustness.
|
[
{
"created": "Mon, 30 May 2022 15:52:54 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Apr 2023 12:53:39 GMT",
"version": "v2"
}
] |
2023-04-28
|
[
[
"Schumacher",
"Pierre",
""
],
[
"Häufle",
"Daniel",
""
],
[
"Büchler",
"Dieter",
""
],
[
"Schmitt",
"Syn",
""
],
[
"Martius",
"Georg",
""
]
] |
Muscle-actuated organisms are capable of learning an unparalleled diversity of dexterous movements despite their vast amount of muscles. Reinforcement learning (RL) on large musculoskeletal models, however, has not been able to show similar performance. We conjecture that ineffective exploration in large overactuated action spaces is a key problem. This is supported by the finding that common exploration noise strategies are inadequate in synthetic examples of overactuated systems. We identify differential extrinsic plasticity (DEP), a method from the domain of self-organization, as being able to induce state-space covering exploration within seconds of interaction. By integrating DEP into RL, we achieve fast learning of reaching and locomotion in musculoskeletal systems, outperforming current approaches in all considered tasks in sample efficiency and robustness.
|
2006.11108
|
Kai Dresia
|
G\"unther Waxenegger-Wilfing, Kai Dresia, Jan Christian Deeken,
Michael Oschwald
|
A Reinforcement Learning Approach for Transient Control of Liquid Rocket
Engines
| null | null |
10.1109/TAES.2021.3074134
| null |
cs.LG cs.SY eess.SY math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, liquid rocket engines use closed-loop control at most near steady
operating conditions. The control of the transient phases is traditionally
performed in open-loop due to highly nonlinear system dynamics. This situation
is unsatisfactory, in particular for reusable engines. The open-loop control
system cannot provide optimal engine performance due to external disturbances
or the degeneration of engine components over time. In this paper, we study a
deep reinforcement learning approach for optimal control of a generic
gas-generator engine's continuous start-up phase. It is shown that the learned
policy can reach different steady-state operating points and convincingly adapt
to changing system parameters. A quantitative comparison with carefully tuned
open-loop sequences and PID controllers is included. The deep reinforcement
learning controller achieves the highest performance and requires only minimal
computational effort to calculate the control action, which is a big advantage
over approaches that require online optimization, such as model predictive
control.
|
[
{
"created": "Fri, 19 Jun 2020 12:50:18 GMT",
"version": "v1"
}
] |
2021-05-27
|
[
[
"Waxenegger-Wilfing",
"Günther",
""
],
[
"Dresia",
"Kai",
""
],
[
"Deeken",
"Jan Christian",
""
],
[
"Oschwald",
"Michael",
""
]
] |
Nowadays, liquid rocket engines use closed-loop control at most near steady operating conditions. The control of the transient phases is traditionally performed in open-loop due to highly nonlinear system dynamics. This situation is unsatisfactory, in particular for reusable engines. The open-loop control system cannot provide optimal engine performance due to external disturbances or the degeneration of engine components over time. In this paper, we study a deep reinforcement learning approach for optimal control of a generic gas-generator engine's continuous start-up phase. It is shown that the learned policy can reach different steady-state operating points and convincingly adapt to changing system parameters. A quantitative comparison with carefully tuned open-loop sequences and PID controllers is included. The deep reinforcement learning controller achieves the highest performance and requires only minimal computational effort to calculate the control action, which is a big advantage over approaches that require online optimization, such as model predictive control.
|
2305.01400
|
Gabriel Dulac-Arnold
|
Geoffrey Cideron, Baruch Tabanpour, Sebastian Curi, Sertan Girgin,
Leonard Hussenot, Gabriel Dulac-Arnold, Matthieu Geist, Olivier Pietquin,
Robert Dadashi
|
Get Back Here: Robust Imitation by Return-to-Distribution Planning
| null | null | null | null |
cs.RO cs.AI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the Imitation Learning (IL) setup where expert data are not
collected on the actual deployment environment but on a different version. To
address the resulting distribution shift, we combine behavior cloning (BC) with
a planner that is tasked to bring the agent back to states visited by the
expert whenever the agent deviates from the demonstration distribution. The
resulting algorithm, POIR, can be trained offline, and leverages online
interactions to efficiently fine-tune its planner to improve performance over
time. We test POIR on a variety of human-generated manipulation demonstrations
in a realistic robotic manipulation simulator and show robustness of the
learned policy to different initial state distributions and noisy dynamics.
|
[
{
"created": "Tue, 2 May 2023 13:19:08 GMT",
"version": "v1"
}
] |
2023-05-03
|
[
[
"Cideron",
"Geoffrey",
""
],
[
"Tabanpour",
"Baruch",
""
],
[
"Curi",
"Sebastian",
""
],
[
"Girgin",
"Sertan",
""
],
[
"Hussenot",
"Leonard",
""
],
[
"Dulac-Arnold",
"Gabriel",
""
],
[
"Geist",
"Matthieu",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Dadashi",
"Robert",
""
]
] |
We consider the Imitation Learning (IL) setup where expert data are not collected on the actual deployment environment but on a different version. To address the resulting distribution shift, we combine behavior cloning (BC) with a planner that is tasked to bring the agent back to states visited by the expert whenever the agent deviates from the demonstration distribution. The resulting algorithm, POIR, can be trained offline, and leverages online interactions to efficiently fine-tune its planner to improve performance over time. We test POIR on a variety of human-generated manipulation demonstrations in a realistic robotic manipulation simulator and show robustness of the learned policy to different initial state distributions and noisy dynamics.
|
2306.05276
|
Enrico Santus
|
Simone Scaboro, Beatrice Portellia, Emmanuele Chersoni, Enrico Santus,
Giuseppe Serra
|
Extensive Evaluation of Transformer-based Architectures for Adverse Drug
Events Extraction
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Adverse Event (ADE) extraction is one of the core tasks in digital
pharmacovigilance, especially when applied to informal texts. This task has
been addressed by the Natural Language Processing community using large
pre-trained language models, such as BERT. Despite the great number of
Transformer-based architectures used in the literature, it is unclear which of
them has better performances and why. Therefore, in this paper we perform an
extensive evaluation and analysis of 19 Transformer-based models for ADE
extraction on informal texts. We compare the performance of all the considered
models on two datasets with increasing levels of informality (forum posts and
tweets). We also combine the purely Transformer-based models with two
commonly-used additional processing layers (CRF and LSTM), and analyze their
effect on the models' performance. Furthermore, we use a well-established
feature importance technique (SHAP) to correlate the performance of the models
with a set of features that describe them: model category (AutoEncoding,
AutoRegressive, Text-to-Text), pretraining domain, training from scratch, and
model size in number of parameters. At the end of our analyses, we identify a
list of take-home messages that can be derived from the experimental data.
|
[
{
"created": "Thu, 8 Jun 2023 15:25:24 GMT",
"version": "v1"
}
] |
2023-06-09
|
[
[
"Scaboro",
"Simone",
""
],
[
"Portellia",
"Beatrice",
""
],
[
"Chersoni",
"Emmanuele",
""
],
[
"Santus",
"Enrico",
""
],
[
"Serra",
"Giuseppe",
""
]
] |
Adverse Event (ADE) extraction is one of the core tasks in digital pharmacovigilance, especially when applied to informal texts. This task has been addressed by the Natural Language Processing community using large pre-trained language models, such as BERT. Despite the great number of Transformer-based architectures used in the literature, it is unclear which of them has better performances and why. Therefore, in this paper we perform an extensive evaluation and analysis of 19 Transformer-based models for ADE extraction on informal texts. We compare the performance of all the considered models on two datasets with increasing levels of informality (forum posts and tweets). We also combine the purely Transformer-based models with two commonly-used additional processing layers (CRF and LSTM), and analyze their effect on the models' performance. Furthermore, we use a well-established feature importance technique (SHAP) to correlate the performance of the models with a set of features that describe them: model category (AutoEncoding, AutoRegressive, Text-to-Text), pretraining domain, training from scratch, and model size in number of parameters. At the end of our analyses, we identify a list of take-home messages that can be derived from the experimental data.
|
1905.01077
|
Ting Yao
|
Jingwen Chen, Yingwei Pan, Yehao Li, Ting Yao, Hongyang Chao, Tao Mei
|
Temporal Deformable Convolutional Encoder-Decoder Networks for Video
Captioning
|
AAAI 2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well believed that video captioning is a fundamental but challenging
task in both computer vision and artificial intelligence fields. The prevalent
approach is to map an input video to a variable-length output sentence in a
sequence to sequence manner via Recurrent Neural Network (RNN). Nevertheless,
the training of RNN still suffers to some degree from vanishing/exploding
gradient problem, making the optimization difficult. Moreover, the inherently
recurrent dependency in RNN prevents parallelization within a sequence during
training and therefore limits the computations. In this paper, we present a
novel design --- Temporal Deformable Convolutional Encoder-Decoder Networks
(dubbed as TDConvED) that fully employ convolutions in both encoder and decoder
networks for video captioning. Technically, we exploit convolutional block
structures that compute intermediate states of a fixed number of inputs and
stack several blocks to capture long-term relationships. The structure in
encoder is further equipped with temporal deformable convolution to enable
free-form deformation of temporal sampling. Our model also capitalizes on
temporal attention mechanism for sentence generation. Extensive experiments are
conducted on both MSVD and MSR-VTT video captioning datasets, and superior
results are reported when comparing to conventional RNN-based encoder-decoder
techniques. More remarkably, TDConvED increases CIDEr-D performance from 58.8%
to 67.2% on MSVD.
|
[
{
"created": "Fri, 3 May 2019 08:59:10 GMT",
"version": "v1"
}
] |
2019-05-06
|
[
[
"Chen",
"Jingwen",
""
],
[
"Pan",
"Yingwei",
""
],
[
"Li",
"Yehao",
""
],
[
"Yao",
"Ting",
""
],
[
"Chao",
"Hongyang",
""
],
[
"Mei",
"Tao",
""
]
] |
It is well believed that video captioning is a fundamental but challenging task in both computer vision and artificial intelligence fields. The prevalent approach is to map an input video to a variable-length output sentence in a sequence to sequence manner via Recurrent Neural Network (RNN). Nevertheless, the training of RNN still suffers to some degree from vanishing/exploding gradient problem, making the optimization difficult. Moreover, the inherently recurrent dependency in RNN prevents parallelization within a sequence during training and therefore limits the computations. In this paper, we present a novel design --- Temporal Deformable Convolutional Encoder-Decoder Networks (dubbed as TDConvED) that fully employ convolutions in both encoder and decoder networks for video captioning. Technically, we exploit convolutional block structures that compute intermediate states of a fixed number of inputs and stack several blocks to capture long-term relationships. The structure in encoder is further equipped with temporal deformable convolution to enable free-form deformation of temporal sampling. Our model also capitalizes on temporal attention mechanism for sentence generation. Extensive experiments are conducted on both MSVD and MSR-VTT video captioning datasets, and superior results are reported when comparing to conventional RNN-based encoder-decoder techniques. More remarkably, TDConvED increases CIDEr-D performance from 58.8% to 67.2% on MSVD.
|
1411.3698
|
Qingqing Huang
|
Qingqing Huang, Rong Ge, Sham Kakade, Munther Dahleh
|
Minimal Realization Problems for Hidden Markov Models
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consider a stationary discrete random process with alphabet size d, which is
assumed to be the output process of an unknown stationary Hidden Markov Model
(HMM). Given the joint probabilities of finite length strings of the process,
we are interested in finding a finite state generative model to describe the
entire process. In particular, we focus on two classes of models: HMMs and
quasi-HMMs, which is a strictly larger class of models containing HMMs. In the
main theorem, we show that if the random process is generated by an HMM of
order less than or equal to k, and whose transition and observation probability
matrices are in general position, namely almost everywhere on the parameter
space, both the minimal quasi-HMM realization and the minimal HMM realization
can be efficiently computed based on the joint probabilities of all the
length-N strings, for $N > 4\lceil \log_d(k) \rceil + 1$. In this paper, we also aim to
compare and connect the two lines of literature: realization theory of HMMs,
and the recent development in learning latent variable models with tensor
decomposition techniques.
|
[
{
"created": "Thu, 13 Nov 2014 20:30:06 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Dec 2015 19:48:40 GMT",
"version": "v2"
}
] |
2015-12-15
|
[
[
"Huang",
"Qingqing",
""
],
[
"Ge",
"Rong",
""
],
[
"Kakade",
"Sham",
""
],
[
"Dahleh",
"Munther",
""
]
] |
Consider a stationary discrete random process with alphabet size d, which is assumed to be the output process of an unknown stationary Hidden Markov Model (HMM). Given the joint probabilities of finite length strings of the process, we are interested in finding a finite state generative model to describe the entire process. In particular, we focus on two classes of models: HMMs and quasi-HMMs, which is a strictly larger class of models containing HMMs. In the main theorem, we show that if the random process is generated by an HMM of order less than or equal to k, and whose transition and observation probability matrices are in general position, namely almost everywhere on the parameter space, both the minimal quasi-HMM realization and the minimal HMM realization can be efficiently computed based on the joint probabilities of all the length-N strings, for $N > 4\lceil \log_d(k) \rceil + 1$. In this paper, we also aim to compare and connect the two lines of literature: realization theory of HMMs, and the recent development in learning latent variable models with tensor decomposition techniques.
|
2312.08602
|
Mateo Perez
|
Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh
Trivedi, Dominik Wojtczak
|
Omega-Regular Decision Processes
| null | null | null | null |
cs.LO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regular decision processes (RDPs) are a subclass of non-Markovian decision
processes where the transition and reward functions are guarded by some regular
property of the past (a lookback). While RDPs enable intuitive and succinct
representation of non-Markovian decision processes, their expressive power
coincides with finite-state Markov decision processes (MDPs). We introduce
omega-regular decision processes (ODPs) where the non-Markovian aspect of the
transition and reward functions are extended to an omega-regular lookahead over
the system evolution. Semantically, these lookaheads can be considered as
promises made by the decision maker or the learning agent about her future
behavior. In particular, we assume that, if the promised lookaheads are not
met, then the payoff to the decision maker is $\bot$ (least desirable payoff),
overriding any rewards collected by the decision maker. We enable optimization
and learning for ODPs under the discounted-reward objective by reducing them to
lexicographic optimization and learning over finite MDPs. We present
experimental results demonstrating the effectiveness of the proposed reduction.
|
[
{
"created": "Thu, 14 Dec 2023 01:58:51 GMT",
"version": "v1"
}
] |
2023-12-15
|
[
[
"Hahn",
"Ernst Moritz",
""
],
[
"Perez",
"Mateo",
""
],
[
"Schewe",
"Sven",
""
],
[
"Somenzi",
"Fabio",
""
],
[
"Trivedi",
"Ashutosh",
""
],
[
"Wojtczak",
"Dominik",
""
]
] |
Regular decision processes (RDPs) are a subclass of non-Markovian decision processes where the transition and reward functions are guarded by some regular property of the past (a lookback). While RDPs enable intuitive and succinct representation of non-Markovian decision processes, their expressive power coincides with finite-state Markov decision processes (MDPs). We introduce omega-regular decision processes (ODPs) where the non-Markovian aspect of the transition and reward functions are extended to an omega-regular lookahead over the system evolution. Semantically, these lookaheads can be considered as promises made by the decision maker or the learning agent about her future behavior. In particular, we assume that, if the promised lookaheads are not met, then the payoff to the decision maker is $\bot$ (least desirable payoff), overriding any rewards collected by the decision maker. We enable optimization and learning for ODPs under the discounted-reward objective by reducing them to lexicographic optimization and learning over finite MDPs. We present experimental results demonstrating the effectiveness of the proposed reduction.
|
2102.10476
|
Rachitesh Kumar
|
Santiago Balseiro, Christian Kroer, Rachitesh Kumar
|
Contextual Standard Auctions with Budgets: Revenue Equivalence and
Efficiency Guarantees
| null | null | null | null |
cs.GT econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The internet advertising market is a multi-billion dollar industry, in which
advertisers buy thousands of ad placements every day by repeatedly
participating in auctions. An important and ubiquitous feature of these
auctions is the presence of campaign budgets, which specify the maximum amount
the advertisers are willing to pay over a specified time period. In this paper,
we present a new model to study the equilibrium bidding strategies in standard
auctions, a large class of auctions that includes first- and second-price
auctions, for advertisers who satisfy budget constraints on average. Our model
dispenses with the common, yet unrealistic assumption that advertisers' values
are independent and instead assumes a contextual model in which advertisers
determine their values using a common feature vector. We show the existence of
a natural value-pacing-based Bayes-Nash equilibrium under very mild
assumptions. Furthermore, we prove a revenue equivalence showing that all
standard auctions yield the same revenue even in the presence of budget
constraints. Leveraging this equivalence, we prove Price of Anarchy bounds for
liquid welfare and structural properties of pacing-based equilibria that hold
for all standard auctions. In recent years, the internet advertising market has
adopted first-price auctions as the preferred paradigm for selling advertising
slots. Our work thus takes an important step toward understanding the
implications of the shift to first-price auctions in internet advertising
markets by studying how the choice of the selling mechanism impacts revenues,
welfare, and advertisers' bidding strategies.
|
[
{
"created": "Sat, 20 Feb 2021 23:41:25 GMT",
"version": "v1"
},
{
"created": "Wed, 11 May 2022 04:34:45 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Oct 2022 15:52:47 GMT",
"version": "v3"
}
] |
2022-10-11
|
[
[
"Balseiro",
"Santiago",
""
],
[
"Kroer",
"Christian",
""
],
[
"Kumar",
"Rachitesh",
""
]
] |
The internet advertising market is a multi-billion dollar industry, in which advertisers buy thousands of ad placements every day by repeatedly participating in auctions. An important and ubiquitous feature of these auctions is the presence of campaign budgets, which specify the maximum amount the advertisers are willing to pay over a specified time period. In this paper, we present a new model to study the equilibrium bidding strategies in standard auctions, a large class of auctions that includes first- and second-price auctions, for advertisers who satisfy budget constraints on average. Our model dispenses with the common, yet unrealistic assumption that advertisers' values are independent and instead assumes a contextual model in which advertisers determine their values using a common feature vector. We show the existence of a natural value-pacing-based Bayes-Nash equilibrium under very mild assumptions. Furthermore, we prove a revenue equivalence showing that all standard auctions yield the same revenue even in the presence of budget constraints. Leveraging this equivalence, we prove Price of Anarchy bounds for liquid welfare and structural properties of pacing-based equilibria that hold for all standard auctions. In recent years, the internet advertising market has adopted first-price auctions as the preferred paradigm for selling advertising slots. Our work thus takes an important step toward understanding the implications of the shift to first-price auctions in internet advertising markets by studying how the choice of the selling mechanism impacts revenues, welfare, and advertisers' bidding strategies.
|
2304.04672
|
Jizhizi Li
|
Jizhizi Li, Jing Zhang, Dacheng Tao
|
Deep Image Matting: A Comprehensive Survey
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image matting refers to extracting precise alpha matte from natural images,
and it plays a critical role in various downstream applications, such as image
editing. Despite being an ill-posed problem, traditional methods have been
trying to solve it for decades. The emergence of deep learning has
revolutionized the field of image matting and given birth to multiple new
techniques, including automatic, interactive, and referring image matting. This
paper presents a comprehensive review of recent advancements in image matting
in the era of deep learning. We focus on two fundamental sub-tasks: auxiliary
input-based image matting, which involves user-defined input to predict the
alpha matte, and automatic image matting, which generates results without any
manual intervention. We systematically review the existing methods for these
two tasks according to their task settings and network structures and provide a
summary of their advantages and disadvantages. Furthermore, we introduce the
commonly used image matting datasets and evaluate the performance of
representative matting methods both quantitatively and qualitatively. Finally,
we discuss relevant applications of image matting and highlight existing
challenges and potential opportunities for future research. We also maintain a
public repository to track the rapid development of deep image matting at
https://github.com/JizhiziLi/matting-survey.
|
[
{
"created": "Mon, 10 Apr 2023 15:48:55 GMT",
"version": "v1"
}
] |
2023-04-11
|
[
[
"Li",
"Jizhizi",
""
],
[
"Zhang",
"Jing",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Image matting refers to extracting precise alpha matte from natural images, and it plays a critical role in various downstream applications, such as image editing. Despite being an ill-posed problem, traditional methods have been trying to solve it for decades. The emergence of deep learning has revolutionized the field of image matting and given birth to multiple new techniques, including automatic, interactive, and referring image matting. This paper presents a comprehensive review of recent advancements in image matting in the era of deep learning. We focus on two fundamental sub-tasks: auxiliary input-based image matting, which involves user-defined input to predict the alpha matte, and automatic image matting, which generates results without any manual intervention. We systematically review the existing methods for these two tasks according to their task settings and network structures and provide a summary of their advantages and disadvantages. Furthermore, we introduce the commonly used image matting datasets and evaluate the performance of representative matting methods both quantitatively and qualitatively. Finally, we discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research. We also maintain a public repository to track the rapid development of deep image matting at https://github.com/JizhiziLi/matting-survey.
|
2002.12441
|
Heytem Zitoun
|
Heytem Zitoun, Claude Michel, Laurent Michel, Michel Rueher
|
An efficient constraint based framework for handling floating point SMT
problems
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the 2019 version of \us{}, a novel Constraint
Programming framework for floating point verification problems expressed with
the SMT language of SMTLIB. SMT solvers decompose their task by delegating to
specific theories (e.g., floating point, bit vectors, arrays, ...) the task to
reason about combinatorial or otherwise complex constraints for which the SAT
encoding would be cumbersome or ineffective. This decomposition and encoding
processes lead to the obfuscation of the high-level constraints and a loss of
information on the structure of the combinatorial model. In \us{}, constraints
over the floats are first class objects, and the purpose is to expose and
exploit structures of floating point domains to enhance the search process. A
symbolic phase rewrites each SMTLIB instance to elementary constraints, and
eliminates auxiliary variables whose presence is counterproductive. A
diversification technique within the search steers it away from costly
enumerations in unproductive areas of the search space. The empirical
evaluation demonstrates that the 2019 version of \us{} is competitive on
computationally challenging floating point benchmarks that induce significant
search efforts even for other CP solvers. It highlights that the ability to
harness both inference and search is critical. Indeed, it yields a factor 3
improvement over Colibri and is up to 10 times faster than SMT solvers. The
evaluation was conducted over 214 benchmarks (The Griggio suite) which is a
standard within SMTLIB.
|
[
{
"created": "Thu, 27 Feb 2020 21:11:22 GMT",
"version": "v1"
}
] |
2020-03-02
|
[
[
"Zitoun",
"Heytem",
""
],
[
"Michel",
"Claude",
""
],
[
"Michel",
"Laurent",
""
],
[
"Rueher",
"Michel",
""
]
] |
This paper introduces the 2019 version of \us{}, a novel Constraint Programming framework for floating point verification problems expressed with the SMT language of SMTLIB. SMT solvers decompose their task by delegating to specific theories (e.g., floating point, bit vectors, arrays, ...) the task to reason about combinatorial or otherwise complex constraints for which the SAT encoding would be cumbersome or ineffective. This decomposition and encoding processes lead to the obfuscation of the high-level constraints and a loss of information on the structure of the combinatorial model. In \us{}, constraints over the floats are first class objects, and the purpose is to expose and exploit structures of floating point domains to enhance the search process. A symbolic phase rewrites each SMTLIB instance to elementary constraints, and eliminates auxiliary variables whose presence is counterproductive. A diversification technique within the search steers it away from costly enumerations in unproductive areas of the search space. The empirical evaluation demonstrates that the 2019 version of \us{} is competitive on computationally challenging floating point benchmarks that induce significant search efforts even for other CP solvers. It highlights that the ability to harness both inference and search is critical. Indeed, it yields a factor 3 improvement over Colibri and is up to 10 times faster than SMT solvers. The evaluation was conducted over 214 benchmarks (The Griggio suite) which is a standard within SMTLIB.
|
1803.09288
|
Austin Kozlowski
|
Austin C. Kozlowski, Matt Taddy, James A. Evans
|
The Geometry of Culture: Analyzing Meaning through Word Embeddings
| null |
American Sociological Review 2019, Vol. 84(5) 905-949
|
10.1177/0003122419877135
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate the utility of a new methodological tool, neural-network word
embedding models, for large-scale text analysis, revealing how these models
produce richer insights into cultural associations and categories than possible
with prior methods. Word embeddings represent semantic relations between words
as geometric relationships between vectors in a high-dimensional space,
operationalizing a relational model of meaning consistent with contemporary
theories of identity and culture. We show that dimensions induced by word
differences (e.g. man - woman, rich - poor, black - white, liberal -
conservative) in these vector spaces closely correspond to dimensions of
cultural meaning, and the projection of words onto these dimensions reflects
widely shared cultural connotations when compared to surveyed responses and
labeled historical data. We pilot a method for testing the stability of these
associations, then demonstrate applications of word embeddings for
macro-cultural investigation with a longitudinal analysis of the coevolution of
gender and class associations in the United States over the 20th century and a
comparative analysis of historic distinctions between markers of gender and
class in the U.S. and Britain. We argue that the success of these
high-dimensional models motivates a move towards "high-dimensional theorizing"
of meanings, identities and cultural processes.
|
[
{
"created": "Sun, 25 Mar 2018 16:08:06 GMT",
"version": "v1"
}
] |
2019-11-13
|
[
[
"Kozlowski",
"Austin C.",
""
],
[
"Taddy",
"Matt",
""
],
[
"Evans",
"James A.",
""
]
] |
We demonstrate the utility of a new methodological tool, neural-network word embedding models, for large-scale text analysis, revealing how these models produce richer insights into cultural associations and categories than possible with prior methods. Word embeddings represent semantic relations between words as geometric relationships between vectors in a high-dimensional space, operationalizing a relational model of meaning consistent with contemporary theories of identity and culture. We show that dimensions induced by word differences (e.g. man - woman, rich - poor, black - white, liberal - conservative) in these vector spaces closely correspond to dimensions of cultural meaning, and the projection of words onto these dimensions reflects widely shared cultural connotations when compared to surveyed responses and labeled historical data. We pilot a method for testing the stability of these associations, then demonstrate applications of word embeddings for macro-cultural investigation with a longitudinal analysis of the coevolution of gender and class associations in the United States over the 20th century and a comparative analysis of historic distinctions between markers of gender and class in the U.S. and Britain. We argue that the success of these high-dimensional models motivates a move towards "high-dimensional theorizing" of meanings, identities and cultural processes.
|
2405.17369
|
Amin Ahmadi Kasani
|
Amin Ahmadi Kasani, Hedieh Sajedi
|
Predict joint angle of body parts based on sequence pattern recognition
| null |
2022 16th International Conference on Ubiquitous Information
Management and Communication (IMCOM)
|
10.1109/IMCOM53663.2022.9721801
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The way organs are positioned and moved in the workplace can cause pain and
physical harm. Therefore, ergonomists use ergonomic risk assessments based on
visual observation of the workplace, or review pictures and videos taken in the
workplace. Sometimes the workers in the photos are not in perfect condition.
Some parts of the workers' bodies may not be in the camera's field of view,
could be obscured by objects, or by self-occlusion; this is the main problem in
2D human posture recognition. It is difficult to predict the position of body
parts when they are not visible in the image, and geometric mathematical
methods are not entirely suitable for this purpose. Therefore, we created a
dataset with artificial images of a 3D human model, specifically for painful
postures, and real human photos from different viewpoints. Each image we
captured was based on a predefined joint angle for each 3D model or human
model. We created various images, including images where some body parts are
not visible. Nevertheless, the joint angle is estimated beforehand, so we could
study the case by converting the input images into the sequence of joint
connections between predefined body parts and extracting the desired joint
angle with a convolutional neural network. In the end, we obtained root mean
square error (RMSE) of 12.89 and mean absolute error (MAE) of 4.7 on the test
dataset.
|
[
{
"created": "Mon, 27 May 2024 17:24:11 GMT",
"version": "v1"
}
] |
2024-05-28
|
[
[
"Kasani",
"Amin Ahmadi",
""
],
[
"Sajedi",
"Hedieh",
""
]
] |
The way organs are positioned and moved in the workplace can cause pain and physical harm. Therefore, ergonomists use ergonomic risk assessments based on visual observation of the workplace, or review pictures and videos taken in the workplace. Sometimes the workers in the photos are not in perfect condition. Some parts of the workers' bodies may not be in the camera's field of view, could be obscured by objects, or by self-occlusion; this is the main problem in 2D human posture recognition. It is difficult to predict the position of body parts when they are not visible in the image, and geometric mathematical methods are not entirely suitable for this purpose. Therefore, we created a dataset with artificial images of a 3D human model, specifically for painful postures, and real human photos from different viewpoints. Each image we captured was based on a predefined joint angle for each 3D model or human model. We created various images, including images where some body parts are not visible. Nevertheless, the joint angle is estimated beforehand, so we could study the case by converting the input images into the sequence of joint connections between predefined body parts and extracting the desired joint angle with a convolutional neural network. In the end, we obtained root mean square error (RMSE) of 12.89 and mean absolute error (MAE) of 4.7 on the test dataset.
|
1002.2409
|
Rdv Ijcsis
|
Rashid Sheikh, Beerendra Kumar, Durgesh Kumar Mishra
|
Changing Neighbors k Secure Sum Protocol for Secure Multi Party
Computation
|
IEEE format, International Journal of Computer Science and
Information Security, IJCSIS January 2010, ISSN 1947 5500,
http://sites.google.com/site/ijcsis/
|
International Journal of Computer Science and Information
Security, IJCSIS, Vol. 7, No. 1, pp. 239-243, January 2010, USA
| null |
Journal of Computer Science, ISSN 1947 5500
|
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Secure sum computation of private data inputs is an important component of
Secure Multi party Computation (SMC). In this paper we provide a protocol to
compute the sum of individual data inputs with zero probability of data
leakage. In our proposed protocol we break the input of each party into a
number of segments and change the arrangement of the parties such that in each
round of the computation the neighbors are changed. In this protocol it becomes
impossible for semi-honest parties to know the private data of some other
party.
|
[
{
"created": "Thu, 11 Feb 2010 19:58:10 GMT",
"version": "v1"
}
] |
2010-02-12
|
[
[
"Sheikh",
"Rashid",
""
],
[
"Kumar",
"Beerendra",
""
],
[
"Mishra",
"Durgesh Kumar",
""
]
] |
Secure sum computation of private data inputs is an important component of Secure Multi party Computation (SMC). In this paper we provide a protocol to compute the sum of individual data inputs with zero probability of data leakage. In our proposed protocol we break the input of each party into a number of segments and change the arrangement of the parties such that in each round of the computation the neighbors are changed. In this protocol it becomes impossible for semi-honest parties to know the private data of some other party.
|
2009.10050
|
Alan Lundgard
|
Alan Lundgard
|
Measuring justice in machine learning
|
Presented at the ACM Conference on Fairness, Accountability, and
Transparency (30 January 2020) and at the ACM SIGACCESS Conference on
Computers and Accessibility: Workshop on AI Fairness for People with
Disabilities (27 October 2019). Version v2: typos and formatting corrected
| null |
10.1145/3351095.3372838
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we build more just machine learning systems? To answer this question,
we need to know both what justice is and how to tell whether one system is more
or less just than another. That is, we need both a definition and a measure of
justice. Theories of distributive justice hold that justice can be measured (in
part) in terms of the fair distribution of benefits and burdens across people
in society. Recently, the field known as fair machine learning has turned to
John Rawls's theory of distributive justice for inspiration and
operationalization. However, philosophers known as capability theorists have
long argued that Rawls's theory uses the wrong measure of justice, thereby
encoding biases against people with disabilities. If these theorists are right,
is it possible to operationalize Rawls's theory in machine learning systems
without also encoding its biases? In this paper, I draw on examples from fair
machine learning to suggest that the answer to this question is no: the
capability theorists' arguments against Rawls's theory carry over into machine
learning systems. But capability theorists don't only argue that Rawls's theory
uses the wrong measure, they also offer an alternative measure. Which measure
of justice is right? And has fair machine learning been using the wrong one?
|
[
{
"created": "Mon, 21 Sep 2020 17:46:11 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Sep 2020 16:52:42 GMT",
"version": "v2"
}
] |
2020-10-01
|
[
[
"Lundgard",
"Alan",
""
]
] |
How can we build more just machine learning systems? To answer this question, we need to know both what justice is and how to tell whether one system is more or less just than another. That is, we need both a definition and a measure of justice. Theories of distributive justice hold that justice can be measured (in part) in terms of the fair distribution of benefits and burdens across people in society. Recently, the field known as fair machine learning has turned to John Rawls's theory of distributive justice for inspiration and operationalization. However, philosophers known as capability theorists have long argued that Rawls's theory uses the wrong measure of justice, thereby encoding biases against people with disabilities. If these theorists are right, is it possible to operationalize Rawls's theory in machine learning systems without also encoding its biases? In this paper, I draw on examples from fair machine learning to suggest that the answer to this question is no: the capability theorists' arguments against Rawls's theory carry over into machine learning systems. But capability theorists don't only argue that Rawls's theory uses the wrong measure, they also offer an alternative measure. Which measure of justice is right? And has fair machine learning been using the wrong one?
|
2404.19486
|
Mariia Ignashina
|
Mariia Ignashina, Julia Ive
|
Safe Training with Sensitive In-domain Data: Leveraging Data
Fragmentation To Mitigate Linkage Attacks
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Current text generation models are trained using real data which can
potentially contain sensitive information, such as confidential patient
information and the like. Under certain conditions, output of the training data
which they have memorised can be triggered, exposing sensitive data. To
mitigate this risk we propose a safer alternative in which fragmented data, in
the form of domain-specific short phrases randomly grouped together, is shared
instead of full texts. Thus, text fragments that could re-identify an
individual cannot be reproduced by the model in one sequence, giving
significant protection against linkage attacks. We fine-tune several
state-of-the-art LLMs using meaningful syntactic chunks to explore their
utility. In particular, we fine-tune BERT-based models to predict two
cardiovascular diagnoses. Our results demonstrate the capacity of LLMs to
benefit from the pre-trained knowledge and deliver classification results when
fine-tuned with fragmented data comparable to fine-tuning with full training
data.
|
[
{
"created": "Tue, 30 Apr 2024 12:09:55 GMT",
"version": "v1"
}
] |
2024-05-01
|
[
[
"Ignashina",
"Mariia",
""
],
[
"Ive",
"Julia",
""
]
] |
Current text generation models are trained using real data which can potentially contain sensitive information, such as confidential patient information and the like. Under certain conditions, output of the training data which they have memorised can be triggered, exposing sensitive data. To mitigate this risk we propose a safer alternative in which fragmented data, in the form of domain-specific short phrases randomly grouped together, is shared instead of full texts. Thus, text fragments that could re-identify an individual cannot be reproduced by the model in one sequence, giving significant protection against linkage attacks. We fine-tune several state-of-the-art LLMs using meaningful syntactic chunks to explore their utility. In particular, we fine-tune BERT-based models to predict two cardiovascular diagnoses. Our results demonstrate the capacity of LLMs to benefit from the pre-trained knowledge and deliver classification results when fine-tuned with fragmented data comparable to fine-tuning with full training data.
|
2311.02428
|
Rajas Chitale
|
Rajas Chitale, Ankit Vaidya, Aditya Kane, Archana Ghotkar
|
Task Arithmetic with LoRA for Continual Learning
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual learning refers to the problem where the training data is available
in sequential chunks, termed "tasks". The majority of progress in continual
learning has been stunted by the problem of catastrophic forgetting, which is
caused by sequential training of the model on streams of data. Moreover, it
becomes computationally expensive to sequentially train large models multiple
times. To mitigate both of these problems at once, we propose a novel method to
continually train transformer-based vision models using low-rank adaptation and
task arithmetic. Our method completely bypasses the problem of catastrophic
forgetting, as well as reducing the computational requirement for training
models on each task. When aided with a small memory of 10 samples per class,
our method achieves performance close to full-set finetuning. We present
rigorous ablations to support the prowess of our method.
|
[
{
"created": "Sat, 4 Nov 2023 15:12:24 GMT",
"version": "v1"
}
] |
2023-11-07
|
[
[
"Chitale",
"Rajas",
""
],
[
"Vaidya",
"Ankit",
""
],
[
"Kane",
"Aditya",
""
],
[
"Ghotkar",
"Archana",
""
]
] |
Continual learning refers to the problem where the training data is available in sequential chunks, termed "tasks". The majority of progress in continual learning has been stunted by the problem of catastrophic forgetting, which is caused by sequential training of the model on streams of data. Moreover, it becomes computationally expensive to sequentially train large models multiple times. To mitigate both of these problems at once, we propose a novel method to continually train transformer-based vision models using low-rank adaptation and task arithmetic. Our method completely bypasses the problem of catastrophic forgetting, as well as reducing the computational requirement for training models on each task. When aided with a small memory of 10 samples per class, our method achieves performance close to full-set finetuning. We present rigorous ablations to support the prowess of our method.
|
2112.05019
|
Javier Garcia-Bernardo
|
Javier Garcia-Bernardo, Joost Witteman, Marilou Vlaanderen
|
Uncovering the Size of the Illegal Corporate Service Provider Industry
in the Netherlands: a Network Approach
| null | null |
10.1140/epjds/s13688-022-00334-w
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Economic crimes such as money laundering, terrorism financing, tax evasion or
corruption almost invariably involve the use of a corporate entity. Such
entities are regularly incorporated and managed by corporate services providers
(CSPs). Given this potential for enabling economic crime, the CSP industry in
the Netherlands is heavily regulated and CSPs require a license to operate.
Operating without a licence is illegal. In this paper, we estimate the size of
the illegal CSP sector in the Netherlands. For this, we develop a
classification method to detect potentially illegal CSPs based on their
similarity with licensed CSPs. Similarity is computed based on their position
within the network of directors, companies and addresses, and the
characteristics of such entities. We manually annotate a sample of the
potential illegal CSPs and estimate that illegal CSPs constitute 31--51\% of
the total number of CSPs and manage 19--27\% of all companies managed by CSPs.
Our analysis provides a tool to regulators to improve detection and prevention
of economic crime, and can be extended to the estimation of other illegal
activities.
|
[
{
"created": "Thu, 9 Dec 2021 16:27:00 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Mar 2022 18:46:28 GMT",
"version": "v2"
}
] |
2022-04-28
|
[
[
"Garcia-Bernardo",
"Javier",
""
],
[
"Witteman",
"Joost",
""
],
[
"Vlaanderen",
"Marilou",
""
]
] |
Economic crimes such as money laundering, terrorism financing, tax evasion or corruption almost invariably involve the use of a corporate entity. Such entities are regularly incorporated and managed by corporate services providers (CSPs). Given this potential for enabling economic crime, the CSP industry in the Netherlands is heavily regulated and CSPs require a license to operate. Operating without a licence is illegal. In this paper, we estimate the size of the illegal CSP sector in the Netherlands. For this, we develop a classification method to detect potentially illegal CSPs based on their similarity with licensed CSPs. Similarity is computed based on their position within the network of directors, companies and addresses, and the characteristics of such entities. We manually annotate a sample of the potential illegal CSPs and estimate that illegal CSPs constitute 31--51\% of the total number of CSPs and manage 19--27\% of all companies managed by CSPs. Our analysis provides a tool to regulators to improve detection and prevention of economic crime, and can be extended to the estimation of other illegal activities.
|
1510.08963
|
Yusuke Hioka
|
Yusuke Hioka, Kenta Niwa
|
PSD estimation in Beamspace for Estimating Direct-to-Reverberant Ratio
from A Reverberant Speech Signal
|
In Proceedings of the ACE Challenge Workshop - a satellite event of
IEEE-WASPAA2015 (arXiv:1510.00383)
| null | null |
ACEChallenge/2015/04
|
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A method for estimation of direct-to-reverberant ratio (DRR) using a
microphone array is proposed. The proposed method estimates the power spectral
density (PSD) of the direct sound and the reverberation using the algorithm
\textit{PSD estimation in beamspace} with a microphone array and calculates the
DRR of the observed signal. The speech corpus of the ACE (Acoustic
Characterisation of Environments) Challenge was utilised for evaluating the
practical feasibility of the proposed method. The experimental results revealed
that the proposed method was able to effectively estimate the DRR from a
recording of a reverberant speech signal which included various environmental
noise.
|
[
{
"created": "Fri, 30 Oct 2015 03:45:41 GMT",
"version": "v1"
}
] |
2015-11-02
|
[
[
"Hioka",
"Yusuke",
""
],
[
"Niwa",
"Kenta",
""
]
] |
A method for estimation of direct-to-reverberant ratio (DRR) using a microphone array is proposed. The proposed method estimates the power spectral density (PSD) of the direct sound and the reverberation using the algorithm \textit{PSD estimation in beamspace} with a microphone array and calculates the DRR of the observed signal. The speech corpus of the ACE (Acoustic Characterisation of Environments) Challenge was utilised for evaluating the practical feasibility of the proposed method. The experimental results revealed that the proposed method was able to effectively estimate the DRR from a recording of a reverberant speech signal which included various environmental noise.
|
2201.03545
|
Saining Xie
|
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor
Darrell and Saining Xie
|
A ConvNet for the 2020s
|
CVPR 2022; Code: https://github.com/facebookresearch/ConvNeXt
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The "Roaring 20s" of visual recognition began with the introduction of Vision
Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art
image classification model. A vanilla ViT, on the other hand, faces
difficulties when applied to general computer vision tasks such as object
detection and semantic segmentation. It is the hierarchical Transformers (e.g.,
Swin Transformers) that reintroduced several ConvNet priors, making
Transformers practically viable as a generic vision backbone and demonstrating
remarkable performance on a wide variety of vision tasks. However, the
effectiveness of such hybrid approaches is still largely credited to the
intrinsic superiority of Transformers, rather than the inherent inductive
biases of convolutions. In this work, we reexamine the design spaces and test
the limits of what a pure ConvNet can achieve. We gradually "modernize" a
standard ResNet toward the design of a vision Transformer, and discover several
key components that contribute to the performance difference along the way. The
outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt.
Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably
with Transformers in terms of accuracy and scalability, achieving 87.8%
ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection
and ADE20K segmentation, while maintaining the simplicity and efficiency of
standard ConvNets.
|
[
{
"created": "Mon, 10 Jan 2022 18:59:10 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Mar 2022 15:08:16 GMT",
"version": "v2"
}
] |
2022-03-03
|
[
[
"Liu",
"Zhuang",
""
],
[
"Mao",
"Hanzi",
""
],
[
"Wu",
"Chao-Yuan",
""
],
[
"Feichtenhofer",
"Christoph",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Xie",
"Saining",
""
]
] |
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
|
1905.11070
|
Thodoris Sotiropoulos
|
Thodoris Sotiropoulos, Dimitris Mitropoulos, Diomidis Spinellis
|
Detecting Missing Dependencies and Notifiers in Puppet Programs
| null | null |
10.5281/zenodo.4039061
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Puppet is a popular computer system configuration management tool. It
provides abstractions that enable administrators to set up their computer
systems declaratively. Its use suffers from two potential pitfalls. First, if
ordering constraints are not specified whenever an abstraction depends on
another, the non-deterministic application of abstractions can lead to race
conditions. Second, if a service is not tied to its resources through
notification constructs, the system may operate in a stale state whenever a
resource gets modified. Such faults can degrade a computing infrastructure's
availability and functionality.
We have developed an approach that identifies these issues through the
analysis of a Puppet program and its system call trace. Specifically, we
present a formal model for traces, which allows us to capture the interactions
of Puppet abstractions with the file system. By analyzing these interactions we
identify (1) abstractions that are related to each other (e.g., operate on the
same file), and (2) abstractions that should act as notifiers so that changes
are correctly propagated. We then check the relationships from the trace's
analysis against the program's dependency graph: a representation containing
all the ordering constraints and notifications declared in the program. If a
mismatch is detected, our system reports a potential fault.
We have evaluated our method on a large set of Puppet modules, and discovered
57 previously unknown issues in 30 of them. Benchmarking further shows that our
approach can analyze in minutes real-world configurations with a magnitude
measured in thousands of lines and millions of system calls.
|
[
{
"created": "Mon, 27 May 2019 09:18:38 GMT",
"version": "v1"
}
] |
2023-12-05
|
[
[
"Sotiropoulos",
"Thodoris",
""
],
[
"Mitropoulos",
"Dimitris",
""
],
[
"Spinellis",
"Diomidis",
""
]
] |
Puppet is a popular computer system configuration management tool. It provides abstractions that enable administrators to set up their computer systems declaratively. Its use suffers from two potential pitfalls. First, if ordering constraints are not specified whenever an abstraction depends on another, the non-deterministic application of abstractions can lead to race conditions. Second, if a service is not tied to its resources through notification constructs, the system may operate in a stale state whenever a resource gets modified. Such faults can degrade a computing infrastructure's availability and functionality. We have developed an approach that identifies these issues through the analysis of a Puppet program and its system call trace. Specifically, we present a formal model for traces, which allows us to capture the interactions of Puppet abstractions with the file system. By analyzing these interactions we identify (1) abstractions that are related to each other (e.g., operate on the same file), and (2) abstractions that should act as notifiers so that changes are correctly propagated. We then check the relationships from the trace's analysis against the program's dependency graph: a representation containing all the ordering constraints and notifications declared in the program. If a mismatch is detected, our system reports a potential fault. We have evaluated our method on a large set of Puppet modules, and discovered 57 previously unknown issues in 30 of them. Benchmarking further shows that our approach can analyze in minutes real-world configurations with a magnitude measured in thousands of lines and millions of system calls.
|
2312.13537
|
Hai Zhang
|
Hai Zhang, Chunwei Wu, Guitao Cao, Hailing Wang, Wenming Cao
|
HyperEditor: Achieving Both Authenticity and Cross-Domain Capability in
Image Editing via Hypernetworks
|
Accepted by AAAI2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Editing real images authentically while also achieving cross-domain editing
remains a challenge. Recent studies have focused on converting real images into
latent codes and accomplishing image editing by manipulating these codes.
However, merely manipulating the latent codes would constrain the edited images
to the generator's image domain, hindering the attainment of diverse editing
goals. In response, we propose an innovative image editing method called
HyperEditor, which utilizes weight factors generated by hypernetworks to
reassign the weights of the pre-trained StyleGAN2's generator. Guided by CLIP's
cross-modal image-text semantic alignment, this innovative approach enables us
to simultaneously accomplish authentic attribute editing and cross-domain style
transfer, a capability not realized in previous methods. Additionally, we
ascertain that modifying only the weights of specific layers in the generator
can yield an equivalent editing result. Therefore, we introduce an adaptive
layer selector, enabling our hypernetworks to autonomously identify the layers
requiring output weight factors, which can further improve our hypernetworks'
efficiency. Extensive experiments on abundant challenging datasets demonstrate
the effectiveness of our method.
|
[
{
"created": "Thu, 21 Dec 2023 02:39:53 GMT",
"version": "v1"
}
] |
2023-12-22
|
[
[
"Zhang",
"Hai",
""
],
[
"Wu",
"Chunwei",
""
],
[
"Cao",
"Guitao",
""
],
[
"Wang",
"Hailing",
""
],
[
"Cao",
"Wenming",
""
]
] |
Editing real images authentically while also achieving cross-domain editing remains a challenge. Recent studies have focused on converting real images into latent codes and accomplishing image editing by manipulating these codes. However, merely manipulating the latent codes would constrain the edited images to the generator's image domain, hindering the attainment of diverse editing goals. In response, we propose an innovative image editing method called HyperEditor, which utilizes weight factors generated by hypernetworks to reassign the weights of the pre-trained StyleGAN2's generator. Guided by CLIP's cross-modal image-text semantic alignment, this innovative approach enables us to simultaneously accomplish authentic attribute editing and cross-domain style transfer, a capability not realized in previous methods. Additionally, we ascertain that modifying only the weights of specific layers in the generator can yield an equivalent editing result. Therefore, we introduce an adaptive layer selector, enabling our hypernetworks to autonomously identify the layers requiring output weight factors, which can further improve our hypernetworks' efficiency. Extensive experiments on abundant challenging datasets demonstrate the effectiveness of our method.
|
1603.04139
|
Frank Nielsen
|
Junlin Yao and Frank Nielsen
|
SSSC-AM: A Unified Framework for Video Co-Segmentation by Structured
Sparse Subspace Clustering with Appearance and Motion Features
|
19 pages, 6 figures, 5 tables, extend ICIP 2016
|
IEEE International Conference on Image Processing (ICIP), 2016
|
10.1109/ICIP.2016.7533102
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video co-segmentation refers to the task of jointly segmenting common objects
appearing in a given group of videos. In practice, high-dimensional data such
as videos can be conceptually thought of as being drawn from a union of subspaces
corresponding to categories rather than from a smooth manifold. Therefore,
segmenting data into respective subspaces --- subspace clustering --- finds
widespread applications in computer vision, including co-segmentation.
State-of-the-art methods via subspace clustering seek to solve the problem in
two steps:
First, an affinity matrix is built from data, with appearance features or
motion patterns. Second, the data are segmented by applying spectral clustering
to the affinity matrix. However, this process is insufficient to obtain an
optimal solution since it does not take into account the {\em interdependence}
of the affinity matrix with the segmentation. In this work, we present a novel
unified video co-segmentation framework inspired by the recent Structured
Sparse Subspace Clustering ($\mathrm{S^{3}C}$) based on the {\em
self-expressiveness} model. Our method yields more consistent segmentation
results. In order to improve the detectability of motion features with missing
trajectories due to occlusion or tracked points moving out of frames, we add an
extra-dimensional signature to the motion trajectories. Moreover, we
reformulate the $\mathrm{S^{3}C}$ algorithm by adding the affine subspace
constraint in order to make it more suitable to segment rigid motions lying in
affine subspaces of dimension at most $3$. Our experiments on MOViCS dataset
show that our framework achieves the highest overall performance among baseline
algorithms and demonstrates its robustness to heavy noise.
|
[
{
"created": "Mon, 14 Mar 2016 05:36:40 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Sep 2016 22:05:15 GMT",
"version": "v2"
}
] |
2021-04-29
|
[
[
"Yao",
"Junlin",
""
],
[
"Nielsen",
"Frank",
""
]
] |
Video co-segmentation refers to the task of jointly segmenting common objects appearing in a given group of videos. In practice, high-dimensional data such as videos can be conceptually thought of as being drawn from a union of subspaces corresponding to categories rather than from a smooth manifold. Therefore, segmenting data into respective subspaces --- subspace clustering --- finds widespread applications in computer vision, including co-segmentation. State-of-the-art methods via subspace clustering seek to solve the problem in two steps: First, an affinity matrix is built from data, with appearance features or motion patterns. Second, the data are segmented by applying spectral clustering to the affinity matrix. However, this process is insufficient to obtain an optimal solution since it does not take into account the {\em interdependence} of the affinity matrix with the segmentation. In this work, we present a novel unified video co-segmentation framework inspired by the recent Structured Sparse Subspace Clustering ($\mathrm{S^{3}C}$) based on the {\em self-expressiveness} model. Our method yields more consistent segmentation results. In order to improve the detectability of motion features with missing trajectories due to occlusion or tracked points moving out of frames, we add an extra-dimensional signature to the motion trajectories. Moreover, we reformulate the $\mathrm{S^{3}C}$ algorithm by adding the affine subspace constraint in order to make it more suitable to segment rigid motions lying in affine subspaces of dimension at most $3$. Our experiments on MOViCS dataset show that our framework achieves the highest overall performance among baseline algorithms and demonstrates its robustness to heavy noise.
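The second step described in this abstract, spectral clustering of an affinity matrix, can be illustrated with a toy block-diagonal affinity: the multiplicity of the zero eigenvalue of the graph Laplacian reveals the number of subspaces. This is a minimal sketch of plain spectral clustering, not the $\mathrm{S^{3}C}$ solver itself.

```python
import numpy as np

# Two disjoint blocks in the affinity matrix -> two subspaces/clusters.
A = np.zeros((6, 6))
A[:3, :3] = 1.0  # cluster 1 fully connected
A[3:, 3:] = 1.0  # cluster 2 fully connected
np.fill_diagonal(A, 0.0)

# Unnormalized graph Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A
eigvals = np.sort(np.linalg.eigvalsh(L))

# The multiplicity of eigenvalue 0 equals the number of connected
# components, i.e. the number of clusters spectral clustering recovers.
n_clusters = int(np.sum(eigvals < 1e-8))
print(n_clusters)  # 2
```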
|
1702.08423
|
Zhifei Zhang
|
Zhifei Zhang, Yang Song, Hairong Qi
|
Age Progression/Regression by Conditional Adversarial Autoencoder
|
Accepted by The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2017)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
"If I provide you a face image of mine (without telling you the actual age
when I took the picture) and a large amount of face images that I crawled
(containing labeled faces of different ages but not necessarily paired), can
you show me what I would look like when I am 80 or what I was like when I was
5?" The answer is probably a "No." Most existing face aging works attempt to
learn the transformation between age groups and thus would require the paired
samples as well as the labeled query image. In this paper, we look at the
problem from a generative modeling perspective such that no paired samples are
required. In addition, given an unlabeled image, the generative model can
directly produce an image with the desired age attribute. We propose a conditional
adversarial autoencoder (CAAE) that learns a face manifold, traversing on which
smooth age progression and regression can be realized simultaneously. In CAAE,
the face is first mapped to a latent vector through a convolutional encoder,
and then the vector is projected to the face manifold conditional on age
through a deconvolutional generator. The latent vector preserves personalized
face features (i.e., personality) and the age condition controls progression
vs. regression. Two adversarial networks are imposed on the encoder and
generator, respectively, forcing to generate more photo-realistic faces.
Experimental results demonstrate the appealing performance and flexibility of
the proposed framework by comparing with the state-of-the-art and ground truth.
|
[
{
"created": "Mon, 27 Feb 2017 18:28:58 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2017 20:02:15 GMT",
"version": "v2"
}
] |
2017-03-30
|
[
[
"Zhang",
"Zhifei",
""
],
[
"Song",
"Yang",
""
],
[
"Qi",
"Hairong",
""
]
] |
"If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5?" The answer is probably a "No." Most existing face aging works attempt to learn the transformation between age groups and thus would require the paired samples as well as the labeled query image. In this paper, we look at the problem from a generative modeling perspective such that no paired samples are required. In addition, given an unlabeled image, the generative model can directly produce an image with the desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversing on which smooth age progression and regression can be realized simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality) and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and generator, respectively, forcing to generate more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework by comparing with the state-of-the-art and ground truth.
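The conditioning mechanism described in the CAAE abstract (a latent code joined with an age label before generation) can be sketched as follows. The encoder stand-in, dimensions, and one-hot label are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(face, dim=50):
    # Stand-in for the convolutional encoder: project to a latent vector z.
    W = rng.standard_normal((dim, face.size))
    return W @ face.ravel()

def condition_on_age(z, age_group, n_groups=10):
    # CAAE conditions the generator on age; a one-hot label concatenated
    # to the latent code is shown here purely for illustration.
    onehot = np.zeros(n_groups)
    onehot[age_group] = 1.0
    return np.concatenate([z, onehot])

face = rng.standard_normal((32, 32))
z = encode(face)
g_in = condition_on_age(z, age_group=7)  # input to the generator
print(g_in.shape)  # (60,)
```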
|
2107.00997
|
Ginno Mill\'an
|
G. Mill\'an
|
An Algorithm for Flow Control in Computer Networks Based in Discrete
Control Theory
|
4 pages, in Spanish, 4 figures, 1 table, IEEE Latin America
Transactions
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Developing an effective flow control algorithm to avoid congestion is a hot
topic in the computer networking community. This document first gives a
mathematical model for a general network, and then proposes discrete control
theory as a key tool for designing a new flow control algorithm that avoids
congestion in high-speed computer networks; the proposed algorithm ensures the
stability of the network system. Simulation results show that the proposed
method can adjust the sending speed and the queue level in the buffer quickly
and effectively. In addition, the method is easy to implement and apply to
high-speed computer networks.
|
[
{
"created": "Sun, 23 May 2021 21:46:46 GMT",
"version": "v1"
}
] |
2021-07-05
|
[
[
"Millán",
"G.",
""
]
] |
Developing an effective flow control algorithm to avoid congestion is a hot topic in the computer networking community. This document first gives a mathematical model for a general network, and then proposes discrete control theory as a key tool for designing a new flow control algorithm that avoids congestion in high-speed computer networks; the proposed algorithm ensures the stability of the network system. Simulation results show that the proposed method can adjust the sending speed and the queue level in the buffer quickly and effectively. In addition, the method is easy to implement and apply to high-speed computer networks.
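The kind of discrete-time flow control the abstract describes can be sketched as a simple queue-tracking loop. The model and proportional law below are hypothetical stand-ins (the abstract does not specify the controller): the queue evolves as q[k+1] = q[k] + r[k] - c with service rate c, and the sender picks r[k] to drive the queue toward a reference level.

```python
# Discrete-time queue control sketch: a proportional law
# r[k] = c - Kp * (q[k] - q_ref) steers the buffer level toward q_ref.
c, q_ref, Kp = 10.0, 20.0, 0.5  # service rate, target queue, gain
q = 80.0                        # congested starting queue level
for _ in range(40):
    r = max(0.0, c - Kp * (q - q_ref))  # sending rate cannot go negative
    q = max(0.0, q + r - c)             # queue update
print(round(q, 2))  # settles near q_ref
```

With 0 < Kp < 2 the tracking error contracts every step once the rate constraint is inactive, which is the discrete-control stability condition this toy loop illustrates.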
|
1907.02786
|
Kenjiro Kondo
|
Kenjiro Kondo, Akihiko Ishikawa, Masashi Kimura
|
Sequence to Sequence with Attention for Influenza Prevalence Prediction
using Google Trends
|
7 pages, ICCBB2019
| null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Early prediction of the prevalence of influenza reduces its impact. Various
studies have been conducted to predict the number of influenza-infected
people. However, these studies are not highly accurate, especially in the
distant future, such as more than one month ahead. To deal with this problem,
we investigate a sequence-to-sequence (Seq2Seq) model with attention using
Google Trends data to assess and predict the number of influenza-infected
people over the course of multiple weeks. Google Trends data help compensate
for the dark figures not captured in official statistics and improve
prediction accuracy. We demonstrate that the attention mechanism is highly
effective at improving prediction accuracy and achieves state-of-the-art
results, with a Pearson correlation of 0.996 and a root-mean-square error of
0.67. However, the prediction accuracy at the peak of an influenza epidemic is
not yet sufficient, and further investigation is needed to overcome this problem.
|
[
{
"created": "Wed, 3 Jul 2019 08:14:05 GMT",
"version": "v1"
}
] |
2019-07-08
|
[
[
"Kondo",
"Kenjiro",
""
],
[
"Ishikawa",
"Akihiko",
""
],
[
"Kimura",
"Masashi",
""
]
] |
Early prediction of the prevalence of influenza reduces its impact. Various studies have been conducted to predict the number of influenza-infected people. However, these studies are not highly accurate, especially in the distant future, such as more than one month ahead. To deal with this problem, we investigate a sequence-to-sequence (Seq2Seq) model with attention using Google Trends data to assess and predict the number of influenza-infected people over the course of multiple weeks. Google Trends data help compensate for the dark figures not captured in official statistics and improve prediction accuracy. We demonstrate that the attention mechanism is highly effective at improving prediction accuracy and achieves state-of-the-art results, with a Pearson correlation of 0.996 and a root-mean-square error of 0.67. However, the prediction accuracy at the peak of an influenza epidemic is not yet sufficient, and further investigation is needed to overcome this problem.
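The attention mechanism this abstract credits for its accuracy gains can be shown in miniature: the decoder query is compared against the encoder states, and the softmax-weighted sum of their values forms the context vector. The toy dimensions and identity keys are illustrative only.

```python
import numpy as np

def attention(query, keys, values):
    # Dot-product attention as used in Seq2Seq-with-attention decoders:
    # score each encoder state against the query, softmax-normalise,
    # then return the weighted sum of values (the context vector).
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values, weights

keys = np.eye(3)                    # 3 toy encoder states
values = np.array([[1.0], [2.0], [3.0]])
query = np.array([10.0, 0.0, 0.0])  # strongly matches state 0
context, w = attention(query, keys, values)
print(w.argmax())  # attention concentrates on state 0
```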
|
cs/0310019
|
Koskas Michel
|
Michel Koskas
|
A hierarchical Algorithm to Solve the Shortest Path Problem in Valued
Graphs
|
18 pages, 5 figures
| null | null | null |
cs.DS cs.DM
| null |
This paper details a new algorithm to solve the shortest path problem in
valued graphs. Its complexity is $O(D \log v)$, where $D$ is the graph
diameter and $v$ its number of vertices. This complexity is to be compared
with that of Dijkstra's algorithm, which is $O(e\log v)$, where $e$ is the
number of edges of the graph. The new algorithm relies on a hierarchical
representation of the graph using radix trees. The performance of this
algorithm shows a major improvement over that of previously known algorithms.
|
[
{
"created": "Fri, 10 Oct 2003 18:01:25 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Koskas",
"Michel",
""
]
] |
This paper details a new algorithm to solve the shortest path problem in valued graphs. Its complexity is $O(D \log v)$, where $D$ is the graph diameter and $v$ its number of vertices. This complexity is to be compared with that of Dijkstra's algorithm, which is $O(e\log v)$, where $e$ is the number of edges of the graph. The new algorithm relies on a hierarchical representation of the graph using radix trees. The performance of this algorithm shows a major improvement over that of previously known algorithms.
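For reference, the $O(e \log v)$ baseline the abstract compares against is binary-heap Dijkstra, sketched below (the paper's radix-tree algorithm itself is not reproduced here).

```python
import heapq

def dijkstra(adj, src):
    # Classic O(e log v) shortest paths with a binary heap:
    # pop the closest unsettled vertex, relax its outgoing edges.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```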
|
2405.18942
|
Mehran Hosseini
|
Linus Jeary, Tom Kuipers, Mehran Hosseini, Nicola Paoletti
|
Verifiably Robust Conformal Prediction
| null | null | null | null |
cs.LO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Conformal Prediction (CP) is a popular uncertainty quantification method that
provides distribution-free, statistically valid prediction sets, assuming that
training and test data are exchangeable. In such a case, CP's prediction sets
are guaranteed to cover the (unknown) true test output with a user-specified
probability. Nevertheless, this guarantee is violated when the data is
subjected to adversarial attacks, which often result in a significant loss of
coverage. Recently, several approaches have been put forward to recover CP
guarantees in this setting. These approaches leverage variations of randomised
smoothing to produce conservative sets which account for the effect of the
adversarial perturbations. They are, however, limited in that they only support
$\ell^2$-bounded perturbations and classification tasks. This paper introduces
VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages
recent neural network verification methods to recover coverage guarantees under
adversarial attacks. Our VRCP method is the first to support perturbations
bounded by arbitrary norms including $\ell^1$, $\ell^2$, and $\ell^\infty$, as
well as regression tasks. We evaluate and compare our approach on image
classification tasks (CIFAR10, CIFAR100, and TinyImageNet) and regression tasks
for deep reinforcement learning environments. In every case, VRCP achieves
above nominal coverage and yields significantly more efficient and informative
prediction regions than the SotA.
|
[
{
"created": "Wed, 29 May 2024 09:50:43 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 10:43:08 GMT",
"version": "v2"
}
] |
2024-06-07
|
[
[
"Jeary",
"Linus",
""
],
[
"Kuipers",
"Tom",
""
],
[
"Hosseini",
"Mehran",
""
],
[
"Paoletti",
"Nicola",
""
]
] |
Conformal Prediction (CP) is a popular uncertainty quantification method that provides distribution-free, statistically valid prediction sets, assuming that training and test data are exchangeable. In such a case, CP's prediction sets are guaranteed to cover the (unknown) true test output with a user-specified probability. Nevertheless, this guarantee is violated when the data is subjected to adversarial attacks, which often result in a significant loss of coverage. Recently, several approaches have been put forward to recover CP guarantees in this setting. These approaches leverage variations of randomised smoothing to produce conservative sets which account for the effect of the adversarial perturbations. They are, however, limited in that they only support $\ell^2$-bounded perturbations and classification tasks. This paper introduces VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages recent neural network verification methods to recover coverage guarantees under adversarial attacks. Our VRCP method is the first to support perturbations bounded by arbitrary norms including $\ell^1$, $\ell^2$, and $\ell^\infty$, as well as regression tasks. We evaluate and compare our approach on image classification tasks (CIFAR10, CIFAR100, and TinyImageNet) and regression tasks for deep reinforcement learning environments. In every case, VRCP achieves above nominal coverage and yields significantly more efficient and informative prediction regions than the SotA.
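The coverage guarantee that VRCP restores under attack is the one plain split conformal prediction gives on clean, exchangeable data, sketched here for regression with a toy model (the verification step that makes VRCP robust is not shown).

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(x):
    # Toy pretrained regressor (assumed given).
    return 2.0 * x

# Calibration data drawn exchangeably with the test data.
x_cal = rng.uniform(0, 1, 500)
y_cal = 2.0 * x_cal + rng.normal(0, 0.1, 500)
alpha = 0.1  # target miscoverage

# Nonconformity scores and the finite-sample-corrected quantile.
scores = np.abs(y_cal - predict(x_cal))
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# Prediction set [predict(x) - q, predict(x) + q] covers >= 1 - alpha.
x_test = rng.uniform(0, 1, 2000)
y_test = 2.0 * x_test + rng.normal(0, 0.1, 2000)
covered = np.abs(y_test - predict(x_test)) <= q
print(round(float(covered.mean()), 2))  # close to 0.9
```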
|
2202.05932
|
Yu Zhang
|
Yu Zhang, Zhihong Shen, Chieh-Han Wu, Boya Xie, Junheng Hao, Ye-Yi
Wang, Kuansan Wang, Jiawei Han
|
Metadata-Induced Contrastive Learning for Zero-Shot Multi-Label Text
Classification
|
12 pages; Accepted to WWW 2022
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale multi-label text classification (LMTC) aims to associate a
document with its relevant labels from a large candidate set. Most existing
LMTC approaches rely on massive human-annotated training data, which are often
costly to obtain and suffer from a long-tailed label distribution (i.e., many
labels occur only a few times in the training set). In this paper, we study
LMTC under the zero-shot setting, which does not require any annotated
documents with labels and only relies on label surface names and descriptions.
To train a classifier that calculates the similarity score between a document
and a label, we propose a novel metadata-induced contrastive learning (MICoL)
method. Different from previous text-based contrastive learning techniques,
MICoL exploits document metadata (e.g., authors, venues, and references of
research papers), which are widely available on the Web, to derive similar
document-document pairs. Experimental results on two large-scale datasets show
that: (1) MICoL significantly outperforms strong zero-shot text classification
and contrastive learning baselines; (2) MICoL is on par with the
state-of-the-art supervised metadata-aware LMTC method trained on 10K-200K
labeled documents; and (3) MICoL tends to predict more infrequent labels than
supervised methods, thus alleviating the deteriorated performance on long-tailed
labels.
|
[
{
"created": "Fri, 11 Feb 2022 23:22:17 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Mar 2022 22:34:41 GMT",
"version": "v2"
}
] |
2023-10-24
|
[
[
"Zhang",
"Yu",
""
],
[
"Shen",
"Zhihong",
""
],
[
"Wu",
"Chieh-Han",
""
],
[
"Xie",
"Boya",
""
],
[
"Hao",
"Junheng",
""
],
[
"Wang",
"Ye-Yi",
""
],
[
"Wang",
"Kuansan",
""
],
[
"Han",
"Jiawei",
""
]
] |
Large-scale multi-label text classification (LMTC) aims to associate a document with its relevant labels from a large candidate set. Most existing LMTC approaches rely on massive human-annotated training data, which are often costly to obtain and suffer from a long-tailed label distribution (i.e., many labels occur only a few times in the training set). In this paper, we study LMTC under the zero-shot setting, which does not require any annotated documents with labels and only relies on label surface names and descriptions. To train a classifier that calculates the similarity score between a document and a label, we propose a novel metadata-induced contrastive learning (MICoL) method. Different from previous text-based contrastive learning techniques, MICoL exploits document metadata (e.g., authors, venues, and references of research papers), which are widely available on the Web, to derive similar document-document pairs. Experimental results on two large-scale datasets show that: (1) MICoL significantly outperforms strong zero-shot text classification and contrastive learning baselines; (2) MICoL is on par with the state-of-the-art supervised metadata-aware LMTC method trained on 10K-200K labeled documents; and (3) MICoL tends to predict more infrequent labels than supervised methods, thus alleviating the deteriorated performance on long-tailed labels.
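The contrastive objective underlying methods like MICoL can be sketched with an InfoNCE-style loss: the metadata-derived positive document is pulled toward the anchor while negatives are pushed away. The embeddings, temperature, and similarity choice below are illustrative assumptions, not MICoL's exact formulation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    # InfoNCE: cross-entropy over cosine similarities, with the
    # positive pair treated as the true class.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / tau
    # Numerically stable log-softmax of the positive's logit.
    logp = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
    return -logp[0]

rng = np.random.default_rng(0)
anchor = rng.standard_normal(8)
negs = [rng.standard_normal(8) for _ in range(5)]
loss_good = info_nce(anchor, anchor.copy(), negs)  # well-aligned positive
loss_bad = info_nce(anchor, -anchor, negs)         # anti-aligned positive
print(loss_good < loss_bad)  # True
```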
|
1501.04056
|
Mazen Alamir Prof
|
Mazen Alamir
|
NLP Solutions as Asymptotic Values of ODE Trajectories
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, it is shown that the solutions of general differentiable
constrained optimization problems can be viewed as asymptotic solutions to sets
of Ordinary Differential Equations (ODEs). The construction of the ODE
associated to the optimization problem is based on an exact penalty formulation
in which the weighting parameter dynamics is coordinated with that of the
decision variable so that there is no need to solve a sequence of optimization
problems; instead, a single ODE has to be solved using available efficient
methods. Examples are given in order to illustrate the results. This includes a
novel systematic approach to solve combinatorial optimization problems as well
as fast computation of a class of optimization problems using analogic circuits
leading to fast, parallel and highly scalable solutions.
|
[
{
"created": "Fri, 16 Jan 2015 17:16:10 GMT",
"version": "v1"
}
] |
2015-01-19
|
[
[
"Alamir",
"Mazen",
""
]
] |
In this paper, it is shown that the solutions of general differentiable constrained optimization problems can be viewed as asymptotic solutions to sets of Ordinary Differential Equations (ODEs). The construction of the ODE associated to the optimization problem is based on an exact penalty formulation in which the weighting parameter dynamics is coordinated with that of the decision variable so that there is no need to solve a sequence of optimization problems; instead, a single ODE has to be solved using available efficient methods. Examples are given in order to illustrate the results. This includes a novel systematic approach to solve combinatorial optimization problems as well as fast computation of a class of optimization problems using analogic circuits leading to fast, parallel and highly scalable solutions.
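The core idea of the abstract, that a constrained minimiser is the asymptotic value of an ODE trajectory, can be illustrated with an Euler-integrated gradient flow on a penalised objective. The toy problem and fixed penalty weight below are illustrative; the paper co-evolves the weighting parameter with the decision variable.

```python
# Gradient flow x'(t) = -grad F(x) for F(x) = (x-3)^2 + mu*max(0, x-1)^2,
# i.e. minimise (x-3)^2 subject to x <= 1 via a quadratic penalty.
mu = 100.0    # penalty weight (fixed here for simplicity)
x, h = 0.0, 1e-3  # initial point and Euler step size
for _ in range(20000):
    grad = 2 * (x - 3) + (2 * mu * (x - 1) if x > 1 else 0.0)
    x -= h * grad
print(round(x, 2))  # near the penalised optimum 206/202 ~ 1.02
```

As mu grows, the trajectory's limit approaches the true constrained solution x = 1, which is the asymptotic-value viewpoint the paper formalises.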
|
2406.15920
|
Jialang Xu
|
Jialang Xu, Nazir Sirajudeen, Matthew Boal, Nader Francis, Danail
Stoyanov, Evangelos Mazomenos
|
SEDMamba: Enhancing Selective State Space Modelling with Bottleneck
Mechanism and Fine-to-Coarse Temporal Fusion for Efficient Error Detection in
Robot-Assisted Surgery
|
8 pages
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated detection of surgical errors can improve robotic-assisted surgery.
Despite promising progress, existing methods still face challenges in capturing
rich temporal context to establish long-term dependencies while maintaining
computational efficiency. In this paper, we propose a novel hierarchical model
named SEDMamba, which incorporates the selective state space model (SSM) into
surgical error detection, facilitating efficient long sequence modelling with
linear complexity. SEDMamba enhances selective SSM with bottleneck mechanism
and fine-to-coarse temporal fusion (FCTF) to detect and temporally localize
surgical errors in long videos. The bottleneck mechanism compresses and
restores features within their spatial dimension, thereby reducing
computational complexity. FCTF utilizes multiple dilated 1D convolutional
layers to merge temporal information across diverse scale ranges, accommodating
errors of varying durations. In addition, we deploy an established observational
clinical human reliability assessment tool (OCHRA) to annotate the errors of
suturing tasks in an open-source radical prostatectomy dataset (SAR-RARP50),
constructing the first frame-level in-vivo surgical error detection dataset to
support error detection in real-world scenarios. Experimental results
demonstrate that our SEDMamba outperforms state-of-the-art methods with at
least 1.82% AUC and 3.80% AP performance gain with significantly reduced
computational complexity.
|
[
{
"created": "Sat, 22 Jun 2024 19:20:35 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Xu",
"Jialang",
""
],
[
"Sirajudeen",
"Nazir",
""
],
[
"Boal",
"Matthew",
""
],
[
"Francis",
"Nader",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Mazomenos",
"Evangelos",
""
]
] |
Automated detection of surgical errors can improve robotic-assisted surgery. Despite promising progress, existing methods still face challenges in capturing rich temporal context to establish long-term dependencies while maintaining computational efficiency. In this paper, we propose a novel hierarchical model named SEDMamba, which incorporates the selective state space model (SSM) into surgical error detection, facilitating efficient long sequence modelling with linear complexity. SEDMamba enhances selective SSM with bottleneck mechanism and fine-to-coarse temporal fusion (FCTF) to detect and temporally localize surgical errors in long videos. The bottleneck mechanism compresses and restores features within their spatial dimension, thereby reducing computational complexity. FCTF utilizes multiple dilated 1D convolutional layers to merge temporal information across diverse scale ranges, accommodating errors of varying durations. In addition, we deploy an established observational clinical human reliability assessment tool (OCHRA) to annotate the errors of suturing tasks in an open-source radical prostatectomy dataset (SAR-RARP50), constructing the first frame-level in-vivo surgical error detection dataset to support error detection in real-world scenarios. Experimental results demonstrate that our SEDMamba outperforms state-of-the-art methods with at least 1.82% AUC and 3.80% AP performance gain with significantly reduced computational complexity.
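The receptive-field arithmetic behind dilated 1D convolutions explains why fine-to-coarse temporal fusion can cover errors of varying durations: stacking layers with growing dilation covers exponentially long ranges with a linear number of layers. Kernel size 3 and doubling dilations below are a common convention, not SEDMamba's stated configuration.

```python
def receptive_field(kernel=3, dilations=(1, 2, 4, 8)):
    # Each dilated conv layer adds (kernel - 1) * dilation frames
    # to the temporal receptive field of the stack.
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

print(receptive_field())  # frames visible to the top layer
```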
|
1505.05800
|
Amit Daniely
|
Amit Daniely
|
Complexity Theoretic Limitations on Learning Halfspaces
| null | null | null | null |
cs.CC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of agnostically learning halfspaces which is defined by
a fixed but unknown distribution $\mathcal{D}$ on $\mathbb{Q}^n\times \{\pm
1\}$. We define $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$ as the least error
of a halfspace classifier for $\mathcal{D}$. A learner who can access
$\mathcal{D}$ has to return a hypothesis whose error is small compared to
$\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$.
Using the recently developed method of the author, Linial and Shalev-Shwartz
we prove hardness of learning results under a natural assumption on the
complexity of refuting random $K$-$\mathrm{XOR}$ formulas. We show that no
efficient learning algorithm has non-trivial worst-case performance even under
the guarantees that $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D}) \le \eta$ for
arbitrarily small constant $\eta>0$, and that $\mathcal{D}$ is supported in
$\{\pm 1\}^n\times \{\pm 1\}$. Namely, even under these favorable conditions
its error must be $\ge \frac{1}{2}-\frac{1}{n^c}$ for every $c>0$. In
particular, no efficient algorithm can achieve a constant approximation ratio.
Under a stronger version of the assumption (where $K$ can be poly-logarithmic
in $n$), we can take $\eta = 2^{-\log^{1-\nu}(n)}$ for arbitrarily small
$\nu>0$. Interestingly, this is even stronger than the best known lower bounds
(Arora et al. 1993, Feldman et al. 2006, Guruswami and Raghavendra 2006) for
the case that the learner is restricted to return a halfspace classifier (i.e.
proper learning).
|
[
{
"created": "Thu, 21 May 2015 17:30:54 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Mar 2016 06:23:35 GMT",
"version": "v2"
}
] |
2016-03-15
|
[
[
"Daniely",
"Amit",
""
]
] |
We study the problem of agnostically learning halfspaces which is defined by a fixed but unknown distribution $\mathcal{D}$ on $\mathbb{Q}^n\times \{\pm 1\}$. We define $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$ as the least error of a halfspace classifier for $\mathcal{D}$. A learner who can access $\mathcal{D}$ has to return a hypothesis whose error is small compared to $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$. Using the recently developed method of the author, Linial and Shalev-Shwartz we prove hardness of learning results under a natural assumption on the complexity of refuting random $K$-$\mathrm{XOR}$ formulas. We show that no efficient learning algorithm has non-trivial worst-case performance even under the guarantees that $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D}) \le \eta$ for arbitrarily small constant $\eta>0$, and that $\mathcal{D}$ is supported in $\{\pm 1\}^n\times \{\pm 1\}$. Namely, even under these favorable conditions its error must be $\ge \frac{1}{2}-\frac{1}{n^c}$ for every $c>0$. In particular, no efficient algorithm can achieve a constant approximation ratio. Under a stronger version of the assumption (where $K$ can be poly-logarithmic in $n$), we can take $\eta = 2^{-\log^{1-\nu}(n)}$ for arbitrarily small $\nu>0$. Interestingly, this is even stronger than the best known lower bounds (Arora et al. 1993, Feldman et al. 2006, Guruswami and Raghavendra 2006) for the case that the learner is restricted to return a halfspace classifier (i.e. proper learning).
|
2402.08771
|
Andrej Rode
|
Frederik Ritter, Andrej Rode, Laurent Schmalen
|
Introducing RSESS: An Open Source Enumerative Sphere Shaping
Implementation Coded in Rust
|
Accepted for presentation at the 13th GNU Radio conference (GRCon)
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an open-source implementation of the enumerative
sphere shaping (ESS) algorithm used for probabilistic constellation shaping
(PCS). PCS aims at closing the shaping gap caused by using uniformly
distributed modulation symbols in channels for which information theory shows
non-uniformly distributed signaling to be optimal. ESS is one such PCS
algorithm that sets itself apart as it operates on a trellis representation of
a subset of the possible symbol sequences. ESS leads to an empirical
distribution of the symbols that closely approximates the optimal distribution
for the additive white Gaussian noise (AWGN) channel. We provide an open-source
implementation of this algorithm in the compiled language Rust, as well as
Python bindings with which our Rust code can be called in a regular Python
script. We also compare simulation results on the AWGN channel using our
implementation with previous works on this topic.
|
[
{
"created": "Tue, 13 Feb 2024 20:06:16 GMT",
"version": "v1"
}
] |
2024-02-15
|
[
[
"Ritter",
"Frederik",
""
],
[
"Rode",
"Andrej",
""
],
[
"Schmalen",
"Laurent",
""
]
] |
In this work, we present an open-source implementation of the enumerative sphere shaping (ESS) algorithm used for probabilistic constellation shaping (PCS). PCS aims at closing the shaping gap caused by using uniformly distributed modulation symbols in channels for which information theory shows non-uniformly distributed signaling to be optimal. ESS is one such PCS algorithm that sets itself apart as it operates on a trellis representation of a subset of the possible symbol sequences. ESS leads to an empirical distribution of the symbols that closely approximates the optimal distribution for the additive white Gaussian noise (AWGN) channel. We provide an open-source implementation of this algorithm in the compiled language Rust, as well as Python bindings with which our Rust code can be called in a regular Python script. We also compare simulation results on the AWGN channel using our implementation with previous works on this topic.
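The shaping set that ESS indexes, all amplitude sequences whose energy stays under a bound, can be enumerated by brute force for a toy case. ESS replaces this exhaustive search with trellis-based indexing; the alphabet, length, and energy cap below are arbitrary illustrative choices.

```python
from itertools import product

# Sphere-shaping set: amplitude sequences with bounded total energy.
amplitudes = (1, 3, 5, 7)  # toy ASK amplitude alphabet
length, e_max = 3, 27      # sequence length and energy cap
seqs = [s for s in product(amplitudes, repeat=length)
        if sum(a * a for a in s) <= e_max]
print(len(seqs))  # size of the shaping set, i.e. log2 of it = shaping rate
```

Low-energy sequences dominate the set, which is what skews the empirical amplitude distribution toward the Gaussian-optimal shape mentioned in the abstract.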
|
1503.03400
|
Karthik Gopalakrishnan
|
Dhruv Chand, Karthik Gopalakrishnan, Nisha KK, Mudit Sinha, Shreya
Sriram
|
Get 'em Moles! : Learning Spelling and Pronunciation through an
Educational Game
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Get 'em Moles! is a single-player educational game inspired by the classic
arcade game Whac-A-Mole. Primarily designed for touchscreen devices, Get 'em
Moles! aims to teach English spelling and pronunciation through engaging game
play. This paper describes the game, design decisions in the form of elements
that support learning, preliminary play-testing results, and future work.
|
[
{
"created": "Wed, 11 Mar 2015 16:14:16 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Aug 2015 13:33:54 GMT",
"version": "v2"
}
] |
2015-09-01
|
[
[
"Chand",
"Dhruv",
""
],
[
"Gopalakrishnan",
"Karthik",
""
],
[
"KK",
"Nisha",
""
],
[
"Sinha",
"Mudit",
""
],
[
"Sriram",
"Shreya",
""
]
] |
Get 'em Moles! is a single-player educational game inspired by the classic arcade game Whac-A-Mole. Primarily designed for touchscreen devices, Get 'em Moles! aims to teach English spelling and pronunciation through engaging game play. This paper describes the game, design decisions in the form of elements that support learning, preliminary play-testing results, and future work.
|
cs/0607008
|
Jean-Sebastien Sereni
|
Fr\'ed\'eric Havet (INRIA Sophia Antipolis), Jean-S\'ebastien Sereni
 (INRIA Sophia Antipolis), Riste Skrekovski
|
3-facial colouring of plane graphs
| null | null | null | null |
cs.DM
| null |
A plane graph is l-facially k-colourable if its vertices can be coloured with
k colours such that any two distinct vertices on a facial segment of length at
most l are coloured differently. We prove that every plane graph is 3-facially
11-colourable. As a consequence, we derive that every 2-connected plane graph
with maximum face-size at most 7 is cyclically 11-colourable. These two bounds
are one off from those proposed by the (3l+1)-Conjecture and the Cyclic
Conjecture.
|
[
{
"created": "Mon, 3 Jul 2006 06:38:48 GMT",
"version": "v1"
}
] |
2016-08-16
|
[
[
"Havet",
    "Frédéric",
"",
"INRIA Sophia Antipolis"
],
[
"Sereni",
"Jean-Sébastien",
"",
"INRIA Sophia Antipolis"
],
[
"Skrekovski",
"Riste",
""
]
] |
A plane graph is l-facially k-colourable if its vertices can be coloured with k colours such that any two distinct vertices on a facial segment of length at most l are coloured differently. We prove that every plane graph is 3-facially 11-colourable. As a consequence, we derive that every 2-connected plane graph with maximum face-size at most 7 is cyclically 11-colourable. These two bounds are one off from those proposed by the (3l+1)-Conjecture and the Cyclic Conjecture.
|
2402.14201
|
Mohit Garg
|
Mohit Garg, Debajyoti Kar, Arindam Khan
|
Random-Order Online Independent Set of Intervals and Hyperrectangles
|
31 pages, Full version of ESA 2024 paper
| null | null | null |
cs.DS cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the Maximum Independent Set of Hyperrectangles problem, we are given a set
of $n$ (possibly overlapping) $d$-dimensional axis-aligned hyperrectangles, and
the goal is to find a subset of non-overlapping hyperrectangles of maximum
cardinality. For $d=1$, this corresponds to the classical Interval Scheduling
problem, where a simple greedy algorithm returns an optimal solution. In the
offline setting, for $d$-dimensional hyperrectangles, polynomial time $(\log
n)^{O(d)}$-approximation algorithms are known. However, the problem becomes
notably challenging in the online setting, where the input objects
(hyperrectangles) appear one by one in an adversarial order, and on the arrival
of an object, the algorithm needs to make an immediate and irrevocable decision
whether or not to select the object while maintaining the feasibility. Even for
interval scheduling, an $\Omega(n)$ lower bound is known on the competitive
ratio.
To circumvent these negative results, in this work, we study the online
maximum independent set of axis-aligned hyperrectangles in the random-order
arrival model, where the adversary specifies the set of input objects which
then arrive in a uniformly random order. Starting from the prototypical
secretary problem, the random-order model has received significant attention to
study algorithms beyond the worst-case competitive analysis. Surprisingly, we
show that the problem in the random-order model almost matches the best-known
offline approximation guarantees, up to polylogarithmic factors. In particular,
we give a simple $(\log n)^{O(d)}$-competitive algorithm for $d$-dimensional
hyperrectangles in this model, which runs in $\tilde{O_d}(n)$ time. Our
approach also yields $(\log n)^{O(d)}$-competitive algorithms in the
random-order model for more general objects such as $d$-dimensional fat objects
and ellipsoids. Furthermore, our guarantees hold with high probability.
|
[
{
"created": "Thu, 22 Feb 2024 01:04:18 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jun 2024 18:10:53 GMT",
"version": "v2"
}
] |
2024-06-28
|
[
[
"Garg",
"Mohit",
""
],
[
"Kar",
"Debajyoti",
""
],
[
"Khan",
"Arindam",
""
]
] |
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of $n$ (possibly overlapping) $d$-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For $d=1$, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, for $d$-dimensional hyperrectangles, polynomial time $(\log n)^{O(d)}$-approximation algorithms are known. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm needs to make an immediate and irrevocable decision whether or not to select the object while maintaining the feasibility. Even for interval scheduling, an $\Omega(n)$ lower bound is known on the competitive ratio. To circumvent these negative results, in this work, we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention to study algorithms beyond the worst-case competitive analysis. Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple $(\log n)^{O(d)}$-competitive algorithm for $d$-dimensional hyperrectangles in this model, which runs in $\tilde{O_d}(n)$ time. Our approach also yields $(\log n)^{O(d)}$-competitive algorithms in the random-order model for more general objects such as $d$-dimensional fat objects and ellipsoids. Furthermore, our guarantees hold with high probability.
|
2103.08079
|
Katie Seaborn
|
Katie Seaborn, Peter Pennefather, Norihisa P. Miyake, Mihoko
Otake-Matsuura
|
Crossing the Tepper Line: An Emerging Ontology for Describing the
Dynamic Sociality of Embodied AI
|
Accepted at CHI EA '21
| null |
10.1145/3411763.3451783
| null |
cs.HC cs.AI cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Artificial intelligences (AI) are increasingly being embodied and embedded in
the world to carry out tasks and support decision-making with and for people.
Robots, recommender systems, voice assistants, virtual humans - do these
disparate types of embodied AI have something in common? Here we show how they
can manifest as "socially embodied AI." We define this as the state that
embodied AI "circumstantially" take on within interactive contexts when
perceived as both social and agentic by people. We offer a working ontology
that describes how embodied AI can dynamically transition into socially
embodied AI. We propose an ontological heuristic for describing the threshold:
the Tepper line. We reinforce our theoretical work with expert insights from a
card sort workshop. We end with two case studies to illustrate the dynamic and
contextual nature of this heuristic.
|
[
{
"created": "Mon, 15 Mar 2021 00:45:44 GMT",
"version": "v1"
}
] |
2021-03-16
|
[
[
"Seaborn",
"Katie",
""
],
[
"Pennefather",
"Peter",
""
],
[
"Miyake",
"Norihisa P.",
""
],
[
"Otake-Matsuura",
"Mihoko",
""
]
] |
Artificial intelligences (AI) are increasingly being embodied and embedded in the world to carry out tasks and support decision-making with and for people. Robots, recommender systems, voice assistants, virtual humans - do these disparate types of embodied AI have something in common? Here we show how they can manifest as "socially embodied AI." We define this as the state that embodied AI "circumstantially" take on within interactive contexts when perceived as both social and agentic by people. We offer a working ontology that describes how embodied AI can dynamically transition into socially embodied AI. We propose an ontological heuristic for describing the threshold: the Tepper line. We reinforce our theoretical work with expert insights from a card sort workshop. We end with two case studies to illustrate the dynamic and contextual nature of this heuristic.
|
2106.01808
|
Natanael Spisak
|
Giulio Isacchini, Natanael Spisak, Armita Nourmohammad, Thierry Mora,
Aleksandra M. Walczak
|
MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories
| null | null |
10.1103/PhysRevE.105.055309
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simulation-based inference enables learning the parameters of a model even
when its likelihood cannot be computed in practice. One class of methods uses
data simulated with different parameters to infer models of the
likelihood-to-evidence ratio, or equivalently the posterior function. Here we
frame the inference task as an estimation of an energy function parametrized
with an artificial neural network. We present an intuitive approach where the
optimal model of the likelihood-to-evidence ratio is found by maximizing the
likelihood of simulated data. Within this framework, the connection between the
task of simulation-based inference and mutual information maximization is
clear, and we show how several known methods of posterior estimation relate to
alternative lower bounds to mutual information. These distinct objective
functions aim at the same optimal energy form and therefore can be directly
benchmarked. We compare their accuracy in the inference of model parameters,
focusing on four dynamical systems that encompass common challenges in time
series analysis: dynamics driven by multiplicative noise, nonlinear
interactions, chaotic behavior, and high-dimensional parameter space.
|
[
{
"created": "Thu, 3 Jun 2021 12:59:16 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Dec 2021 15:44:20 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Apr 2022 16:59:38 GMT",
"version": "v3"
}
] |
2022-06-08
|
[
[
"Isacchini",
"Giulio",
""
],
[
"Spisak",
"Natanael",
""
],
[
"Nourmohammad",
"Armita",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
]
] |
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice. One class of methods uses data simulated with different parameters to infer models of the likelihood-to-evidence ratio, or equivalently the posterior function. Here we frame the inference task as an estimation of an energy function parametrized with an artificial neural network. We present an intuitive approach where the optimal model of the likelihood-to-evidence ratio is found by maximizing the likelihood of simulated data. Within this framework, the connection between the task of simulation-based inference and mutual information maximization is clear, and we show how several known methods of posterior estimation relate to alternative lower bounds to mutual information. These distinct objective functions aim at the same optimal energy form and therefore can be directly benchmarked. We compare their accuracy in the inference of model parameters, focusing on four dynamical systems that encompass common challenges in time series analysis: dynamics driven by multiplicative noise, nonlinear interactions, chaotic behavior, and high-dimensional parameter space.
|
2206.12725
|
Gavin S. Hartnett
|
Gavin S. Hartnett, Li Ang Zhang, Caolionn O'Connell, Andrew J. Lohn,
Jair Aguirre
|
Empirical Evaluation of Physical Adversarial Patch Attacks Against
Overhead Object Detection Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial patches are images designed to fool otherwise well-performing
neural network-based computer vision models. Although these attacks were
initially conceived of and studied digitally, in that the raw pixel values of
the image were perturbed, recent work has demonstrated that these attacks can
successfully transfer to the physical world. This can be accomplished by
printing out the patch and adding it into scenes of newly captured images or
video footage. In this work we further test the efficacy of adversarial patch
attacks in the physical world under more challenging conditions. We consider
object detection models trained on overhead imagery acquired through aerial or
satellite cameras, and we test physical adversarial patches inserted into
scenes of a desert environment. Our main finding is that it is far more
difficult to successfully implement the adversarial patch attacks under these
conditions than in the previously considered conditions. This has important
implications for AI safety as the real-world threat posed by adversarial
examples may be overstated.
|
[
{
"created": "Sat, 25 Jun 2022 20:05:11 GMT",
"version": "v1"
}
] |
2022-06-28
|
[
[
"Hartnett",
"Gavin S.",
""
],
[
"Zhang",
"Li Ang",
""
],
[
"O'Connell",
"Caolionn",
""
],
[
"Lohn",
"Andrew J.",
""
],
[
"Aguirre",
"Jair",
""
]
] |
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models. Although these attacks were initially conceived of and studied digitally, in that the raw pixel values of the image were perturbed, recent work has demonstrated that these attacks can successfully transfer to the physical world. This can be accomplished by printing out the patch and adding it into scenes of newly captured images or video footage. In this work we further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions. We consider object detection models trained on overhead imagery acquired through aerial or satellite cameras, and we test physical adversarial patches inserted into scenes of a desert environment. Our main finding is that it is far more difficult to successfully implement the adversarial patch attacks under these conditions than in the previously considered conditions. This has important implications for AI safety as the real-world threat posed by adversarial examples may be overstated.
|
1511.03937
|
Sukhamoy Pattanayak
|
Sukhamoy Pattanayak, Abhay Kumar Singh and Pratyush Kumar
|
DNA Cyclic Codes Over The Ring $ \F_2[u,v]/\langle u^2-1,v^3-v,uv-vu
\rangle$
|
17 pages, 4 Tables(Table 1 contained 2 pages). arXiv admin note:
substantial text overlap with arXiv:1508.02015; text overlap with
arXiv:1508.07113, arXiv:1505.06263 by other authors
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this paper, we study the structure of cyclic DNA codes of odd length
over the ring $R = \F_2[u,v]/\langle u^2-1,v^3-v,uv-vu \rangle$, which plays
an important role in DNA computing. We establish a direct link between the
elements of the ring $R$ and the 64 codons by introducing a Gray map from $R$
to $R_1 = F_2 + uF_2, u^2 = 1$, where $R_1$ is the ring of four elements. The
reverse-constraint and the reverse-complement-constraint codes over $R$ and
$R_1$ are studied in this paper. The binary images of the cyclic codes over
$R$ are also studied. The paper concludes with some examples of DNA codes
obtained via the Gray map.
|
[
{
"created": "Thu, 12 Nov 2015 15:56:21 GMT",
"version": "v1"
}
] |
2015-11-13
|
[
[
"Pattanayak",
"Sukhamoy",
""
],
[
"Singh",
"Abhay Kumar",
""
],
[
"Kumar",
"Pratyush",
""
]
] |
In this paper, we study the structure of cyclic DNA codes of odd length over the ring $R = \F_2[u,v]/\langle u^2-1,v^3-v,uv-vu \rangle$, which plays an important role in DNA computing. We establish a direct link between the elements of the ring $R$ and the 64 codons by introducing a Gray map from $R$ to $R_1 = F_2 + uF_2, u^2 = 1$, where $R_1$ is the ring of four elements. The reverse-constraint and the reverse-complement-constraint codes over $R$ and $R_1$ are studied in this paper. The binary images of the cyclic codes over $R$ are also studied. The paper concludes with some examples of DNA codes obtained via the Gray map.
|
2106.14973
|
Dylan Turpin
|
Dylan Turpin, Liquan Wang, Stavros Tsogkas, Sven Dickinson and Animesh
Garg
|
GIFT: Generalizable Interaction-aware Functional Tool Affordances
without Labels
|
Qualitative results available at
https://www.pair.toronto.edu/gift-tools-rss21
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tool use requires reasoning about the fit between an object's affordances and
the demands of a task. Visual affordance learning can benefit from
goal-directed interaction experience, but current techniques rely on human
labels or expert demonstrations to generate this data. In this paper, we
describe a method that grounds affordances in physical interactions instead,
thus removing the need for human labels or expert policies. We use an efficient
sampling-based method to generate successful trajectories that provide contact
data, which are then used to reveal affordance representations. Our framework,
GIFT, operates in two phases: first, we discover visual affordances from
goal-directed interaction with a set of procedurally generated tools; second,
we train a model to predict new instances of the discovered affordances on
novel tools in a self-supervised fashion. In our experiments, we show that GIFT
can leverage a sparse keypoint representation to predict grasp and interaction
points to accommodate multiple tasks, such as hooking, reaching, and hammering.
GIFT outperforms baselines on all tasks and matches a human oracle on two of
three tasks using novel tools.
|
[
{
"created": "Mon, 28 Jun 2021 20:43:35 GMT",
"version": "v1"
}
] |
2021-06-30
|
[
[
"Turpin",
"Dylan",
""
],
[
"Wang",
"Liquan",
""
],
[
"Tsogkas",
"Stavros",
""
],
[
"Dickinson",
"Sven",
""
],
[
"Garg",
"Animesh",
""
]
] |
Tool use requires reasoning about the fit between an object's affordances and the demands of a task. Visual affordance learning can benefit from goal-directed interaction experience, but current techniques rely on human labels or expert demonstrations to generate this data. In this paper, we describe a method that grounds affordances in physical interactions instead, thus removing the need for human labels or expert policies. We use an efficient sampling-based method to generate successful trajectories that provide contact data, which are then used to reveal affordance representations. Our framework, GIFT, operates in two phases: first, we discover visual affordances from goal-directed interaction with a set of procedurally generated tools; second, we train a model to predict new instances of the discovered affordances on novel tools in a self-supervised fashion. In our experiments, we show that GIFT can leverage a sparse keypoint representation to predict grasp and interaction points to accommodate multiple tasks, such as hooking, reaching, and hammering. GIFT outperforms baselines on all tasks and matches a human oracle on two of three tasks using novel tools.
|
2308.02194
|
Ioan Marius Bilasco PhD
|
Gaspard Goupy, Pierre Tirilly, Ioan Marius Bilasco
|
Paired Competing Neurons Improving STDP Supervised Local Learning In
Spiking Neural Networks
| null | null |
10.3389/fnins.2024.1401690
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware
has the potential to significantly reduce the energy consumption of artificial
neural network training. SNNs trained with Spike Timing-Dependent Plasticity
(STDP) benefit from gradient-free and unsupervised local learning, which can be
easily implemented on ultra-low-power neuromorphic hardware. However,
classification tasks cannot be performed solely with unsupervised STDP. In this
paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP
learning rule to train the classification layer of an SNN equipped with
unsupervised STDP for feature extraction. S2-STDP integrates error-modulated
weight updates that align neuron spikes with desired timestamps derived from
the average firing time within the layer. Then, we introduce a training
architecture called Paired Competing Neurons (PCN) to further enhance the
learning capabilities of our classification layer trained with S2-STDP. PCN
associates each class with paired neurons and encourages neuron specialization
toward target or non-target samples through intra-class competition. We
evaluate our methods on image recognition datasets, including MNIST,
Fashion-MNIST, and CIFAR-10. Results show that our methods outperform
state-of-the-art supervised STDP learning rules, for comparable architectures
and numbers of neurons. Further analysis demonstrates that the use of PCN
enhances the performance of S2-STDP, regardless of the hyperparameter set and
without introducing any additional hyperparameters.
|
[
{
"created": "Fri, 4 Aug 2023 08:20:54 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Apr 2024 11:01:58 GMT",
"version": "v2"
}
] |
2024-07-25
|
[
[
"Goupy",
"Gaspard",
""
],
[
"Tirilly",
"Pierre",
""
],
[
"Bilasco",
"Ioan Marius",
""
]
] |
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware has the potential to significantly reduce the energy consumption of artificial neural network training. SNNs trained with Spike Timing-Dependent Plasticity (STDP) benefit from gradient-free and unsupervised local learning, which can be easily implemented on ultra-low-power neuromorphic hardware. However, classification tasks cannot be performed solely with unsupervised STDP. In this paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP learning rule to train the classification layer of an SNN equipped with unsupervised STDP for feature extraction. S2-STDP integrates error-modulated weight updates that align neuron spikes with desired timestamps derived from the average firing time within the layer. Then, we introduce a training architecture called Paired Competing Neurons (PCN) to further enhance the learning capabilities of our classification layer trained with S2-STDP. PCN associates each class with paired neurons and encourages neuron specialization toward target or non-target samples through intra-class competition. We evaluate our methods on image recognition datasets, including MNIST, Fashion-MNIST, and CIFAR-10. Results show that our methods outperform state-of-the-art supervised STDP learning rules, for comparable architectures and numbers of neurons. Further analysis demonstrates that the use of PCN enhances the performance of S2-STDP, regardless of the hyperparameter set and without introducing any additional hyperparameters.
|
0805.2379
|
Kirill Yurkov
|
Boris Kudryashov and Kirill Yurkov
|
Linear code-based vector quantization for independent random variables
|
16 pages, 3 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In this paper we analyze the rate-distortion function R(D) achievable using
linear codes over GF(q), where q is a prime number.
|
[
{
"created": "Thu, 15 May 2008 19:05:46 GMT",
"version": "v1"
}
] |
2008-05-16
|
[
[
"Kudryashov",
"Boris",
""
],
[
"Yurkov",
"Kirill",
""
]
] |
In this paper we analyze the rate-distortion function R(D) achievable using linear codes over GF(q), where q is a prime number.
|
2106.08853
|
Joshua Kavner
|
Joshua Kavner, Lirong Xia
|
Strategic Behavior is Bliss: Iterative Voting Improves Social Welfare
|
21 pages, 5 figures, in NeurIPS 2021
| null | null | null |
cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work in iterative voting has defined the additive dynamic price of
anarchy (ADPoA) as the difference in social welfare between the truthful and
worst-case equilibrium profiles resulting from repeated strategic
manipulations. While iterative plurality has been shown to only return
alternatives with at most one fewer initial vote than the truthful winner, it
is less understood how agents' welfare changes in equilibrium. To this end, we
differentiate agents' utility from their manipulation mechanism and determine
iterative plurality's ADPoA in the worst- and average-cases. We first prove
that the worst-case ADPoA is linear in the number of agents. To overcome this
negative result, we study the average-case ADPoA and prove that equilibrium
winners have a constant order welfare advantage over the truthful winner in
expectation. Our positive results illustrate the prospect for social welfare to
increase due to strategic manipulation.
|
[
{
"created": "Wed, 16 Jun 2021 15:18:37 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Jun 2021 19:34:45 GMT",
"version": "v2"
},
{
"created": "Sat, 21 Jan 2023 04:44:46 GMT",
"version": "v3"
}
] |
2023-01-24
|
[
[
"Kavner",
"Joshua",
""
],
[
"Xia",
"Lirong",
""
]
] |
Recent work in iterative voting has defined the additive dynamic price of anarchy (ADPoA) as the difference in social welfare between the truthful and worst-case equilibrium profiles resulting from repeated strategic manipulations. While iterative plurality has been shown to only return alternatives with at most one fewer initial vote than the truthful winner, it is less understood how agents' welfare changes in equilibrium. To this end, we differentiate agents' utility from their manipulation mechanism and determine iterative plurality's ADPoA in the worst- and average-cases. We first prove that the worst-case ADPoA is linear in the number of agents. To overcome this negative result, we study the average-case ADPoA and prove that equilibrium winners have a constant order welfare advantage over the truthful winner in expectation. Our positive results illustrate the prospect for social welfare to increase due to strategic manipulation.
|
1604.03583
|
Tarique Siddiqui
|
Tarique Siddiqui, Albert Kim, John Lee, Karrie Karahalios, Aditya
Parameswaran
|
Effortless Data Exploration with zenvisage: An Expressive and
Interactive Visual Analytics System
|
Tech Report
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data visualization is by far the most commonly used mechanism to explore
data, especially by novice data analysts and data scientists. And yet, current
visual analytics tools are rather limited in their ability to guide data
scientists to interesting or desired visualizations: the process of visual data
exploration remains cumbersome and time-consuming. We propose zenvisage, a
platform for effortlessly visualizing interesting patterns, trends, or insights
from large datasets. We describe zenvisage's general purpose visual query
language, ZQL ("zee-quel") for specifying the desired visual trend, pattern, or
insight - ZQL draws from use-cases in a variety of domains, including biology,
mechanical engineering, climate science, and commerce. We formalize the
expressiveness of ZQL via a visual exploration algebra, and demonstrate that
ZQL is at least as expressive as that algebra. While analysts are free to use
ZQL directly, we also expose ZQL via a visual specification interface that we
describe in this paper. We then describe our architecture and optimizations,
preliminary experiments in supporting and optimizing for ZQL queries in our
initial zenvisage prototype, and a user study to evaluate whether data
scientists are able to effectively use zenvisage for real applications.
|
[
{
"created": "Tue, 12 Apr 2016 21:00:46 GMT",
"version": "v1"
},
{
"created": "Fri, 13 May 2016 02:09:53 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jan 2018 06:09:34 GMT",
"version": "v3"
}
] |
2018-01-16
|
[
[
"Siddiqui",
"Tarique",
""
],
[
"Kim",
"Albert",
""
],
[
"Lee",
"John",
""
],
[
"Karahalios",
"Karrie",
""
],
[
"Parameswaran",
"Aditya",
""
]
] |
Data visualization is by far the most commonly used mechanism to explore data, especially by novice data analysts and data scientists. And yet, current visual analytics tools are rather limited in their ability to guide data scientists to interesting or desired visualizations: the process of visual data exploration remains cumbersome and time-consuming. We propose zenvisage, a platform for effortlessly visualizing interesting patterns, trends, or insights from large datasets. We describe zenvisage's general purpose visual query language, ZQL ("zee-quel") for specifying the desired visual trend, pattern, or insight - ZQL draws from use-cases in a variety of domains, including biology, mechanical engineering, climate science, and commerce. We formalize the expressiveness of ZQL via a visual exploration algebra, and demonstrate that ZQL is at least as expressive as that algebra. While analysts are free to use ZQL directly, we also expose ZQL via a visual specification interface that we describe in this paper. We then describe our architecture and optimizations, preliminary experiments in supporting and optimizing for ZQL queries in our initial zenvisage prototype, and a user study to evaluate whether data scientists are able to effectively use zenvisage for real applications.
|
2405.02290
|
Roni Saputra Permana
|
P Paryanto, Rakha Rahmadani Pratama, Roni Permana Saputra
|
Wheel Odometry-Based Localization for Autonomous Wheelchair
|
6 pages, 10 figures, 3 tables
|
2023 International Conference on Radar, Antenna, Microwave,
Electronics, and Telecommunications (ICRAMET), Bandung, Indonesia, 2023, pp.
357-362
|
10.1109/ICRAMET60171.2023.10366532
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Localization is a fundamental requirement for an autonomous vehicle system.
One of the most often used systems for autonomous vehicle localization is the
global positioning system (GPS). Nevertheless, the functionality of GPS is
strongly dependent on the availability of satellites, making it unreliable in
some situations. As a result, autonomous vehicles must possess autonomous
self-localization capabilities to ensure their independent operation. Odometry
techniques are employed to achieve vehicle localization by predicting the
vehicle position and orientation based on sensor measurements of the vehicle
motion. One of the approaches employed in odometry is known as wheel odometry.
Wheel odometry has a lower degree of reliance on the surrounding environment
than visual odometry and laser odometry. This study aims to evaluate the
performance of wheel odometry implementation for an autonomous wheelchair in
the context of the localization process. The differential drive kinematic model
is employed to determine the predicted pose of a wheelchair. This prediction is
derived from the measurement of the linear and angular velocity of the
wheelchair. Several experiments have been conducted to evaluate the performance
of wheel odometry-based localization. Prior to experimenting, calibration
procedures have also been performed to ensure accurate measurements of the
sensor.
|
[
{
"created": "Wed, 27 Dec 2023 15:24:22 GMT",
"version": "v1"
}
] |
2024-05-07
|
[
[
"Paryanto",
"P",
""
],
[
"Pratama",
"Rakha Rahmadani",
""
],
[
"Saputra",
"Roni Permana",
""
]
] |
Localization is a fundamental requirement for an autonomous vehicle system. One of the most often used systems for autonomous vehicle localization is the global positioning system (GPS). Nevertheless, the functionality of GPS is strongly dependent on the availability of satellites, making it unreliable in some situations. As a result, autonomous vehicles must possess autonomous self-localization capabilities to ensure their independent operation. Odometry techniques are employed to achieve vehicle localization by predicting the vehicle position and orientation based on sensor measurements of the vehicle motion. One of the approaches employed in odometry is known as wheel odometry. Wheel odometry has a lower degree of reliance on the surrounding environment than visual odometry and laser odometry. This study aims to evaluate the performance of wheel odometry implementation for an autonomous wheelchair in the context of the localization process. The differential drive kinematic model is employed to determine the predicted pose of a wheelchair. This prediction is derived from the measurement of the linear and angular velocity of the wheelchair. Several experiments have been conducted to evaluate the performance of wheel odometry-based localization. Prior to experimenting, calibration procedures have also been performed to ensure accurate measurements of the sensor.
|
2305.18330
|
Areej Alsini
|
Areej Alsini, Du Q. Huynh and Amitava Datta
|
#REVAL: a semantic evaluation framework for hashtag recommendation
|
18 pages, 4 figures
| null | null | null |
cs.IR cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic evaluation of hashtag recommendation models is a fundamental task
in many online social network systems. In the traditional evaluation method,
the recommended hashtags from an algorithm are firstly compared with the ground
truth hashtags for exact correspondences. The number of exact matches is then
used to calculate the hit rate, hit ratio, precision, recall, or F1-score. This
way of evaluating hashtag similarities is inadequate as it ignores the semantic
correlation between the recommended and ground truth hashtags. To tackle this
problem, we propose a novel semantic evaluation framework for hashtag
recommendation, called #REval. This framework includes an internal module
referred to as BERTag, which automatically learns the hashtag embeddings. We
investigate how the #REval framework performs under different word embedding
methods and different numbers of synonyms and hashtags in the recommendation
using our proposed #REval-hit-ratio measure. Our experiments with the proposed
framework on three large datasets show that #REval gave more meaningful hashtag
synonyms for hashtag recommendation evaluation. Our analysis also highlights
the sensitivity of the framework to the word embedding technique, with #REval
based on BERTag being superior to #REval based on FastText and Word2Vec.
|
[
{
"created": "Wed, 24 May 2023 07:10:56 GMT",
"version": "v1"
}
] |
2023-05-31
|
[
[
"Alsini",
"Areej",
""
],
[
"Huynh",
"Du Q.",
""
],
[
"Datta",
"Amitava",
""
]
] |
Automatic evaluation of hashtag recommendation models is a fundamental task in many online social network systems. In the traditional evaluation method, the recommended hashtags from an algorithm are firstly compared with the ground truth hashtags for exact correspondences. The number of exact matches is then used to calculate the hit rate, hit ratio, precision, recall, or F1-score. This way of evaluating hashtag similarities is inadequate as it ignores the semantic correlation between the recommended and ground truth hashtags. To tackle this problem, we propose a novel semantic evaluation framework for hashtag recommendation, called #REval. This framework includes an internal module referred to as BERTag, which automatically learns the hashtag embeddings. We investigate how the #REval framework performs under different word embedding methods and different numbers of synonyms and hashtags in the recommendation using our proposed #REval-hit-ratio measure. Our experiments with the proposed framework on three large datasets show that #REval gave more meaningful hashtag synonyms for hashtag recommendation evaluation. Our analysis also highlights the sensitivity of the framework to the word embedding technique, with #REval based on BERTag being superior to #REval based on FastText and Word2Vec.
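The general idea of a semantic (rather than exact-match) hit ratio can be sketched as follows. This is NOT the paper's #REval-hit-ratio definition, which the paper specifies precisely; it is a generic illustration with toy embeddings, and the similarity threshold is an assumption:

```python
def cos(u, v):
    """Cosine similarity of two nonzero embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return num / den

def semantic_hit_ratio(recommended, truth, emb, thresh=0.8):
    """Fraction of recommended hashtags that semantically match some
    ground-truth hashtag (cosine similarity above `thresh`), instead
    of requiring an exact string match."""
    hits = sum(
        1 for r in recommended
        if any(cos(emb[r], emb[t]) >= thresh for t in truth)
    )
    return hits / len(recommended)
```

With exact matching, a recommendation like "ml" for ground truth "ai" scores zero even when their embeddings are nearly identical; the semantic version credits it.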
|
1110.1161
|
Petros Petrosyan
|
Petros A. Petrosyan
|
Interval edge-colorings of cubic graphs
|
3 pages
|
Proceedings of the CSIT Conference, Yerevan, 2011, pp. 86-88
| null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/3.0/
|
An edge-coloring of a multigraph G with colors 1,2,...,t is called an
interval t-coloring if all colors are used, and the colors of edges incident to
any vertex of G are distinct and form an interval of integers. In this paper we
prove that if G is a connected cubic multigraph (a connected cubic graph) that
admits an interval t-coloring, then t\leq |V(G)| +1 (t\leq |V(G)|), where V(G)
is the set of vertices of G. Moreover, if G is a connected cubic graph, G\neq
K_{4}, and G has an interval t-coloring, then t\leq |V(G)| -1. We also show
that these upper bounds are sharp. Finally, we prove that if G is a bipartite
subcubic multigraph, then G has an interval edge-coloring with no more than
four colors.
|
[
{
"created": "Thu, 6 Oct 2011 07:08:00 GMT",
"version": "v1"
}
] |
2011-10-07
|
[
[
"Petrosyan",
"Petros A.",
""
]
] |
An edge-coloring of a multigraph G with colors 1,2,...,t is called an interval t-coloring if all colors are used, and the colors of edges incident to any vertex of G are distinct and form an interval of integers. In this paper we prove that if G is a connected cubic multigraph (a connected cubic graph) that admits an interval t-coloring, then t\leq |V(G)| +1 (t\leq |V(G)|), where V(G) is the set of vertices of G. Moreover, if G is a connected cubic graph, G\neq K_{4}, and G has an interval t-coloring, then t\leq |V(G)| -1. We also show that these upper bounds are sharp. Finally, we prove that if G is a bipartite subcubic multigraph, then G has an interval edge-coloring with no more than four colors.
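The definition of an interval t-coloring can be checked mechanically. Below is a sketch of a verifier (not from the paper), assuming edges are given as (u, v) pairs with a parallel list of positive integer colors:

```python
from collections import defaultdict

def is_interval_coloring(edges, colors):
    """Return True iff `colors` is an interval t-coloring of the
    multigraph `edges`: every color 1..t is used, and at each vertex
    the incident edge colors are distinct and consecutive."""
    t = max(colors)
    if sorted(set(colors)) != list(range(1, t + 1)):   # all colors used
        return False
    incident = defaultdict(list)
    for (u, v), c in zip(edges, colors):
        incident[u].append(c)
        incident[v].append(c)
    for cs in incident.values():
        if len(set(cs)) != len(cs):                    # distinct colors
            return False
        if max(cs) - min(cs) + 1 != len(cs):           # consecutive interval
            return False
    return True
```

For example, the path on three vertices with colors 1, 2 is an interval 2-coloring, while a triangle colored 1, 2, 3 is not (each vertex sees a non-consecutive pair).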
|
1808.07623
|
John-Thones Amenyo
|
John-Thones Amenyo
|
Principles, Paradigms and the Future of UAV Drone Teams in Use for
Engineering and Operation of Landscape-Scale Deployable Structures
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drone fleets, with counts on the order of O(100) to O(1000), will play
important and significant roles in the automation of the deployment, operation,
maintenance and repair of ubiquitous and pervasive landscape scale elongated
structures, with the longest linear spatial dimensions of O(1 mile) to O(10
mile). The organization of the drone team to support the task is considered as
a digital platform (specifically, a digital multi-sided platform). A
computational thinking approach is used to engineer the architecture of the
platform.
|
[
{
"created": "Thu, 23 Aug 2018 03:53:06 GMT",
"version": "v1"
}
] |
2018-08-24
|
[
[
"Amenyo",
"John-Thones",
""
]
] |
Drone fleets, with counts on the order of O(100) to O(1000), will play important and significant roles in the automation of the deployment, operation, maintenance and repair of ubiquitous and pervasive landscape scale elongated structures, with the longest linear spatial dimensions of O(1 mile) to O(10 mile). The organization of the drone team to support the task is considered as a digital platform (specifically, a digital multi-sided platform). A computational thinking approach is used to engineer the architecture of the platform.
|
2209.13355
|
Henning Meyerhenke
|
Eugenio Angriman, Alexander van der Grinten, Michael Hamann, Henning
Meyerhenke, Manuel Penschuck
|
Algorithms for Large-scale Network Analysis and the NetworKit Toolkit
| null | null | null | null |
cs.SI cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The abundance of massive network data in a plethora of applications makes
scalable analysis algorithms and software tools necessary to generate knowledge
from such data in reasonable time. Addressing scalability as well as other
requirements such as good usability and a rich feature set, the open-source
software NetworKit has established itself as a popular tool for large-scale
network analysis. This chapter provides a brief overview of the contributions
to NetworKit made by the DFG Priority Programme SPP 1736 Algorithms for Big
Data. Algorithmic contributions in the areas of centrality computations,
community detection, and sparsification are the focus, but we also mention
several other aspects -- such as current software engineering principles of the
project and ways to visualize network data within a NetworKit-based workflow.
|
[
{
"created": "Tue, 20 Sep 2022 12:10:27 GMT",
"version": "v1"
}
] |
2022-09-28
|
[
[
"Angriman",
"Eugenio",
""
],
[
"van der Grinten",
"Alexander",
""
],
[
"Hamann",
"Michael",
""
],
[
"Meyerhenke",
"Henning",
""
],
[
"Penschuck",
"Manuel",
""
]
] |
The abundance of massive network data in a plethora of applications makes scalable analysis algorithms and software tools necessary to generate knowledge from such data in reasonable time. Addressing scalability as well as other requirements such as good usability and a rich feature set, the open-source software NetworKit has established itself as a popular tool for large-scale network analysis. This chapter provides a brief overview of the contributions to NetworKit made by the DFG Priority Programme SPP 1736 Algorithms for Big Data. Algorithmic contributions in the areas of centrality computations, community detection, and sparsification are the focus, but we also mention several other aspects -- such as current software engineering principles of the project and ways to visualize network data within a NetworKit-based workflow.
|
2307.14818
|
Anna Moskvina
|
Anna Moskvina, Bhushan Kotnis, Chris Catacata, Michael Janz, Nasrin
Saef
|
What Makes a Good Paraphrase: Do Automated Evaluations Work?
|
Extended Abstract for Konvens2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Paraphrasing is the task of expressing an essential idea or meaning in
different words. But how different should the words be in order to be
considered an acceptable paraphrase? And can we exclusively use automated
metrics to evaluate the quality of a paraphrase? We attempt to answer these
questions by conducting experiments on a German data set and performing
automatic and expert linguistic evaluation.
|
[
{
"created": "Thu, 27 Jul 2023 12:51:16 GMT",
"version": "v1"
}
] |
2023-07-28
|
[
[
"Moskvina",
"Anna",
""
],
[
"Kotnis",
"Bhushan",
""
],
[
"Catacata",
"Chris",
""
],
[
"Janz",
"Michael",
""
],
[
"Saef",
"Nasrin",
""
]
] |
Paraphrasing is the task of expressing an essential idea or meaning in different words. But how different should the words be in order to be considered an acceptable paraphrase? And can we exclusively use automated metrics to evaluate the quality of a paraphrase? We attempt to answer these questions by conducting experiments on a German data set and performing automatic and expert linguistic evaluation.
|
1803.03422
|
Mordechai Guri
|
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici
|
MOSQUITO: Covert Ultrasonic Transmissions between Two Air-Gapped
Computers using Speaker-to-Speaker Communication
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we show how two (or more) air-gapped computers in the same room,
equipped with passive speakers, headphones, or earphones, can covertly exchange
data via ultrasonic waves. Microphones are not required. Our method is based on
the capability of malware to exploit a specific audio chip feature in order to
reverse the connected speakers from output devices into input devices -
unobtrusively rendering them microphones. We discuss the attack model and
provide technical background and implementation details. We show that although
the reversed speakers/headphones/earphones were not originally designed to
perform as microphones, they still respond well to the near-ultrasonic range
(18kHz to 24kHz). We evaluate the communication channel with different
equipment, and at various distances and transmission speeds, and also discuss
some practical considerations. Our results show that the speaker-to-speaker
communication can be used to covertly transmit data between two air-gapped
computers positioned a maximum of nine meters away from one another. Moreover,
we show that two (microphone-less) headphones can exchange data from a distance
of three meters apart. This enables 'headphones-to-headphones' covert
communication, which is discussed for the first time in this paper.
|
[
{
"created": "Fri, 9 Mar 2018 09:01:30 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Mar 2018 10:38:38 GMT",
"version": "v2"
}
] |
2018-03-19
|
[
[
"Guri",
"Mordechai",
""
],
[
"Solewicz",
"Yosef",
""
],
[
"Daidakulov",
"Andrey",
""
],
[
"Elovici",
"Yuval",
""
]
] |
In this paper we show how two (or more) air-gapped computers in the same room, equipped with passive speakers, headphones, or earphones, can covertly exchange data via ultrasonic waves. Microphones are not required. Our method is based on the capability of malware to exploit a specific audio chip feature in order to reverse the connected speakers from output devices into input devices - unobtrusively rendering them microphones. We discuss the attack model and provide technical background and implementation details. We show that although the reversed speakers/headphones/earphones were not originally designed to perform as microphones, they still respond well to the near-ultrasonic range (18kHz to 24kHz). We evaluate the communication channel with different equipment, and at various distances and transmission speeds, and also discuss some practical considerations. Our results show that the speaker-to-speaker communication can be used to covertly transmit data between two air-gapped computers positioned a maximum of nine meters away from one another. Moreover, we show that two (microphone-less) headphones can exchange data from a distance of three meters apart. This enables 'headphones-to-headphones' covert communication, which is discussed for the first time in this paper.
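A simple way to signal in the near-ultrasonic band the paper evaluates (18-24 kHz) is binary frequency-shift keying. The sketch below is a hypothetical illustration, not the paper's actual modulation scheme; the carrier frequencies, sample rate, and bit duration are assumptions:

```python
import math

def fsk_modulate(bits, f0=18000, f1=20000, rate=48000, bit_dur=0.01):
    """Binary FSK: each bit becomes a short sine burst at f0 (for 0)
    or f1 (for 1), both in the near-ultrasonic band."""
    samples = []
    n = int(rate * bit_dur)          # samples per bit
    for b in bits:
        f = f1 if b else f0
        samples.extend(math.sin(2 * math.pi * f * i / rate) for i in range(n))
    return samples
```

The receiver side would reverse a speaker into a microphone (as the paper describes) and discriminate the two frequencies, e.g. with a Goertzel filter per carrier.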
|
1802.10138
|
Sayyed Jaffar Ali Raza
|
Sayyed Jaffar Ali Raza, Nitish A. Gupta, Nisarg Chitaliya, Gita R.
Sukthankar
|
Real-World Modeling of a Pathfinding Robot Using Robot Operating System
(ROS)
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a practical approach towards implementing pathfinding
algorithms on real-world, low-cost, non-commercial hardware platforms. While
using robotics simulation platforms as a test-bed for our algorithms, we easily
overlook real-world exogenous problems caused by external factors. Such
problems involve robot wheel slips, asynchronous motors, abnormal sensory data,
or unstable power sources. The real-world dynamics can be very challenging even
for executing simple algorithms like a Wavefront planner or A-star search. This
paper addresses designing techniques that tend to be robust as well as reusable
across hardware platforms, covering problems like controlling asynchronous
drives, odometry offset issues, and handling abnormal sensory feedback. The
algorithm implementation medium and hardware design tools have been kept
general in order to present our work as a serving platform for future
researchers and robotics enthusiasts working in the field of path planning
robotics.
|
[
{
"created": "Tue, 27 Feb 2018 19:49:42 GMT",
"version": "v1"
}
] |
2018-03-01
|
[
[
"Raza",
"Sayyed Jaffar Ali",
""
],
[
"Gupta",
"Nitish A.",
""
],
[
"Chitaliya",
"Nisarg",
""
],
[
"Sukthankar",
"Gita R.",
""
]
] |
This paper presents a practical approach towards implementing pathfinding algorithms on real-world, low-cost, non-commercial hardware platforms. While using robotics simulation platforms as a test-bed for our algorithms, we easily overlook real-world exogenous problems caused by external factors. Such problems involve robot wheel slips, asynchronous motors, abnormal sensory data, or unstable power sources. The real-world dynamics can be very challenging even for executing simple algorithms like a Wavefront planner or A-star search. This paper addresses designing techniques that tend to be robust as well as reusable across hardware platforms, covering problems like controlling asynchronous drives, odometry offset issues, and handling abnormal sensory feedback. The algorithm implementation medium and hardware design tools have been kept general in order to present our work as a serving platform for future researchers and robotics enthusiasts working in the field of path planning robotics.
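The Wavefront planner mentioned here is a breadth-first expansion from the goal cell. A generic textbook version (not the authors' implementation) can be sketched as:

```python
from collections import deque

def wavefront(grid, goal):
    """Wavefront planner: BFS outward from the goal cell.
    grid: 2-D list, 0 = free, 1 = obstacle; goal: (row, col).
    Returns a cost map where each reachable free cell holds its
    distance to the goal; obstacles and unreachable cells stay None."""
    rows, cols = len(grid), len(grid[0])
    cost = [[None] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and cost[nr][nc] is None):
                cost[nr][nc] = cost[r][c] + 1
                q.append((nr, nc))
    return cost
```

A path is then recovered by stepping from the start to any neighbor with strictly lower cost until the goal is reached; on real hardware, the odometry and sensing issues discussed in the abstract dominate the difficulty, not the planner itself.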
|
1902.08318
|
Daniel Lemire
|
Geoff Langdale, Daniel Lemire
|
Parsing Gigabytes of JSON per Second
|
software: https://github.com/lemire/simdjson
|
The VLDB Journal, 28(6), 2019
|
10.1007/s00778-019-00578-5
| null |
cs.DB cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
JavaScript Object Notation or JSON is a ubiquitous data exchange format on
the Web. Ingesting JSON documents can become a performance bottleneck due to
the sheer volume of data. We are thus motivated to make JSON parsing as fast as
possible.
Despite the maturity of the problem of JSON parsing, we show that substantial
speedups are possible. We present the first standard-compliant JSON parser to
process gigabytes of data per second on a single core, using commodity
processors. We can use a quarter or fewer instructions than a state-of-the-art
reference parser like RapidJSON. Unlike other validating parsers, our software
(simdjson) makes extensive use of Single Instruction, Multiple Data (SIMD)
instructions. To ensure reproducibility, simdjson is freely available as
open-source software under a liberal license.
|
[
{
"created": "Fri, 22 Feb 2019 00:24:01 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Feb 2019 19:45:23 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jun 2019 21:51:55 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Aug 2019 00:34:45 GMT",
"version": "v4"
},
{
"created": "Mon, 30 Dec 2019 23:10:47 GMT",
"version": "v5"
},
{
"created": "Thu, 2 Jan 2020 14:56:46 GMT",
"version": "v6"
},
{
"created": "Tue, 23 Jul 2024 21:56:05 GMT",
"version": "v7"
}
] |
2024-07-25
|
[
[
"Langdale",
"Geoff",
""
],
[
"Lemire",
"Daniel",
""
]
] |
JavaScript Object Notation or JSON is a ubiquitous data exchange format on the Web. Ingesting JSON documents can become a performance bottleneck due to the sheer volume of data. We are thus motivated to make JSON parsing as fast as possible. Despite the maturity of the problem of JSON parsing, we show that substantial speedups are possible. We present the first standard-compliant JSON parser to process gigabytes of data per second on a single core, using commodity processors. We can use a quarter or fewer instructions than a state-of-the-art reference parser like RapidJSON. Unlike other validating parsers, our software (simdjson) makes extensive use of Single Instruction, Multiple Data (SIMD) instructions. To ensure reproducibility, simdjson is freely available as open-source software under a liberal license.
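The core trick of simdjson's first stage is to turn a chunk of input into bitmasks and locate structural characters branchlessly. The sketch below is a pure-Python stand-in for what the real parser does with SIMD registers and carry-less multiplication, and it ignores escaped quotes for brevity; it is an illustration of the idea, not the library's code:

```python
def structural_mask(chunk):
    """Bit i of the result is set iff chunk[i] is a structural JSON
    character ({ } [ ] , :) that lies OUTSIDE a string literal.
    In-string regions are found via a prefix-XOR of the quote mask."""
    quote = struct = 0
    for i, ch in enumerate(chunk):
        if ch == '"':
            quote |= 1 << i
        elif ch in '{}[],:':
            struct |= 1 << i
    # prefix-XOR: bit i becomes the parity of quote bits at positions <= i,
    # i.e. 1 exactly when position i is inside a string
    m, shift = quote, 1
    while shift < len(chunk):
        m ^= m << shift
        shift *= 2
    return struct & ~m
```

simdjson computes the same quote mask for 64 bytes at once with SIMD comparisons and obtains the prefix-XOR in one carry-less multiply instruction, which is where the branch-free speed comes from.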
|
2104.14519
|
Rohit Chadha
|
Rohit Chadha, A. Prasad Sistla and Mahesh Viswanathan
|
On Linear Time Decidability of Differential Privacy for Programs with
Unbounded Inputs
|
An extended abstract to be published in 36th Annual IEEE Symposium on
Logic in Computer Science (LICS 2021)
| null | null | null |
cs.CR cs.FL cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce an automata model for describing interesting classes of
differential privacy mechanisms/algorithms that include known mechanisms from
the literature. These automata can model algorithms whose inputs can be an
unbounded sequence of real-valued query answers. We consider the problem of
checking whether there exists a constant $d$ such that the algorithm described
by these automata is $d\epsilon$-differentially private for all positive
values of the privacy budget parameter $\epsilon$. We show that this problem
can be decided in time linear in the automaton's size by identifying a
necessary and sufficient condition on the underlying graph of the automaton.
This paper's results are the first decidability results known for algorithms
with an unbounded number of query answers taking values from the set of reals.
|
[
{
"created": "Thu, 29 Apr 2021 17:34:44 GMT",
"version": "v1"
}
] |
2021-04-30
|
[
[
"Chadha",
"Rohit",
""
],
[
"Sistla",
"A. Prasad",
""
],
[
"Viswanathan",
"Mahesh",
""
]
] |
We introduce an automata model for describing interesting classes of differential privacy mechanisms/algorithms that include known mechanisms from the literature. These automata can model algorithms whose inputs can be an unbounded sequence of real-valued query answers. We consider the problem of checking whether there exists a constant $d$ such that the algorithm described by these automata is $d\epsilon$-differentially private for all positive values of the privacy budget parameter $\epsilon$. We show that this problem can be decided in time linear in the automaton's size by identifying a necessary and sufficient condition on the underlying graph of the automaton. This paper's results are the first decidability results known for algorithms with an unbounded number of query answers taking values from the set of reals.
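The mechanisms such automata capture are built from noisy primitives; the canonical $\epsilon$-differentially private primitive is the Laplace mechanism, sketched below via inverse-CDF sampling. This is a generic illustration of the building block, not code or a mechanism from the paper:

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng=random):
    """Release value + Laplace(sensitivity/epsilon) noise, which is
    epsilon-differentially private for a single query with the given
    L1 sensitivity. Noise is drawn by inverting the Laplace CDF."""
    u = rng.random() - 0.5   # u in [-0.5, 0.5); u == -0.5 would need guarding
    scale = sensitivity / epsilon
    return value - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
```

Smaller epsilon means larger noise scale and stronger privacy; the paper's question is whether a whole automaton composed of such steps stays $d\epsilon$-private for every positive $\epsilon$.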
|
2107.00077
|
William McCarthy
|
William P. McCarthy, Robert D. Hawkins, Haoliang Wang, Cameron
Holdaway, Judith E. Fan
|
Learning to communicate about shared procedural abstractions
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many real-world tasks require agents to coordinate their behavior to achieve
shared goals. Successful collaboration requires not only adopting the same
communicative conventions, but also grounding these conventions in the same
task-appropriate conceptual abstractions. We investigate how humans use natural
language to collaboratively solve physical assembly problems more effectively
over time. Human participants were paired up in an online environment to
reconstruct scenes containing two block towers. One participant could see the
target towers, and sent assembly instructions for the other participant to
reconstruct. Participants provided increasingly concise instructions across
repeated attempts on each pair of towers, using higher-level referring
expressions that captured each scene's hierarchical structure. To explain these
findings, we extend recent probabilistic models of ad-hoc convention formation
with an explicit perceptual learning mechanism. These results shed light on the
inductive biases that enable intelligent agents to coordinate upon shared
procedural abstractions.
|
[
{
"created": "Wed, 30 Jun 2021 19:59:11 GMT",
"version": "v1"
}
] |
2021-07-02
|
[
[
"McCarthy",
"William P.",
""
],
[
"Hawkins",
"Robert D.",
""
],
[
"Wang",
"Haoliang",
""
],
[
"Holdaway",
"Cameron",
""
],
[
"Fan",
"Judith E.",
""
]
] |
Many real-world tasks require agents to coordinate their behavior to achieve shared goals. Successful collaboration requires not only adopting the same communicative conventions, but also grounding these conventions in the same task-appropriate conceptual abstractions. We investigate how humans use natural language to collaboratively solve physical assembly problems more effectively over time. Human participants were paired up in an online environment to reconstruct scenes containing two block towers. One participant could see the target towers, and sent assembly instructions for the other participant to reconstruct. Participants provided increasingly concise instructions across repeated attempts on each pair of towers, using higher-level referring expressions that captured each scene's hierarchical structure. To explain these findings, we extend recent probabilistic models of ad-hoc convention formation with an explicit perceptual learning mechanism. These results shed light on the inductive biases that enable intelligent agents to coordinate upon shared procedural abstractions.
|
2202.05395
|
Alireza Sadeghi
|
Alireza Sadeghi
|
Robust, Deep, and Reinforcement Learning for Management of Communication
and Power Networks
|
PhD thesis
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This thesis develops data-driven machine learning algorithms for managing and
optimizing the next-generation highly complex cyber-physical systems, which
desperately need ground-breaking control, monitoring, and decision-making
schemes that can guarantee robustness, scalability, and situational awareness.
The present thesis first develops principled methods to make generic machine
learning models robust against distributional uncertainties and adversarial
data. Particular focus is on parametric models where some training data are
used to learn a parametric model. The developed framework is of high interest
especially when training and testing data are drawn from "slightly" different
distributions. We then introduce distributionally robust learning
frameworks to minimize the worst-case expected loss over a prescribed ambiguity
set of training distributions quantified via Wasserstein distance. Later, we
build on this robust framework to design robust semi-supervised learning over
graphs. The second part of this thesis aspires to fully unleash the
potential of next-generation wired and wireless networks, where we design
"smart" network entities using (deep) reinforcement learning approaches.
Finally, this thesis enhances power system operation and control. Our
contribution concerns sustainable distribution grids with high penetration of
renewable sources and demand response programs. To account for unanticipated
and rapidly changing renewable generation and load consumption scenarios, we
specifically delegate reactive power compensation to both utility-owned control
devices (e.g., capacitor banks) and smart inverters of distributed generation
units with cyber-capabilities.
|
[
{
"created": "Tue, 8 Feb 2022 05:49:06 GMT",
"version": "v1"
}
] |
2022-02-14
|
[
[
"Sadeghi",
"Alireza",
""
]
] |
This thesis develops data-driven machine learning algorithms for managing and optimizing the next-generation highly complex cyber-physical systems, which desperately need ground-breaking control, monitoring, and decision-making schemes that can guarantee robustness, scalability, and situational awareness. The present thesis first develops principled methods to make generic machine learning models robust against distributional uncertainties and adversarial data. Particular focus is on parametric models where some training data are used to learn a parametric model. The developed framework is of high interest especially when training and testing data are drawn from "slightly" different distributions. We then introduce distributionally robust learning frameworks to minimize the worst-case expected loss over a prescribed ambiguity set of training distributions quantified via Wasserstein distance. Later, we build on this robust framework to design robust semi-supervised learning over graphs. The second part of this thesis aspires to fully unleash the potential of next-generation wired and wireless networks, where we design "smart" network entities using (deep) reinforcement learning approaches. Finally, this thesis enhances power system operation and control. Our contribution concerns sustainable distribution grids with high penetration of renewable sources and demand response programs. To account for unanticipated and rapidly changing renewable generation and load consumption scenarios, we specifically delegate reactive power compensation to both utility-owned control devices (e.g., capacitor banks) and smart inverters of distributed generation units with cyber-capabilities.
|
2208.07281
|
Quanyu Dai
|
Quanyu Dai, Zhenhua Dong and Xu Chen
|
Debiased Recommendation with Neural Stratification
| null | null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Debiased recommender models have recently attracted increasing attention from
the academic and industry communities. Existing models are mostly based on the
technique of inverse propensity score (IPS). However, in the recommendation
domain, IPS can be hard to estimate given the sparse and noisy nature of the
observed user-item exposure data. To alleviate this problem, in this paper, we
assume that the user preference can be dominated by a small number of latent
factors, and propose to cluster the users for computing more accurate IPS via
increasing the exposure densities. Basically, this method is similar in spirit
to stratification models in applied statistics. However, unlike previous
heuristic stratification strategies, we learn the cluster criterion by
representing the users with low-rank embeddings, which are further shared with
the user representations in the recommender model. Finally, we find that our
model has strong connections with the previous two types of debiased
recommender models.
We conduct extensive experiments based on real-world datasets to demonstrate
the effectiveness of the proposed method.
|
[
{
"created": "Mon, 15 Aug 2022 15:45:35 GMT",
"version": "v1"
}
] |
2022-08-16
|
[
[
"Dai",
"Quanyu",
""
],
[
"Dong",
"Zhenhua",
""
],
[
"Chen",
"Xu",
""
]
] |
Debiased recommender models have recently attracted increasing attention from the academic and industry communities. Existing models are mostly based on the technique of inverse propensity score (IPS). However, in the recommendation domain, IPS can be hard to estimate given the sparse and noisy nature of the observed user-item exposure data. To alleviate this problem, in this paper, we assume that the user preference can be dominated by a small number of latent factors, and propose to cluster the users for computing more accurate IPS via increasing the exposure densities. Basically, this method is similar in spirit to stratification models in applied statistics. However, unlike previous heuristic stratification strategies, we learn the cluster criterion by representing the users with low-rank embeddings, which are further shared with the user representations in the recommender model. Finally, we find that our model has strong connections with the previous two types of debiased recommender models. We conduct extensive experiments based on real-world datasets to demonstrate the effectiveness of the proposed method.
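The IPS estimator the paper builds on can be sketched in its generic form: each observed rating is up-weighted by the inverse of its exposure probability so that rarely exposed items are not under-represented. This is a minimal illustration, not the paper's clustered variant, and the names are hypothetical:

```python
def ips_estimate(ratings, observed, propensity):
    """Inverse-propensity-score estimate of the average rating over
    the FULL user-item matrix.

    ratings    : dict (user, item) -> observed rating
    observed   : set of exposed (user, item) pairs
    propensity : dict (user, item) -> exposure probability, assumed to
                 cover every cell of the matrix
    """
    n = len(propensity)  # number of cells in the full matrix
    total = sum(ratings[ui] / propensity[ui] for ui in observed)
    return total / n
```

The paper's point is that the per-cell propensities are hard to estimate from sparse exposures; clustering users into strata densifies the counts those estimates are based on.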
|
1711.06948
|
Zhizhen Liang Dr.
|
Zhizheng Liang, Lei Zhang, Jin Liu, Yong Zhou
|
A novel total variation model based on kernel functions and its
application
|
22 pages, 5 figures, 2 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The total variation (TV) model and its related variants have already been
proposed for image processing in previous literature. In this paper a novel
total variation model based on kernel functions is proposed. In this novel
model, we first map each pixel value of an image into a Hilbert space by using
a nonlinear map, and then define a coupled image of an original image in order
to construct a kernel function. Finally, the proposed model is solved in a
kernel function space instead of in the feature space induced by the nonlinear
map. For the proposed model, we theoretically show under what conditions the
mapped image is in the space of bounded variation when the original image is in
the space of bounded variation. It is also found that the proposed model further
extends the generalized TV model and the information from three different
channels of color images can be fused by adopting various kernel functions. A
series of experiments on some gray and color images are carried out to
demonstrate the effectiveness of the proposed model.
|
[
{
"created": "Sun, 19 Nov 2017 01:30:44 GMT",
"version": "v1"
}
] |
2017-11-21
|
[
[
"Liang",
"Zhizheng",
""
],
[
"Zhang",
"Lei",
""
],
[
"Liu",
"Jin",
""
],
[
"Zhou",
"Yong",
""
]
] |
The total variation (TV) model and its related variants have already been proposed for image processing in previous literature. In this paper a novel total variation model based on kernel functions is proposed. In this novel model, we first map each pixel value of an image into a Hilbert space by using a nonlinear map, and then define a coupled image of an original image in order to construct a kernel function. Finally, the proposed model is solved in a kernel function space instead of in the feature space induced by the nonlinear map. For the proposed model, we theoretically show under what conditions the mapped image is in the space of bounded variation when the original image is in the space of bounded variation. It is also found that the proposed model further extends the generalized TV model and the information from three different channels of color images can be fused by adopting various kernel functions. A series of experiments on some gray and color images are carried out to demonstrate the effectiveness of the proposed model.
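For reference, the classical discrete TV that such models generalize is easy to state. The sketch below is the standard anisotropic TV of a grayscale image, not the authors' kernel-based variant:

```python
def total_variation(img):
    """Anisotropic discrete total variation of a 2-D grayscale image:
    the sum of absolute horizontal and vertical neighbor differences."""
    rows, cols = len(img), len(img[0])
    tv = 0.0
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                tv += abs(img[r + 1][c] - img[r][c])
            if c + 1 < cols:
                tv += abs(img[r][c + 1] - img[r][c])
    return tv
```

TV-based models penalize this quantity to remove noise while keeping edges; the kernel construction in the paper replaces the raw pixel differences with differences measured after a nonlinear map.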
|
2008.01772
|
Ryan Pyle
|
Justin Sahs, Ryan Pyle, Aneel Damaraju, Josue Ortega Caro, Onur
Tavaslioglu, Andy Lu, Ankit Patel
|
Shallow Univariate ReLu Networks as Splines: Initialization, Loss
Surface, Hessian, & Gradient Flow Dynamics
|
14 pages, 4 figures in main text
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the learning dynamics and inductive bias of neural networks
(NNs) is hindered by the opacity of the relationship between NN parameters and
the function represented. We propose reparametrizing ReLU NNs as continuous
piecewise linear splines. Using this spline lens, we study learning dynamics in
shallow univariate ReLU NNs, finding unexpected insights and explanations for
several perplexing phenomena. We develop a surprisingly simple and transparent
view of the structure of the loss surface, including its critical and fixed
points, Hessian, and Hessian spectrum. We also show that standard weight
initializations yield very flat functions, and that this flatness, together
with overparametrization and the initial weight scale, is responsible for the
strength and type of implicit regularization, consistent with recent work
arXiv:1906.05827. Our implicit regularization results are complementary to
recent work arXiv:1906.07842, done independently, which showed that
initialization scale critically controls implicit regularization via a
kernel-based argument. Our spline-based approach reproduces their key implicit
regularization results but in a far more intuitive and transparent manner.
Going forward, our spline-based approach is likely to extend naturally to the
multivariate and deep settings, and will play a foundational role in efforts to
understand neural networks. Videos of learning dynamics using a spline-based
visualization are available at http://shorturl.at/tFWZ2.
|
[
{
"created": "Tue, 4 Aug 2020 19:19:49 GMT",
"version": "v1"
}
] |
2020-08-06
|
[
[
"Sahs",
"Justin",
""
],
[
"Pyle",
"Ryan",
""
],
[
"Damaraju",
"Aneel",
""
],
[
"Caro",
"Josue Ortega",
""
],
[
"Tavaslioglu",
"Onur",
""
],
[
"Lu",
"Andy",
""
],
[
"Patel",
"Ankit",
""
]
] |
Understanding the learning dynamics and inductive bias of neural networks (NNs) is hindered by the opacity of the relationship between NN parameters and the function represented. We propose reparametrizing ReLU NNs as continuous piecewise linear splines. Using this spline lens, we study learning dynamics in shallow univariate ReLU NNs, finding unexpected insights and explanations for several perplexing phenomena. We develop a surprisingly simple and transparent view of the structure of the loss surface, including its critical and fixed points, Hessian, and Hessian spectrum. We also show that standard weight initializations yield very flat functions, and that this flatness, together with overparametrization and the initial weight scale, is responsible for the strength and type of implicit regularization, consistent with recent work arXiv:1906.05827. Our implicit regularization results are complementary to recent work arXiv:1906.07842, done independently, which showed that initialization scale critically controls implicit regularization via a kernel-based argument. Our spline-based approach reproduces their key implicit regularization results but in a far more intuitive and transparent manner. Going forward, our spline-based approach is likely to extend naturally to the multivariate and deep settings, and will play a foundational role in efforts to understand neural networks. Videos of learning dynamics using a spline-based visualization are available at http://shorturl.at/tFWZ2.
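The spline view of a shallow univariate ReLU network can be checked numerically: each hidden unit contributes one knot, and the function is affine between consecutive knots. The weights below are arbitrary stand-ins, not values from the paper.

```python
import numpy as np

# Tiny network f(x) = sum_i v_i * relu(w_i * x + b_i); weights are stand-ins.
w = np.array([1.0, -1.5, 2.0, 0.5])
b = np.array([0.5, 1.0, -2.0, 0.0])
v = np.array([1.0, 2.0, -1.0, 0.5])

def f(x):
    return np.maximum(w * x + b, 0.0) @ v

# Unit i switches on/off at x = -b_i / w_i: these are the spline knots.
knots = np.sort(-b / w)

# Between consecutive knots the active-unit pattern is fixed, so f is affine:
# the second finite difference on an equally spaced grid vanishes there.
xs = np.linspace(knots[0] + 0.01, knots[1] - 0.01, 5)
ys = np.array([f(x) for x in xs])
assert np.allclose(np.diff(ys, n=2), 0.0)
```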
|
2401.06785
|
Hongyi Guo
|
Hongyi Guo, Yuanshun Yao, Wei Shen, Jiaheng Wei, Xiaoying Zhang,
Zhaoran Wang, Yang Liu
|
Human-Instruction-Free LLM Self-Alignment with Limited Samples
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aligning large language models (LLMs) with human values is a vital task for
LLM practitioners. Current alignment techniques have several limitations: (1)
requiring a large amount of annotated data; (2) demanding heavy human
involvement; (3) lacking a systematic mechanism to continuously improve. In
this work, we study aligning LLMs to a new domain with limited samples (e.g. <
100). We propose an algorithm that can self-align LLMs iteratively without
active human involvement. Unlike existing works, our algorithm relies on
neither human-crafted instructions nor labeled rewards, significantly reducing
human involvement. In addition, our algorithm can self-improve the alignment
continuously. The key idea is to first retrieve high-quality samples related to
the target domain and use them as In-context Learning examples to generate more
samples. Then we use the self-generated samples to finetune the LLM
iteratively. We show that our method can unlock the LLMs' self-generalization
ability to perform alignment with near-zero human supervision. We test our
algorithm on three benchmarks in safety, truthfulness, and
instruction-following, and show good performance in alignment, domain
adaptability, and scalability.
|
[
{
"created": "Sat, 6 Jan 2024 14:00:12 GMT",
"version": "v1"
}
] |
2024-01-17
|
[
[
"Guo",
"Hongyi",
""
],
[
"Yao",
"Yuanshun",
""
],
[
"Shen",
"Wei",
""
],
[
"Wei",
"Jiaheng",
""
],
[
"Zhang",
"Xiaoying",
""
],
[
"Wang",
"Zhaoran",
""
],
[
"Liu",
"Yang",
""
]
] |
Aligning large language models (LLMs) with human values is a vital task for LLM practitioners. Current alignment techniques have several limitations: (1) requiring a large amount of annotated data; (2) demanding heavy human involvement; (3) lacking a systematic mechanism to continuously improve. In this work, we study aligning LLMs to a new domain with limited samples (e.g. < 100). We propose an algorithm that can self-align LLMs iteratively without active human involvement. Unlike existing works, our algorithm relies on neither human-crafted instructions nor labeled rewards, significantly reducing human involvement. In addition, our algorithm can self-improve the alignment continuously. The key idea is to first retrieve high-quality samples related to the target domain and use them as In-context Learning examples to generate more samples. Then we use the self-generated samples to finetune the LLM iteratively. We show that our method can unlock the LLMs' self-generalization ability to perform alignment with near-zero human supervision. We test our algorithm on three benchmarks in safety, truthfulness, and instruction-following, and show good performance in alignment, domain adaptability, and scalability.
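The first step of the retrieve-then-generate loop described above can be sketched as cosine-similarity retrieval over sample embeddings. Everything below is a hypothetical stand-in (random embeddings, arbitrary pool size and k); the paper's actual retrieval, generation, and finetuning pipeline is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
pool = rng.normal(size=(500, 16))                 # candidate sample embeddings (stand-ins)
pool /= np.linalg.norm(pool, axis=1, keepdims=True)

# A target-domain query embedding, constructed near pool[7] for illustration.
seed = pool[7] + 0.01 * rng.normal(size=16)
seed /= np.linalg.norm(seed)

scores = pool @ seed                              # cosine similarities
topk = np.argsort(scores)[::-1][:8]               # indices of retrieved in-context examples
assert topk[0] == 7                               # the near-duplicate of the query ranks first
```

The retrieved samples would then serve as in-context examples to generate more target-domain data.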
|
2405.14241
|
Chaokang Jiang
|
Chaokang Jiang, Dalong Du, Jiuming Liu, Siting Zhu, Zhenqiang Liu,
Zhuang Ma, Zhujin Liang and Jie Zhou
|
NeuroGauss4D-PCI: 4D Neural Fields and Gaussian Deformation Fields for
Point Cloud Interpolation
|
Under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Point Cloud Interpolation confronts challenges from point sparsity, complex
spatiotemporal dynamics, and the difficulty of deriving complete 3D point
clouds from sparse temporal information. This paper presents NeuroGauss4D-PCI,
which excels at modeling complex non-rigid deformations across varied dynamic
scenes. The method begins with an iterative Gaussian cloud soft clustering
module, offering structured temporal point cloud representations. The proposed
temporal radial basis function Gaussian residual utilizes Gaussian parameter
interpolation over time, enabling smooth parameter transitions and capturing
temporal residuals of Gaussian distributions. Additionally, a 4D Gaussian
deformation field tracks the evolution of these parameters, creating continuous
spatiotemporal deformation fields. A 4D neural field transforms low-dimensional
spatiotemporal coordinates ($x,y,z,t$) into a high-dimensional latent space.
Finally, we adaptively and efficiently fuse the latent features from neural
fields and the geometric features from Gaussian deformation fields.
NeuroGauss4D-PCI outperforms existing methods in point cloud frame
interpolation, delivering leading performance on both object-level (DHB) and
large-scale autonomous driving datasets (NL-Drive), with scalability to
auto-labeling and point cloud densification tasks. The source code is released
at https://github.com/jiangchaokang/NeuroGauss4D-PCI.
|
[
{
"created": "Thu, 23 May 2024 07:21:01 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Jiang",
"Chaokang",
""
],
[
"Du",
"Dalong",
""
],
[
"Liu",
"Jiuming",
""
],
[
"Zhu",
"Siting",
""
],
[
"Liu",
"Zhenqiang",
""
],
[
"Ma",
"Zhuang",
""
],
[
"Liang",
"Zhujin",
""
],
[
"Zhou",
"Jie",
""
]
] |
Point Cloud Interpolation confronts challenges from point sparsity, complex spatiotemporal dynamics, and the difficulty of deriving complete 3D point clouds from sparse temporal information. This paper presents NeuroGauss4D-PCI, which excels at modeling complex non-rigid deformations across varied dynamic scenes. The method begins with an iterative Gaussian cloud soft clustering module, offering structured temporal point cloud representations. The proposed temporal radial basis function Gaussian residual utilizes Gaussian parameter interpolation over time, enabling smooth parameter transitions and capturing temporal residuals of Gaussian distributions. Additionally, a 4D Gaussian deformation field tracks the evolution of these parameters, creating continuous spatiotemporal deformation fields. A 4D neural field transforms low-dimensional spatiotemporal coordinates ($x,y,z,t$) into a high-dimensional latent space. Finally, we adaptively and efficiently fuse the latent features from neural fields and the geometric features from Gaussian deformation fields. NeuroGauss4D-PCI outperforms existing methods in point cloud frame interpolation, delivering leading performance on both object-level (DHB) and large-scale autonomous driving datasets (NL-Drive), with scalability to auto-labeling and point cloud densification tasks. The source code is released at https://github.com/jiangchaokang/NeuroGauss4D-PCI.
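The temporal RBF weighting idea behind the Gaussian parameter interpolation can be illustrated on a toy keyframe sequence. The learned modules of NeuroGauss4D-PCI are not reproduced here; the keyframe times, means, and bandwidth below are all stand-ins.

```python
import numpy as np

times = np.array([0.0, 1.0, 2.0])        # keyframe timestamps (stand-ins)
means = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.5, 0.0],
                  [2.0, 1.0, 0.5]])      # a Gaussian's center at each keyframe

def interp_mean(t, sigma=0.5):
    # Radial-basis weights over keyframe times, normalized to sum to 1,
    # give a smooth interpolation of the Gaussian parameter.
    w = np.exp(-((t - times) ** 2) / (2 * sigma**2))
    w /= w.sum()
    return w @ means

mid = interp_mean(0.5)  # mostly a blend of the first two keyframes
assert 0.4 < mid[0] < 0.7
```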
|
1905.01574
|
Qi Wang
|
Qi Wang, Junyu Gao, Yuan Yuan
|
A Joint Convolutional Neural Networks and Context Transfer for Street
Scenes Labeling
|
IEEE T-ITS 2018
| null |
10.1109/TITS.2017.2726546
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Street scene understanding is an essential task for autonomous driving. One
important step towards this direction is scene labeling, which annotates each
pixel in the images with a correct class label. Although many approaches have
been developed, there are still some weak points. Firstly, many methods are
based on hand-crafted features whose image representation ability is limited.
Secondly, they cannot label foreground objects accurately due to dataset bias.
Thirdly, in the refinement stage, traditional Markov Random Field (MRF)
inference is prone to over-smoothness. To address these problems, this paper
proposes a joint method of priori convolutional neural networks at the
superpixel level (called ``priori s-CNNs'') and soft restricted context
transfer. Our contributions are threefold: (1) A priori s-CNNs model that
learns priori location information at the superpixel level is proposed to
describe various objects discriminatively; (2) A hierarchical data
augmentation method is presented to alleviate dataset bias in the priori
s-CNNs training stage, which improves foreground object labeling
significantly; (3) A soft restricted MRF energy function is defined to improve
the priori s-CNNs model's labeling performance and reduce over-smoothness at
the same time. The proposed approach is verified on the CamVid dataset (11
classes) and the SIFT Flow Street dataset (16 classes) and achieves
competitive performance.
|
[
{
"created": "Sun, 5 May 2019 01:24:19 GMT",
"version": "v1"
}
] |
2019-05-07
|
[
[
"Wang",
"Qi",
""
],
[
"Gao",
"Junyu",
""
],
[
"Yuan",
"Yuan",
""
]
] |
Street scene understanding is an essential task for autonomous driving. One important step towards this direction is scene labeling, which annotates each pixel in the images with a correct class label. Although many approaches have been developed, there are still some weak points. Firstly, many methods are based on hand-crafted features whose image representation ability is limited. Secondly, they cannot label foreground objects accurately due to dataset bias. Thirdly, in the refinement stage, traditional Markov Random Field (MRF) inference is prone to over-smoothness. To address these problems, this paper proposes a joint method of priori convolutional neural networks at the superpixel level (called ``priori s-CNNs'') and soft restricted context transfer. Our contributions are threefold: (1) A priori s-CNNs model that learns priori location information at the superpixel level is proposed to describe various objects discriminatively; (2) A hierarchical data augmentation method is presented to alleviate dataset bias in the priori s-CNNs training stage, which improves foreground object labeling significantly; (3) A soft restricted MRF energy function is defined to improve the priori s-CNNs model's labeling performance and reduce over-smoothness at the same time. The proposed approach is verified on the CamVid dataset (11 classes) and the SIFT Flow Street dataset (16 classes) and achieves competitive performance.
|
2204.14170
|
Benjie Wang
|
Benjie Wang, Matthew Wicker, Marta Kwiatkowska
|
Tractable Uncertainty for Structure Learning
|
ICML 2022 (long talk); 20 pages
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bayesian structure learning allows one to capture uncertainty over the causal
directed acyclic graph (DAG) responsible for generating given data. In this
work, we present Tractable Uncertainty for STructure learning (TRUST), a
framework for approximate posterior inference that relies on probabilistic
circuits as the representation of our posterior belief. In contrast to
sample-based posterior approximations, our representation can capture a much
richer space of DAGs, while also being able to tractably reason about the
uncertainty through a range of useful inference queries. We empirically show
how probabilistic circuits can be used as an augmented representation for
structure learning methods, leading to improvement in both the quality of
inferred structures and posterior uncertainty. Experimental results on
conditional query answering further demonstrate the practical utility of the
representational capacity of TRUST.
|
[
{
"created": "Fri, 29 Apr 2022 15:54:39 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Jul 2022 18:41:28 GMT",
"version": "v2"
}
] |
2022-07-05
|
[
[
"Wang",
"Benjie",
""
],
[
"Wicker",
"Matthew",
""
],
[
"Kwiatkowska",
"Marta",
""
]
] |
Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data. In this work, we present Tractable Uncertainty for STructure learning (TRUST), a framework for approximate posterior inference that relies on probabilistic circuits as the representation of our posterior belief. In contrast to sample-based posterior approximations, our representation can capture a much richer space of DAGs, while also being able to tractably reason about the uncertainty through a range of useful inference queries. We empirically show how probabilistic circuits can be used as an augmented representation for structure learning methods, leading to improvement in both the quality of inferred structures and posterior uncertainty. Experimental results on conditional query answering further demonstrate the practical utility of the representational capacity of TRUST.
|
2308.13265
|
Zahra Taghiyarrenani Ms
|
Zahra Taghiyarrenani, Abdallah Alabdallah, Slawomir Nowaczyk, Sepideh
Pashami
|
Heterogeneous Federated Learning via Personalized Generative Networks
| null | null | null | null |
cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Federated Learning (FL) allows several clients to construct a common global
machine-learning model without having to share their data. FL, however, faces
the challenge of statistical heterogeneity among the clients' data, which
degrades performance and slows down the convergence toward the global model. In
this paper, we provide theoretical proof that minimizing heterogeneity between
clients facilitates the convergence of a global model for every single client.
This becomes particularly important under empirical concept shifts among
clients, rather than merely considering imbalanced classes, which have been
studied until now. Therefore, we propose a method for knowledge transfer
between clients where the server trains client-specific generators. Each
generator generates samples for the corresponding client to remove the conflict
with other clients' models. Experiments conducted on synthetic and real data,
along with a theoretical study, support the effectiveness of our method in
constructing a well-generalizable global model by reducing the conflict between
local models.
|
[
{
"created": "Fri, 25 Aug 2023 09:37:02 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jan 2024 10:16:46 GMT",
"version": "v2"
}
] |
2024-01-26
|
[
[
"Taghiyarrenani",
"Zahra",
""
],
[
"Alabdallah",
"Abdallah",
""
],
[
"Nowaczyk",
"Slawomir",
""
],
[
"Pashami",
"Sepideh",
""
]
] |
Federated Learning (FL) allows several clients to construct a common global machine-learning model without having to share their data. FL, however, faces the challenge of statistical heterogeneity among the clients' data, which degrades performance and slows down the convergence toward the global model. In this paper, we provide theoretical proof that minimizing heterogeneity between clients facilitates the convergence of a global model for every single client. This becomes particularly important under empirical concept shifts among clients, rather than merely considering imbalanced classes, which have been studied until now. Therefore, we propose a method for knowledge transfer between clients where the server trains client-specific generators. Each generator generates samples for the corresponding client to remove the conflict with other clients' models. Experiments conducted on synthetic and real data, along with a theoretical study, support the effectiveness of our method in constructing a well-generalizable global model by reducing the conflict between local models.
|
2407.21091
|
Di Zhang
|
Di Zhang and Suvrajeet Sen
|
The Stochastic Conjugate Subgradient Algorithm For Kernel Support Vector
Machines
|
arXiv admin note: text overlap with arXiv:2407.20944
| null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic First-Order (SFO) methods have been a cornerstone in addressing a
broad spectrum of modern machine learning (ML) challenges. However, their
efficacy is increasingly questioned, especially in large-scale applications
where empirical evidence indicates potential performance limitations. In
response, this paper proposes an innovative method specifically designed for
kernel support vector machines (SVMs). This method not only achieves faster
convergence per iteration but also exhibits enhanced scalability when compared
to conventional SFO techniques. Diverging from traditional sample average
approximation strategies that typically frame kernel SVM as an 'all-in-one'
Quadratic Program (QP), our approach adopts adaptive sampling. This strategy
incrementally refines approximation accuracy on an 'as-needed' basis.
Crucially, this approach also inspires a decomposition-based algorithm,
effectively decomposing parameter selection from error estimation, with the
latter being independently determined for each data point. To exploit the
quadratic nature of the kernel matrix, we introduce a stochastic conjugate
subgradient method. This method preserves many benefits of first-order
approaches while adeptly handling both nonlinearity and non-smooth aspects of
the SVM problem. Thus, it extends beyond the capabilities of standard SFO
algorithms for non-smooth convex optimization. The convergence rate of this
novel method is thoroughly analyzed within this paper. Our experimental results
demonstrate that the proposed algorithm not only maintains but potentially
exceeds the scalability of SFO methods. Moreover, it significantly enhances
both speed and accuracy of the optimization process.
|
[
{
"created": "Tue, 30 Jul 2024 17:03:19 GMT",
"version": "v1"
}
] |
2024-08-01
|
[
[
"Zhang",
"Di",
""
],
[
"Sen",
"Suvrajeet",
""
]
] |
Stochastic First-Order (SFO) methods have been a cornerstone in addressing a broad spectrum of modern machine learning (ML) challenges. However, their efficacy is increasingly questioned, especially in large-scale applications where empirical evidence indicates potential performance limitations. In response, this paper proposes an innovative method specifically designed for kernel support vector machines (SVMs). This method not only achieves faster convergence per iteration but also exhibits enhanced scalability when compared to conventional SFO techniques. Diverging from traditional sample average approximation strategies that typically frame kernel SVM as an 'all-in-one' Quadratic Program (QP), our approach adopts adaptive sampling. This strategy incrementally refines approximation accuracy on an 'as-needed' basis. Crucially, this approach also inspires a decomposition-based algorithm, effectively decomposing parameter selection from error estimation, with the latter being independently determined for each data point. To exploit the quadratic nature of the kernel matrix, we introduce a stochastic conjugate subgradient method. This method preserves many benefits of first-order approaches while adeptly handling both nonlinearity and non-smooth aspects of the SVM problem. Thus, it extends beyond the capabilities of standard SFO algorithms for non-smooth convex optimization. The convergence rate of this novel method is thoroughly analyzed within this paper. Our experimental results demonstrate that the proposed algorithm not only maintains but potentially exceeds the scalability of SFO methods. Moreover, it significantly enhances both speed and accuracy of the optimization process.
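As a contrast to the paper's method, the kind of plain SFO baseline for kernel SVMs that it aims to improve on can be sketched as a kernelized stochastic subgradient loop (Pegasos-style). This is the conventional approach, not the authors' adaptive-sampling conjugate subgradient algorithm, and the data and hyperparameters are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = rng.normal(size=(n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # separable toy labels

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

K = np.array([[rbf(X[i], X[j]) for j in range(n)] for i in range(n)])

lam, T = 0.1, 2000
alpha = np.zeros(n)  # per-point counts of margin violations
for t in range(1, T + 1):
    i = rng.integers(n)                          # sample one point per step (SFO)
    f_i = (K[i] @ (alpha * y)) / (lam * t)       # current decision value at x_i
    if y[i] * f_i < 1.0:
        alpha[i] += 1.0                          # stochastic subgradient update

f = (K @ (alpha * y)) / (lam * T)
train_acc = np.mean(np.sign(f) == y)
assert train_acc > 0.75
```

Such one-sample-per-step methods are cheap per iteration but converge slowly, which is the scalability gap the abstract's adaptive sampling and conjugate subgradient directions target.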
|
2304.11313
|
Tsung-Han Kuo
|
Tsung-Han Kuo, Zhenge Jia, Tei-Wei Kuo, Jingtong Hu
|
BiTrackGAN: Cascaded CycleGANs to Constraint Face Aging
|
V1.0
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increased accuracy of modern computer vision technology, many access
control systems are equipped with face recognition functions for faster
identification. In order to maintain high recognition accuracy, it is necessary
to keep the face database up-to-date. However, it is impractical to collect the
latest facial picture of the system's user through human effort. Thus, we
propose a bottom-up training method for our proposed network to address this
challenge. Essentially, our proposed network is a translation pipeline that
cascades two CycleGAN blocks (a widely used unpaired image-to-image translation
generative adversarial network) called BiTrackGAN. By bottom-up training, it
induces an ideal intermediate state between these two CycleGAN blocks, namely
the constraint mechanism. Experimental results show that BiTrackGAN achieves
more reasonable and diverse cross-age facial synthesis than other
CycleGAN-related methods. To the best of our knowledge, it is a novel and
effective constraint mechanism for more reasonable and accurate aging
synthesis through the CycleGAN approach.
|
[
{
"created": "Sat, 22 Apr 2023 04:35:40 GMT",
"version": "v1"
}
] |
2023-04-25
|
[
[
"Kuo",
"Tsung-Han",
""
],
[
"Jia",
"Zhenge",
""
],
[
"Kuo",
"Tei-Wei",
""
],
[
"Hu",
"Jingtong",
""
]
] |
With the increased accuracy of modern computer vision technology, many access control systems are equipped with face recognition functions for faster identification. In order to maintain high recognition accuracy, it is necessary to keep the face database up-to-date. However, it is impractical to collect the latest facial picture of the system's user through human effort. Thus, we propose a bottom-up training method for our proposed network to address this challenge. Essentially, our proposed network is a translation pipeline that cascades two CycleGAN blocks (a widely used unpaired image-to-image translation generative adversarial network) called BiTrackGAN. By bottom-up training, it induces an ideal intermediate state between these two CycleGAN blocks, namely the constraint mechanism. Experimental results show that BiTrackGAN achieves more reasonable and diverse cross-age facial synthesis than other CycleGAN-related methods. To the best of our knowledge, it is a novel and effective constraint mechanism for more reasonable and accurate aging synthesis through the CycleGAN approach.
|
2206.06260
|
Rahul Pandita
|
Dylan Lee and Austin Henley and Bill Hinshaw and Rahul Pandita
|
OpenCBS: An Open-Source COBOL Defects Benchmark Suite
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
As the current COBOL workforce retires, entry-level developers are left to
keep complex legacy systems maintained and operational. This creates a massive
gap in knowledge and ability as companies are having their veteran developers
replaced with a new, inexperienced workforce. Additionally, the lack of COBOL
and mainframe technology in the current academic curriculum further increases
the learning curve for this new generation of developers. These issues are
becoming even more pressing due to the business-critical nature of these
systems, which makes migrating or replacing the mainframe and COBOL anytime
soon very unlikely. As a result, there is now a huge need for tools and
resources to increase new developers' code comprehension and ability to perform
routine tasks such as debugging and defect location. Extensive work has been
done in the software engineering field on the creation of such resources.
However, the proprietary nature of COBOL and mainframe systems has restricted
the amount of work and the number of open-source tools available for this
domain. To address this issue, our work leverages the publicly available
technical forum data to build an open-source collection of COBOL programs
embodying issues/defects faced by COBOL developers. These programs were
reconstructed and organized in a benchmark suite to facilitate the testing of
developer tools. Our goal is to provide an open-source COBOL benchmark and
testing suite that encourages community contribution and serves as a resource for
researchers and tool-smiths in this domain.
|
[
{
"created": "Mon, 13 Jun 2022 15:42:31 GMT",
"version": "v1"
}
] |
2022-06-14
|
[
[
"Lee",
"Dylan",
""
],
[
"Henley",
"Austin",
""
],
[
"Hinshaw",
"Bill",
""
],
[
"Pandita",
"Rahul",
""
]
] |
As the current COBOL workforce retires, entry-level developers are left to keep complex legacy systems maintained and operational. This creates a massive gap in knowledge and ability as companies are having their veteran developers replaced with a new, inexperienced workforce. Additionally, the lack of COBOL and mainframe technology in the current academic curriculum further increases the learning curve for this new generation of developers. These issues are becoming even more pressing due to the business-critical nature of these systems, which makes migrating or replacing the mainframe and COBOL anytime soon very unlikely. As a result, there is now a huge need for tools and resources to increase new developers' code comprehension and ability to perform routine tasks such as debugging and defect location. Extensive work has been done in the software engineering field on the creation of such resources. However, the proprietary nature of COBOL and mainframe systems has restricted the amount of work and the number of open-source tools available for this domain. To address this issue, our work leverages the publicly available technical forum data to build an open-source collection of COBOL programs embodying issues/defects faced by COBOL developers. These programs were reconstructed and organized in a benchmark suite to facilitate the testing of developer tools. Our goal is to provide an open-source COBOL benchmark and testing suite that encourages community contribution and serves as a resource for researchers and tool-smiths in this domain.
|
2304.11829
|
Lu Zeyu
|
Zeyu Lu, Chengyue Wu, Xinyuan Chen, Yaohui Wang, Lei Bai, Yu Qiao,
Xihui Liu
|
Hierarchical Diffusion Autoencoders and Disentangled Image Manipulation
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion models have attained impressive visual quality for image synthesis.
However, how to interpret and manipulate the latent space of diffusion models
has not been extensively explored. Prior work on diffusion autoencoders encodes
semantic representations into a semantic latent code, which fails to reflect
the rich information of details and the intrinsic feature hierarchy. To
mitigate those limitations, we propose Hierarchical Diffusion Autoencoders
(HDAE) that exploit the fine-grained-to-abstract and low-level-to-high-level
feature hierarchy for the latent space of diffusion models. The hierarchical
latent space of HDAE inherently encodes different abstract levels of semantics
and provides more comprehensive semantic representations. In addition, we
propose a truncated-feature-based approach for disentangled image manipulation.
We demonstrate the effectiveness of our proposed approach with extensive
experiments and applications on image reconstruction, style mixing,
controllable interpolation, detail-preserving and disentangled image
manipulation, and multi-modal semantic image synthesis.
|
[
{
"created": "Mon, 24 Apr 2023 05:35:59 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Apr 2023 17:11:34 GMT",
"version": "v2"
}
] |
2023-04-26
|
[
[
"Lu",
"Zeyu",
""
],
[
"Wu",
"Chengyue",
""
],
[
"Chen",
"Xinyuan",
""
],
[
"Wang",
"Yaohui",
""
],
[
"Bai",
"Lei",
""
],
[
"Qiao",
"Yu",
""
],
[
"Liu",
"Xihui",
""
]
] |
Diffusion models have attained impressive visual quality for image synthesis. However, how to interpret and manipulate the latent space of diffusion models has not been extensively explored. Prior work on diffusion autoencoders encodes semantic representations into a semantic latent code, which fails to reflect the rich information of details and the intrinsic feature hierarchy. To mitigate those limitations, we propose Hierarchical Diffusion Autoencoders (HDAE) that exploit the fine-grained-to-abstract and low-level-to-high-level feature hierarchy for the latent space of diffusion models. The hierarchical latent space of HDAE inherently encodes different abstract levels of semantics and provides more comprehensive semantic representations. In addition, we propose a truncated-feature-based approach for disentangled image manipulation. We demonstrate the effectiveness of our proposed approach with extensive experiments and applications on image reconstruction, style mixing, controllable interpolation, detail-preserving and disentangled image manipulation, and multi-modal semantic image synthesis.
|
2211.01659
|
Dimitrios Tyrovolas
|
Alexandros Papadopoulos, Antonios Lalas, Konstantinos Votis, Dimitrios
Tyrovolas, George K. Karagiannidis, Sotiris Ioannidis, Christos Liaskos
|
An Open Platform for Simulating the Physical Layer of 6G Communication
Systems with Multiple Intelligent Surfaces
| null | null | null | null |
cs.ET eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Reconfigurable Intelligent Surfaces (RIS) constitute a promising technology
that could fulfill the extreme performance and capacity needs of the upcoming
6G wireless networks, by offering software-defined control over wireless
propagation phenomena. Despite the existence of many theoretical models
describing various aspects of RIS from the signal processing perspective (e.g.,
channel fading models), there is no open platform to simulate and study their
actual physical-layer behavior, especially in the multi-RIS case. In this
paper, we develop an open simulation platform, aimed at modeling the
physical-layer electromagnetic coupling and propagation between RIS pairs. We
present the platform by initially designing a basic unit cell, and then
proceeding to progressively model and simulate multiple and larger RISs. The
platform can be used for producing verifiable stochastic models for wireless
communication in multi-RIS deployments, such as vehicle-to-everything (V2X)
communications in autonomous vehicles and cybersecurity schemes, while its code
is freely available to the public.
|
[
{
"created": "Thu, 3 Nov 2022 09:02:59 GMT",
"version": "v1"
}
] |
2022-11-04
|
[
[
"Papadopoulos",
"Alexandros",
""
],
[
"Lalas",
"Antonios",
""
],
[
"Votis",
"Konstantinos",
""
],
[
"Tyrovolas",
"Dimitrios",
""
],
[
"Karagiannidis",
"George K.",
""
],
[
"Ioannidis",
"Sotiris",
""
],
[
"Liaskos",
"Christos",
""
]
] |
Reconfigurable Intelligent Surfaces (RIS) constitute a promising technology that could fulfill the extreme performance and capacity needs of the upcoming 6G wireless networks, by offering software-defined control over wireless propagation phenomena. Despite the existence of many theoretical models describing various aspects of RIS from the signal processing perspective (e.g., channel fading models), there is no open platform to simulate and study their actual physical-layer behavior, especially in the multi-RIS case. In this paper, we develop an open simulation platform, aimed at modeling the physical-layer electromagnetic coupling and propagation between RIS pairs. We present the platform by initially designing a basic unit cell, and then proceeding to progressively model and simulate multiple and larger RISs. The platform can be used for producing verifiable stochastic models for wireless communication in multi-RIS deployments, such as vehicle-to-everything (V2X) communications in autonomous vehicles and cybersecurity schemes, while its code is freely available to the public.
|
1912.01706
|
Nicolas Garneau
|
Nicolas Garneau, Mathieu Godbout, David Beauchemin, Audrey Durand, Luc
Lamontagne
|
A Robust Self-Learning Method for Fully Unsupervised Cross-Lingual
Mappings of Word Embeddings: Making the Method Robustly Reproducible as Well
|
Accept in REPROLANG@LREC2020
| null | null | null |
cs.LG cs.CL stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we reproduce the experiments of Artetxe et al. (2018b)
regarding the robust self-learning method for fully unsupervised cross-lingual
mappings of word embeddings. We show that the reproduction of their method is
indeed feasible with some minor assumptions. We further investigate the
robustness of their model by introducing four new languages that are less
similar to English than the ones proposed by the original paper. In order to
assess the stability of their model, we also conduct a grid search over
sensible hyperparameters. We then propose key recommendations applicable to any
research project in order to deliver fully reproducible research.
|
[
{
"created": "Tue, 3 Dec 2019 22:07:47 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Mar 2020 14:30:50 GMT",
"version": "v2"
}
] |
2020-03-04
|
[
[
"Garneau",
"Nicolas",
""
],
[
"Godbout",
"Mathieu",
""
],
[
"Beauchemin",
"David",
""
],
[
"Durand",
"Audrey",
""
],
[
"Lamontagne",
"Luc",
""
]
] |
In this paper, we reproduce the experiments of Artetxe et al. (2018b) regarding the robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. We show that the reproduction of their method is indeed feasible with some minor assumptions. We further investigate the robustness of their model by introducing four new languages that are less similar to English than the ones proposed by the original paper. In order to assess the stability of their model, we also conduct a grid search over sensible hyperparameters. We then propose key recommendations applicable to any research project in order to deliver fully reproducible research.
|
2009.12577
|
Mohamed Ali Souibgui
|
Mohamed Ali Souibgui and Alicia Forn\'es and Yousri Kessentini and
Crina Tudor
|
A Few-shot Learning Approach for Historical Ciphered Manuscript
Recognition
|
Accepted in the 25th International Conference on Pattern Recognition
(ICPR2020), Milan, Italy 10 - 15 January 2021 (Camera Ready Version)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Encoded (or ciphered) manuscripts are a special type of historical document
that contains encrypted text. The automatic recognition of this kind of
document is challenging because: 1) the cipher alphabet changes from one
document to another, 2) there is a lack of annotated corpora for training and 3)
touching symbols make the symbol segmentation difficult and complex. To
overcome these difficulties, we propose a novel method for handwritten cipher
recognition based on few-shot object detection. Our method first detects all
symbols of a given alphabet in a line image, and then a decoding step maps the
symbol similarity scores to the final sequence of transcribed symbols. By
training on synthetic data, we show that the proposed architecture is able to
recognize handwritten ciphers with unseen alphabets. In addition, if a few
labeled pages with the same alphabet are used for fine-tuning, our method
surpasses existing unsupervised and supervised HTR methods for cipher
recognition.
|
[
{
"created": "Sat, 26 Sep 2020 11:49:18 GMT",
"version": "v1"
}
] |
2020-09-29
|
[
[
"Souibgui",
"Mohamed Ali",
""
],
[
"Fornés",
"Alicia",
""
],
[
"Kessentini",
"Yousri",
""
],
[
"Tudor",
"Crina",
""
]
] |
Encoded (or ciphered) manuscripts are a special type of historical document that contains encrypted text. The automatic recognition of this kind of document is challenging because: 1) the cipher alphabet changes from one document to another, 2) there is a lack of annotated corpora for training and 3) touching symbols make the symbol segmentation difficult and complex. To overcome these difficulties, we propose a novel method for handwritten cipher recognition based on few-shot object detection. Our method first detects all symbols of a given alphabet in a line image, and then a decoding step maps the symbol similarity scores to the final sequence of transcribed symbols. By training on synthetic data, we show that the proposed architecture is able to recognize handwritten ciphers with unseen alphabets. In addition, if a few labeled pages with the same alphabet are used for fine-tuning, our method surpasses existing unsupervised and supervised HTR methods for cipher recognition.
|
1704.03885
|
Alexander Mart\'inez M\'endez
|
H. Asorey, A. Mart\'inez-M\'endez, L.A. N\'u\~nez, A. Valbuena-Delgado
(for the LAGO Collaboration)
|
Lago Distributed Network Of Data Repositories
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
We describe a set of tools, services and strategies of the Latin American
Giant Observatory (LAGO) data repository network, to implement Data
Accessibility, Reproducibility and Trustworthiness.
|
[
{
"created": "Wed, 12 Apr 2017 18:12:48 GMT",
"version": "v1"
}
] |
2019-08-13
|
[
[
"Asorey",
"H.",
"",
"for the LAGO Collaboration"
],
[
"Martínez-Méndez",
"A.",
"",
"for the LAGO Collaboration"
],
[
"Núñez",
"L. A.",
"",
"for the LAGO Collaboration"
],
[
"Valbuena-Delgado",
"A.",
"",
"for the LAGO Collaboration"
]
] |
We describe a set of tools, services and strategies of the Latin American Giant Observatory (LAGO) data repository network, to implement Data Accessibility, Reproducibility and Trustworthiness.
|
2107.12100
|
Christoph Gote
|
Christoph Gote and Vincenzo Perri and Ingo Scholtes
|
Predicting Influential Higher-Order Patterns in Temporal Network Data
|
18 pages, 4 figures, 2 tables
| null | null | null |
cs.SI cs.IT cs.LG math.IT physics.data-an stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Networks are frequently used to model complex systems comprised of
interacting elements. While edges capture the topology of direct interactions,
the true complexity of many systems originates from higher-order patterns in
paths by which nodes can indirectly influence each other. Path data,
representing ordered sequences of consecutive direct interactions, can be used
to model these patterns. On the one hand, to avoid overfitting, such models
should only consider those higher-order patterns for which the data provide
sufficient statistical evidence. On the other hand, we hypothesise that network
models, which capture only direct interactions, underfit higher-order patterns
present in data. Consequently, both approaches are likely to misidentify
influential nodes in complex networks. We contribute to this issue by proposing
five centrality measures based on MOGen, a multi-order generative model that
accounts for all indirect influences up to a maximum distance but disregards
influences at higher distances. We compare MOGen-based centralities to
equivalent measures for network models and path data in a prediction experiment
where we aim to identify influential nodes in out-of-sample data. Our results
show strong evidence supporting our hypothesis. MOGen consistently outperforms
both the network model and path-based prediction. We further show that the
performance difference between MOGen and the path-based approach disappears if
we have sufficient observations, confirming that the error is due to
overfitting.
|
[
{
"created": "Mon, 26 Jul 2021 10:44:46 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Oct 2022 14:09:43 GMT",
"version": "v2"
}
] |
2022-10-04
|
[
[
"Gote",
"Christoph",
""
],
[
"Perri",
"Vincenzo",
""
],
[
"Scholtes",
"Ingo",
""
]
] |
Networks are frequently used to model complex systems comprised of interacting elements. While edges capture the topology of direct interactions, the true complexity of many systems originates from higher-order patterns in paths by which nodes can indirectly influence each other. Path data, representing ordered sequences of consecutive direct interactions, can be used to model these patterns. On the one hand, to avoid overfitting, such models should only consider those higher-order patterns for which the data provide sufficient statistical evidence. On the other hand, we hypothesise that network models, which capture only direct interactions, underfit higher-order patterns present in data. Consequently, both approaches are likely to misidentify influential nodes in complex networks. We contribute to this issue by proposing five centrality measures based on MOGen, a multi-order generative model that accounts for all indirect influences up to a maximum distance but disregards influences at higher distances. We compare MOGen-based centralities to equivalent measures for network models and path data in a prediction experiment where we aim to identify influential nodes in out-of-sample data. Our results show strong evidence supporting our hypothesis. MOGen consistently outperforms both the network model and path-based prediction. We further show that the performance difference between MOGen and the path-based approach disappears if we have sufficient observations, confirming that the error is due to overfitting.
|
2103.00879
|
Kailun Yang
|
Shuo Chen, Kailun Yang, Rainer Stiefelhagen
|
DR-TANet: Dynamic Receptive Temporal Attention Network for Street Scene
Change Detection
|
8 pages, 9 figures, 6 tables. Accepted to IEEE Intelligent Vehicles
Symposium 2021 (IV2021). Code is available at
https://github.com/Herrccc/DR-TANet
| null | null | null |
cs.CV cs.LG cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Street scene change detection continues to capture researchers' interests in
the computer vision community. It aims to identify the changed regions of the
paired street-view images captured at different times. The state-of-the-art
network based on the encoder-decoder architecture leverages the feature maps at
the corresponding level between two channels to gain sufficient information of
changes. Still, the efficiency of feature extraction, feature correlation
calculation, and even the whole network requires further improvement. This
paper proposes temporal attention and explores the impact of the
dependency-scope size of temporal attention on the performance of change
detection. In addition,
based on the Temporal Attention Module (TAM), we introduce a more efficient and
light-weight version - Dynamic Receptive Temporal Attention Module (DRTAM) and
propose the Concurrent Horizontal and Vertical Attention (CHVA) to improve the
accuracy of the network on specific challenging entities. On street scene
datasets `GSV', `TSUNAMI' and `VL-CMU-CD', our approach gains excellent
performance, establishing new state-of-the-art scores without bells and
whistles, while maintaining high efficiency applicable in autonomous vehicles.
|
[
{
"created": "Mon, 1 Mar 2021 10:01:35 GMT",
"version": "v1"
},
{
"created": "Fri, 28 May 2021 09:15:49 GMT",
"version": "v2"
}
] |
2021-05-31
|
[
[
"Chen",
"Shuo",
""
],
[
"Yang",
"Kailun",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] |
Street scene change detection continues to capture researchers' interests in the computer vision community. It aims to identify the changed regions of the paired street-view images captured at different times. The state-of-the-art network based on the encoder-decoder architecture leverages the feature maps at the corresponding level between two channels to gain sufficient information of changes. Still, the efficiency of feature extraction, feature correlation calculation, and even the whole network requires further improvement. This paper proposes temporal attention and explores the impact of the dependency-scope size of temporal attention on the performance of change detection. In addition, based on the Temporal Attention Module (TAM), we introduce a more efficient and light-weight version - Dynamic Receptive Temporal Attention Module (DRTAM) and propose the Concurrent Horizontal and Vertical Attention (CHVA) to improve the accuracy of the network on specific challenging entities. On street scene datasets `GSV', `TSUNAMI' and `VL-CMU-CD', our approach gains excellent performance, establishing new state-of-the-art scores without bells and whistles, while maintaining high efficiency applicable in autonomous vehicles.
|
2201.02450
|
Masahito Hayashi
|
Masahito Hayashi
|
Analytical calculation formulas for capacities of classical and
classical-quantum channels
| null | null |
10.1109/TIT.2022.3215178
| null |
cs.IT math.IT quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
We derive an analytical calculation formula for the channel capacity of a
classical channel without any iteration, while its existing algorithms require
iterations and the number of iterations depends on the required precision
level. Hence, ours is the first analytical formula without any iteration. We
apply the obtained formula to examples and see how the obtained formula works
in these examples. Then, we extend it to the channel capacity of a
classical-quantum (cq-) channel. Many existing studies proposed algorithms for
a cq-channel and all of them require iterations. Our extended analytical
algorithm also requires no iteration and outputs the exact optimum values.
|
[
{
"created": "Fri, 7 Jan 2022 13:39:09 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Feb 2023 09:01:04 GMT",
"version": "v2"
}
] |
2023-02-15
|
[
[
"Hayashi",
"Masahito",
""
]
] |
We derive an analytical calculation formula for the channel capacity of a classical channel without any iteration, while its existing algorithms require iterations and the number of iterations depends on the required precision level. Hence, ours is the first analytical formula without any iteration. We apply the obtained formula to examples and see how the obtained formula works in these examples. Then, we extend it to the channel capacity of a classical-quantum (cq-) channel. Many existing studies proposed algorithms for a cq-channel and all of them require iterations. Our extended analytical algorithm also requires no iteration and outputs the exact optimum values.
|
1904.03569
|
Zaoyu Lu
|
Feng Shu, Zaoyu Lu, Shuo Zhang, Jin Wang, Xiaobo Zhou, Linlin Sun,
Jinhui Lu, Jinyong Lin, Wenlong Cai
|
Optimal Power Allocation for Secure Directional Modulation Networks with
a Full-duplex UAV User
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates a secure unmanned aerial vehicle (UAV)-aided
communication network based on directional modulation (DM), in which one
ground base station (Alice), one legitimate full-duplex (FD) user (Bob) and
one illegal receiver (Eve) are involved. In this network, Alice acts as a
control center to transmit the confidential message and artificial noise (AN).
The UAV user, moving along a linear flight trajectory, is intended to receive
the useful information from Alice. At the same time, it also sends AN signals
to further interfere with Eve's channel. Aiming at maximizing the secrecy rate
(SR) during the UAV flight process, a joint optimization problem is formulated
over the power allocation (PA) factors, the beamforming vector, and the AN
projection matrices. For simplicity, maximum ratio transmission, null-space
projection and the leakage-based method are applied to form the transmit
beamforming vector, the AN projection matrix at Alice, and the AN projection
vector at Bob, respectively. Following this, the optimization problem reduces
to a bivariate optimization programme with two PA factors. We put forward an
alternating iterative algorithm to optimize the two PA factors. Simulation
results demonstrate that the proposed strategy for FD mode achieves a higher
SR than the half-duplex (HD) mode, and outperforms the FD mode with a fixed PA
strategy.
|
[
{
"created": "Sun, 7 Apr 2019 02:37:59 GMT",
"version": "v1"
}
] |
2019-04-09
|
[
[
"Shu",
"Feng",
""
],
[
"Lu",
"Zaoyu",
""
],
[
"Zhang",
"Shuo",
""
],
[
"Wang",
"Jin",
""
],
[
"Zhou",
"Xiaobo",
""
],
[
"Sun",
"Linlin",
""
],
[
"Lu",
"Jinhui",
""
],
[
"Lin",
"Jinyong",
""
],
[
"Cai",
"Wenlong",
""
]
] |
This paper investigates a secure unmanned aerial vehicle (UAV)-aided communication network based on directional modulation (DM), in which one ground base station (Alice), one legitimate full-duplex (FD) user (Bob) and one illegal receiver (Eve) are involved. In this network, Alice acts as a control center to transmit the confidential message and artificial noise (AN). The UAV user, moving along a linear flight trajectory, is intended to receive the useful information from Alice. At the same time, it also sends AN signals to further interfere with Eve's channel. Aiming at maximizing the secrecy rate (SR) during the UAV flight process, a joint optimization problem is formulated over the power allocation (PA) factors, the beamforming vector, and the AN projection matrices. For simplicity, maximum ratio transmission, null-space projection and the leakage-based method are applied to form the transmit beamforming vector, the AN projection matrix at Alice, and the AN projection vector at Bob, respectively. Following this, the optimization problem reduces to a bivariate optimization programme with two PA factors. We put forward an alternating iterative algorithm to optimize the two PA factors. Simulation results demonstrate that the proposed strategy for FD mode achieves a higher SR than the half-duplex (HD) mode, and outperforms the FD mode with a fixed PA strategy.
|
2103.04590
|
Pranav Rajpurkar
|
Siyu Shi, Ishaan Malhi, Kevin Tran, Andrew Y. Ng, Pranav Rajpurkar
|
CheXseen: Unseen Disease Detection for Deep Learning Interpretation of
Chest X-rays
|
Accepted at MIDL Conference 2021. Previous version accepted at ACM
Conference on Health, Inference, and Learning (ACM-CHIL) Workshop 2021
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We systematically evaluate the performance of deep learning models in the
presence of diseases not labeled for or present during training. First, we
evaluate whether deep learning models trained on a subset of diseases (seen
diseases) can detect the presence of any one of a larger set of diseases. We
find that models tend to falsely classify diseases outside of the subset
(unseen diseases) as "no disease". Second, we evaluate whether models trained
on seen diseases can detect seen diseases when co-occurring with diseases
outside the subset (unseen diseases). We find that models are still able to
detect seen diseases even when co-occurring with unseen diseases. Third, we
evaluate whether feature representations learned by models may be used to
detect the presence of unseen diseases given a small labeled set of unseen
diseases. We find that the penultimate layer of the deep neural network
provides useful features for unseen disease detection. Our results can inform
the safe clinical deployment of deep learning models trained on a
non-exhaustive set of disease classes.
|
[
{
"created": "Mon, 8 Mar 2021 08:13:21 GMT",
"version": "v1"
},
{
"created": "Mon, 17 May 2021 05:15:55 GMT",
"version": "v2"
}
] |
2021-05-18
|
[
[
"Shi",
"Siyu",
""
],
[
"Malhi",
"Ishaan",
""
],
[
"Tran",
"Kevin",
""
],
[
"Ng",
"Andrew Y.",
""
],
[
"Rajpurkar",
"Pranav",
""
]
] |
We systematically evaluate the performance of deep learning models in the presence of diseases not labeled for or present during training. First, we evaluate whether deep learning models trained on a subset of diseases (seen diseases) can detect the presence of any one of a larger set of diseases. We find that models tend to falsely classify diseases outside of the subset (unseen diseases) as "no disease". Second, we evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases). We find that models are still able to detect seen diseases even when co-occurring with unseen diseases. Third, we evaluate whether feature representations learned by models may be used to detect the presence of unseen diseases given a small labeled set of unseen diseases. We find that the penultimate layer of the deep neural network provides useful features for unseen disease detection. Our results can inform the safe clinical deployment of deep learning models trained on a non-exhaustive set of disease classes.
|
1908.00059
|
Yu Chen
|
Yu Chen, Lingfei Wu and Mohammed J. Zaki
|
GraphFlow: Exploiting Conversation Flow with Graph Neural Networks for
Conversational Machine Comprehension
|
7 pages. Accepted by IJCAI 2020. Final Version. The SOLE copyright
holder is IJCAI (https://www.ijcai.org), all rights reserved
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational machine comprehension (MC) has proven significantly more
challenging compared to traditional MC since it requires better utilization of
conversation history. However, most existing approaches do not effectively
capture conversation history and thus have trouble handling questions involving
coreference or ellipsis. Moreover, when reasoning over passage text, most of
them simply treat it as a word sequence without exploring rich semantic
relationships among words. In this paper, we first propose a simple yet
effective graph structure learning technique to dynamically construct a
question and conversation history aware context graph at each conversation
turn. Then we propose a novel Recurrent Graph Neural Network, and based on
that, we introduce a flow mechanism to model the temporal dependencies in a
sequence of context graphs. The proposed GraphFlow model can effectively
capture conversational flow in a dialog, and shows competitive performance
compared to existing state-of-the-art methods on CoQA, QuAC and DoQA
benchmarks. In addition, visualization experiments show that our proposed model
can offer good interpretability for the reasoning process.
|
[
{
"created": "Wed, 31 Jul 2019 19:23:38 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jul 2020 17:43:03 GMT",
"version": "v2"
}
] |
2020-07-16
|
[
[
"Chen",
"Yu",
""
],
[
"Wu",
"Lingfei",
""
],
[
"Zaki",
"Mohammed J.",
""
]
] |
Conversational machine comprehension (MC) has proven significantly more challenging compared to traditional MC since it requires better utilization of conversation history. However, most existing approaches do not effectively capture conversation history and thus have trouble handling questions involving coreference or ellipsis. Moreover, when reasoning over passage text, most of them simply treat it as a word sequence without exploring rich semantic relationships among words. In this paper, we first propose a simple yet effective graph structure learning technique to dynamically construct a question and conversation history aware context graph at each conversation turn. Then we propose a novel Recurrent Graph Neural Network, and based on that, we introduce a flow mechanism to model the temporal dependencies in a sequence of context graphs. The proposed GraphFlow model can effectively capture conversational flow in a dialog, and shows competitive performance compared to existing state-of-the-art methods on CoQA, QuAC and DoQA benchmarks. In addition, visualization experiments show that our proposed model can offer good interpretability for the reasoning process.
|
2305.19213
|
Xiao Liu
|
Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao
|
The Magic of IF: Investigating Causal Reasoning Abilities in Large
Language Models of Code
|
Findings of ACL 2023. Code and data are available at
https://github.com/xxxiaol/magic-if
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Causal reasoning, the ability to identify cause-and-effect relationships, is
crucial in human thinking. Although large language models (LLMs) succeed in
many NLP tasks, it is still challenging for them to conduct complex causal
reasoning like abductive reasoning and counterfactual reasoning. Given the fact
that programming code may express causal relations more often and explicitly
with conditional statements like ``if``, we want to explore whether Code-LLMs
acquire better causal reasoning abilities. Our experiments show that compared
to text-only LLMs, Code-LLMs with code prompts are significantly better in
causal reasoning. We further intervene on the prompts from different aspects,
and discover that the programming structure is crucial in code prompt design,
while Code-LLMs are robust towards format perturbations.
|
[
{
"created": "Tue, 30 May 2023 17:02:58 GMT",
"version": "v1"
}
] |
2023-05-31
|
[
[
"Liu",
"Xiao",
""
],
[
"Yin",
"Da",
""
],
[
"Zhang",
"Chen",
""
],
[
"Feng",
"Yansong",
""
],
[
"Zhao",
"Dongyan",
""
]
] |
Causal reasoning, the ability to identify cause-and-effect relationships, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning like abductive reasoning and counterfactual reasoning. Given the fact that programming code may express causal relations more often and explicitly with conditional statements like ``if``, we want to explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that compared to text-only LLMs, Code-LLMs with code prompts are significantly better in causal reasoning. We further intervene on the prompts from different aspects, and discover that the programming structure is crucial in code prompt design, while Code-LLMs are robust towards format perturbations.
|
2109.08565
|
Ahmed Magooda
|
Ahmed Magooda, Mohamed Elaraby, Diane Litman
|
Exploring Multitask Learning for Low-Resource Abstractive Summarization
|
To appear in proceedings of EMNLP 2021 (https://2021.emnlp.org/)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores the effect of using multitask learning for abstractive
summarization in the context of small training corpora. In particular, we
incorporate four different tasks (extractive summarization, language modeling,
concept detection, and paraphrase detection) both individually and in
combination, with the goal of enhancing the target task of abstractive
summarization via multitask learning. We show that for many task combinations,
a model trained in a multitask setting outperforms a model trained only for
abstractive summarization, with no additional summarization data introduced.
Additionally, we do a comprehensive search and find that certain tasks (e.g.
paraphrase detection) consistently benefit abstractive summarization, not only
when combined with other tasks but also when using different architectures and
training corpora.
|
[
{
"created": "Fri, 17 Sep 2021 14:23:58 GMT",
"version": "v1"
}
] |
2021-09-20
|
[
[
"Magooda",
"Ahmed",
""
],
[
"Elaraby",
"Mohamed",
""
],
[
"Litman",
"Diane",
""
]
] |
This paper explores the effect of using multitask learning for abstractive summarization in the context of small training corpora. In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target task of abstractive summarization via multitask learning. We show that for many task combinations, a model trained in a multitask setting outperforms a model trained only for abstractive summarization, with no additional summarization data introduced. Additionally, we do a comprehensive search and find that certain tasks (e.g. paraphrase detection) consistently benefit abstractive summarization, not only when combined with other tasks but also when using different architectures and training corpora.
|
2103.12213
|
Michael Weber
|
Michael Weber, Tassilo Wald, J. Marius Z\"ollner
|
Temporal Feature Networks for CNN based Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For reliable environment perception, the use of temporal information is
essential in some situations. Especially for object detection, sometimes a
situation can only be understood in the right perspective through temporal
information. Since image-based object detectors are currently based almost
exclusively on CNN architectures, an extension of their feature extraction with
temporal features seems promising.
Within this work we investigate different architectural components for a
CNN-based temporal information extraction. We present a Temporal Feature
Network which is based on the insights gained from our architectural
investigations. This network is trained from scratch without any
ImageNet-based pre-training, as these images are not available with temporal
information. The object detector based on this network is evaluated against the
non-temporal counterpart as baseline and achieves competitive results in an
evaluation on the KITTI object detection dataset.
|
[
{
"created": "Mon, 22 Mar 2021 22:39:42 GMT",
"version": "v1"
}
] |
2021-03-24
|
[
[
"Weber",
"Michael",
""
],
[
"Wald",
"Tassilo",
""
],
[
"Zöllner",
"J. Marius",
""
]
] |
For reliable environment perception, the use of temporal information is essential in some situations. Especially for object detection, sometimes a situation can only be understood in the right perspective through temporal information. Since image-based object detectors are currently based almost exclusively on CNN architectures, an extension of their feature extraction with temporal features seems promising. Within this work we investigate different architectural components for a CNN-based temporal information extraction. We present a Temporal Feature Network which is based on the insights gained from our architectural investigations. This network is trained from scratch without any ImageNet-based pre-training, as these images are not available with temporal information. The object detector based on this network is evaluated against the non-temporal counterpart as baseline and achieves competitive results in an evaluation on the KITTI object detection dataset.
|
1305.1681
|
Yury Makarychev
|
Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan
|
Bilu-Linial Stable Instances of Max Cut and Minimum Multiway Cut
|
24 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the notion of stability proposed by Bilu and Linial. We obtain
an exact polynomial-time algorithm for $\gamma$-stable Max Cut instances with
$\gamma \geq c\sqrt{\log n}\log\log n$ for some absolute constant $c > 0$. Our
algorithm is robust: it never returns an incorrect answer; if the instance is
$\gamma$-stable, it finds the maximum cut, otherwise, it either finds the
maximum cut or certifies that the instance is not $\gamma$-stable. We prove
that there is no robust polynomial-time algorithm for $\gamma$-stable instances
of Max Cut when $\gamma < \alpha_{SC}(n/2)$, where $\alpha_{SC}$ is the best
approximation factor for Sparsest Cut with non-uniform demands.
Our algorithm is based on semidefinite programming. We show that the standard
SDP relaxation for Max Cut (with $\ell_2^2$ triangle inequalities) is integral
if $\gamma \geq D_{\ell_2^2\to \ell_1}(n)$, where $D_{\ell_2^2\to \ell_1}(n)$
is the least distortion with which every $n$ point metric space of negative
type embeds into $\ell_1$. On the negative side, we show that the SDP
relaxation is not integral when $\gamma < D_{\ell_2^2\to \ell_1}(n/2)$.
Moreover, there is no tractable convex relaxation for $\gamma$-stable instances
of Max Cut when $\gamma < \alpha_{SC}(n/2)$. That suggests that solving
$\gamma$-stable instances with $\gamma =o(\sqrt{\log n})$ might be difficult or
impossible.
Our results significantly improve previously known results. The best
previously known algorithm for $\gamma$-stable instances of Max Cut required
that $\gamma \geq c\sqrt{n}$ (for some $c > 0$) [Bilu, Daniely, Linial, and
Saks]. No hardness results were known for the problem. Additionally, we present
an algorithm for 4-stable instances of Minimum Multiway Cut. We also study a
relaxed notion of weak stability.
|
[
{
"created": "Tue, 7 May 2013 23:54:03 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jul 2013 13:31:34 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Nov 2013 23:25:58 GMT",
"version": "v3"
}
] |
2013-11-13
|
[
[
"Makarychev",
"Konstantin",
""
],
[
"Makarychev",
"Yury",
""
],
[
"Vijayaraghavan",
"Aravindan",
""
]
] |
We investigate the notion of stability proposed by Bilu and Linial. We obtain an exact polynomial-time algorithm for $\gamma$-stable Max Cut instances with $\gamma \geq c\sqrt{\log n}\log\log n$ for some absolute constant $c > 0$. Our algorithm is robust: it never returns an incorrect answer; if the instance is $\gamma$-stable, it finds the maximum cut, otherwise, it either finds the maximum cut or certifies that the instance is not $\gamma$-stable. We prove that there is no robust polynomial-time algorithm for $\gamma$-stable instances of Max Cut when $\gamma < \alpha_{SC}(n/2)$, where $\alpha_{SC}$ is the best approximation factor for Sparsest Cut with non-uniform demands. Our algorithm is based on semidefinite programming. We show that the standard SDP relaxation for Max Cut (with $\ell_2^2$ triangle inequalities) is integral if $\gamma \geq D_{\ell_2^2\to \ell_1}(n)$, where $D_{\ell_2^2\to \ell_1}(n)$ is the least distortion with which every $n$ point metric space of negative type embeds into $\ell_1$. On the negative side, we show that the SDP relaxation is not integral when $\gamma < D_{\ell_2^2\to \ell_1}(n/2)$. Moreover, there is no tractable convex relaxation for $\gamma$-stable instances of Max Cut when $\gamma < \alpha_{SC}(n/2)$. That suggests that solving $\gamma$-stable instances with $\gamma =o(\sqrt{\log n})$ might be difficult or impossible. Our results significantly improve previously known results. The best previously known algorithm for $\gamma$-stable instances of Max Cut required that $\gamma \geq c\sqrt{n}$ (for some $c > 0$) [Bilu, Daniely, Linial, and Saks]. No hardness results were known for the problem. Additionally, we present an algorithm for 4-stable instances of Minimum Multiway Cut. We also study a relaxed notion of weak stability.
|
1012.5625
|
Paolo Magrassi
|
Paolo Magrassi
|
Free and Open-Source Software is not an Emerging Property but Rather the
Result of Studied Design
| null | null | null | null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Free and open source software (FOSS) is considered by many, along with
Wikipedia, the proof of an ongoing paradigm shift from hierarchically-managed
and market-driven production of knowledge to heterarchical, collaborative and
commons-based production styles. From this perspective, it has become
commonplace to refer to FOSS as a manifestation of collective intelligence where
deliverables and artefacts emerge by virtue of mere cooperation, with no need
for supervising leadership. The paper argues that this assumption is based on
limited understanding of the software development process, and may lead to
wrong conclusions as to the potential of peer production. The development of a
less than trivial piece of software, irrespective of whether it be FOSS or
proprietary, is a complex cooperative effort requiring the participation of
many (often thousands of) individuals. A subset of the participants always play
the role of leading system and subsystem designers, determining architecture
and functionality; the rest of the people work "underneath" them in a logical,
functional sense. While new and powerful forces, including FOSS, are clearly at
work in the post-industrial, networked economy, the currently ingenuous stage
of research in the field of collective intelligence and networked cooperation
must give way to a deeper level of consciousness, which requires an
understanding of the software development process.
|
[
{
"created": "Mon, 27 Dec 2010 15:23:37 GMT",
"version": "v1"
}
] |
2010-12-30
|
[
[
"Magrassi",
"Paolo",
""
]
] |
Free and open source software (FOSS) is considered by many, along with Wikipedia, the proof of an ongoing paradigm shift from hierarchically-managed and market-driven production of knowledge to heterarchical, collaborative and commons-based production styles. From this perspective, it has become commonplace to refer to FOSS as a manifestation of collective intelligence where deliverables and artefacts emerge by virtue of mere cooperation, with no need for supervising leadership. The paper argues that this assumption is based on limited understanding of the software development process, and may lead to wrong conclusions as to the potential of peer production. The development of a less than trivial piece of software, irrespective of whether it be FOSS or proprietary, is a complex cooperative effort requiring the participation of many (often thousands of) individuals. A subset of the participants always play the role of leading system and subsystem designers, determining architecture and functionality; the rest of the people work "underneath" them in a logical, functional sense. While new and powerful forces, including FOSS, are clearly at work in the post-industrial, networked economy, the currently ingenuous stage of research in the field of collective intelligence and networked cooperation must give way to a deeper level of consciousness, which requires an understanding of the software development process.
|
1707.05518
|
Mohammad Khodaei
|
Mohammad Khodaei, Hongyu Jin, and Panos Papadimitratos
|
SECMACE: Scalable and Robust Identity and Credential Management
Infrastructure in Vehicular Communication Systems
|
14 pages, 9 figures, 10 tables, IEEE Transactions on Intelligent
Transportation Systems
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several years of academic and industrial research efforts have converged to a
common understanding on fundamental security building blocks for the upcoming
Vehicular Communication (VC) systems. There is a growing consensus towards
deploying a special-purpose identity and credential management infrastructure,
i.e., a Vehicular Public-Key Infrastructure (VPKI), enabling pseudonymous
authentication, with standardization efforts towards that direction. In spite
of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and
harmonization efforts (Car2Car Communication Consortium (C2C-CC)), significant
questions remain unanswered towards deploying a VPKI. Deep understanding of the
VPKI, a central building block of secure and privacy-preserving VC systems, is
still lacking. This paper contributes to the closing of this gap. We present
SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI
standards specifications. We provide a detailed description of our
state-of-the-art VPKI that improves upon existing proposals in terms of
security and privacy protection, and efficiency. SECMACE facilitates
multi-domain operations in the VC systems and enhances user privacy, notably
preventing linking pseudonyms based on timing information and offering
increased protection even against honest-but-curious VPKI entities. We propose
multiple policies for the vehicle-VPKI interactions and, based on these
policies and two large-scale mobility trace datasets, evaluate the full-blown
implementation of SECMACE. As VPKI performance has received very little
attention thus far, our results are notable: modest computing resources can
support a large area of vehicles with very low delays, and the most promising
policy in terms of privacy protection can be supported with moderate overhead.
|
[
{
"created": "Tue, 18 Jul 2017 08:19:32 GMT",
"version": "v1"
}
] |
2017-07-19
|
[
[
"Khodaei",
"Mohammad",
""
],
[
"Jin",
"Hongyu",
""
],
[
"Papadimitratos",
"Panos",
""
]
] |
Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming Vehicular Communication (VC) systems. There is a growing consensus towards deploying a special-purpose identity and credential management infrastructure, i.e., a Vehicular Public-Key Infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts towards that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts (Car2Car Communication Consortium (C2C-CC)), significant questions remain unanswered towards deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle-VPKI interactions and, based on these policies and two large-scale mobility trace datasets, evaluate the full-blown implementation of SECMACE. As VPKI performance has received very little attention thus far, our results are notable: modest computing resources can support a large area of vehicles with very low delays, and the most promising policy in terms of privacy protection can be supported with moderate overhead.
|
2310.14893
|
Samuel Ackerman
|
Dipak Wani, Samuel Ackerman, Eitan Farchi, Xiaotong Liu, Hau-wen
Chang, Sarasi Lalithsena
|
Data Drift Monitoring for Log Anomaly Detection Pipelines
| null | null | null | null |
cs.LG cs.SY eess.SY stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Logs enable the monitoring of infrastructure status and the performance of
associated applications. Logs are also invaluable for diagnosing the root
causes of any problems that may arise. Log Anomaly Detection (LAD) pipelines
automate the detection of anomalies in logs, providing assistance to site
reliability engineers (SREs) in system diagnosis. Log patterns change over
time, necessitating updates to the LAD model defining the `normal' log activity
profile. In this paper, we introduce a Bayes Factor-based drift detection
method that identifies when intervention, retraining, and updating of the LAD
model are required with human involvement. We illustrate our method using
sequences of log activity, both from unaltered data, and simulated activity
with controlled levels of anomaly contamination, based on real collected log
data.
|
[
{
"created": "Tue, 17 Oct 2023 09:10:40 GMT",
"version": "v1"
}
] |
2023-10-24
|
[
[
"Wani",
"Dipak",
""
],
[
"Ackerman",
"Samuel",
""
],
[
"Farchi",
"Eitan",
""
],
[
"Liu",
"Xiaotong",
""
],
[
"Chang",
"Hau-wen",
""
],
[
"Lalithsena",
"Sarasi",
""
]
] |
Logs enable the monitoring of infrastructure status and the performance of associated applications. Logs are also invaluable for diagnosing the root causes of any problems that may arise. Log Anomaly Detection (LAD) pipelines automate the detection of anomalies in logs, providing assistance to site reliability engineers (SREs) in system diagnosis. Log patterns change over time, necessitating updates to the LAD model defining the `normal' log activity profile. In this paper, we introduce a Bayes Factor-based drift detection method that identifies when intervention, retraining, and updating of the LAD model are required with human involvement. We illustrate our method using sequences of log activity, both from unaltered data, and simulated activity with controlled levels of anomaly contamination, based on real collected log data.
|
0902.3056
|
Rahul Jain
|
Rahul Jain, Hartmut Klauck
|
New Results in the Simultaneous Message Passing Model
|
16 pages, version 1
| null | null | null |
cs.DC cs.CC cs.IT math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consider the following Simultaneous Message Passing (SMP) model for computing
a relation f subset of X x Y x Z. In this model Alice, on input x in X and Bob,
on input y in Y, send one message each to a third party Referee who then
outputs a z in Z such that (x,y,z) in f. We first show optimal 'Direct sum'
results for all relations f in this model, both in the quantum and classical
settings, in the situation where we allow shared resources (shared entanglement
in quantum protocols and public coins in classical protocols) between Alice and
Referee and Bob and Referee and no shared resource between Alice and Bob. This
implies that, in this model, the communication required to compute k
simultaneous instances of f, with constant success overall, is at least k-times
the communication required to compute one instance with constant success.
This in particular implies an earlier Direct sum result, shown by
Chakrabarti, Shi, Wirth and Yao, 2001, for the Equality function (and a class
of other so-called robust functions), in the classical SMP model with no shared
resources between any parties.
Furthermore, we investigate the gap between the SMP model and the one-way
model in communication complexity and exhibit a partial function that is
exponentially more expensive in the former if quantum communication with
entanglement is allowed, compared to the latter even in the deterministic case.
|
[
{
"created": "Wed, 18 Feb 2009 06:38:51 GMT",
"version": "v1"
}
] |
2009-02-26
|
[
[
"Jain",
"Rahul",
""
],
[
"Klauck",
"Hartmut",
""
]
] |
Consider the following Simultaneous Message Passing (SMP) model for computing a relation f subset of X x Y x Z. In this model Alice, on input x in X and Bob, on input y in Y, send one message each to a third party Referee who then outputs a z in Z such that (x,y,z) in f. We first show optimal 'Direct sum' results for all relations f in this model, both in the quantum and classical settings, in the situation where we allow shared resources (shared entanglement in quantum protocols and public coins in classical protocols) between Alice and Referee and Bob and Referee and no shared resource between Alice and Bob. This implies that, in this model, the communication required to compute k simultaneous instances of f, with constant success overall, is at least k-times the communication required to compute one instance with constant success. This in particular implies an earlier Direct sum result, shown by Chakrabarti, Shi, Wirth and Yao, 2001, for the Equality function (and a class of other so-called robust functions), in the classical SMP model with no shared resources between any parties. Furthermore, we investigate the gap between the SMP model and the one-way model in communication complexity and exhibit a partial function that is exponentially more expensive in the former if quantum communication with entanglement is allowed, compared to the latter even in the deterministic case.
|
1408.1847
|
Marc Heinrich
|
Marc Heinrich and Alexander Munteanu and Christian Sohler
|
Asymptotically exact streaming algorithms
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new computational model for data streams: asymptotically exact
streaming algorithms. These algorithms have an approximation ratio that tends
to one as the length of the stream goes to infinity while the memory used by
the algorithm is restricted to polylog(n) size. Thus, the output of the
algorithm is optimal in the limit. We show positive results in our model for a
series of important problems that have been discussed in the streaming
literature. These include computing the frequency moments, clustering problems
and least squares regression. Our results also include lower bounds for
problems, which have streaming algorithms in the ordinary setting but do not
allow for sublinear space algorithms in our model.
|
[
{
"created": "Fri, 8 Aug 2014 13:27:31 GMT",
"version": "v1"
}
] |
2014-08-11
|
[
[
"Heinrich",
"Marc",
""
],
[
"Munteanu",
"Alexander",
""
],
[
"Sohler",
"Christian",
""
]
] |
We introduce a new computational model for data streams: asymptotically exact streaming algorithms. These algorithms have an approximation ratio that tends to one as the length of the stream goes to infinity while the memory used by the algorithm is restricted to polylog(n) size. Thus, the output of the algorithm is optimal in the limit. We show positive results in our model for a series of important problems that have been discussed in the streaming literature. These include computing the frequency moments, clustering problems and least squares regression. Our results also include lower bounds for problems, which have streaming algorithms in the ordinary setting but do not allow for sublinear space algorithms in our model.
|
1808.09785
|
Farhan Khawar
|
Farhan Khawar and Nevin L. Zhang
|
Using Taste Groups for Collaborative Filtering
|
RecSys 2018 LBRS. arXiv admin note: substantial text overlap with
arXiv:1704.01889
| null | null | null |
cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implicit feedback is the simplest form of user feedback that can be used for
item recommendation. It is easy to collect and domain independent. However,
there is a lack of negative examples. Existing works circumvent this problem by
making various assumptions regarding the unconsumed items, which fail to hold
when the user did not consume an item because she was unaware of it. In this
paper, we propose a novel method for addressing the lack of negative
examples in implicit feedback. The motivation is that if there is a large group
of users who share the same taste and none of them consumed an item, then it is
highly likely that the item is irrelevant to this taste. We use Hierarchical
Latent Tree Analysis (HLTA) to identify taste-based user groups and make
recommendations for a user based on her memberships in the groups.
|
[
{
"created": "Tue, 28 Aug 2018 15:53:14 GMT",
"version": "v1"
}
] |
2018-08-30
|
[
[
"Khawar",
"Farhan",
""
],
[
"Zhang",
"Nevin L.",
""
]
] |
Implicit feedback is the simplest form of user feedback that can be used for item recommendation. It is easy to collect and domain independent. However, there is a lack of negative examples. Existing works circumvent this problem by making various assumptions regarding the unconsumed items, which fail to hold when the user did not consume an item because she was unaware of it. In this paper, we propose a novel method for addressing the lack of negative examples in implicit feedback. The motivation is that if there is a large group of users who share the same taste and none of them consumed an item, then it is highly likely that the item is irrelevant to this taste. We use Hierarchical Latent Tree Analysis (HLTA) to identify taste-based user groups and make recommendations for a user based on her memberships in the groups.
|
1707.06286
|
Amin Jourabloo
|
Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
|
Pose-Invariant Face Alignment with a Single CNN
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face alignment has witnessed substantial progress in the last decade. One of
the recent focuses has been aligning a dense 3D face shape to face images with
large head poses. The dominant technology used is based on the cascade of
regressors, e.g., CNN, which has shown promising results. Nonetheless, the
cascade of CNNs suffers from several drawbacks, e.g., lack of end-to-end
training, hand-crafted features and slow training speed. To address these
issues, we propose a new layer, named visualization layer, that can be
integrated into the CNN architecture and enables joint optimization with
different loss functions. Extensive evaluation of the proposed method on
multiple datasets demonstrates state-of-the-art accuracy, while reducing the
training time by more than half compared to the typical cascade of CNNs. In
addition, we compare multiple CNN architectures with the visualization layer to
further demonstrate the advantage of its utilization.
|
[
{
"created": "Wed, 19 Jul 2017 20:34:08 GMT",
"version": "v1"
}
] |
2017-07-21
|
[
[
"Jourabloo",
"Amin",
""
],
[
"Ye",
"Mao",
""
],
[
"Liu",
"Xiaoming",
""
],
[
"Ren",
"Liu",
""
]
] |
Face alignment has witnessed substantial progress in the last decade. One of the recent focuses has been aligning a dense 3D face shape to face images with large head poses. The dominant technology used is based on the cascade of regressors, e.g., CNN, which has shown promising results. Nonetheless, the cascade of CNNs suffers from several drawbacks, e.g., lack of end-to-end training, hand-crafted features and slow training speed. To address these issues, we propose a new layer, named visualization layer, that can be integrated into the CNN architecture and enables joint optimization with different loss functions. Extensive evaluation of the proposed method on multiple datasets demonstrates state-of-the-art accuracy, while reducing the training time by more than half compared to the typical cascade of CNNs. In addition, we compare multiple CNN architectures with the visualization layer to further demonstrate the advantage of its utilization.
|