Dataset schema (each record below lists its fields in this order):

    field                type      range / values
    id                   string    9-16 chars
    title                string    4-278 chars
    abstract             string    3-4.08k chars
    cs.HC  cs.CE  cs.SD  cs.SI  cs.AI  cs.IR
    cs.LG  cs.RO  cs.CL  cs.IT  cs.SY  cs.CV
    cs.CR  cs.CY  cs.MA  cs.NE  cs.DB  Other
                         bool      2 classes (true/false) each; a record's
                                   labels are the category columns set to true
    __index_level_0__    int64     0-541k
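Since each record carries its categories as 18 parallel boolean columns, decoding a record means collecting the columns whose flag is true. The sketch below is purely illustrative (the `record` dict and helper name are hypothetical, not code shipped with the dataset); only the column names and the example values are taken from the schema and the first record.

```python
# The 18 boolean category columns, in schema order.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(record: dict) -> list[str]:
    """Return the category columns whose boolean flag is set."""
    return [c for c in CATEGORY_COLUMNS if record.get(c, False)]

# Example built from the first record below, which carries a single label.
example = {c: False for c in CATEGORY_COLUMNS}
example.update({
    "id": "2412.06441",
    "title": "BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation",
    "cs.CL": True,
    "__index_level_0__": 515247,
})
print(active_labels(example))  # -> ['cs.CL']
```

Because the flags are independent booleans rather than a single class column, a record may carry zero, one, or several labels (this is a multi-label dataset).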
2412.06441
BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation
In recent years, Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) have significantly enhanced the adaptability of large-scale pre-trained models. Weight-Decomposed Low-Rank Adaptation (DoRA) improves upon LoRA by separating the magnitude and direction components of the weight matrix, leading to superior performance. However, DoRA's improvements are limited to the vertical dimension, resulting in an asymmetrical pattern between horizontal and vertical dimensions. This paper introduces BoRA, an innovative extension of LoRA and DoRA, characterized by symmetrical properties across horizontal and vertical dimensions. Our approach optimizes the weight matrix symmetrically by adjusting both column-wise and row-wise magnitudes. Extensive experiments demonstrate that BoRA surpasses state-of-the-art PEFT methods, including LoRA and DoRA, achieving superior results across various benchmarks.
labels: cs.CL
index: 515,247
2110.11166
Driving the Herd: Search Engines as Content Influencers
In competitive search settings such as the Web, many documents' authors (publishers) opt to have their documents highly ranked for some queries. To this end, they modify the documents - specifically, their content - in response to induced rankings. Thus, the search engine affects the content in the corpus via its ranking decisions. We present a first study of the ability of search engines to drive pre-defined, targeted, content effects in the corpus using simple techniques. The first is based on the herding phenomenon - a celebrated result from the economics literature - and the second is based on biasing the relevance ranking function. The types of content effects we study are either topical or touch on specific document properties - length and inclusion of query terms. Analysis of ranking competitions we organized between incentivized publishers shows that the types of content effects we target can indeed be attained by applying our suggested techniques. These findings have important implications with regard to the role of search engines in shaping the corpus.
labels: cs.IR
index: 262,382
2201.11871
Infrastructure-Based Object Detection and Tracking for Cooperative Driving Automation: A Survey
Object detection plays a fundamental role in enabling Cooperative Driving Automation (CDA), which is regarded as the revolutionary solution to addressing safety, mobility, and sustainability issues of contemporary transportation systems. Although current computer vision technologies could provide satisfactory object detection results in occlusion-free scenarios, the perception performance of onboard sensors could be inevitably limited by the range and occlusion. Owing to flexible position and pose for sensor installation, infrastructure-based detection and tracking systems can enhance the perception capability for connected vehicles and thus quickly become one of the most popular research topics. In this paper, we review the research progress for infrastructure-based object detection and tracking systems. Architectures of roadside perception systems based on different types of sensors are reviewed to show a high-level description of the workflows for infrastructure-based perception systems. Roadside sensors and different perception methodologies are reviewed and analyzed with detailed literature to provide a low-level explanation for specific methods followed by Datasets and Simulators to draw an overall landscape of infrastructure-based object detection and tracking methods. Discussions are conducted to point out current opportunities, open problems, and anticipated future trends.
labels: cs.CV
index: 277,431
2404.03324
A Comparative Analysis of Word-Level Metric Differential Privacy: Benchmarking The Privacy-Utility Trade-off
The application of Differential Privacy to Natural Language Processing techniques has emerged in relevance in recent years, with an increasing number of studies published in established NLP outlets. In particular, the adaptation of Differential Privacy for use in NLP tasks has first focused on the $\textit{word-level}$, where calibrated noise is added to word embedding vectors to achieve "noisy" representations. To this end, several implementations have appeared in the literature, each presenting an alternative method of achieving word-level Differential Privacy. Although each of these includes its own evaluation, no comparative analysis has been performed to investigate the performance of such methods relative to each other. In this work, we conduct such an analysis, comparing seven different algorithms on two NLP tasks with varying hyperparameters, including the $\textit{epsilon ($\varepsilon$)}$ parameter, or privacy budget. In addition, we provide an in-depth analysis of the results with a focus on the privacy-utility trade-off, as well as open-source our implementation code for further reproduction. As a result of our analysis, we give insight into the benefits and challenges of word-level Differential Privacy, and accordingly, we suggest concrete steps forward for the research field.
labels: cs.CL
index: 444,211
1709.06161
PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training
Massive data exist on user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints. Cloud-based training schemes provide beneficial services but suffer from potential privacy risks due to excessive user data collection. To enable cloud-based DNN training while protecting data privacy simultaneously, we propose to leverage the intermediate representations of the data, which is achieved by splitting the DNNs and deploying them separately onto local platforms and the cloud. The local neural network (NN) is used to generate the feature representations. To avoid local training and protect data privacy, the local NN is derived from pre-trained NNs. The cloud NN is then trained based on the extracted intermediate representations for the target learning task. We validate the idea of DNN splitting by characterizing the dependency of privacy loss and classification accuracy on the local NN topology for a convolutional NN (CNN) based image classification task. Based on the characterization, we further propose PrivyNet to determine the local NN topology, which optimizes the accuracy of the target learning task under the constraints on privacy loss, local computation, and storage. The efficiency and effectiveness of PrivyNet are demonstrated with the CIFAR-10 dataset.
labels: cs.LG
index: 81,040
2403.19324
Rapid nonlinear convex guidance using a monomial method
This paper addresses the challenge of accommodating nonlinear dynamics and constraints in rapid trajectory optimization, envisioned for use in the context of onboard guidance. We present a novel framework that uniquely employs overparameterized monomial coordinates and pre-computed fundamental solution expansions to facilitate rapid optimization while minimizing real-time computational requirements. The fundamental solution expansions are pre-computed using differential algebra. Unlike traditional approaches that repeatedly evaluate the nonlinear dynamics and constraints as part of complex shooting or collocation-based schemes, this method replaces the nonlinearity inherent to dynamics and constraint functions entirely with a computationally simpler manifold constraint. With this approach, trajectory optimization is posed efficiently as a path planning problem on the manifold. This problem is entirely convex except for the manifold constraint, readily lending itself to solution via sequential convex programming. We demonstrate the effectiveness of our approach in computing fast and accurate delta-V optimal solutions for long-range spacecraft rendezvous, including problems with nonlinear state constraints.
labels: cs.SY
index: 442,298
2404.07556
Attention-Aware Laparoscopic Image Desmoking Network with Lightness Embedding and Hybrid Guided Embedding
This paper presents a novel method of smoke removal from laparoscopic images. Due to the heterogeneous nature of surgical smoke, a two-stage network is proposed to estimate the smoke distribution and reconstruct a clear, smoke-free surgical scene. The utilization of the lightness channel plays a pivotal role in providing vital information pertaining to smoke density. The reconstruction of the smoke-free image is guided by a hybrid embedding, which combines the estimated smoke mask with the initial image. Experimental results demonstrate that the proposed method boasts a Peak Signal to Noise Ratio that is $2.79\%$ higher than the state-of-the-art methods, while also exhibiting a remarkable $38.2\%$ reduction in run-time. Overall, the proposed method offers comparable or even superior performance in terms of both smoke removal quality and computational efficiency when compared to existing state-of-the-art methods. This work will be publicly available at http://homepage.hit.edu.cn/wpgao
labels: cs.CV
index: 445,885
2211.09945
VeriCompress: A Tool to Streamline the Synthesis of Verified Robust Compressed Neural Networks from Scratch
AI's widespread integration has led to neural networks (NNs) deployment on edge and similar limited-resource platforms for safety-critical scenarios. Yet, NN's fragility raises concerns about reliable inference. Moreover, constrained platforms demand compact networks. This study introduces VeriCompress, a tool that automates the search and training of compressed models with robustness guarantees. These models are well-suited for safety-critical applications and adhere to predefined architecture and size limitations, making them deployable on resource-restricted platforms. The method trains models 2-3 times faster than the state-of-the-art approaches, surpassing relevant baseline approaches by average accuracy and robustness gains of 15.1 and 9.8 percentage points, respectively. When deployed on a resource-restricted generic platform, these models require 5-8 times less memory and 2-4 times less inference time than models used in verified robustness literature. Our comprehensive evaluation across various model architectures and datasets, including MNIST, CIFAR, SVHN, and a relevant pedestrian detection dataset, showcases VeriCompress's capacity to identify compressed verified robust models with reduced computation overhead compared to current standards. This underscores its potential as a valuable tool for end users, such as developers of safety-critical applications on edge or Internet of Things platforms, empowering them to create suitable models for safety-critical, resource-constrained platforms in their respective domains.
labels: cs.CV
index: 331,155
1805.01365
Optimal Resource Allocation in Full-Duplex Ambient Backscatter Communication Networks for Green IoT
Ambient backscatter communication (AmBC) enables wireless-powered backscatter devices (BDs) to transmit information over ambient radio-frequency (RF) carriers without using an RF transmitter, and thus has emerged as a promising technology for green Internet-of-Things. This paper considers an AmBC network in which a full-duplex access point (FAP) simultaneously transmits downlink orthogonal frequency division multiplexing (OFDM) signals to its legacy user (LU) and receives uplink signals backscattered from multiple BDs in a time-division-multiple-access manner. To enhance the system performance from multiple design dimensions and ensure fairness, we maximize the minimum throughput among all BDs by jointly optimizing the BDs' backscatter time portions, the BDs' power reflection coefficients, and the FAP's subcarrier power allocation, subject to the LU's throughput constraint, the BDs' harvested-energy constraints, and other practical constraints. As such, we propose an efficient iterative algorithm for solving the formulated non-convex problem by leveraging the block coordinated decent and successive convex optimization techniques. We further show the convergence of the proposed algorithm, and analyze its complexity. Finally, extensive simulation results show that the proposed joint design achieves significant throughput gains as compared to the benchmark scheme with equal resource allocation.
labels: cs.IT
index: 96,644
2007.05060
Program Synthesis with Pragmatic Communication
Program synthesis techniques construct or infer programs from user-provided specifications, such as input-output examples. Yet most specifications, especially those given by end-users, leave the synthesis problem radically ill-posed, because many programs may simultaneously satisfy the specification. Prior work resolves this ambiguity by using various inductive biases, such as a preference for simpler programs. This work introduces a new inductive bias derived by modeling the program synthesis task as rational communication, drawing insights from recursive reasoning models of pragmatics. Given a specification, we score a candidate program both on its consistency with the specification, and also on whether a rational speaker would choose this particular specification to communicate that program. We develop efficient algorithms for such an approach when learning from input-output examples, and build a pragmatic program synthesizer over a simple grid-like layout domain. A user study finds that end-user participants communicate more effectively with the pragmatic program synthesizer than with a non-pragmatic one.
labels: cs.AI, Other
index: 186,549
2406.16982
Research on Disease Prediction Model Construction Based on Computer AI deep Learning Technology
The prediction of disease risk factors can screen vulnerable groups for effective prevention and treatment, so as to reduce their morbidity and mortality. Machine learning demands high-quality labeled data, and label noise in medical big data poses a great challenge to efficient disease risk warning methods. This project therefore studies a robust learning algorithm and applies it to the early warning of infectious disease risk. A dynamic truncated loss model is proposed, which combines the traditional mutual entropy implicit weight feature with the mean variation feature and is robust to label noise. A lower bound on training loss is constructed, and a method based on sampling rate is proposed to reduce the gradient of suspected samples, limiting the influence of noise on training results. The effectiveness of this method under different types of noise is verified using a stroke screening dataset as an example. The method enables robust learning from data containing label noise.
labels: cs.AI, cs.LG
index: 467,379
2003.12563
DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search
Efficient search is a core issue in Neural Architecture Search (NAS). It is difficult for conventional NAS algorithms to directly search the architectures on large-scale tasks like ImageNet. In general, the cost of GPU hours for NAS grows with regard to training dataset size and candidate set size. One common way is searching on a smaller proxy dataset (e.g., CIFAR-10) and then transferring to the target task (e.g., ImageNet). These architectures optimized on proxy data are not guaranteed to be optimal on the target task. Another common way is learning with a smaller candidate set, which may require expert knowledge and indeed betrays the essence of NAS. In this paper, we present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner. Our method is based on an interesting observation that the learning speed for blocks in deep neural networks is related to the difficulty of recognizing distinct categories. We carefully design a progressive data adapted pruning strategy for efficient architecture search. It quickly trims low-performing blocks on a subset of the target dataset (e.g., easy classes), and then gradually finds the best blocks on the whole target dataset. At this time, the original candidate set becomes as compact as possible, providing a faster search in the target task. Experiments on ImageNet verify the effectiveness of our approach. It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint. It supports an augmented search space (i.e., more candidate blocks) to efficiently search the best-performing architecture.
labels: cs.CV
index: 169,944
2410.14685
Leveraging Event Streams with Deep Reinforcement Learning for End-to-End UAV Tracking
In this paper, we present our proposed approach for active tracking to increase the autonomy of Unmanned Aerial Vehicles (UAVs) using event cameras, low-energy imaging sensors that offer significant advantages in speed and dynamic range. The proposed tracking controller is designed to respond to visual feedback from the mounted event sensor, adjusting the drone movements to follow the target. To leverage the full motion capabilities of a quadrotor and the unique properties of event sensors, we propose an end-to-end deep-reinforcement learning (DRL) framework that maps raw sensor data from event streams directly to control actions for the UAV. To learn an optimal policy under highly variable and challenging conditions, we opt for a simulation environment with domain randomization for effective transfer to real-world environments. We demonstrate the effectiveness of our approach through experiments in challenging scenarios, including fast-moving targets and changing lighting conditions, which result in improved generalization capabilities.
labels: cs.AI, cs.RO, cs.NE
index: 500,145
2412.00797
Online Poisoning Attack Against Reinforcement Learning under Black-box Environments
This paper proposes an online environment poisoning algorithm tailored for reinforcement learning agents operating in a black-box setting, where an adversary deliberately manipulates training data to lead the agent toward a mischievous policy. In contrast to prior studies that primarily investigate white-box settings, we focus on a scenario characterized by \textit{unknown} environment dynamics to the attacker and a \textit{flexible} reinforcement learning algorithm employed by the targeted agent. We first propose an attack scheme that is capable of poisoning the reward functions and state transitions. The poisoning task is formalized as a constrained optimization problem, following the framework of \cite{ma2019policy}. Given the transition probabilities are unknown to the attacker in a black-box environment, we apply a stochastic gradient descent algorithm, where the exact gradients are approximated using sample-based estimates. A penalty-based method along with a bilevel reformulation is then employed to transform the problem into an unconstrained counterpart and to circumvent the double-sampling issue. The algorithm's effectiveness is validated through a maze environment.
labels: cs.LG, cs.CR
index: 512,821
2312.17050
KeDuSR: Real-World Dual-Lens Super-Resolution via Kernel-Free Matching
Dual-lens super-resolution (SR) is a practical scenario for reference (Ref) based SR by utilizing the telephoto image (Ref) to assist the super-resolution of the low-resolution wide-angle image (LR input). Different from general RefSR, the Ref in dual-lens SR only covers the overlapped field of view (FoV) area. However, current dual-lens SR methods rarely utilize these specific characteristics and directly perform dense matching between the LR input and Ref. Due to the resolution gap between LR and Ref, the matching may miss the best-matched candidate and destroy the consistent structures in the overlapped FoV area. Different from them, we propose to first align the Ref with the center region (namely the overlapped FoV area) of the LR input by combining global warping and local warping to make the aligned Ref be sharp and consistent. Then, we formulate the aligned Ref and LR center as value-key pairs, and the corner region of the LR is formulated as queries. In this way, we propose a kernel-free matching strategy by matching between the LR-corner (query) and LR-center (key) regions, and the corresponding aligned Ref (value) can be warped to the corner region of the target. Our kernel-free matching strategy avoids the resolution gap between LR and Ref, which makes our network have better generalization ability. In addition, we construct a DuSR-Real dataset with (LR, Ref, HR) triples, where the LR and HR are well aligned. Experiments on three datasets demonstrate that our method outperforms the second-best method by a large margin. Our code and dataset are available at https://github.com/ZifanCui/KeDuSR.
labels: cs.CV
index: 418,607
1501.03879
A new ADMM algorithm for the Euclidean median and its application to robust patch regression
The Euclidean Median (EM) of a set of points $\Omega$ in a Euclidean space is the point x minimizing the (weighted) sum of the Euclidean distances of x to the points in $\Omega$. While there exists no closed-form expression for the EM, it can nevertheless be computed using iterative methods such as the Weiszfeld algorithm. The EM has classically been used as a robust estimator of centrality for multivariate data. It was recently demonstrated that the EM can be used to perform robust patch-based denoising of images by generalizing the popular Non-Local Means algorithm. In this paper, we propose a novel algorithm for computing the EM (and its box-constrained counterpart) using variable splitting and the method of augmented Lagrangian. The attractive feature of this approach is that the subproblems involved in the ADMM-based optimization of the augmented Lagrangian can be resolved using simple closed-form projections. The proposed ADMM solver is used for robust patch-based image denoising and is shown to exhibit faster convergence compared to an existing solver.
labels: cs.CV
index: 39,303
2101.01137
Gauss-Legendre Features for Gaussian Process Regression
Gaussian processes provide a powerful probabilistic kernel learning framework, which allows learning high quality nonparametric regression models via methods such as Gaussian process regression. Nevertheless, the learning phase of Gaussian process regression requires massive computations which are not realistic for large datasets. In this paper, we present a Gauss-Legendre quadrature based approach for scaling up Gaussian process regression via a low rank approximation of the kernel matrix. We utilize the structure of the low rank approximation to achieve effective hyperparameter learning, training and prediction. Our method is very much inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration. However, our method is capable of generating a high quality approximation to the kernel using a number of features which is poly-logarithmic in the number of training points, while similar guarantees would require a number of features that is at the very least linear in the number of training points when using random Fourier features. Furthermore, the structure of the low-rank approximation that our method builds is subtly different from the one generated by random Fourier features, and this enables much more efficient hyperparameter learning. The utility of our method for learning with low-dimensional datasets is demonstrated using numerical experiments.
labels: cs.LG, Other
index: 214,291
2406.13267
The Kinetics Observer: A Tightly Coupled Estimator for Legged Robots
In this paper, we propose the "Kinetics Observer", a novel estimator addressing the challenge of state estimation for legged robots using proprioceptive sensors (encoders, IMU and force/torque sensors). Based on a Multiplicative Extended Kalman Filter, the Kinetics Observer allows the real-time simultaneous estimation of contact and perturbation forces, and of the robot's kinematics, which are accurate enough to perform proprioceptive odometry. Thanks to a visco-elastic model of the contacts linking their kinematics to the ones of the centroid of the robot, the Kinetics Observer ensures a tight coupling between the whole-body kinematics and dynamics of the robot. This coupling entails a redundancy of the measurements that enhances the robustness and the accuracy of the estimation. This estimator was tested on two humanoid robots performing long distance walking on even terrain and non-coplanar multi-contact locomotion.
labels: cs.RO
index: 465,779
2312.03707
Multi-label Text Classification using GloVe and Neural Network Models
This study addresses the challenges of multi-label text classification. The difficulties arise from imbalanced data sets, varied text lengths, and numerous subjective feature labels. Existing solutions include traditional machine learning and deep neural networks for predictions. However, both approaches have their limitations. Traditional machine learning often overlooks the associations between words, while deep neural networks, despite their better classification performance, come with increased training complexity and time. This paper proposes a method utilizing the bag-of-words model approach based on the GloVe model and the CNN-BiLSTM network. The principle is to use the word vector matrix trained by the GloVe model as the input for the text embedding layer. Given that the GloVe model requires no further training, the neural network model can be trained more efficiently. The method achieves an accuracy rate of 87.26% on the test set and an F1 score of 0.8737, showcasing promising results.
labels: cs.CL
index: 413,366
2306.12356
Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP
In this paper, we study representation learning in partially observable Markov Decision Processes (POMDPs), where the agent learns a decoder function that maps a series of high-dimensional raw observations to a compact representation and uses it for more efficient exploration and planning. We focus our attention on the sub-classes of \textit{$\gamma$-observable} and \textit{decodable POMDPs}, for which it has been shown that statistically tractable learning is possible, but there has not been any computationally efficient algorithm. We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU) to perform representation learning and achieve efficient sample complexity, while only calling supervised learning computational oracles. We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
labels: cs.LG
index: 374,912
2408.15946
Sigma Flows for Image and Data Labeling and Learning Structured Prediction
This paper introduces the sigma flow model for the prediction of structured labelings of data observed on Riemannian manifolds, including Euclidean image domains as special case. The approach combines the Laplace-Beltrami framework for image denoising and enhancement, introduced by Sochen, Kimmel and Malladi about 25 years ago, and the assignment flow approach introduced and studied by the authors. The sigma flow arises as Riemannian gradient flow of generalized harmonic energies and thus is governed by a nonlinear geometric PDE which determines a harmonic map from a closed Riemannian domain manifold to a statistical manifold, equipped with the Fisher-Rao metric from information geometry. A specific ingredient of the sigma flow is the mutual dependency of the Riemannian metric of the domain manifold on the evolving state. This makes the approach amenable to machine learning in a specific way, by realizing this dependency through a mapping with compact time-variant parametrization that can be learned from data. Proof of concept experiments demonstrate the expressivity of the sigma flow model and prediction performance. Structural similarities to transformer network architectures and networks generated by the geometric integration of sigma flows are pointed out, which highlights the connection to deep learning and, conversely, may stimulate the use of geometric design principles for structured prediction in other areas of scientific machine learning.
labels: cs.LG, cs.CV
index: 484,132
2312.15263
Self-Supervised Depth Completion Guided by 3D Perception and Geometry Consistency
Depth completion, aiming to predict dense depth maps from sparse depth measurements, plays a crucial role in many computer vision related applications. Deep learning approaches have demonstrated overwhelming success in this task. However, high-precision depth completion without relying on ground-truth data, which is usually costly, still remains challenging. The reason lies in the neglect of 3D structural information in most previous unsupervised solutions, causing inaccurate spatial propagation and mixed-depth problems. To alleviate the above challenges, this paper explores the utilization of 3D perceptual features and multi-view geometry consistency to devise a high-precision self-supervised depth completion method. Firstly, a 3D perceptual spatial propagation algorithm is constructed with a point cloud representation and an attention weighting mechanism to capture more reasonable and favorable neighboring features during the iterative depth propagation process. Secondly, the multi-view geometric constraints between adjacent views are explicitly incorporated to guide the optimization of the whole depth completion model in a self-supervised manner. Extensive experiments on the benchmark datasets NYU-Depthv2 and VOID demonstrate that the proposed model achieves state-of-the-art depth completion performance compared with other unsupervised methods, and competitive performance compared with previous supervised methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
417,945
1612.00137
RMPE: Regional Multi-person Pose Estimation
Multi-person pose estimation in the wild is challenging. Although state-of-the-art human detectors have demonstrated good performance, small errors in localization and recognition are inevitable. These errors can cause failures for a single-person pose estimator (SPPE), especially for methods that solely depend on human detection results. In this paper, we propose a novel regional multi-person pose estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes. Our framework consists of three components: Symmetric Spatial Transformer Network (SSTN), Parametric Pose Non-Maximum-Suppression (NMS), and Pose-Guided Proposals Generator (PGPG). Our method is able to handle inaccurate bounding boxes and redundant detections, allowing it to achieve a 17% increase in mAP over the state-of-the-art methods on the MPII (multi person) dataset. Our model and source codes are publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
64,830
2005.03834
Streamline-Based Control of Underwater Gliders in 3D Environments
Autonomous underwater gliders use buoyancy control to achieve forward propulsion via a sawtooth-like, rise-and-fall trajectory. Because gliders are slow-moving relative to ocean currents, glider control must consider the effect of oceanic flows. In previous work, we proposed a method to control underwater vehicles in the (horizontal) plane by describing such oceanic flows in terms of streamlines, which are the level sets of stream functions. However, the general analytical form of streamlines in 3D is unknown. In this paper, we show how streamline control can be used in 3D environments by assuming a 2.5D model of ocean currents. We provide an efficient algorithm that acts as a steering function for a single rise or dive component of the glider's sawtooth trajectory, integrate this algorithm within a sampling-based motion planning framework to support long-distance path planning, and provide several examples in simulation in comparison with a baseline method. The key to our method's computational efficiency is an elegant dimensionality reduction to a 1D control region. Streamline-based control can be integrated within various sampling-based frameworks and allows for online planning for gliders in complicated oceanic flows.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
176,273
1211.3384
An Efficient Soft Decoder of Block Codes Based on Compact Genetic Algorithm
Soft-decision decoding is an NP-hard problem of great interest to developers of communication systems. We present an efficient soft-decision decoding of linear block codes based on the compact genetic algorithm (cGA) and compare its performance with various other decoding algorithms, including Shakeel's algorithm. The proposed algorithm uses the dual code, in contrast to Shakeel's algorithm, which uses the code itself. Hence, this new approach reduces the decoding complexity of high-rate codes. The complexity and an optimized version of this new algorithm are also presented and discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
19,738
1706.09288
On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence
In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
76,113
2402.06014
Trustful Coopetitive Infrastructures for the New Space Exploration Era
In the new space economy, space agencies, large enterprises, and start-ups aim to launch space multi-robot systems (MRS) for various in-situ resource utilization (ISRU) purposes, such as mapping, soil evaluation, and utility provisioning. However, these stakeholders' competing economic interests may hinder effective collaboration on a centralized digital platform. To address this issue, neutral and transparent infrastructures could facilitate coordination and value exchange among heterogeneous space MRS. While related work has expressed legitimate concerns about the technical challenges associated with blockchain use in space, we argue that weighing its potential economic benefits against its drawbacks is necessary. This paper presents a novel architectural framework and a comprehensive set of requirements for integrating blockchain technology in MRS, aiming to enhance coordination and data integrity in space exploration missions. We explored distributed ledger technology (DLT) to design a non-proprietary architecture for heterogeneous MRS and validated the prototype in a simulated lunar environment. The analyses of our implementation suggest global ISRU efficiency improvements for map exploration, compared to a corresponding group of individually acting robots, and that fostering a coopetitive environment may provide additional revenue opportunities for stakeholders.
false
true
false
false
false
false
false
true
false
false
true
false
false
false
true
false
false
true
428,118
2010.13766
Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills
Parameterized movement primitives have been extensively used for imitation learning of robotic tasks. However, the high-dimensionality of the parameter space hinders the improvement of such primitives in the reinforcement learning (RL) setting, especially for learning with physical robots. In this paper we propose a novel view on handling the demonstrated trajectories for acquiring low-dimensional, non-linear latent dynamics, using mixtures of probabilistic principal component analyzers (MPPCA) on the movements' parameter space. Moreover, we introduce a new contextual off-policy RL algorithm, named LAtent-Movements Policy Optimization (LAMPO). LAMPO can provide gradient estimates from previous experience using self-normalized importance sampling, hence, making full use of samples collected in previous learning iterations. These advantages combined provide a complete framework for sample-efficient off-policy optimization of movement primitives for robot learning of high-dimensional manipulation skills. Our experimental results conducted both in simulation and on a real robot show that LAMPO provides sample-efficient policies against common approaches in literature.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
203,246
2410.04751
Intriguing Properties of Large Language and Vision Models
Recently, large language and vision models (LLVMs) have received significant attention and development efforts due to their remarkable generalization performance across a wide range of tasks requiring perception and cognitive abilities. A key factor behind their success is their simple architecture, which consists of a vision encoder, a projector, and a large language model (LLM). Despite their achievements in advanced reasoning tasks, their performance on fundamental perception-related tasks (e.g., MMVP) remains surprisingly low. This discrepancy raises the question of how LLVMs truly perceive images and exploit the advantages of the vision encoder. To address this, we systematically investigate this question regarding several aspects: permutation invariance, robustness, math reasoning, alignment preserving and importance, by evaluating the most common LLVM's families (i.e., LLaVA) across 10 evaluation benchmarks. Our extensive experiments reveal several intriguing properties of current LLVMs: (1) they internally process the image in a global manner, even when the order of visual patch sequences is randomly permuted; (2) they are sometimes able to solve math problems without fully perceiving detailed numerical information; (3) the cross-modal alignment is overfitted to complex reasoning tasks, thereby, causing them to lose some of the original perceptual capabilities of their vision encoder; (4) the representation space in the lower layers (<25%) plays a crucial role in determining performance and enhancing visual understanding. Lastly, based on the above observations, we suggest potential future directions for building better LLVMs and constructing more challenging evaluation benchmarks.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
495,424
2402.00920
Deep Learning Approaches for Network Traffic Classification in the Internet of Things (IoT): A Survey
The Internet of Things (IoT) has witnessed unprecedented growth, resulting in a massive influx of diverse network traffic from interconnected devices. Effectively classifying this network traffic is crucial for optimizing resource allocation, enhancing security measures, and ensuring efficient network management in IoT systems. Deep learning has emerged as a powerful technique for network traffic classification due to its ability to automatically learn complex patterns and representations from raw data. This survey paper aims to provide a comprehensive overview of the existing deep learning approaches employed in network traffic classification specifically tailored for IoT environments. By systematically analyzing and categorizing the latest research contributions in this domain, we explore the strengths and limitations of various deep learning models in handling the unique challenges posed by IoT network traffic. Through this survey, we aim to offer researchers and practitioners valuable insights, identify research gaps, and provide directions for future research to further enhance the effectiveness and efficiency of deep learning-based network traffic classification in IoT.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
425,785
2211.08583
Empirical Study on Optimizer Selection for Out-of-Distribution Generalization
Modern deep learning systems do not generalize well when the test data distribution is slightly different to the training data distribution. While much promising work has been accomplished to address this fragility, a systematic study of the role of optimizers and their out-of-distribution generalization performance has not been undertaken. In this study, we examine the performance of popular first-order optimizers for different classes of distributional shift under empirical risk minimization and invariant risk minimization. We address this question for image and text classification using DomainBed, WILDS, and Backgrounds Challenge as testbeds for studying different types of shifts -- namely correlation and diversity shift. We search over a wide range of hyperparameters and examine classification accuracy (in-distribution and out-of-distribution) for over 20,000 models. We arrive at the following findings, which we expect to be helpful for practitioners: i) adaptive optimizers (e.g., Adam) perform worse than non-adaptive optimizers (e.g., SGD, momentum SGD) on out-of-distribution performance. In particular, even though there is no significant difference in in-distribution performance, we show a measurable difference in out-of-distribution performance. ii) in-distribution performance and out-of-distribution performance exhibit three types of behavior depending on the dataset -- linear returns, increasing returns, and diminishing returns. For example, in the training of natural language data using Adam, fine-tuning the performance of in-distribution performance does not significantly contribute to the out-of-distribution generalization performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
330,685
2409.01909
LUK: Empowering Log Understanding with Expert Knowledge from Large Language Models
Logs play a critical role in providing essential information for system monitoring and troubleshooting. Recently, with the success of pre-trained language models (PLMs) and large language models (LLMs) in natural language processing (NLP), smaller PLMs (such as BERT) and LLMs (like GPT-4) have become the current mainstream approaches for log analysis. Despite the remarkable capabilities of LLMs, their higher cost and inefficient inference present significant challenges in leveraging the full potential of LLMs to analyze logs. In contrast, smaller PLMs can be fine-tuned for specific tasks even with limited computational resources, making them more practical. However, these smaller PLMs face challenges in understanding logs comprehensively due to their limited expert knowledge. To address the lack of expert knowledge and enhance log understanding for smaller PLMs, this paper introduces a novel and practical knowledge enhancement framework, called LUK, which acquires expert knowledge from LLMs automatically and then enhances the smaller PLM for log analysis with this expert knowledge. LUK can take full advantage of both types of models. Specifically, we design a multi-expert collaboration framework based on LLMs with different roles to acquire expert knowledge. In addition, we propose two novel pre-training tasks to enhance the log pre-training with expert knowledge. LUK achieves state-of-the-art results on different log analysis tasks and extensive experiments demonstrate that expert knowledge from LLMs can be utilized more effectively to understand logs. Our source code and detailed experimental data are available at https://github.com/LeaperOvO/LUK.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
485,506
1903.08589
DC-SPP-YOLO: Dense Connection and Spatial Pyramid Pooling Based YOLO for Object Detection
Although the YOLOv2 method is extremely fast on object detection, its detection accuracy is restricted due to the low performance of its backbone network and the underutilization of multi-scale region features. Therefore, a dense connection (DC) and spatial pyramid pooling (SPP) based YOLO (DC-SPP-YOLO) method for ameliorating the object detection accuracy of YOLOv2 is proposed in this paper. Specifically, the dense connection of convolution layers is employed in the backbone network of YOLOv2 to strengthen the feature extraction and alleviate the vanishing-gradient problem. Moreover, an improved spatial pyramid pooling is introduced to pool and concatenate the multi-scale region features, so that the network can learn the object features more comprehensively. The DC-SPP-YOLO model is established and trained based on a new loss function composed of MSE (mean square error) loss and cross-entropy loss. The experimental results indicated that the mAP (mean Average Precision) of DC-SPP-YOLO is higher than that of YOLOv2 on the PASCAL VOC datasets and the UA-DETRAC datasets. The effectiveness of the proposed DC-SPP-YOLO method is demonstrated.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
124,863
2305.02260
Standardized Benchmark Dataset for Localized Exposure to a Realistic Source at 10$-$90 GHz
The lack of freely available standardized datasets represents an aggravating factor during the development and performance testing of novel computational techniques in exposure assessment and dosimetry research. This hinders progress as researchers are required to generate numerical data (field, power and temperature distribution) anew using simulation software for each exposure scenario. Other than being time consuming, this approach is highly susceptible to errors that occur during the configuration of the electromagnetic model. To address this issue, in this paper, the limited available data on the incident power density and resultant maximum temperature rise on the skin surface considering various steady-state exposure scenarios at 10$-$90 GHz have been statistically modeled. The synthetic data have been sampled from the fitted statistical multivariate distribution with respect to predetermined dosimetric constraints. We thus present a comprehensive and open-source dataset compiled of the high-fidelity numerical data considering various exposures to a realistic source. Furthermore, different surrogate models for predicting maximum temperature rise on the skin surface were fitted based on the synthetic dataset. All surrogate models were tested on the originally available data where satisfactory predictive performance has been demonstrated. A simple technique of combining quadratic polynomial and tensor-product spline surrogates, each operating on its own cluster of data, has achieved the lowest mean absolute error of 0.058 {\deg}C. Therefore, overall experimental results indicate the validity of the proposed synthetic dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
361,969
2206.07886
Generalization Bounds for Data-Driven Numerical Linear Algebra
Data-driven algorithms can adapt their internal structure or parameters to inputs from unknown application-specific distributions, by learning from a training sample of inputs. Several recent works have applied this approach to problems in numerical linear algebra, obtaining significant empirical gains in performance. However, no theoretical explanation for their success was known. In this work we prove generalization bounds for those algorithms, within the PAC-learning framework for data-driven algorithm selection proposed by Gupta and Roughgarden (SICOMP 2017). Our main results are closely matching upper and lower bounds on the fat shattering dimension of the learning-based low rank approximation algorithm of Indyk et al.~(NeurIPS 2019). Our techniques are general, and provide generalization bounds for many other recently proposed data-driven algorithms in numerical linear algebra, covering both sketching-based and multigrid-based methods. This considerably broadens the class of data-driven algorithms for which a PAC-learning analysis is available.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
302,924
2106.15023
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent
Evading adversarial example detection defenses requires finding adversarial examples that must simultaneously (a) be misclassified by the model and (b) be detected as non-adversarial. We find that existing attacks that attempt to satisfy multiple simultaneous constraints often over-optimize against one constraint at the cost of satisfying another. We introduce Orthogonal Projected Gradient Descent, an improved attack technique to generate adversarial examples that avoids this problem by orthogonalizing the gradients when running standard gradient-based attacks. We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
243,587
2202.06493
FLHub: a Federated Learning model sharing service
As easy-to-use deep learning libraries such as Tensorflow and Pytorch are popular, it has become convenient to develop machine learning models. Due to privacy issues with centralized machine learning, recently, federated learning in the distributed computing framework is attracting attention. The central server does not collect sensitive and personal data from clients in federated learning, but it only aggregates the model parameters. Though federated learning helps protect privacy, it is difficult for machine learning developers to share the models that they could utilize for different-domain applications. In this paper, we propose a federated learning model sharing service named Federated Learning Hub (FLHub). Users can upload, download, and contribute the model developed by other developers similarly to GitHub. We demonstrate that a forked model can finish training faster than the existing model and that learning progressed more quickly for each federated round.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
280,243
2209.03789
Impact of dataset size and long-term ECoG-based BCI usage on deep learning decoders performance
In brain-computer interfaces (BCI) research, recording data is time-consuming and expensive, which limits access to big datasets. This may influence the BCI system performance as machine learning methods depend strongly on the training dataset size. Important questions arise: taking into account neuronal signal characteristics (e.g., non-stationarity), can we achieve higher decoding performance with more data to train decoders? What is the perspective for further improvement with time in the case of long-term BCI studies? In this study, we investigated the impact of long-term recordings on motor imagery decoding from two main perspectives: model requirements regarding dataset size and potential for patient adaptation. We evaluated the multilinear model and two deep learning (DL) models on a long-term BCI and Tetraplegia NCT02550522 clinical trial dataset containing 43 sessions of ECoG recordings performed with a tetraplegic patient. In the experiment, a participant executed 3D virtual hand translation using motor imagery patterns. We designed multiple computational experiments in which training datasets were increased or translated to investigate the relationship between models' performance and different factors influencing recordings. Our analysis showed that adding more data to the training dataset may not instantly increase performance for datasets already containing 40 minutes of the signal. DL decoders showed similar requirements regarding the dataset size compared to the multilinear model while demonstrating higher decoding performance. Moreover, high decoding performance was obtained with relatively small datasets recorded later in the experiment, suggesting motor imagery patterns improvement and patient adaptation. Finally, we proposed UMAP embeddings and local intrinsic dimensionality as a way to visualize the data and potentially evaluate data quality.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
316,600
2305.17289
Fourier-DeepONet: Fourier-enhanced deep operator networks for full waveform inversion with improved accuracy, generalizability, and robustness
Full waveform inversion (FWI) infers the subsurface structure information from seismic waveform data by solving a non-convex optimization problem. Data-driven FWI has been increasingly studied with various neural network architectures to improve accuracy and computational efficiency. Nevertheless, the applicability of pre-trained neural networks is severely restricted by potential discrepancies between the source function used in the field survey and the one utilized during training. Here, we develop a Fourier-enhanced deep operator network (Fourier-DeepONet) for FWI with the generalization of seismic sources, including the frequencies and locations of sources. Specifically, we employ the Fourier neural operator as the decoder of DeepONet, and we utilize source parameters as one input of Fourier-DeepONet, facilitating the resolution of FWI with variable sources. To test Fourier-DeepONet, we develop three new and realistic FWI benchmark datasets (FWI-F, FWI-L, and FWI-FL) with varying source frequencies, locations, or both. Our experiments demonstrate that compared with existing data-driven FWI methods, Fourier-DeepONet obtains more accurate predictions of subsurface structures in a wide range of source parameters. Moreover, the proposed Fourier-DeepONet exhibits superior robustness when handling data with Gaussian noise or missing traces and sources with Gaussian noise, paving the way for more reliable and accurate subsurface imaging across diverse real conditions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
368,503
2005.09170
On Intrinsic Dataset Properties for Adversarial Machine Learning
Deep neural networks (DNNs) have played a key role in a wide range of machine learning applications. However, DNN classifiers are vulnerable to human-imperceptible adversarial perturbations, which can cause them to misclassify inputs with high confidence. Thus, creating robust DNNs which can defend against malicious examples is critical in applications where security plays a major role. In this paper, we study the effect of intrinsic dataset properties on the performance of adversarial attack and defense methods, testing on five popular image classification datasets - MNIST, Fashion-MNIST, CIFAR10/CIFAR100, and ImageNet. We find that input size and image contrast play key roles in attack and defense success. Our discoveries highlight that dataset design and data preprocessing steps are important to boost the adversarial robustness of DNNs. To our best knowledge, this is the first comprehensive work that studies the effect of intrinsic dataset properties on adversarial machine learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
177,843
2208.02606
TunaOil: A Tuning Algorithm Strategy for Reservoir Simulation Workloads
Reservoir simulations for petroleum fields and seismic imaging are known as the most demanding workloads for high-performance computing (HPC) in the oil and gas (O&G) industry. The optimization of the simulator numerical parameters plays a vital role as it could save considerable computational efforts. State-of-the-art optimization techniques are based on running numerous simulations, specific for that purpose, to find good parameter candidates. However, using such an approach is highly costly in terms of time and computing resources. This work presents TunaOil, a new methodology to enhance the search for optimal numerical parameters of reservoir flow simulations using a performance model. In the O&G industry, it is common to use ensembles of models in different workflows to reduce the uncertainty associated with forecasting O&G production. We leverage the runs of those ensembles in such workflows to extract information from each simulation and optimize the numerical parameters in their subsequent runs. To validate the methodology, we implemented it in a history matching (HM) process that uses a Kalman filter algorithm to adjust an ensemble of reservoir models to match the observed data from the real field. We mine past execution logs from many simulations with different numerical configurations and build a machine learning model based on extracted features from the data. These features include properties of the reservoir models themselves, such as the number of active cells, to statistics of the simulation's behavior, such as the number of iterations of the linear solver. A sampling technique is used to query the oracle to find the numerical parameters that can reduce the elapsed time without significantly impacting the quality of the results. Our experiments show that the predictions can improve the overall HM workflow runtime on average by 31%.
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
311,515
2403.04502
Matched-filter Precoded Rate Splitting Multiple Access: A Simple and Energy-efficient Design
We introduce an energy-efficient downlink rate splitting multiple access (RSMA) scheme, employing a simple matched filter (MF) for precoding. We consider a transmitter equipped with multiple antennas, serving several single-antenna users at the same frequency-time resource, each with distinct message requests. Within the conventional 1-layer RSMA framework, requested messages undergo splitting into common and private streams, which are then precoded separately before transmission. In contrast, we propose a novel strategy where only an MF is employed to precode both the common and private streams in RSMA, promising significantly improved energy efficiency and reduced complexity. We demonstrate that this MF-precoded RSMA achieves the same delivery performance as conventional RSMA, where the common stream is beamformed using maximal ratio transmission (MRT) and the private streams are precoded by MF. Taking into account imperfect channel state information at the transmitter, we proceed to analyze the delivery performance of the MF-precoded RSMA. We derive the ergodic rates for decoding the common and private streams at a target user respectively in the massive MIMO regime. Finally, numerical simulations validate the accuracy of our analytical models, as well as demonstrate the advantages over conventional RSMA.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
435,624
2009.10444
Self-Adapting Variable Impedance Actuator Control for Precision and Dynamic Tasks
Variable impedance actuators (VIAs) as tool devices for teleoperation could extend the range of tasks that humans can perform through a teleoperated robot by mimicking the change of upper limb stiffness that humans perform for different tasks, increasing the dynamic range of the robot. This requires appropriate impedance control. Goal of this study is to show the effectiveness of a controller that does not require additional sensors, reducing system complexity and increasing ease of use. The controller should allow to perform precise positioning tasks and dynamic tasks like hammering through teleoperation with a VIA tool device automatically adapting the impedance setting of the VIA. This is achieved by a control law according to the principle "slow-stiff/fast-soft". The controller was tested in a human user study with 24 participants comparing the human-machine performance with the self-adapting controller in a bilateral telemanipulation experiment with two tasks (precision/dynamic) using three impedance settings (high/low/adaptive impedance). The results indicate that the proposed system performs equally well as state of the art stiff teleoperation devices for precision tasks, while having benefits in terms of increased safety and reduced wear for dynamic tasks. This is a step towards teleoperation with a wide dynamic range.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
196,894
2410.21698
On the Role of Depth and Looping for In-Context Learning with Task Diversity
The intriguing in-context learning (ICL) abilities of deep Transformer models have lately garnered significant attention. By studying in-context linear regression on unimodal Gaussian data, recent empirical and theoretical works have argued that ICL emerges from Transformers' abilities to simulate learning algorithms like gradient descent. However, these works fail to capture the remarkable ability of Transformers to learn multiple tasks in context. To this end, we study in-context learning for linear regression with diverse tasks, characterized by data covariance matrices with condition numbers ranging from $[1, \kappa]$, and highlight the importance of depth in this setting. More specifically, (a) we show theoretical lower bounds of $\log(\kappa)$ (or $\sqrt{\kappa}$) linear attention layers in the unrestricted (or restricted) attention setting and, (b) we show that multilayer Transformers can indeed solve such tasks with a number of layers that matches the lower bounds. However, we show that this expressivity of multilayer Transformer comes at the price of robustness. In particular, multilayer Transformers are not robust to even distributional shifts as small as $O(e^{-L})$ in Wasserstein distance, where $L$ is the depth of the network. We then demonstrate that Looped Transformers -- a special class of multilayer Transformers with weight-sharing -- not only exhibit similar expressive power but are also provably robust under mild assumptions. Besides out-of-distribution generalization, we also show that Looped Transformers are the only models that exhibit a monotonic behavior of loss with respect to depth.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
503,357
2109.04460
Protein Folding Neural Networks Are Not Robust
Deep neural networks such as AlphaFold and RoseTTAFold predict remarkably accurate structures of proteins compared to other algorithmic approaches. It is known that biologically small perturbations in the protein sequence do not lead to drastic changes in the protein structure. In this paper, we demonstrate that RoseTTAFold does not exhibit such a robustness despite its high accuracy, and biologically small perturbations for some input sequences result in radically different predicted protein structures. This raises the challenge of detecting when these predicted protein structures cannot be trusted. We define the robustness measure for the predicted structure of a protein sequence to be the inverse of the root-mean-square distance (RMSD) in the predicted structure and the structure of its adversarially perturbed sequence. We use adversarial attack methods to create adversarial protein sequences, and show that the RMSD in the predicted protein structure ranges from 0.119\r{A} to 34.162\r{A} when the adversarial perturbations are bounded by 20 units in the BLOSUM62 distance. This demonstrates very high variance in the robustness measure of the predicted structures. We show that the magnitude of the correlation (0.917) between our robustness measure and the RMSD between the predicted structure and the ground truth is high, that is, the predictions with low robustness measure cannot be trusted. This is the first paper demonstrating the susceptibility of RoseTTAFold to adversarial attacks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
254,415
2407.14185
Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models
In the drug discovery process, where experiments can be costly and time-consuming, computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents. Estimating the uncertainty inherent in these neural network predictions provides valuable information that facilitates optimal decision-making when risk assessment is crucial. However, such models can be poorly calibrated, which results in unreliable uncertainty estimates that do not reflect the true predictive uncertainty. In this study, we compare different metrics, including accuracy and calibration scores, used for model hyperparameter tuning to investigate which model selection strategy achieves well-calibrated models. Furthermore, we propose to use a computationally efficient Bayesian uncertainty estimation method named Bayesian Linear Probing (BLP), which generates Hamiltonian Monte Carlo (HMC) trajectories to obtain samples for the parameters of a Bayesian Logistic Regression fitted to the hidden layer of the baseline neural network. We report that BLP improves model calibration and achieves the performance of common uncertainty quantification methods by combining the benefits of uncertainty estimation and probability calibration methods. Finally, we show that combining post hoc calibration method with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
474,682
1706.02462
Regular Boardgames
We propose a new General Game Playing (GGP) language called Regular Boardgames (RBG), which is based on the theory of regular languages. The objective of RBG is to combine key properties such as expressiveness, efficiency, and naturalness of the description in one GGP formalism, compensating for certain drawbacks of the existing languages. This often makes RBG more suitable for various research and practical developments in GGP. While dedicated mostly to describing board games, RBG is universal for the class of all finite deterministic turn-based games with perfect information. We establish the foundations of RBG, and analyze it theoretically and experimentally, focusing on the efficiency of reasoning. Regular Boardgames is the first GGP language that allows efficient encoding and playing of games with complex rules and with large branching factor (e.g.\ amazons, arimaa, large chess variants, go, international checkers, paper soccer).
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
74,982
2404.08549
Practical Guidelines for Cell Segmentation Models Under Optical Aberrations in Microscopy
Cell segmentation is essential in biomedical research for analyzing cellular morphology and behavior. Deep learning methods, particularly convolutional neural networks (CNNs), have revolutionized cell segmentation by extracting intricate features from images. However, the robustness of these methods under microscope optical aberrations remains a critical challenge. This study evaluates cell image segmentation models under optical aberrations from fluorescence and bright field microscopy. By simulating different types of aberrations, including astigmatism, coma, spherical aberration, trefoil, and mixed aberrations, we conduct a thorough evaluation of various cell instance segmentation models using the DynamicNuclearNet (DNN) and LIVECell datasets, representing fluorescence and bright field microscopy cell datasets, respectively. We train and test several segmentation models, including the Otsu threshold method and Mask R-CNN with different network heads (FPN, C3) and backbones (ResNet, VGG, Swin Transformer), under aberrated conditions. Additionally, we provide usage recommendations for the Cellpose 2.0 Toolbox on complex cell degradation images. The results indicate that the combination of FPN and SwinS demonstrates superior robustness in handling simple cell images affected by minor aberrations. In contrast, Cellpose 2.0 proves effective for complex cell images under similar conditions. Furthermore, we innovatively propose the Point Spread Function Image Label Classification Model (PLCM). This model can quickly and accurately identify aberration types and amplitudes from PSF images, assisting researchers without optical training. Through PLCM, researchers can better apply our proposed cell segmentation guidelines.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
446,285
2210.16440
ODNet: A Convolutional Neural Network for Asteroid Occultation Detection
We propose to design and build an algorithm that will use a Convolutional Neural Network (CNN) and observations from the Unistellar network to reliably detect asteroid occultations. The Unistellar Network, made of more than 10,000 digital telescopes owned by citizen scientists, is regularly used to record asteroid occultations. In order to process the increasing amount of observational data produced by this network, we need a quick and reliable way to analyze occultations. In an effort to solve this problem, we trained a CNN with artificial images of stars with twenty different types of photometric signals. Inputs to the network consist of two stacks of snippet images of stars, one around the star that is supposed to be occulted and one around a reference star used for comparison. We need the reference star to distinguish between a true occultation and artefacts introduced by poor atmospheric conditions. Our Occultation Detection Neural Network (ODNet) can analyze three sequences of stars per second with 91\% precision and 87\% recall. The algorithm is sufficiently fast and robust that we can envision incorporating it onboard the eVscopes to deliver real-time results. We conclude that citizen science represents an important opportunity for future studies and discoveries of occultations, and that the application of artificial intelligence will permit us to take better advantage of the ever-growing quantity of data to categorize asteroids.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
327,331
1706.03317
Fault Tolerant Consensus Agreement Algorithm
Recently, a new simple and fault-tolerant mechanism was designed for solving the commit consensus problem. It is based on replicated validation of messages sent between transaction participants and a special dispatcher validator manager node. This paper presents correctness and safety proofs and a performance analysis of this algorithm.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
75,145
2006.13827
Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret
We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{3} S^{2}AT} \big)$ regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{4} SAT} \big)$ regret, where $\lambda(u) = (e^{3u}-1)/u$ for $u>0$. In the above, $\beta$ is the risk parameter of the exponential utility function, $S$ the number of states, $A$ the number of actions, $T$ the total number of timesteps, and $H$ the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on $|\beta|$ and $H$ is unavoidable for any algorithm with an $\tilde{O}(\sqrt{T})$ regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in $|\beta|$ and $H$, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
184,039
quant-ph/0511175
A Proof of the Security of Quantum Key Distribution
We prove the security of theoretical quantum key distribution against the most general attacks which can be performed on the channel, by an eavesdropper who has unlimited computation abilities, and the full power allowed by the rules of classical and quantum physics. A key created that way can then be used to transmit secure messages such that their security is also unaffected in the future.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
540,890
1906.09450
Semantically Driven Auto-completion
The Bloomberg Terminal has been a leading source of financial data and analytics for over 30 years. Through its thousands of functions, the Terminal allows its users to query and run analytics over a large array of data sources, including structured, semi-structured, and unstructured data; as well as plot charts, set up event-driven alerts and triggers, create interactive maps, exchange information via instant and email-style messages, and so on. To improve user experience, we have been building question answering systems that can understand a wide range of natural language constructions for various domains that are of fundamental interest to our users. Such natural language interfaces, while exceedingly helpful to users, introduce a number of usability challenges of their own. We tackle some of these challenges through auto-completion for query formulation. A distinguishing mark of our auto-complete systems is that they are based on and guided by corresponding semantic parsing systems. We describe the auto-complete problem as it arises in this setting, the novel algorithms that we use to solve it, and report on the quality of the results and the efficiency of our approach.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
136,163
2302.05738
Cross-Modal Fine-Tuning: Align then Refine
Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP. However, similar gains have not been observed in many other modalities due to a lack of relevant pretrained models. In this work, we propose ORCA, a general cross-modal fine-tuning framework that extends the applicability of a single large-scale pretrained model to diverse modalities. ORCA adapts to a target task via an align-then-refine workflow: given the target input, ORCA first learns an embedding network that aligns the embedded feature distribution with the pretraining modality. The pretrained model is then fine-tuned on the embedded data to exploit the knowledge shared across modalities. Through extensive experiments, we show that ORCA obtains state-of-the-art results on 3 benchmarks containing over 60 datasets from 12 modalities, outperforming a wide range of hand-designed, AutoML, general-purpose, and task-specific methods. We highlight the importance of data alignment via a series of ablation studies and demonstrate ORCA's utility in data-limited regimes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
345,149
2412.20321
Hypergraph-Based Dynamic Graph Node Classification
Node classification on static graphs has achieved significant success, but achieving accurate node classification on dynamic graphs where node topology, attributes, and labels change over time has not been well addressed. Existing methods based on RNNs and self-attention only aggregate features of the same node across different time slices, which cannot adequately address and capture the diverse dynamic changes in dynamic graphs. Therefore, we propose a novel model named Hypergraph-Based Multi-granularity Dynamic Graph Node Classification (HYDG). After obtaining basic node representations for each slice through a GNN backbone, HYDG models the representations of each node in the dynamic graph through two modules. The individual-level hypergraph captures the spatio-temporal node representations between individual nodes, while the group-level hypergraph captures the multi-granularity group temporal representations among nodes of the same class. Each hyperedge captures different temporal dependencies of varying lengths by connecting multiple nodes within specific time ranges. More accurate representations are obtained through weighted information propagation and aggregation by the hypergraph neural network. Extensive experiments on five real dynamic graph datasets using two GNN backbones demonstrate the superiority of our proposed framework.
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
false
521,171
2303.03207
Constrained Reinforcement Learning and Formal Verification for Safe Colonoscopy Navigation
The field of robotic Flexible Endoscopes (FEs) has progressed significantly, offering a promising solution to reduce patient discomfort. However, the limited autonomy of most robotic FEs results in non-intuitive and challenging manoeuvres, constraining their application in clinical settings. While previous studies have employed lumen tracking for autonomous navigation, they fail to adapt to the presence of obstructions and sharp turns when the endoscope faces the colon wall. In this work, we propose a Deep Reinforcement Learning (DRL)-based navigation strategy that eliminates the need for lumen tracking. However, the use of DRL methods poses safety risks as they do not account for potential hazards associated with the actions taken. To ensure safety, we exploit a Constrained Reinforcement Learning (CRL) method to restrict the policy in a predefined safety regime. Moreover, we present a model selection strategy that utilises Formal Verification (FV) to choose a policy that is entirely safe before deployment. We validate our approach in a virtual colonoscopy environment and report that out of the 300 trained policies, we could identify three policies that are entirely safe. Our work demonstrates that CRL, combined with model selection through FV, can improve the robustness and safety of robotic behaviour in surgical applications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
349,647
2104.02581
WhONet: Wheel Odometry Neural Network for Vehicular Localisation in GNSS-Deprived Environments
In this paper, a deep learning approach is proposed to accurately position wheeled vehicles in Global Navigation Satellite Systems (GNSS) deprived environments. In the absence of GNSS signals, information on the speed of the wheels of a vehicle (or other robots alike), recorded from the wheel encoder, can be used to provide continuous positioning information for the vehicle, through the integration of the vehicle's linear velocity to displacement. However, the displacement estimation from the wheel speed measurements is characterised by uncertainties, which could be manifested as wheel slips and/or changes to the tyre size or pressure, from wet and muddy road drives or tyres wearing out. As such, we exploit recent advances in deep learning to propose the Wheel Odometry neural Network (WhONet) to learn the uncertainties in the wheel speed measurements needed for correction and accurate positioning. The performance of the proposed WhONet is first evaluated on several challenging driving scenarios, such as on roundabouts, sharp cornering, hard-brake and wet roads (drifts). WhONet's performance is then further and extensively evaluated on longer-term GNSS outage scenarios of 30s, 60s, 120s and 180s duration, respectively, over a total distance of 493 km. The experimental results obtained show that the proposed method is able to accurately position the vehicle with up to 93% reduction in the positioning error of its original counterpart after any 180s of travel. WhONet's implementation can be found at https://github.com/onyekpeu/WhONet.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
228,774
2009.04067
Noise Reduction Technique for Raman Spectrum using Deep Learning Network
In a normal indoor environment, Raman spectra encounter noise that often conceals spectral peaks, leading to difficulty in spectrum interpretation. This paper proposes a deep learning (DL) based noise reduction technique for Raman spectroscopy. The proposed DL network is developed with several training and test sets of noisy Raman spectra. The proposed technique is applied to denoise the spectra, and its performance is compared with different wavelet noise reduction methods. Output signal-to-noise ratio (SNR), root-mean-square error (RMSE) and mean absolute percentage error (MAPE) are the performance evaluation indices. It is shown that the output SNR of the proposed noise reduction technique is 10.24 dB greater than that of the wavelet noise reduction method, while the RMSE and the MAPE are 292.63 and 10.09, which are much better than those of the wavelet methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
194,954
1006.1666
On the Proximity Factors of Lattice Reduction-Aided Decoding
Lattice reduction-aided decoding features reduced decoding complexity and near-optimum performance in multi-input multi-output communications. In this paper, a quantitative analysis of lattice reduction-aided decoding is presented. To this aim, the proximity factors are defined to measure the worst-case losses in distances relative to closest point search (in an infinite lattice). Upper bounds on the proximity factors are derived, which are functions of the dimension $n$ of the lattice alone. The study is then extended to the dual-basis reduction. It is found that the bounds for dual basis reduction may be smaller. Reasonably good bounds are derived in many cases. The constant bounds on proximity factors not only imply the same diversity order in fading channels, but also relate the error probabilities of (infinite) lattice decoding and lattice reduction-aided decoding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,723
1403.2837
HPS: a hierarchical Persian stemming method
In this paper, a novel hierarchical Persian stemming approach based on the part-of-speech of the word in a sentence is presented. The implemented stemmer includes hash tables and several deterministic finite automata in the different levels of its hierarchy for removing the prefixes and suffixes of words. We had two intentions in using hash tables in our method. The first is that the DFAs do not support some special words, so hash tables can partly solve this problem. The second is to speed up the implemented stemmer by omitting the time that the deterministic finite automata need. Because of its hierarchical organization, this method is fast and flexible. Our experiments on test sets from the Hamshahri collection and security news (istna.ir) show that our method has an average accuracy of 95.37%, which is further improved when the method is used on a test set with common topics.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
31,512
2404.19615
SemiPL: A Semi-supervised Method for Event Sound Source Localization
In recent years, Event Sound Source Localization has been widely applied in various fields. Recent works, typically relying on the contrastive learning framework, show impressive performance. However, all of this work is based on large, relatively simple datasets. It is also crucial to understand and analyze human behaviors (actions and interactions of people), voices, and sounds in chaotic events in many applications, e.g., crowd management and emergency response services. In this paper, we apply an existing model to a more complex dataset, explore the influence of parameters on the model, and propose a semi-supervised improvement method, SemiPL. With the increase in data quantity and the influence of label quality, self-supervised learning will be an unstoppable trend. The experiments show that parameter adjustment positively affects the existing model. In particular, SSPL achieved an improvement of 12.2% cIoU and 0.56% AUC in Chaotic World compared to the results provided. The code is available at: https://github.com/ly245422/SSPL
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
450,711
2201.12023
Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning
Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism. Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations. They do not suffice to scale out complex DL models on distributed compute devices. Alpa distributes the training of large DL models by viewing parallelisms as two hierarchical levels: inter-operator and intra-operator parallelisms. Based on it, Alpa constructs a new hierarchical space for massive model-parallel execution plans. Alpa designs a number of compilation passes to automatically derive efficient parallel execution plans at each parallelism level. Alpa implements an efficient runtime to orchestrate the two-level parallel execution on distributed compute devices. Our evaluation shows Alpa generates parallelization plans that match or outperform hand-tuned model-parallel training systems even on models they are designed for. Unlike specialized systems, Alpa also generalizes to models with heterogeneous architectures and models without manually-designed plans. Alpa's source code is publicly available at https://github.com/alpa-projects/alpa
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
277,498
2207.04375
An Input-Output Feedback Linearization based Exponentially Stable Controller for Multi-UAV Payload Transport
In this paper, an exponentially stable trajectory tracking controller is proposed for multi-UAV payload transport. The multi-UAV payload system has a 2-DOF magnetic spherical joint between the UAVs and the vertical rigid links of the payload frame, so the UAVs can roll or pitch freely. These vertical links are rigidly attached to the payload and cannot move. An input-output feedback linearized model is derived for the complete payload-UAV system along with thrust vectoring control for trajectory tracking of the payload. The theoretical analysis on tracking control laws shows that control law is exponentially stable, thus guaranteeing safe transportation along the desired trajectory. To validate the performance of the proposed control law, the results for a numerical simulation as well as a high-fidelity Gazebo real-time simulation are presented. Next, the robustness of the proposed controller is analyzed against two practical situations: External disturbance on the payload and payload mass uncertainty. The results clearly indicate that the proposed controller is robust and computationally efficient while achieving exponentially stable trajectory tracking.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
307,179
2008.08977
Generating Adjacency Matrix for Video Relocalization
In this paper, we continue our work on the video relocalization task. Building on the use of graph convolution to extract intra-video and inter-video frame features, we improve the method by using similarity-metric based graph convolution, whose weighted adjacency matrix is obtained by calculating a similarity metric between features of any two different time steps in the graph. Experiments on the ActivityNet v1.2 and Thumos14 datasets show the effectiveness of this improvement, and it outperforms the state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
192,573
2501.00632
Different thresholding methods on Nearest Shrunken Centroid algorithm
This article considers the impact of different thresholding methods on the Nearest Shrunken Centroid algorithm, which is popularly referred to as Prediction Analysis of Microarrays (PAM), for high-dimensional classification. PAM uses soft thresholding to achieve high computational efficiency and high classification accuracy, but at the price of retaining too many features. When applied to microarray human cancers, PAM selected 2611 features on average from 10 multi-class datasets. Such a large number of features makes it difficult to perform follow-up studies. One reason behind this problem is the soft thresholding, which is known to produce biased parameter estimates in regression analysis. In this article, we extend the PAM algorithm with two other thresholding methods, hard and order thresholding, and a deep search algorithm to achieve better thresholding parameter estimates. The modified algorithms are extensively tested and compared to the original one based on real data and Monte Carlo studies. In general, the modification not only gave better cancer status prediction accuracy, but also resulted in more parsimonious models with a significantly smaller number of features.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
521,736
2010.02122
Quadratic approximate dynamic programming for scheduling water resources: a case study
We address the problem of scheduling water resources in a power system via approximate dynamic programming. To this goal, we model a finite horizon economic dispatch problem with convex stage cost and affine dynamics, and consider a quadratic approximation of the value functions. Evaluating the achieved policy entails solving a quadratic program at each time step, while value function fitting can be cast as a semidefinite program. We test our proposed algorithm on a simplified version of the Uruguayan power system, achieving a four percent cost reduction with respect to the myopic policy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
198,909
1804.10752
Syllable-Based Sequence-to-Sequence Speech Recognition with the Transformer in Mandarin Chinese
Sequence-to-sequence attention-based models have recently shown very promising results on automatic speech recognition (ASR) tasks, which integrate an acoustic, pronunciation and language model into a single neural network. Among these models, the Transformer, a new sequence-to-sequence attention-based model relying entirely on self-attention without using RNNs or convolutions, achieves a new single-model state-of-the-art BLEU on neural machine translation (NMT) tasks. Given the outstanding performance of the Transformer, we extend it to speech and adopt it as the basic architecture of a sequence-to-sequence attention-based model on Mandarin Chinese ASR tasks. Furthermore, we investigate a comparison between a syllable based model and a context-independent phoneme (CI-phoneme) based model with the Transformer in Mandarin Chinese. Additionally, a greedy cascading decoder with the Transformer is proposed for mapping CI-phoneme sequences and syllable sequences into word sequences. Experiments on HKUST datasets demonstrate that the syllable based model with the Transformer performs better than its CI-phoneme based counterpart, and achieves a character error rate (CER) of \emph{$28.77\%$}, which is competitive with the state-of-the-art CER of $28.0\%$ by the joint CTC-attention based encoder-decoder network.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
96,220
2408.07318
A systematic dataset generation technique applied to data-driven automotive aerodynamics
A novel strategy for generating datasets is developed within the context of drag prediction for automotive geometries using neural networks. A primary challenge in this space is constructing a training database of sufficient size and diversity. Our method relies on a small number of starting data points, and provides a recipe to interpolate systematically between them, generating an arbitrary number of samples at the desired quality. We test this strategy using a realistic automotive geometry, and demonstrate that convolutional neural networks perform exceedingly well at predicting drag coefficients and surface pressures. Promising results are obtained in testing extrapolation performance. Our method can be applied to other problems of aerodynamic shape optimization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
480,539
2006.07459
Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations
Recent research has established sufficient conditions for finite mixture models to be identifiable from grouped observations. These conditions allow the mixture components to be nonparametric and have substantial (or even total) overlap. This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations. Our analysis leverages an oracle inequality for weighted kernel density estimators of the distribution on groups, together with a general result showing that consistent estimation of the distribution on groups implies consistent estimation of mixture components. A practical implementation is provided for paired observations, and the approach is shown to outperform existing methods, especially when mixture components overlap significantly.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
181,800
2302.13286
Benchmarking of Cancelable Biometrics for Deep Templates
In this paper, we benchmark several cancelable biometrics (CB) schemes on different biometric characteristics. We consider BioHashing, Multi-Layer Perceptron (MLP) Hashing, Bloom Filters, and two schemes based on Index-of-Maximum (IoM) Hashing (i.e., IoM-URP and IoM-GRP). In addition to the mentioned CB schemes, we introduce a CB scheme (as a baseline) based on user-specific random transformations followed by binarization. We evaluate the unlinkability, irreversibility, and recognition performance (which are the required criteria by the ISO/IEC 24745 standard) of these CB schemes on deep learning based templates extracted from different physiological and behavioral biometric characteristics including face, voice, finger vein, and iris. In addition, we provide an open-source implementation of all the experiments presented to facilitate the reproducibility of our results.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
347,892
2010.08265
Training Flexible Depth Model by Multi-Task Learning for Neural Machine Translation
The standard neural machine translation model can only decode with the same depth configuration as training. Restricted by this feature, we have to deploy models of various sizes to maintain the same translation latency, because the hardware conditions on different terminal devices (e.g., mobile phones) may vary greatly. Such individual training leads to increased model maintenance costs and slower model iterations, especially for the industry. In this work, we propose to use multi-task learning to train a flexible depth model that can adapt to different depth configurations during inference. Experimental results show that our approach can simultaneously support decoding in 24 depth configurations and is superior to the individual training and another flexible depth model training method -- LayerDrop.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
201,131
2011.01938
Power of data in quantum machine learning
The use of quantum computing for machine learning is among the most exciting prospective applications of quantum technologies. However, machine learning tasks where data is provided can be considerably different than commonly studied computational tasks. In this work, we show that some problems that are classically hard to compute can be easily predicted by classical machines learning from data. Using rigorous prediction error bounds as a foundation, we develop a methodology for assessing potential quantum advantage in learning tasks. The bounds are tight asymptotically and empirically predictive for a wide range of learning models. These constructions explain numerical results showing that with the help of data, classical machine learning models can be competitive with quantum models even if they are tailored to quantum problems. We then propose a projected quantum model that provides a simple and rigorous quantum speed-up for a learning problem in the fault-tolerant regime. For near-term implementations, we demonstrate a significant prediction advantage over some classical models on engineered data sets designed to demonstrate a maximal quantum advantage in one of the largest numerical tests for gate-based quantum machine learning to date, up to 30 qubits.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
204,768
2410.04541
On Evaluating LLMs' Capabilities as Functional Approximators: A Bayesian Perspective
Recent works have successfully applied Large Language Models (LLMs) to function modeling tasks. However, the reasons behind this success remain unclear. In this work, we propose a new evaluation framework to comprehensively assess LLMs' function modeling abilities. By adopting a Bayesian perspective of function modeling, we discover that LLMs are relatively weak in understanding patterns in raw data, but excel at utilizing prior knowledge about the domain to develop a strong understanding of the underlying function. Our findings offer new insights about the strengths and limitations of LLMs in the context of function modeling.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
495,335
2411.01765
Towards Pedagogical LLMs with Supervised Fine Tuning for Computing Education
This paper investigates supervised fine-tuning of large language models (LLMs) to improve their pedagogical alignment in computing education, addressing concerns that LLMs may hinder learning outcomes. The project utilised a proprietary dataset of 2,500 high quality question/answer pairs from programming course forums, and explores two research questions: the suitability of university course forums in contributing to fine-tuning datasets, and how supervised fine-tuning can improve LLMs' alignment with educational principles such as constructivism. Initial findings suggest benefits in pedagogical alignment of LLMs, with deeper evaluations required.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
505,212
2407.00509
Leveraging Ontologies to Document Bias in Data
Machine Learning (ML) systems are capable of reproducing and often amplifying undesired biases. This puts emphasis on the importance of operating under practices that enable the study and understanding of the intrinsic characteristics of ML pipelines, prompting the emergence of documentation frameworks with the idea that ``any remedy for bias starts with awareness of its existence''. However, a resource that can formally describe these pipelines in terms of biases detected is still amiss. To fill this gap, we present the Doc-BiasO ontology, a resource that aims to create an integrated vocabulary of biases defined in the \textit{fair-ML} literature and their measures, as well as to incorporate relevant terminology and the relationships between them. Overseeing ontology engineering best practices, we re-use existing vocabulary on machine learning and AI, to foster knowledge sharing and interoperability between the actors concerned with its research, development, regulation, among others. Overall, our main objective is to contribute towards clarifying existing terminology on bias research as it rapidly expands to all areas of AI and to improve the interpretation of bias in data and downstream impact.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
468,895
2111.05784
Maximum-distance Race Strategies for a Fully Electric Endurance Race Car
This paper presents a bi-level optimization framework to compute the maximum-distance stint and charging strategies for a fully electric endurance race car. Thereby, the lower level computes the minimum-stint-time Powertrain Operation (PO) for a given battery energy budget and stint length, whilst the upper level leverages that information to jointly optimize the stint length, charge time and number of pit stops, in order to maximize the driven distance in the course of a fixed-time endurance race. Specifically, we first extend a convex lap time optimization framework to capture multiple laps and force-based electric motor models, and use it to create a map linking the charge time and stint length to the achievable stint time. Second, we leverage the map to frame the maximum-race-distance problem as a mixed-integer second order conic program that can be efficiently solved to the global optimum with off-the-shelf optimization algorithms. Finally, we showcase our framework on a 6 h race around the Zandvoort circuit. Our results show that a flat-out strategy can be extremely detrimental, and that, compared to when the stints are optimized for a fixed number of pit stops, jointly optimizing the stints and number of pit stops can increase the driven distance by several laps.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
265,875
2402.07723
Generalization Bounds for Heavy-Tailed SDEs through the Fractional Fokker-Planck Equation
Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention over the past years. While illuminating interesting aspects of stochastic optimizers by using heavy-tailed stochastic differential equations as proxies, prior works either provided expected generalization bounds, or introduced non-computable information theoretic terms. Addressing these drawbacks, in this work, we prove high-probability generalization bounds for heavy-tailed SDEs which do not contain any nontrivial information theoretic terms. To achieve this goal, we develop new proof techniques based on estimating the entropy flows associated with the so-called fractional Fokker-Planck equation (a partial differential equation that governs the evolution of the distribution of the corresponding heavy-tailed SDE). In addition to obtaining high-probability bounds, we show that our bounds have a better dependence on the dimension of parameters as compared to prior art. Our results further identify a phase transition phenomenon, which suggests that heavy tails can be either beneficial or harmful depending on the problem structure. We support our theory with experiments conducted in a variety of settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
428,823
2110.00665
Online Distribution System State Estimation via Stochastic Gradient Algorithm
Distribution network operation is becoming more challenging because of the growing integration of intermittent and volatile distributed energy resources (DERs). This motivates the development of new distribution system state estimation (DSSE) paradigms that can operate at fast timescale based on real-time data stream of asynchronous measurements enabled by modern information and communications technology. To solve the real-time DSSE with asynchronous measurements effectively and accurately, this paper formulates a weighted least squares DSSE problem and proposes an online stochastic gradient algorithm to solve it. The performance of the proposed scheme is analytically guaranteed and is numerically corroborated with realistic data on IEEE-123 bus feeder.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
258,477
2308.04009
Safe Control Synthesis for Multicopter via Control Barrier Function Backstepping
A safe controller for multicopter is proposed using control barrier function. Multicopter dynamics are reformulated to deal with mixed-relative-degree and non-strict-feedback-form dynamics, and a time-varying safe backstepping controller is designed. Despite the time-varying variation, it is proven that the control input can be obtained by solving quadratic programming with affine inequality constraints. The proposed controller does not utilize a cascade control system design, unlike existing studies on the safe control of multicopter. Various safety constraints on angular velocity, total thrust direction, velocity, and position can be considered. Numerical simulation results support that the proposed safe controller does not violate any of the safety constraints, including those on low- and high-level dynamics.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
384,250
2311.09781
Online Reachability Analysis and Space Convexification for Autonomous Racing
This paper presents an optimisation-based approach for an obstacle avoidance problem within an autonomous vehicle racing context. Our control regime leverages online reachability analysis and sensor data to compute the maximal safe region that an agent can traverse within the environment. The idea is to first compute a non-convex safe region, which then can be convexified via a novel coupled separating hyperplane algorithm. This derived safe area is then used to formulate a nonlinear model-predictive control problem that seeks to find an optimal and safe driving trajectory. We evaluate the proposed approach through a series of diverse experiments and assess the runtime requirements of our proposed approach through an analysis of the effects of a set of varying optimisation objectives for generating these coupled hyperplanes.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
408,292
1306.1650
OPS-QFTs: A new type of quaternion Fourier transforms based on the orthogonal planes split with one or two general pure quaternions
We explain the orthogonal planes split (OPS) of quaternions based on the arbitrary choice of one or two linearly independent pure unit quaternions $f,g$. Next we systematically generalize the quaternionic Fourier transform (QFT) applied to quaternion fields to conform with the OPS determined by $f,g$, or by only one pure unit quaternion $f$, comment on their geometric meaning, and establish inverse transformations. Keywords: Clifford geometric algebra, quaternion geometry, quaternion Fourier transform, inverse Fourier transform, orthogonal planes split
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
25,064
2003.04973
Localized Flood Detection With Minimal Labeled Social Media Data Using Transfer Learning
Social media generates an enormous amount of data on a daily basis but it is very challenging to effectively utilize the data without annotating or labeling it according to the target application. We investigate the problem of localized flood detection using the social sensing model (Twitter) in order to provide an efficient, reliable and accurate flood text classification model with minimal labeled data. This study is important since it can immensely help in providing the flood-related updates and notifications to the city officials for emergency decision making, rescue operations, and early warnings, etc. We propose to perform the text classification using the inductive transfer learning method, i.e., the pre-trained language model ULMFiT, and fine-tune it in order to effectively classify the flood-related feeds in any new location. Finally, we show that using very little new labeled data in the target domain we can successfully build an efficient and high performing model for flood detection and analysis with human-generated facts and observations from Twitter.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
167,715
1711.00355
Detection for 5G-NOMA: An Online Adaptive Machine Learning Approach
Non-orthogonal multiple access (NOMA) has emerged as a promising radio access technique for enabling the performance enhancements promised by the fifth-generation (5G) networks in terms of connectivity, low latency, and high spectrum efficiency. In the NOMA uplink, successive interference cancellation (SIC) based detection with device clustering has been suggested. In the case of multiple receive antennas, SIC can be combined with the minimum mean-squared error (MMSE) beamforming. However, there exists a tradeoff between the NOMA cluster size and the incurred SIC error. Larger clusters lead to larger errors but they are desirable from the spectrum efficiency and connectivity point of view. We propose a novel online learning based detection for the NOMA uplink. In particular, we design an online adaptive filter in the sum space of linear and Gaussian reproducing kernel Hilbert spaces (RKHSs). Such a sum space design is robust against variations of a dynamic wireless network that can deteriorate the performance of a purely nonlinear adaptive filter. We demonstrate by simulations that the proposed method outperforms the MMSE-SIC based detection for large cluster sizes.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
83,704
2410.21296
The Trap of Presumed Equivalence: Artificial General Intelligence Should Not Be Assessed on the Scale of Human Intelligence
A traditional approach to assessing emerging intelligence in the theory of intelligent systems is based on the similarity, "imitation" of human-like actions and behaviors, benchmarking the performance of intelligent systems on the scale of human cognitive skills. In this work we attempt to outline the shortcomings of this line of thought, which is based on the implicit presumption of the equivalence and compatibility of the originating and emergent intelligences. We provide arguments to the point that under some natural assumptions, developing intelligent systems will be able to form their own intents and objectives. Then, the difference in the rate of progress of natural and artificial systems that was noted on multiple occasions in the discourse on artificial intelligence can lead to the scenario of a progressive divergence of the intelligences, in their cognitive abilities, functions and resources, values, ethical frameworks, worldviews, intents and existential objectives: the scenario of the AGI evolutionary gap. We discuss evolutionary processes that can guide the development of emergent intelligent systems and attempt to identify the starting point of the progressive divergence scenario.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
503,180
2405.14088
High-dimensional Learning with Noisy Labels
This paper provides theoretical insights into high-dimensional binary classification with class-conditional noisy labels. Specifically, we study the behavior of a linear classifier with a label noisiness aware loss function, when both the dimension of data $p$ and the sample size $n$ are large and comparable. Relying on random matrix theory by supposing a Gaussian mixture data model, the performance of the linear classifier when $p,n\to \infty$ is shown to converge towards a limit, involving scalar statistics of the data. Importantly, our findings show that the low-dimensional intuitions to handle label noise do not hold in high-dimension, in the sense that the optimal classifier in low-dimension dramatically fails in high-dimension. Based on our derivations, we design an optimized method that is shown to be provably more efficient in handling noisy labels in high dimensions. Our theoretical conclusions are further confirmed by experiments on real datasets, where we show that our optimized approach outperforms the considered baselines.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
456,238
2103.03180
A Critical Note on Social Cloud
The idea of a social cloud has emerged as a resource sharing paradigm in a social network context. Undoubtedly, state-of-the-art social cloud systems demonstrate the potential of the social cloud acting as complementary to other computing paradigms such as the cloud, grid, peer-to-peer and volunteer computing. However, in this note, we present a critical survey of the social cloud literature and come to the conclusion that these initial efforts fail to offer a general framework of the social cloud, as well as to show the uniqueness of the social cloud. This short note reveals that there are significant differences regarding the concept of the social cloud, resource definition, resource sharing and allocation mechanisms, and its applications and stakeholders. This study is an attempt to express the need for a general framework of the social cloud, one that can incorporate the various views and resource sharing setups discussed in the literature.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
223,196
2201.04572
Transmission Scheme, Detection and Power Allocation for Uplink User Cooperation with NOMA and RSMA
In this paper, we propose two novel cooperative-non-orthogonal-multiple-access (C-NOMA) and cooperative-rate-splitting-multiple-access (C-RSMA) schemes for uplink user cooperation. At the first mini-slot of these schemes, each user transmits its signal and receives the transmitted signal of the other user in full-duplex mode, and at the second mini-slot, each user relays the other user's message with amplify-and-forward (AF) protocol. At both schemes, to achieve better spectral efficiency, users transmit signals in the non-orthogonal mode in both mini-slots. In C-RSMA, we also apply the rate-splitting method in which the message of each user is divided into two streams. In the proposed detection schemes for C-NOMA and C-RSMA, we apply a combination of maximum-ratio-combining (MRC) and successive-interference-cancellation (SIC). Then, we derive the achievable rates for C-NOMA and C-RSMA, and formulate two optimization problems to maximize the minimum rate of two users by considering the proportional fairness coefficient. We propose two power allocation algorithms based on successive-convex-approximation (SCA) and geometric-programming (GP) to solve these non-convex problems. Next, we derive the asymptotic outage probability of the proposed C-NOMA and C-RSMA schemes, and prove that they achieve diversity order of two. Finally, the above-mentioned performance is confirmed by simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
275,137
1006.4925
Simulating information creation in social Semantic Web applications
Appropriate ranking algorithms and incentive mechanisms are essential to the creation of high-quality information by users of a social network. However, evaluating such mechanisms in a quantifiable way is a difficult problem. Studies of live social networks are of limited utility, due to the subjective nature of ranking and the lack of experimental control. Simulation provides a valuable alternative: insofar as the simulation resembles the live social network, fielding a new algorithm within a simulated network can predict the effect it will have on the live network. In this paper, we propose a simulation model based on the actor-concept-instance model of semantic social networks, then we evaluate the model against a number of common ranking algorithms. We observe their effects on information creation in such a network, and we extend our results to the evaluation of generic ranking algorithms and incentive mechanisms.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
6,883
2501.19195
Rethinking Early Stopping: Refine, Then Calibrate
Machine learning classifiers often produce probabilistic predictions that are critical for accurate and interpretable decision-making in various domains. The quality of these predictions is generally evaluated with proper losses like cross-entropy, which decompose into two components: calibration error assesses general under/overconfidence, while refinement error measures the ability to distinguish different classes. In this paper, we provide theoretical and empirical evidence that these two errors are not minimized simultaneously during training. Selecting the best training epoch based on validation loss thus leads to a compromise point that is suboptimal for both calibration error and, most importantly, refinement error. To address this, we introduce a new metric for early stopping and hyperparameter tuning that makes it possible to minimize refinement error during training. The calibration error is minimized after training, using standard techniques. Our method integrates seamlessly with any architecture and consistently improves performance across diverse classification tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
529,052
2404.00487
Contextual AI Journaling: Integrating LLM and Time Series Behavioral Sensing Technology to Promote Self-Reflection and Well-being using the MindScape App
MindScape aims to study the benefits of integrating time series behavioral patterns (e.g., conversational engagement, sleep, location) with Large Language Models (LLMs) to create a new form of contextual AI journaling, promoting self-reflection and well-being. We argue that integrating behavioral sensing in LLMs will likely lead to a new frontier in AI. In this Late-Breaking Work paper, we discuss the MindScape contextual journal App design that uses LLMs and behavioral sensing to generate contextual and personalized journaling prompts crafted to encourage self-reflection and emotional development. We also discuss the MindScape study of college students based on a preliminary user study and our upcoming study to assess the effectiveness of contextual AI journaling in promoting better well-being on college campuses. MindScape represents a new application class that embeds behavioral intelligence in AI.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
442,951
2311.03810
Rethinking and Improving Multi-task Learning for End-to-end Speech Translation
Significant improvements in end-to-end speech translation (ST) have been achieved through the application of multi-task learning. However, the extent to which auxiliary tasks are highly consistent with the ST task, and how much this approach truly helps, have not been thoroughly studied. In this paper, we investigate the consistency between different tasks, considering different times and modules. We find that the textual encoder primarily facilitates cross-modal conversion, but the presence of noise in speech impedes the consistency between text and speech representations. Furthermore, we propose an improved multi-task learning (IMTL) approach for the ST task, which bridges the modal gap by mitigating the difference in length and representation. We conduct experiments on the MuST-C dataset. The results demonstrate that our method attains state-of-the-art results. Moreover, when additional data is used, we achieve the new SOTA result on MuST-C English to Spanish task with 20.8% of the training time required by the current SOTA method.
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
406,005
2208.12339
LinCQA: Faster Consistent Query Answering with Linear Time Guarantees
Data analytical pipelines often encounter the problem of querying inconsistent data that violate pre-determined integrity constraints. Data cleaning is an extensively studied paradigm that singles out a consistent repair of the inconsistent data. Consistent query answering (CQA) is an alternative approach to data cleaning that asks for all tuples guaranteed to be returned by a given query on all (in most cases, exponentially many) repairs of the inconsistent data. This paper identifies a class of acyclic select-project-join (SPJ) queries for which CQA can be solved via SQL rewriting with a linear time guarantee. Our rewriting method can be viewed as a generalization of Yannakakis's algorithm for acyclic joins to the inconsistent setting. We present LinCQA, a system that can output rewritings in both SQL and non-recursive Datalog rules for every query in this class. We show that LinCQA often outperforms the existing CQA systems on both synthetic and real-world workloads, and in some cases, by orders of magnitude.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
314,692
1607.05523
Dendritic Spine Shape Analysis: A Clustering Perspective
Functional properties of neurons are strongly coupled with their morphology. Changes in neuronal activity alter morphological characteristics of dendritic spines. The first step towards understanding the structure-function relationship is to group spines into the main spine classes reported in the literature. Shape analysis of dendritic spines can help neuroscientists understand the underlying relationships. Due to the unavailability of reliable automated tools, this analysis is currently performed manually, which is a time-intensive and subjective task. Several studies on spine shape classification have been reported in the literature; however, there is an on-going debate on whether distinct spine shape classes exist or whether spines should be modeled through a continuum of shape variations. Another challenge is the subjectivity and bias that is introduced due to the supervised nature of classification approaches. In this paper, we aim to address these issues by presenting a clustering perspective. In this context, clustering may serve both confirmation of known patterns and discovery of new ones. We perform cluster analysis on two-photon microscopic images of spines using morphological, shape, and appearance based features and gain insights into the spine shape analysis problem. We use histogram of oriented gradients (HOG), disjunctive normal shape models (DNSM), morphological features, and intensity profile based features for cluster analysis. We use x-means to perform cluster analysis that selects the number of clusters automatically using the Bayesian information criterion (BIC). For all features, this analysis produces 4 clusters and we observe the formation of at least one cluster consisting of spines which are difficult to assign to a known class. This observation supports the argument of intermediate shape types.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
58,767
2007.05871
Lifted Codes and Lattices from Codes Over Finite Chain Rings
In this paper we give the generalization of lifted codes over any finite chain ring. This has been done by using the construction of finite chain rings from $p$-adic fields. Further we propose a lattice construction from linear codes over finite chain rings using lifted codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
186,810
1912.12223
Bitopological Duality for Algebras of Fitting's Logic and Natural Duality Extension
In this paper, we investigate a bitopological duality for algebras of Fitting's multi-valued logic. We also extend the natural duality theory for $\mathbb{ISP_I}(\mathcal{L})$ by developing a duality for $\mathbb{ISP}(\mathcal{L})$, where $\mathcal{L}$ is a finite algebra whose underlying lattice is bounded distributive.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
158,802
2501.12824
Enhancing Monocular Depth Estimation with Multi-Source Auxiliary Tasks
Monocular depth estimation (MDE) is a challenging task in computer vision, often hindered by the cost and scarcity of high-quality labeled datasets. We tackle this challenge using auxiliary datasets from related vision tasks for an alternating training scheme with a shared decoder built on top of a pre-trained vision foundation model, while giving a higher weight to MDE. Through extensive experiments we demonstrate the benefits of incorporating various in-domain auxiliary datasets and tasks to improve MDE quality on average by ~11%. Our experimental analysis shows that auxiliary tasks have different impacts, confirming the importance of task selection, highlighting that quality gains are not achieved by merely adding data. Remarkably, our study reveals that using semantic segmentation datasets as Multi-Label Dense Classification (MLDC) often results in additional quality gains. Lastly, our method significantly improves the data efficiency for the considered MDE datasets, enhancing their quality while reducing their size by at least 80%. This paves the way for using auxiliary data from related tasks to improve MDE quality despite limited availability of high-quality labeled data. Code is available at https://jugit.fz-juelich.de/ias-8/mdeaux.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
526,448
2010.10239
Complete Multilingual Neural Machine Translation
Multilingual Neural Machine Translation (MNMT) models are commonly trained on a joint set of bilingual corpora which is acutely English-centric (i.e. English either as the source or target language). While direct data between two languages that are non-English is explicitly available at times, its use is not common. In this paper, we first take a step back and look at the commonly used bilingual corpora (WMT), and resurface the existence and importance of implicit structure that existed in it: multi-way alignment across examples (the same sentence in more than two languages). We set out to study the use of multi-way aligned examples to enrich the original English-centric parallel corpora. We reintroduce this direct parallel data from multi-way aligned corpora between all source and target languages. By doing so, the English-centric graph expands into a complete graph, every language pair being connected. We call MNMT with such connectivity pattern complete Multilingual Neural Machine Translation (cMNMT) and demonstrate its utility and efficacy with a series of experiments and analysis. In combination with a novel training data sampling strategy that is conditioned on the target language only, cMNMT yields competitive translation quality for all language pairs. We further study the size effect of multi-way aligned data, its transfer learning capabilities and how it eases adding a new language in MNMT. Finally, we stress test cMNMT at scale and demonstrate that we can train a cMNMT model with up to 111*112=12,432 language pairs that provides competitive translation quality for all language pairs.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
201,827
2404.09051
Rethinking Iterative Stereo Matching from Diffusion Bridge Model Perspective
Recently, iteration-based stereo matching has shown great potential. However, these models optimize the disparity map using RNN variants. The discrete optimization process poses a challenge of information loss, which restricts the level of detail that can be expressed in the generated disparity map. In order to address these issues, we propose a novel training approach that incorporates diffusion models into the iterative optimization process. We designed a Time-based Gated Recurrent Unit (T-GRU) to correlate temporal and disparity outputs. Unlike standard recurrent units, we employ Agent Attention to generate more expressive features. We also designed an attention-based context network to capture a large amount of contextual information. Experiments on several public benchmarks show that we have achieved competitive stereo matching performance. Our model ranks first in the Scene Flow dataset, achieving over a 7% improvement compared to competing methods, and requires only 8 iterations to achieve state-of-the-art results.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
446,521
2005.10777
Manifold Alignment for Semantically Aligned Style Transfer
Most existing style transfer methods follow the assumption that styles can be represented with global statistics (e.g., Gram matrices or covariance matrices), and thus address the problem by forcing the output and style images to have similar global statistics. An alternative is the assumption of local style patterns, where algorithms are designed to swap similar local features of content and style images. However, the limitation of these existing methods is that they neglect the semantic structure of the content image which may lead to corrupted content structure in the output. In this paper, we make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution. Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions and a Manifold Alignment based Style Transfer (MAST) framework is proposed. The proposed framework allows semantically similar regions between the output and the style image share similar style patterns. Moreover, the proposed manifold alignment method is flexible to allow user editing or using semantic segmentation maps as guidance for style transfer. To allow the method to be applicable to photorealistic style transfer, we propose a new adaptive weight skip connection network structure to preserve the content details. Extensive experiments verify the effectiveness of the proposed framework for both artistic and photorealistic style transfer. Code is available at https://github.com/NJUHuoJing/MAST.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
178,281
2405.08019
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting
Knowledge distillation, a widely used model compression technique, works on the basis of transferring knowledge from a cumbersome teacher model to a lightweight student model. The technique involves jointly optimizing the task specific and knowledge distillation losses with a weight assigned to them. Despite these weights playing a crucial role in the performance of the distillation process, current methods provide equal weight to both losses, leading to suboptimal performance. In this paper, we propose Adaptive Knowledge Distillation, a novel technique inspired by curriculum learning to adaptively weigh the losses at instance level. This technique goes by the notion that sample difficulty increases with teacher loss. Our method follows a plug-and-play paradigm that can be applied on top of any task-specific and distillation objectives. Experiments show that our method performs better than conventional knowledge distillation method and existing instance-level loss functions.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
453,955