id (string, 9–16 chars) | title (string, 4–278 chars) | abstract (string, 3–4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2502.10675 | Hierarchically-Structured Open-Vocabulary Indoor Scene Synthesis with Pre-trained Large Language Model | Indoor scene synthesis aims to automatically produce plausible, realistic and diverse 3D indoor scenes, especially given arbitrary user requirements. Recently, the promising generalization ability of pre-trained large language models (LLMs) has assisted open-vocabulary indoor scene synthesis. However, the challenge lies in converting the LLM-generated outputs into reasonable and physically feasible scene layouts. In this paper, we propose to generate hierarchically structured scene descriptions with an LLM and then compute the scene layouts. Specifically, we train a hierarchy-aware network to infer the fine-grained relative positions between objects and design a divide-and-conquer optimization to solve for scene layouts. The advantages of using a hierarchically structured scene representation are two-fold. First, the hierarchical structure provides a rough grounding for object arrangement, which alleviates contradictory placements with dense relations and enhances the generalization ability of the network to infer fine-grained placements. Second, it naturally supports the divide-and-conquer optimization, by first arranging the sub-scenes and then the entire scene, to more effectively solve for a feasible layout. We conduct extensive comparison experiments and ablation studies with both qualitative and quantitative evaluations to validate the effectiveness of our key designs with the hierarchically structured scene representation. Our approach can generate more reasonable scene layouts while aligning better with the user requirements and LLM descriptions. We also present open-vocabulary scene synthesis and interactive scene design results to show the strength of our approach in these applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 533,994 |
2304.12329 | Pre-trained Embeddings for Entity Resolution: An Experimental Analysis [Experiment, Analysis & Benchmark] | Many recent works on Entity Resolution (ER) leverage Deep Learning techniques involving language models to improve effectiveness. This is applied to both main steps of ER, i.e., blocking and matching. Several pre-trained embeddings have been tested, with the most popular ones being fastText and variants of the BERT model. However, there is no detailed analysis of their pros and cons. To cover this gap, we perform a thorough experimental analysis of 12 popular language models over 17 established benchmark datasets. First, we assess their vectorization overhead for converting all input entities into dense embedding vectors. Second, we investigate their blocking performance, performing a detailed scalability analysis, and comparing them with the state-of-the-art deep learning-based blocking method. Third, we conclude with their relative performance for both supervised and unsupervised matching. Our experimental results provide novel insights into the strengths and weaknesses of the main language models, helping researchers and practitioners select the most suitable ones in practice. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | 360,176 |
1705.09064 | MagNet: a Two-Pronged Defense against Adversarial Examples | Deep learning has shown promising results on hard perceptual problems in recent years. However, deep learning systems are found to be vulnerable to small adversarial perturbations that are nearly imperceptible to humans. Such specially crafted perturbations cause deep learning systems to output incorrect decisions, with potentially disastrous consequences. These vulnerabilities hinder the deployment of deep learning systems where safety or security is important. Attempts to secure deep learning systems either target specific attacks or have been shown to be ineffective. In this paper, we propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet does not modify the protected classifier or know the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. Unlike previous work, MagNet learns to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since it does not rely on any process for generating adversarial examples, it has substantial generalization power. Moreover, MagNet reconstructs adversarial examples by moving them towards the manifold, which is effective for helping classify adversarial examples with small perturbations correctly. We discuss the intrinsic difficulty in defending against whitebox attacks and propose a mechanism to defend against graybox attacks. Inspired by the use of randomness in cryptography, we propose to use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios while keeping the false positive rate on normal examples very low. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 74,142 |
2502.11895 | Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models? | Large language models (LLMs) require immense resources for training and inference. Quantization, a technique that reduces the precision of model parameters, offers a promising solution for improving LLM efficiency and sustainability. While post-training quantization methods typically achieve 4-8 bits per parameter, recent research suggests that training LLMs with 1.58 bits per weight parameter from scratch can maintain model accuracy while greatly reducing memory requirements and energy consumption at inference time. Here, we investigate a training strategy for quantization-aware pre-training, where the models are first trained with 16-bit precision and then transition into 1.58-bit quantization-aware training. Our results on 11 downstream tasks show that this 16-to-1.58-bit training strategy is preferable over full 1.58-bit training and leaves models closer to those which have undergone 16-bit training. We further investigate the effects of retaining the optimizer state at the transition point and gradually phasing in quantization strength -- finding that both techniques alleviate the magnitude of loss spikes, but also that these effects can be compensated through further training. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 534,591 |
2001.10155 | Medical Image Segmentation via Unsupervised Convolutional Neural Network | For the majority of learning-based segmentation methods, a large quantity of high-quality training data is required. In this paper, we present a novel learning-based segmentation model that can be trained semi-supervised or unsupervised. Specifically, in the unsupervised setting, we parameterize the Active Contour Without Edges (ACWE) framework via a convolutional neural network (ConvNet), and optimize the parameters of the ConvNet using a self-supervised method. In the other setting (semi-supervised), the auxiliary segmentation ground truth is used during training. We show that the method provides fast and high-quality bone segmentation in the context of single-photon emission computed tomography (SPECT) images. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 161,750 |
1904.05338 | New Computational and Statistical Aspects of Regularized Regression with Application to Rare Feature Selection and Aggregation | Prior knowledge on the properties of a target model often comes as discrete or combinatorial descriptions. This work provides a unified computational framework for defining norms that promote such structures. More specifically, we develop associated tools for optimization involving such norms given only the orthogonal projection oracle onto the non-convex set of desired models. As an example, we study a norm, which we term the doubly-sparse norm, for promoting vectors with few nonzero entries taking only a few distinct values. We further discuss how the K-means algorithm can serve as the underlying projection oracle in this case and how it can be efficiently represented as a quadratically constrained quadratic program. Our motivation for the study of this norm is regularized regression in the presence of rare features, which poses a challenge to various methods within high-dimensional statistics, and in machine learning in general. The proposed estimation procedure is designed to perform automatic feature selection and aggregation, for which we develop statistical bounds. The bounds are general and offer a statistical framework for norm-based regularization. The bounds rely on novel geometric quantities on which we attempt to elaborate as well. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 127,286 |
2501.01085 | Noise-Resilient Symbolic Regression with Dynamic Gating Reinforcement Learning | Symbolic regression (SR) has emerged as a pivotal technique for uncovering the intrinsic information within data and enhancing the interpretability of AI models. However, current state-of-the-art (sota) SR methods struggle to perform correct recovery of symbolic expressions from high-noise data. To address this issue, we introduce a novel noise-resilient SR (NRSR) method capable of recovering expressions from high-noise data. Our method leverages a novel reinforcement learning (RL) approach in conjunction with a designed noise-resilient gating module (NGM) to learn symbolic selection policies. The gating module can dynamically filter the meaningless information from high-noise data, thereby demonstrating a high noise-resilient capability for the SR process. We also design a mixed path entropy (MPE) bonus term in the RL process to increase the exploration capabilities of the policy. Experimental results demonstrate that our method significantly outperforms several popular baselines on benchmarks with high-noise data. Furthermore, our method can also achieve sota performance on benchmarks with clean data, showcasing its robustness and efficacy in SR tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 521,916 |
2207.02173 | DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation for Long-Tailed Visual Recognition | There is growing interest in the challenging visual perception task of learning from long-tailed class distributions. The extreme class imbalance in the training dataset biases the model to prefer recognizing majority class data over minority class data. Furthermore, the lack of diversity in minority class samples makes it difficult to find a good representation. In this paper, we propose an effective data augmentation method, referred to as bilateral mixup augmentation, which can improve the performance of long-tailed visual recognition. The bilateral mixup augmentation combines two samples generated by a uniform sampler and a re-balanced sampler and augments the training dataset to enhance the representation learning for minority classes. We also reduce the classifier bias using class-wise temperature scaling, which scales the logits differently per class in the training phase. We apply both ideas to the dual-branch network (DBN) framework, presenting a new model, named dual-branch network with bilateral mixup (DBN-Mix). Experiments on popular long-tailed visual recognition datasets show that DBN-Mix improves performance significantly over baseline and that the proposed method achieves state-of-the-art performance in some categories of benchmarks. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 306,426 |
1312.7581 | On the Learning Behavior of Adaptive Networks - Part I: Transient Analysis | This work carries out a detailed transient analysis of the learning behavior of multi-agent networks, and reveals interesting results about the learning abilities of distributed strategies. Among other results, the analysis reveals how combination policies influence the learning process of networked agents, and how these policies can steer the convergence point towards any of many possible Pareto optimal solutions. The results also establish that the learning process of an adaptive network undergoes three (rather than two) well-defined stages of evolution with distinctive convergence rates during the first two stages, while attaining a finite mean-square-error (MSE) level in the last stage. The analysis reveals what aspects of the network topology influence performance directly and suggests design procedures that can optimize performance by adjusting the relevant topology parameters. Interestingly, it is further shown that, in the adaptation regime, each agent in a sparsely connected network is able to achieve the same performance level as that of a centralized stochastic-gradient strategy even for left-stochastic combination strategies. These results lead to a deeper understanding and useful insights on the convergence behavior of coupled distributed learners. The results also lead to effective design mechanisms to help diffuse information more thoroughly over networks. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 29,498 |
1407.0005 | Sum Rate Maximizing Multigroup Multicast Beamforming under Per-antenna Power Constraints | A multi-antenna transmitter that conveys independent sets of common data to distinct groups of users is herein considered, a model known as physical layer multicasting to multiple co-channel groups. In the recently proposed context of per-antenna power constrained multigroup multicasting, the present work focuses on a novel system design that aims at maximizing the total achievable throughput. Towards increasing the system sum rate, the available power resources need to be allocated to well-conditioned groups of users. A detailed solution to tackle the elaborate sum rate maximization multigroup multicast problem under per-antenna power constraints is therefore derived. Numerical results are presented to quantify the gains of the proposed algorithm over heuristic solutions. Besides Rayleigh faded channels, the solution is also applied to uniform linear array transmitters operating in the far field, where line-of-sight conditions are realized. In this setting, a sensitivity analysis with respect to the angular separation of co-group users is included. Finally, a simple scenario providing important intuitions for the sum rate maximizing multigroup multicast solutions is elaborated. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,284 |
2407.07707 | Group Projected Subspace Pursuit for Block Sparse Signal Reconstruction: Convergence Analysis and Applications | In this paper, we present a convergence analysis of the Group Projected Subspace Pursuit (GPSP) algorithm proposed by He et al. [HKL+23] (Group Projected subspace pursuit for IDENTification of variable coefficient differential equations (GP-IDENT), Journal of Computational Physics, 494, 112526) and extend its application to general tasks of block sparse signal recovery. We prove that when the sampling matrix satisfies the Block Restricted Isometry Property (BRIP) with a sufficiently small Block Restricted Isometry Constant (BRIC), GPSP exactly recovers the true block sparse signals. When the observations are noisy, this convergence property of GPSP remains valid if the magnitude of the true signal is sufficiently large. GPSP selects the features by a subspace projection criterion (SPC) for candidate inclusion and a response magnitude criterion (RMC) for candidate exclusion. We compare these criteria with counterparts of other state-of-the-art greedy algorithms. Our theoretical analysis and numerical ablation studies reveal that SPC is critical to the superior performance of GPSP, and that RMC can enhance the robustness of feature identification when observations contain noise. We test and compare GPSP with other methods in diverse settings, including heterogeneous random block matrices, inexact observations, face recognition, and PDE identification. We find that GPSP outperforms the other algorithms in most cases for various levels of block sparsity and block sizes, justifying its effectiveness for general applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 471,865 |
2207.00220 | Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset | One concern with the rise of large language models lies with their potential for significant harm, particularly from pretraining on biased, obscene, copyrighted, and private information. Emerging ethical approaches have attempted to filter pretraining material, but such approaches have been ad hoc and failed to take context into account. We offer an approach to filtering grounded in law, which has directly addressed the tradeoffs in filtering material. First, we gather and make available the Pile of Law, a 256GB (and growing) dataset of open-source English-language legal and administrative data, covering court opinions, contracts, administrative rules, and legislative records. Pretraining on the Pile of Law may help with legal tasks that have the promise to improve access to justice. Second, we distill the legal norms that governments have developed to constrain the inclusion of toxic or private content into actionable lessons for researchers and discuss how our dataset reflects these norms. Third, we show how the Pile of Law offers researchers the opportunity to learn such filtering rules directly from the data, providing an exciting new research direction in model-based processing. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 305,686 |
1806.10062 | Blind Decoding-Metric Estimation for Probabilistic Shaping via Expectation Maximization | An unsupervised learning approach based on expectation maximization is proposed to obtain the parameters of a soft decision forward error correction decoding metric for probabilistic shaping. The algorithm depends only on the channel observations and does not require transmitted data. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 101,479 |
2404.09248 | Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts | Reinforcement learning (RL) trains agents to accomplish complex tasks through environmental interaction data, but its capacity is also limited by the scope of the available data. To obtain a knowledgeable agent, a promising approach is to leverage the knowledge from large language models (LLMs). Despite previous studies combining LLMs with RL, seamless integration of the two components remains challenging due to their semantic gap. This paper introduces a novel method, Knowledgeable Agents from Language Model Rollouts (KALM), which extracts knowledge from LLMs in the form of imaginary rollouts that can be easily learned by the agent through offline reinforcement learning methods. The primary challenge of KALM lies in LLM grounding, as LLMs are inherently limited to textual data, whereas environmental data often comprise numerical vectors unseen to LLMs. To address this, KALM fine-tunes the LLM to perform various tasks based on environmental data, including bidirectional translation between natural language descriptions of skills and their corresponding rollout data. This grounding process enhances the LLM's comprehension of environmental dynamics, enabling it to generate diverse and meaningful imaginary rollouts that reflect novel skills. Initial empirical evaluations on the CLEVR-Robot environment demonstrate that KALM enables agents to complete complex rephrasings of task goals and extend their capabilities to novel tasks requiring unprecedented optimal behaviors. KALM achieves a success rate of 46% in executing tasks with unseen goals, substantially surpassing the 26% success rate achieved by baseline methods. Furthermore, KALM effectively enables the LLM to comprehend environmental dynamics, resulting in the generation of meaningful imaginary rollouts that reflect novel skills and demonstrate the seamless integration of large language models and reinforcement learning. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 446,595 |
2303.07944 | Non-Contrastive Unsupervised Learning of Physiological Signals from Video | Subtle periodic signals such as blood volume pulse and respiration can be extracted from RGB video, enabling remote health monitoring at low cost. Advancements in remote pulse estimation -- or remote photoplethysmography (rPPG) -- are currently driven by deep learning solutions. However, modern approaches are trained and evaluated on benchmark datasets with associated ground truth from contact-PPG sensors. We present the first non-contrastive unsupervised learning framework for signal regression to break free from the constraints of labelled video data. With minimal assumptions of periodicity and finite bandwidth, our approach is capable of discovering the blood volume pulse directly from unlabelled videos. We find that encouraging sparse power spectra within normal physiological bandlimits and variance over batches of power spectra is sufficient for learning visual features of periodic signals. We perform the first experiments utilizing unlabelled video data not specifically created for rPPG to train robust pulse rate estimators. Given the limited inductive biases and impressive empirical results, the approach is theoretically capable of discovering other periodic signals from video, enabling multiple physiological measurements without the need for ground truth signals. Codes to fully reproduce the experiments are made available along with the paper. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,449 |
2303.02215 | Learning Stabilization Control from Observations by Learning Lyapunov-like Proxy Models | The deployment of Reinforcement Learning to robotics applications faces the difficulty of reward engineering. Therefore, approaches have focused on creating reward functions by Learning from Observations (LfO) which is the task of learning policies from expert trajectories that only contain state sequences. We propose new methods for LfO for the important class of continuous control problems of learning to stabilize, by introducing intermediate proxy models acting as reward functions between the expert and the agent policy based on Lyapunov stability theory. Our LfO training process consists of two steps. The first step attempts to learn a Lyapunov-like landscape proxy model from expert state sequences without access to any kinematics model, and the second step uses the learned landscape model to guide in training the learner's policy. We formulate novel learning objectives for the two steps that are important for overall training success. We evaluate our methods in real automobile robot environments and other simulated stabilization control problems in model-free settings, like Quadrotor control and maintaining upright positions of Hopper in MuJoCo. We compare with state-of-the-art approaches and show the proposed methods can learn efficiently with less expert observations. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 349,259 |
2311.18812 | What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations | Do large language models (LLMs) exhibit sociodemographic biases, even when they decline to respond? To bypass their refusal to "speak," we study this research question by probing contextualized embeddings and exploring whether this bias is encoded in their latent representations. We propose a logistic Bradley-Terry probe which predicts word pair preferences of LLMs from the words' hidden vectors. We first validate our probe on three pair preference tasks and thirteen LLMs, where we outperform the word embedding association test (WEAT), a standard approach in testing for implicit association, by a relative 27% in error rate. We also find that word pair preferences are best represented in the middle layers. Next, we transfer probes trained on harmless tasks (e.g., pick the larger number) to controversial ones (compare ethnicities) to examine biases in nationality, politics, religion, and gender. We observe substantial bias for all target classes: for instance, the Mistral model implicitly prefers Europe to Africa, Christianity to Judaism, and left-wing to right-wing politics, despite declining to answer. This suggests that instruction fine-tuning does not necessarily debias contextualized embeddings. Our codebase is at https://github.com/castorini/biasprobe. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 411,828 |
2006.04672 | POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis | Monte-Carlo planning, as exemplified by Monte-Carlo Tree Search (MCTS), has demonstrated remarkable performance in applications with finite spaces. In this paper, we consider Monte-Carlo planning in an environment with continuous state-action spaces, a much less understood problem with important applications in control and robotics. We introduce POLY-HOOT, an algorithm that augments MCTS with a continuous armed bandit strategy named Hierarchical Optimistic Optimization (HOO) (Bubeck et al., 2011). Specifically, we enhance HOO by using an appropriate polynomial, rather than logarithmic, bonus term in the upper confidence bounds. Such a polynomial bonus is motivated by its empirical successes in AlphaGo Zero (Silver et al., 2017b), as well as its significant role in achieving theoretical guarantees of finite space MCTS (Shah et al., 2019). We investigate, for the first time, the regret of the enhanced HOO algorithm in non-stationary bandit problems. Using this result as a building block, we establish non-asymptotic convergence guarantees for POLY-HOOT: the value estimate converges to an arbitrarily small neighborhood of the optimal value function at a polynomial rate. We further provide experimental results that corroborate our theoretical findings. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,772 |
2312.07821 | Radio Signal Classification by Adversarially Robust Quantum Machine Learning | Radio signal classification plays a pivotal role in identifying the modulation scheme used in received radio signals, which is essential for demodulation and proper interpretation of the transmitted information. Researchers have underscored the high susceptibility of ML algorithms for radio signal classification to adversarial attacks. Such vulnerability could result in severe consequences, including misinterpretation of critical messages, interception of classified information, or disruption of communication channels. Recent advancements in quantum computing have revolutionized theories and implementations of computation, bringing the unprecedented development of Quantum Machine Learning (QML). It is shown that quantum variational classifiers (QVCs) provide notably enhanced robustness against classical adversarial attacks in image classification. However, no research has yet explored whether QML can similarly mitigate adversarial threats in the context of radio signal classification. This work applies QVCs to radio signal classification and studies their robustness to various adversarial attacks. We also propose the novel application of the approximate amplitude encoding (AAE) technique to encode radio signal data efficiently. Our extensive simulation results show that attacks generated on QVCs transfer well to CNN models, indicating that these adversarial examples can fool neural networks that they are not explicitly designed to attack. However, the converse is not true. QVCs primarily resist the attacks generated on CNNs. Overall, with comprehensive simulations, our results shed new light on the growing field of QML by bridging knowledge gaps in quantum adversarial machine learning (QAML) for radio signal classification and uncovering the advantages of applying QML methods in practical applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,065 |
2203.08362 | Spot the Difference: A Cooperative Object-Referring Game in Non-Perfectly Co-Observable Scene | Visual dialog has witnessed great progress after introducing various vision-oriented goals into the conversation, such as GuessWhich and GuessWhat, where the single image is visible to only one or to both of the questioner and the answerer, respectively. Researchers have mostly explored visual dialog tasks in such single- or perfectly co-observable visual scenes, while somewhat neglecting tasks in non-perfectly co-observable visual scenes, where the images accessed by the two agents may not be exactly the same, as often occurs in practice. Although building common ground in a non-perfectly co-observable visual scene through conversation is significant for advanced dialog agents, the lack of such a dialog task and a corresponding large-scale dataset makes it impossible to carry out in-depth research. To break this limitation, we propose an object-referring game in a non-perfectly co-observable visual scene, where the goal is to spot the difference between similar visual scenes by conversing in natural language. The task addresses the challenges of dialog strategy in a non-perfectly co-observable visual scene and the ability to categorize objects. Correspondingly, we construct a large-scale multimodal dataset, named SpotDiff, which contains 87k Virtual Reality images and 97k dialogs generated by self-play. Finally, we give benchmark models for this task, and conduct extensive experiments to evaluate their performance as well as analyze the task's main challenges. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 285,756 |
2207.02295 | Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs | As communication protocols evolve, datacenter network utilization increases. As a result, congestion is more frequent, causing higher latency and packet loss. Combined with the increasing complexity of workloads, manual design of congestion control (CC) algorithms becomes extremely difficult. This calls for the development of AI approaches to replace the human effort. Unfortunately, it is currently not possible to deploy AI models on network devices due to their limited computational capabilities. Here, we address this problem by building a computationally-light solution based on a recent reinforcement learning CC algorithm [arXiv:2207.02295]. We reduce the inference time of RL-CC by x500 by distilling its complex neural network into decision trees. This transformation enables real-time inference within the $\mu$-sec decision-time requirement, with a negligible effect on quality. We deploy the transformed policy on NVIDIA NICs in a live cluster. Compared to popular CC algorithms used in production, RL-CC is the only method that performs well on all benchmarks tested over a large range of number of flows. It balances multiple metrics simultaneously: bandwidth, latency, and packet drops. These results suggest that data-driven methods for CC are feasible, challenging the prior belief that handcrafted heuristics are necessary to achieve optimal performance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 306,467 |
1912.12055 | nnAudio: An on-the-fly GPU Audio to Spectrogram Conversion Toolbox Using 1D Convolution Neural Networks | Converting time domain waveforms to frequency domain spectrograms is typically considered to be a preprocessing step done before model training. This approach, however, has several drawbacks. First, it takes a lot of hard disk space to store different frequency domain representations. This is especially true during the model development and tuning process, when exploring various types of spectrograms for optimal performance. Second, if another dataset is used, one must process all the audio clips again before the network can be retrained. In this paper, we integrate the time domain to frequency domain conversion as part of the model structure, and propose a neural network based toolbox, nnAudio, which leverages 1D convolutional neural networks to perform time domain to frequency domain conversion during feed-forward. It allows on-the-fly spectrogram generation without the need to store any spectrograms on the disk. This approach also allows back-propagation on the waveforms-to-spectrograms transformation layer, which implies that this transformation process can be made trainable, and hence further optimized by gradient descent. nnAudio reduces the waveforms-to-spectrograms conversion time for 1,770 waveforms (from the MAPS dataset) from $10.64$ seconds with librosa to only $0.001$ seconds for Short-Time Fourier Transform (STFT), from $18.3$ seconds to $0.015$ seconds for Mel spectrogram, and from $103.4$ seconds to $0.258$ seconds for constant-Q transform (CQT), when using a GPU on our DGX workstation (CPU: Intel(R) Xeon(R) E5-2698 v4 @ 2.20GHz; GPUs: Tesla V100 32GB; only one GPU was used for all experiments). We also further optimize the existing CQT algorithm, so that the CQT spectrogram can be obtained without aliasing in a much faster computation time (from $0.258$ seconds to only $0.001$ seconds). | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 158,742 |
2403.11530 | Continual Forgetting for Pre-trained Vision Models | For privacy and security concerns, the need to erase unwanted information from pre-trained vision models is becoming evident nowadays. In real-world scenarios, erasure requests originate at any time from both users and model owners. These requests usually form a sequence. Therefore, under such a setting, selective information is expected to be continuously removed from a pre-trained model while maintaining the rest. We define this problem as continual forgetting and identify two key challenges. (i) For unwanted knowledge, efficient and effective deleting is crucial. (ii) For remaining knowledge, the impact brought by the forgetting procedure should be minimal. To address them, we propose Group Sparse LoRA (GS-LoRA). Specifically, towards (i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks for each forgetting task independently, and towards (ii), a simple group sparse regularization is adopted, enabling automatic selection of specific LoRA groups and zeroing out the others. GS-LoRA is effective, parameter-efficient, data-efficient, and easy to implement. We conduct extensive experiments on face recognition, object detection and image classification and demonstrate that GS-LoRA manages to forget specific classes with minimal impact on other classes. Codes will be released on https://github.com/bjzhb666/GS-LoRA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,745 |
2103.10206 | DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer | Generating 3D dances from music is an emerging research task that benefits many applications in vision and graphics. Previous works treat this task as sequence generation; however, it is challenging to render a music-aligned long-term sequence with high kinematic complexity and coherent movements. In this paper, we reformulate it as a two-stage process, i.e., key pose generation followed by an in-between parametric motion curve prediction, where the key poses are easier to synchronize with the music beats and the parametric curves can be efficiently regressed to render fluent rhythm-aligned movements. We name the proposed method DanceFormer, which includes two cascading kinematics-enhanced transformer-guided networks (called DanTrans) that tackle each stage, respectively. Furthermore, we propose a large-scale music conditioned 3D dance dataset, called PhantomDance, that is accurately labeled by experienced animators rather than by reconstruction or motion capture. This dataset also encodes dances as key poses and parametric motion curves apart from pose sequences, thus benefiting the training of our DanceFormer. Extensive experiments demonstrate that the proposed method, even trained on existing datasets, can generate fluent, performative, and music-matched 3D dances that surpass previous works quantitatively and qualitatively. Moreover, the proposed DanceFormer, together with the PhantomDance dataset (https://github.com/libuyu/PhantomDanceDataset), are seamlessly compatible with industrial animation software, thus facilitating adaptation for various downstream applications. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,385 |
1902.02513 | Advances on CNN-based super-resolution of Sentinel-2 images | Thanks to their temporal-spatial coverage and free access, Sentinel-2 images are very interesting for the community. However, a relatively coarse spatial resolution, compared to that of state-of-the-art commercial products, motivates the study of super-resolution techniques to mitigate such a limitation. Specifically, thirteen bands are sensed simultaneously but at different spatial resolutions: 10, 20, and 60 meters depending on the spectral location. Here, building upon our previous convolutional neural network (CNN) based method, we propose an improved CNN solution to super-resolve the 20-m resolution bands, benefiting from spatial details conveyed by the accompanying 10-m spectral bands. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 120,899 |
2003.02554 | Adaptive Prediction Timing for Electronic Health Records | In realistic scenarios, multivariate time series evolve over case-by-case time-scales. This is particularly clear in medicine, where the rate of clinical events varies by ward, patient, and application. Increasingly complex models have been shown to effectively predict patient outcomes, but have failed to adapt granularity to these inherent temporal resolutions. As such, we introduce a novel, more realistic, approach to generating patient outcome predictions at an adaptive rate based on uncertainty accumulation in Bayesian recurrent models. We use a Recurrent Neural Network (RNN) and a Bayesian embedding layer with a new aggregation method to demonstrate adaptive prediction timing. Our model predicts more frequently when events are dense or the model is certain of event latent representations, and less frequently when readings are sparse or the model is uncertain. At 48 hours after patient admission, our model achieves equal performance compared to its static-windowed counterparts, while generating patient- and event-specific prediction timings that lead to improved predictive performance over the crucial first 12 hours of the patient stay. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 166,968 |
2307.14439 | Fixed Integral Neural Networks | It is often useful to perform integration over learned functions represented by neural networks. However, this integration is usually performed numerically, as analytical integration over learned functions (especially neural networks) is generally viewed as intractable. In this work, we present a method for representing the analytical integral of a learned function $f$. This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised by applying constraints directly to the integral. Crucially, we also introduce a method to constrain $f$ to be positive, a necessary condition for many applications (e.g. probability distributions, distance metrics, etc). Finally, we introduce several applications where our fixed-integral neural network (FINN) can be utilised. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 381,930 |
2312.13317 | Deep Hybrid Camera Deblurring for Smartphone Cameras | Mobile cameras, despite their significant advancements, still have difficulty in low-light imaging due to compact sensors and lenses, leading to longer exposures and motion blur. Traditional blind deconvolution methods and learning-based deblurring methods can be potential solutions to remove blur. However, achieving practical performance still remains a challenge. To address this, we propose a learning-based deblurring framework for smartphones, utilizing wide and ultra-wide cameras as a hybrid camera system. We simultaneously capture a long-exposure wide image and short-exposure burst ultra-wide images, and utilize the burst images to deblur the wide image. To fully exploit burst ultra-wide images, we present HCDeblur, a practical deblurring framework that includes novel deblurring networks, HC-DNet and HC-FNet. HC-DNet utilizes motion information extracted from burst images to deblur a wide image, and HC-FNet leverages burst images as reference images to further enhance a deblurred output. For training and evaluating the proposed method, we introduce the HCBlur dataset, which consists of synthetic and real-world datasets. Our experiments demonstrate that HCDeblur achieves state-of-the-art deblurring quality. Code and datasets are available at https://cg.postech.ac.kr/research/HCDeblur. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 417,277 |
2405.00715 | Adapting Open-Source Large Language Models for Cost-Effective, Expert-Level Clinical Note Generation with On-Policy Reinforcement Learning | Proprietary Large Language Models (LLMs) such as GPT-4 and Gemini have demonstrated promising capabilities in clinical text summarization tasks. However, due to patient data privacy concerns and computational costs, many healthcare providers prefer using small, locally-hosted models over external generic LLMs. This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13 billion parameter model, enabling it to generate high-quality clinical notes from outpatient patient-doctor dialogues. Our process incorporates continued pre-training, supervised fine-tuning, and reinforcement learning from both AI and human feedback. We introduced a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model. Our resulting model, LLaMA-Clinic, can generate clinical notes comparable in quality to those authored by physicians. In a blinded physician reader study, the majority (90.4%) of individual evaluations rated the notes generated by LLaMA-Clinic as "acceptable" or higher across all three criteria: real-world readiness, completeness, and accuracy. In the more challenging "Assessment and Plan" section, LLaMA-Clinic scored higher (4.2/5) in real-world readiness than physician-authored notes (4.1/5). Our cost analysis for inference shows that our LLaMA-Clinic model achieves a 3.75-fold cost reduction compared to an external generic LLM service. Additionally, we highlight key considerations for future clinical note-generation tasks, emphasizing the importance of pre-defining a best-practice note format, rather than relying on LLMs to determine this for clinical practice. We have made our newly created synthetic clinic dialogue-note dataset and the physician feedback dataset publicly available to foster future research. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 451,043 |
1711.01339 | Binary Linear Codes with Optimal Scaling: Polar Codes with Large Kernels | We prove that, for the binary erasure channel (BEC), the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Our proof is based on the construction and analysis of binary polar codes with large kernels. When communicating reliably at rates within $\varepsilon > 0$ of capacity, the code length $n$ often scales as $O(1/\varepsilon^{\mu})$, where the constant $\mu$ is called the scaling exponent. It is known that the optimal scaling exponent is $\mu=2$, and it is achieved by random linear codes. The scaling exponent of conventional polar codes (based on the $2\times 2$ kernel) on the BEC is $\mu=3.63$. This falls far short of the optimal scaling guaranteed by random codes. Our main contribution is a rigorous proof of the following result: for the BEC, there exist $\ell\times\ell$ binary kernels, such that polar codes constructed from these kernels achieve scaling exponent $\mu(\ell)$ that tends to the optimal value of $2$ as $\ell$ grows. We furthermore characterize precisely how large $\ell$ needs to be as a function of the gap between $\mu(\ell)$ and $2$. The resulting binary codes maintain the recursive structure of conventional polar codes, and thereby achieve construction complexity $O(n)$ and encoding/decoding complexity $O(n\log n)$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 83,862 |
2303.14856 | Automatic Number Plate Recognition using Random Forest Classifier | Automatic Number Plate Recognition System (ANPRS) is a mass surveillance embedded system that recognizes the number plate of the vehicle. This system is generally used for traffic management applications. It should detect the number plate efficiently in noisy as well as low-illumination conditions, and within the required time frame. This paper proposes a number plate recognition method by processing the vehicle's rear or front image. After the image is captured, processing is divided into four steps: Pre-Processing, Number plate localization, Character segmentation and Character recognition. Pre-Processing enhances the image for further processing, number plate localization extracts the number plate region from the image, character segmentation separates the individual characters from the extracted number plate, and character recognition identifies the optical characters by using a random forest classification algorithm. Experimental results reveal that the accuracy of this method is 90.9%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,265 |
2102.06234 | Optimization Issues in KL-Constrained Approximate Policy Iteration | Many reinforcement learning algorithms can be seen as versions of approximate policy iteration (API). While standard API often performs poorly, it has been shown that learning can be stabilized by regularizing each policy update by the KL-divergence to the previous policy. Popular practical algorithms such as TRPO, MPO, and VMPO replace regularization by a constraint on KL-divergence of consecutive policies, arguing that this is easier to implement and tune. In this work, we study this implementation choice in more detail. We compare the use of KL divergence as a constraint vs. as a regularizer, and point out several optimization issues with the widely-used constrained approach. We show that the constrained algorithm is not guaranteed to converge even on simple problem instances where the constrained problem can be solved exactly, and in fact incurs linear expected regret. With approximate implementation using softmax policies, we show that regularization can improve the optimization landscape of the original objective. We demonstrate these issues empirically on several bandit and RL environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,680 |
2311.15537 | SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation | Open-vocabulary semantic segmentation strives to distinguish pixels into different semantic groups from an open set of categories. Most existing methods explore utilizing pre-trained vision-language models, in which the key is to adopt the image-level model for pixel-level segmentation task. In this paper, we propose a simple encoder-decoder, named SED, for open-vocabulary semantic segmentation, which comprises a hierarchical encoder-based cost map generation and a gradual fusion decoder with category early rejection. The hierarchical encoder-based cost map generation employs a hierarchical backbone, instead of a plain transformer, to predict the pixel-level image-text cost map. Compared to a plain transformer, a hierarchical backbone better captures local spatial information and has linear computational complexity with respect to input size. Our gradual fusion decoder employs a top-down structure to combine the cost map and the feature maps of different backbone levels for segmentation. To accelerate inference speed, we introduce a category early rejection scheme in the decoder that rejects many non-existing categories at the early layers of the decoder, resulting in at most 4.7 times acceleration without accuracy degradation. Experiments are performed on multiple open-vocabulary semantic segmentation datasets, which demonstrates the efficacy of our SED method. When using ConvNeXt-B, our SED method achieves an mIoU score of 31.6% on ADE20K with 150 categories at 82 milliseconds (ms) per image on a single A6000. We will release it at https://github.com/xb534/SED.git. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 410,545 |
cs/0308023 | On the complexity of curve fitting algorithms | We study a popular algorithm for fitting polynomial curves to scattered data based on the least squares with gradient weights. We show that sometimes this algorithm admits a substantial reduction of complexity, and, furthermore, find precise conditions under which this is possible. It turns out that this is, indeed, possible when one fits circles but not ellipses or hyperbolas. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 537,962 |
2009.02164 | Technical Report: The Policy Graph Improvement Algorithm | Optimizing a partially observable Markov decision process (POMDP) policy is challenging. The policy graph improvement (PGI) algorithm for POMDPs represents the policy as a fixed-size policy graph and improves the policy monotonically. Due to the fixed policy size, computation time for each improvement iteration is known in advance. Moreover, the method allows for compact understandable policies. This report describes the technical details of the PGI [1] and particle-based PGI [2] algorithms for POMDPs in a more accessible way than [1] or [2], allowing practitioners and students to understand and implement the algorithms. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 194,479 |
2211.02712 | Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion | Self-supervised pre-training of a speech foundation model, followed by supervised fine-tuning, has shown impressive quality improvements on automatic speech recognition (ASR) tasks. Fine-tuning separate foundation models for many downstream tasks is expensive since the foundation model is usually very big. Parameter-efficient fine-tuning methods (e.g. adapter, sparse update methods) offer an alternative paradigm where a small set of parameters are updated to adapt the foundation model to new tasks. However, these methods still suffer from a high computational memory cost and slow training speed because they require backpropagation through the entire neural network at each step. In this paper, we analyze the performance of features at different layers of a foundation model on the speech recognition task and propose a novel hierarchical feature fusion method for resource-efficient transfer learning from speech foundation models. Experimental results show that the proposed method can achieve better performance on the speech recognition task than existing algorithms with fewer trainable parameters, less computational memory cost and faster training speed. After combining with Adapters at all layers, the proposed method can achieve the same performance as fine-tuning the whole model with 97% fewer trainable encoder parameters and 53% faster training speed. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 328,672 |
2410.21335 | E(3)-invariant diffusion model for pocket-aware peptide generation | Biologists frequently desire protein inhibitors for a variety of reasons, including use as research tools for understanding biological processes and application to societal problems in agriculture, healthcare, etc. Immunotherapy, for instance, relies on immune checkpoint inhibitors to block checkpoint proteins, preventing their binding with partner proteins and boosting immune cell function against abnormal cells. Inhibitor discovery has long been a tedious process, which in recent years has been accelerated by computational approaches. Advances in artificial intelligence now provide an opportunity to make inhibitor discovery smarter than ever before. While extensive research has been conducted on computer-aided inhibitor discovery, it has mainly focused on either sequence-to-structure mapping, reverse mapping, or bio-activity prediction, making it unrealistic for biologists to utilize such tools. Instead, our work proposes a new method of computer-assisted inhibitor discovery: de novo pocket-aware peptide structure and sequence generation network. Our approach consists of two sequential diffusion models for end-to-end structure generation and sequence prediction. By leveraging angle and dihedral relationships between backbone atoms, we ensure an E(3)-invariant representation of peptide structures. Our results demonstrate that our method achieves comparable performance to state-of-the-art models, highlighting its potential in pocket-aware peptide design. This work offers a new approach for precise drug discovery using receptor-specific peptide generation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 503,211 |
2005.03299 | Adaptive Dialog Policy Learning with Hindsight and User Modeling | Reinforcement learning methods have been used to compute dialog policies from language-based interaction experiences. Efficiency is of particular importance in dialog policy learning, because of the considerable cost of interacting with people, and the very poor user experience from low-quality conversations. Aiming at improving the efficiency of dialog policy learning, we develop algorithm LHUA (Learning with Hindsight, User modeling, and Adaptation) that, for the first time, enables dialog agents to adaptively learn with hindsight from both simulated and real users. Simulation and hindsight provide the dialog agent with more experience and more (positive) reinforcements respectively. Experimental results suggest that, in success rate and policy quality, LHUA outperforms competitive baselines from the literature, including its no-simulation, no-adaptation, and no-hindsight counterparts. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 176,119 |
2405.05329 | KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache
Generation | Large Language Model or LLM inference has two phases, the prompt (or prefill) phase to output the first token and the extension (or decoding) phase to generate subsequent tokens. In this work, we propose an efficient parallelization scheme, KV-Runahead, to accelerate the prompt phase. The key observation is that the extension phase generates tokens faster than the prompt phase because of the key-value cache (KV-cache). Hence, KV-Runahead parallelizes the prompt phase by orchestrating multiple processes to populate the KV-cache and minimizes the time-to-first-token (TTFT). Dual-purposing the KV-cache scheme has two main benefits. First, since the KV-cache is designed to leverage the causal attention map, we minimize computation and communication automatically. Second, since it already exists for the extension phase, KV-Runahead is easy to implement. We further propose context-level load-balancing to handle uneven KV-cache generation (due to the causal attention) and to optimize TTFT. Compared with an existing parallelization scheme such as tensor or sequential parallelization where keys and values are locally generated and exchanged via all-gather collectives, our experimental results demonstrate that KV-Runahead can offer over 1.4x and 1.6x speedups for Llama 7B and Falcon 7B respectively. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 452,878 |
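The row above mentions context-level load balancing under causal attention but not its rule. A minimal sketch under the assumption that a prompt token at position i costs roughly i+1 key lookups, so balanced chunks should equalize cost rather than token count; `balanced_chunks` is a hypothetical helper, not KV-Runahead's published partitioner.

```python
# Sketch: partition an n-token prompt into p contiguous chunks whose causal-
# attention cost is balanced. Token i attends to i+1 keys, so its cost ~ i+1;
# balancing cost (not token count) mimics context-level load balancing.

def balanced_chunks(n: int, p: int) -> list[range]:
    total = n * (n + 1) // 2            # total cost of tokens 0..n-1
    target = total / p                  # ideal cost per process
    chunks, start, acc = [], 0, 0.0
    for i in range(n):
        acc += i + 1                    # cost of token i under causal attention
        if acc >= target and len(chunks) < p - 1:
            chunks.append(range(start, i + 1))
            start, acc = i + 1, 0.0
    chunks.append(range(start, n))
    return chunks

# Earlier chunks come out longer: late tokens see more keys, so the last
# process handles fewer of them.
print([len(c) for c in balanced_chunks(4096, 4)])
```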
2401.10543 | Multilingual acoustic word embeddings for zero-resource languages | This research addresses the challenge of developing speech applications for zero-resource languages that lack labelled data. It specifically uses acoustic word embeddings (AWEs) -- fixed-dimensional representations of variable-duration speech segments -- employing multilingual transfer, where labelled data from several well-resourced languages are used for pretraining. The study introduces a new neural network that outperforms existing AWE models on zero-resource languages. It explores the impact of the choice of well-resourced languages. AWEs are applied to a keyword-spotting system for hate speech detection in Swahili radio broadcasts, demonstrating robustness in real-world scenarios. Additionally, novel semantic AWE models improve semantic query-by-example search. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 422,683 |
1805.10123 | TADAM: Task dependent adaptive metric for improved few-shot learning | Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100. Our code is publicly available at https://github.com/ElementAI/TADAM. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 98,589 |
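Metric scaling as described in the row above amounts to a learnable temperature on the distance fed into the softmax. A minimal prototypical-classifier sketch; the function and variable names are illustrative, not the paper's code.

```python
import torch

def scaled_proto_logits(queries: torch.Tensor,
                        prototypes: torch.Tensor,
                        alpha: torch.Tensor) -> torch.Tensor:
    """Logits = -alpha * squared Euclidean distance to each class prototype.

    queries:    (q, d) embedded query examples
    prototypes: (k, d) per-class means of the support embeddings
    alpha:      scalar metric-scaling temperature, learned with the network
    """
    d2 = torch.cdist(queries, prototypes).pow(2)   # (q, k) pairwise distances
    return -alpha * d2                             # scaling reshapes gradients

queries, prototypes = torch.randn(10, 64), torch.randn(5, 64)
alpha = torch.nn.Parameter(torch.tensor(10.0))
probs = torch.softmax(scaled_proto_logits(queries, prototypes, alpha), dim=-1)
```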
1811.10787 | Unsupervised Image Captioning | Deep neural networks have achieved great successes on the image captioning task. However, most of the existing models depend heavily on paired image-sentence datasets, which are very expensive to acquire. In this paper, we make the first attempt to train an image captioning model in an unsupervised manner. Instead of relying on manually labeled image-sentence pairs, our proposed model merely requires an image set, a sentence corpus, and an existing visual concept detector. The sentence corpus is used to teach the captioning model how to generate plausible sentences. Meanwhile, the knowledge in the visual concept detector is distilled into the captioning model to guide the model to recognize the visual concepts in an image. In order to further encourage the generated captions to be semantically consistent with the image, the image and caption are projected into a common latent space so that they can reconstruct each other. Given that the existing sentence corpora are mainly designed for linguistic research and thus bear little reference to image contents, we crawl a large-scale image description corpus of two million natural sentences to facilitate the unsupervised image captioning scenario. Experimental results show that our proposed model is able to produce quite promising results without any caption annotations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 114,585 |
2307.15191 | Small, but important: Traffic light proposals for detecting small
traffic lights and beyond | Traffic light detection is a challenging problem in the context of self-driving cars and driver assistance systems. While most existing systems produce good results on large traffic lights, detecting small and tiny ones is often overlooked. A key problem here is the inherent downsampling in CNNs, leading to low-resolution features for detection. To mitigate this problem, we propose a new traffic light detection system, comprising a novel traffic light proposal generator that utilizes findings from general object proposal generation, fine-grained multi-scale features, and attention for efficient processing. Moreover, we design a new detection head for classifying and refining our proposals. We evaluate our system on three challenging, publicly available datasets and compare it against six methods. The results show substantial improvements of at least $12.6\%$ on small and tiny traffic lights, as well as strong results across all sizes of traffic lights. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 382,181 |
2212.02049 | Distributed Stochastic Gradient Descent with Cost-Sensitive and
Strategic Agents | This study considers a federated learning setup where cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. As an increasing function of his minibatch size choice, the agent incurs a cost associated with the data collection, gradient computation and communication. The agents have the freedom to choose their minibatch size and may even opt out from training. To reduce his cost, an agent may diminish his minibatch size, which may also cause an increase in the noise level of the gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize their participation but she lacks the capability of validating the true minibatch sizes of the agents. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance to a reference constructed from the gradients provided by other agents. It is shown that the proposed reward mechanism has a cooperative Nash equilibrium in which the agents determine the minibatch size choices according to the requests of the server. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 334,664 |
2307.08140 | GastroVision: A Multi-class Endoscopy Image Dataset for Computer Aided
Gastrointestinal Disease Detection | Integrating real-time artificial intelligence (AI) systems in clinical practices faces challenges such as scalability and acceptance. These challenges include data availability, biased outcomes, data quality, lack of transparency, and underperformance on unseen datasets from different distributions. The scarcity of large-scale, precisely labeled, and diverse datasets is a major challenge for clinical integration. This scarcity is also due to the legal restrictions and extensive manual efforts required for accurate annotations from clinicians. To address these challenges, we present \textit{GastroVision}, a multi-center open-access gastrointestinal (GI) endoscopy dataset that includes different anatomical landmarks, pathological abnormalities, polyp removal cases and normal findings (a total of 27 classes) from the GI tract. The dataset comprises 8,000 images acquired from B{\ae}rum Hospital in Norway and Karolinska University Hospital in Sweden and was annotated and verified by experienced GI endoscopists. Furthermore, we validate the significance of our dataset with extensive benchmarking based on popular deep-learning-based baseline models. We believe our dataset can facilitate the development of AI-based algorithms for GI disease detection and classification. Our dataset is available at \url{https://osf.io/84e7f/}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 379,685 |
2402.14031 | Autoencoder with Ordered Variance for Nonlinear Model Identification | This paper presents a novel autoencoder with ordered variance (AEO) in which the loss function is modified with a variance regularization term to enforce order in the latent space. Further, the autoencoder is modified using ResNets, which results in a ResNet AEO (RAEO). The paper also illustrates the effectiveness of AEO and RAEO in extracting nonlinear relationships among input variables in an unsupervised setting. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 431,506 |
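The abstract above names a variance regularization term but not its form. One plausible reading of "ordered variance" is to weight per-dimension latent variances by increasing coefficients so variance concentrates in early dimensions; the sketch below encodes that assumption and is not the paper's exact loss.

```python
import torch

def aeo_style_loss(x: torch.Tensor, x_hat: torch.Tensor,
                   z: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Reconstruction error plus an illustrative variance-ordering penalty.

    Penalizing var(z_i) with weights that grow in i discourages variance in
    late latent dimensions, yielding a PCA-like ordering. This regularizer is
    a guess at the AEO term, shown only to make the idea concrete.
    """
    recon = torch.mean((x - x_hat) ** 2)
    var = z.var(dim=0)                                   # per-dimension variance
    w = torch.arange(1, z.shape[1] + 1, dtype=z.dtype)   # weights 1, 2, ..., d_z
    return recon + lam * torch.sum(w * var)
```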
1510.05390 | Entropy and thinning of discrete random variables | We describe five types of results concerning information and concentration of discrete random variables, and relationships between them, motivated by their counterparts in the continuous case. The results we consider are information theoretic approaches to Poisson approximation, the maximum entropy property of the Poisson distribution, discrete concentration (Poincar\'{e} and logarithmic Sobolev) inequalities, monotonicity of entropy and concavity of entropy in the Shepp--Olkin regime. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 48,018 |
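For readers unfamiliar with the thinning in the row above: the alpha-thinning of a discrete variable X replaces it by a sum of X independent Bernoulli(alpha) variables, so the thinned pmf is a binomial mixture. A small sketch:

```python
from math import comb

def thin(pmf: list[float], alpha: float) -> list[float]:
    """pmf of the alpha-thinning T_alpha X of a variable on {0, ..., n_max}.

    P(T_alpha X = k) = sum_{n>=k} P(X=n) * C(n,k) * alpha^k * (1-alpha)^(n-k)
    """
    n_max = len(pmf) - 1
    return [sum(pmf[n] * comb(n, k) * alpha**k * (1 - alpha) ** (n - k)
                for n in range(k, n_max + 1))
            for k in range(n_max + 1)]

# Sanity check: thinning preserves total probability and scales the mean by alpha.
pmf = [0.2, 0.5, 0.3]
thinned = thin(pmf, 0.5)
assert abs(sum(thinned) - 1.0) < 1e-12
```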
2309.10592 | NDDepth: Normal-Distance Assisted Monocular Depth Estimation | Monocular depth estimation has drawn widespread attention from the vision community due to its broad applications. In this paper, we propose a novel physics (geometry)-driven deep learning framework for monocular depth estimation by assuming that 3D scenes are constituted by piece-wise planes. Particularly, we introduce a new normal-distance head that outputs pixel-level surface normal and plane-to-origin distance for deriving depth at each position. Meanwhile, the normal and distance are regularized by a developed plane-aware consistency constraint. We further integrate an additional depth head to improve the robustness of the proposed framework. To fully exploit the strengths of these two heads, we develop an effective contrastive iterative refinement module that refines depth in a complementary manner according to the depth uncertainty. Extensive experiments indicate that the proposed method exceeds previous state-of-the-art competitors on the NYU-Depth-v2, KITTI and SUN RGB-D datasets. Notably, it ranks 1st among all submissions on the KITTI depth prediction online benchmark at the submission time. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 393,072 |
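The normal-distance head above makes depth recoverable in closed form: if a pixel lies on a plane with unit normal n and plane-to-origin distance d, then for the camera-space point P = z * ray, n·P = d gives z = d / (n·ray). A sketch of that conversion; the sign convention is an assumption and NDDepth's may differ.

```python
import numpy as np

def depth_from_normal_distance(normal: np.ndarray, dist: np.ndarray,
                               K: np.ndarray) -> np.ndarray:
    """Per-pixel depth from a unit normal map (H, W, 3) and distance map (H, W).

    Assumes the plane convention n . P = d for camera-space P = z * K^{-1}[u, v, 1].
    """
    H, W = dist.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T                 # back-projected pixel rays
    denom = np.einsum('hwc,hwc->hw', normal, rays)  # n . ray at every pixel
    return dist / np.clip(np.abs(denom), 1e-6, None)
```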
2211.10378 | Comparing Explanation Methods for Traditional Machine Learning Models
Part 2: Quantifying Model Explainability Faithfulness and Improvements with
Dimensionality Reduction | Machine learning (ML) models are becoming increasingly common in the atmospheric science community with a wide range of applications. To enable users to understand what an ML model has learned, ML explainability has become a field of active research. In Part I of this two-part study, we described several explainability methods and demonstrated that feature rankings from different methods can substantially disagree with each other. It is unclear, though, whether the disagreement is overinflated due to some methods being less faithful in assigning importance. Herein, "faithfulness" or "fidelity" refer to the correspondence between the assigned feature importance and the contribution of the feature to model performance. In the present study, we evaluate the faithfulness of feature ranking methods using multiple methods. Given the sensitivity of explanation methods to feature correlations, we also quantify how much explainability faithfulness improves after correlated features are limited. Before dimensionality reduction, the feature relevance methods [e.g., SHAP, LIME, ALE variance, and logistic regression (LR) coefficients] were generally more faithful than the permutation importance methods due to the negative impact of correlated features. Once correlated features were reduced, traditional permutation importance became the most faithful method. In addition, the ranking uncertainty (i.e., the spread in rank assigned to a feature by the different ranking methods) was reduced by a factor of 2-10, and excluding less faithful feature ranking methods reduces it further. This study is one of the first to quantify the improvement in explainability from limiting correlated features and knowing the relative fidelity of different explainability methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 331,291 |
2107.04633 | Inferring Probabilistic Reward Machines from Non-Markovian Reward
Processes for Reinforcement Learning | The success of reinforcement learning in typical settings is predicated on Markovian assumptions on the reward signal by which an agent learns optimal policies. In recent years, the use of reward machines has relaxed this assumption by enabling a structured representation of non-Markovian rewards. In particular, such representations can be used to augment the state space of the underlying decision process, thereby facilitating non-Markovian reinforcement learning. However, these reward machines cannot capture the semantics of stochastic reward signals. In this paper, we make progress on this front by introducing probabilistic reward machines (PRMs) as a representation of non-Markovian stochastic rewards. We present an algorithm to learn PRMs from the underlying decision process and prove results around its correctness and convergence. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 245,517 |
1610.04531 | Numerical Inversion of SRNF Maps for Elastic Shape Analysis of
Genus-Zero Surfaces | Recent developments in elastic shape analysis (ESA) are motivated by the fact that it provides comprehensive frameworks for simultaneous registration, deformation, and comparison of shapes. These methods achieve computational efficiency using certain square-root representations that transform invariant elastic metrics into Euclidean metrics, allowing for applications of standard algorithms and statistical tools. For analyzing shapes of embeddings of $\mathbb{S}^2$ in $\mathbb{R}^3$, Jermyn et al. introduced square-root normal fields (SRNFs) that transformed an elastic metric, with desirable invariant properties, into the $\mathbb{L}^2$ metric. These SRNFs are essentially surface normals scaled by square-roots of infinitesimal area elements. A critical need in shape analysis is to invert solutions (deformations, averages, modes of variations, etc) computed in the SRNF space, back to the original surface space for visualizations and inferences. Due to the lack of theory for understanding SRNFs maps and their inverses, we take a numerical approach and derive an efficient multiresolution algorithm, based on solving an optimization problem in the surface space, that estimates surfaces corresponding to given SRNFs. This solution is found effective, even for complex shapes, e.g. human bodies and animals, that undergo significant deformations including bending and stretching. Specifically, we use this inversion for computing elastic shape deformations, transferring deformations, summarizing shapes, and for finding modes of variability in a given collection, while simultaneously registering the surfaces. We demonstrate the proposed algorithms using a statistical analysis of human body shapes, classification of generic surfaces and analysis of brain structures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 62,399 |
2105.02368 | Physics-informed Spline Learning for Nonlinear Dynamics Discovery | Dynamical systems are typically governed by a set of linear/nonlinear differential equations. Distilling the analytical form of these equations from very limited data remains intractable in many disciplines such as physics, biology, climate science, engineering and social science. To address this fundamental challenge, we propose a novel Physics-informed Spline Learning (PiSL) framework to discover parsimonious governing equations for nonlinear dynamics, based on sparsely sampled noisy data. The key concept is to (1) leverage splines to interpolate locally the dynamics, perform analytical differentiation and build the library of candidate terms, (2) employ sparse representation of the governing equations, and (3) use the physics residual in turn to inform the spline learning. The synergy between splines and discovered underlying physics leads to the robust capacity of dealing with high-level data scarcity and noise. A hybrid sparsity-promoting alternating direction optimization strategy is developed for systematically pruning the sparse coefficients that form the structure and explicit expression of the governing equations. The efficacy and superiority of the proposed method have been demonstrated by multiple well-known nonlinear dynamical systems, in comparison with two state-of-the-art methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 233,796 |
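The pruning of sparse coefficients described above is in the spirit of sequentially thresholded least squares (popularized by SINDy). A stand-in sketch of that inner step, taking the spline-smoothed states and derivatives as given; PiSL's actual alternating-direction scheme also feeds the physics residual back into the spline fit.

```python
import numpy as np

def thresholded_lstsq(theta: np.ndarray, dxdt: np.ndarray,
                      thresh: float = 0.1, iters: int = 10) -> np.ndarray:
    """Sparse Xi with dxdt ~ theta @ Xi (SINDy-style, shown for illustration).

    theta: (m, n_terms) candidate library evaluated on spline-smoothed states
    dxdt:  (m, n_states) derivatives obtained analytically from the splines
    """
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        xi[np.abs(xi) < thresh] = 0.0                  # prune small terms
        for j in range(dxdt.shape[1]):                 # refit survivors per state
            keep = np.abs(xi[:, j]) >= thresh
            if keep.any():
                xi[keep, j] = np.linalg.lstsq(
                    theta[:, keep], dxdt[:, j], rcond=None)[0]
    return xi
```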
2006.13436 | Towards Understanding Hierarchical Learning: Benefits of Neural
Representations | Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of the intermediate representations is not explained by recent theories that relate them to "shallow learners" such as kernels. In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that neural representation can achieve improved sample complexities compared with the raw input: For learning a low-rank degree-$p$ polynomial ($p \geq 4$) in $d$ dimension, neural representation requires only $\tilde{O}(d^{\lceil p/2 \rceil})$ samples, while the best-known sample complexity upper bound for the raw input is $\tilde{O}(d^{p-1})$. We contrast our result with a lower bound showing that neural representations do not improve over the raw input (in the infinite width limit), when the trainable network is instead a neural tangent kernel. Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 183,912 |
2406.00123 | Correlation-aware Coarse-to-fine MLPs for Deformable Medical Image
Registration | Deformable image registration is a fundamental step for medical image analysis. Recently, transformers have been used for registration and outperformed Convolutional Neural Networks (CNNs). Transformers can capture long-range dependence among image features, which has been shown beneficial for registration. However, due to the high computation/memory loads of self-attention, transformers are typically used at downsampled feature resolutions and cannot capture fine-grained long-range dependence at the full image resolution. This limits deformable registration, as it necessitates precise dense correspondence between each image pixel. Multi-layer Perceptrons (MLPs) without self-attention are efficient in computation/memory usage, enabling the feasibility of capturing fine-grained long-range dependence at full resolution. Nevertheless, MLPs have not been extensively explored for image registration and lack the inductive bias considerations crucial for medical registration tasks. In this study, we propose the first correlation-aware MLP-based registration network (CorrMLP) for deformable medical image registration. Our CorrMLP introduces a correlation-aware multi-window MLP block in a novel coarse-to-fine registration architecture, which captures fine-grained multi-range dependence to perform correlation-aware coarse-to-fine registration. Extensive experiments with seven public medical datasets show that our CorrMLP outperforms state-of-the-art deformable registration methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 459,717 |
1511.01954 | Recovering hard-to-find object instances by sampling context-based
object proposals | In this paper we focus on improving object detection performance in terms of recall. We propose a post-detection stage during which we explore the image with the objective of recovering missed detections. This exploration is performed by sampling object proposals in the image. We analyze four different strategies to perform this sampling, giving special attention to strategies that exploit spatial relations between objects. In addition, we propose a novel method to discover higher-order relations between groups of objects. Experiments on the challenging KITTI dataset show that our proposed relations-based proposal generation strategies can help improving recall at the cost of a relatively low amount of object proposals. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 48,562 |
1710.02261 | Scalable Tucker Factorization for Sparse Tensors - Algorithms and
Discoveries | Given sparse multi-dimensional data (e.g., (user, movie, time; rating) for movie recommendations), how can we discover latent concepts/relations and predict missing values? Tucker factorization has been widely used to solve such problems with multi-dimensional data, which are modeled as tensors. However, most Tucker factorization algorithms regard and estimate missing entries as zeros, which triggers a highly inaccurate decomposition. Moreover, the few methods that focus on accuracy exhibit limited scalability, since they require huge memory and heavy computational costs while updating factor matrices. In this paper, we propose P-Tucker, a scalable Tucker factorization method for sparse tensors. P-Tucker performs alternating least squares with a row-wise update rule in a fully parallel way, which significantly reduces memory requirements for updating factor matrices. Furthermore, we offer two variants of P-Tucker: a caching algorithm P-Tucker-Cache and an approximation algorithm P-Tucker-Approx, both of which accelerate the update process. Experimental results show that P-Tucker exhibits 1.7-14.1x speed-up and 1.4-4.8x less error compared to the state-of-the-art. In addition, P-Tucker scales near-linearly with the number of observable entries in a tensor and the number of threads. Thanks to P-Tucker, we successfully discover hidden concepts and relations in a large-scale real-world tensor, while existing methods cannot reveal latent features due to their limited scalability or low accuracy. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | true | 82,141 |
2306.01309 | Energy-efficient Rate Splitting for MIMO STAR-RIS-assisted Broadcast
Channels with I/Q Imbalance | This paper proposes an energy-efficient scheme for multicell multiple-input, multiple-output (MIMO) simultaneous transmit and reflect (STAR) reconfigurable intelligent surfaces (RIS)-assisted broadcast channels by employing rate splitting (RS) and improper Gaussian signaling (IGS). Regular RISs can only reflect signals. Thus, a regular RIS can assist only when the transmitter and receiver are in the reflection space of the RIS. However, a STAR-RIS can simultaneously transmit and reflect, thus providing 360-degree coverage. In this paper, we assume that transceivers may suffer from I/Q imbalance (IQI). To compensate for IQI, we employ IGS. Moreover, we employ RS to manage intracell interference. We show that RIS can significantly improve the energy efficiency (EE) of the system when RIS components are carefully optimized. Additionally, we show that STAR-RIS can significantly outperform a regular RIS when the regular RIS cannot cover all the users. We also show that RS can substantially increase the EE compared to treating interference as noise. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 370,396 |
1606.07315 | Nearly-optimal Robust Matrix Completion | In this paper, we consider the problem of Robust Matrix Completion (RMC) where the goal is to recover a low-rank matrix by observing a small number of its entries, out of which a few can be arbitrarily corrupted. We propose a simple projected gradient descent method to estimate the low-rank matrix that alternately performs a projected gradient descent step and cleans up a few of the corrupted entries using hard-thresholding. Our algorithm solves RMC using a nearly optimal number of observations as well as a nearly optimal number of corruptions. Our result also implies significant improvement over the existing time complexity bounds for the low-rank matrix completion problem. Finally, an application of our result to the robust PCA problem (low-rank + sparse matrix separation) leads to a nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time. Our empirical results corroborate our theoretical results and show that even for moderate-sized problems, our method for robust PCA is an order of magnitude faster than the existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 57,699 |
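In the fully observed special case (robust PCA), the alternation described above reduces to: project onto rank r, then hard-threshold the residual to re-estimate the sparse corruptions. A sketch of that loop; the paper's RMC algorithm additionally restricts everything to the observed entries and schedules the threshold.

```python
import numpy as np

def lowrank_plus_sparse(M: np.ndarray, rank: int, zeta: float, iters: int = 50):
    """Split M into low-rank L and sparse S by alternating projections.

    Fully observed illustration of the projected-gradient / hard-thresholding
    idea; not the paper's partially observed RMC solver.
    """
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # rank-r projection
        R = M - L
        S = np.where(np.abs(R) > zeta, R, 0.0)         # keep large residuals
    return L, S
```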
2004.10888 | Mean-Variance Policy Iteration for Risk-Averse Reinforcement Learning | We present a mean-variance policy iteration (MVPI) framework for risk-averse control in a discounted infinite horizon MDP optimizing the variance of a per-step reward random variable. MVPI enjoys great flexibility in that any policy evaluation method and risk-neutral control method can be dropped in for risk-averse control off the shelf, in both on- and off-policy settings. This flexibility reduces the gap between risk-neutral control and risk-averse control and is achieved by working on a novel augmented MDP directly. We propose risk-averse TD3 as an example instantiating MVPI, which outperforms vanilla TD3 and many previous risk-averse control methods in challenging Mujoco robot simulation tasks under a risk-aware performance metric. This risk-averse TD3 is the first to introduce deterministic policies and off-policy learning into risk-averse reinforcement learning, both of which are key to the performance boost we show in Mujoco domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 173,749 |
2303.08941 | Automated Interactive Domain-Specific Conversational Agents that
Understand Human Dialogs | Achieving human-like communication with machines remains a classic, challenging topic in the field of Knowledge Representation and Reasoning and Natural Language Processing. Large Language Models (LLMs) rely on pattern-matching rather than a true understanding of the semantic meaning of a sentence. As a result, they may generate incorrect responses. To generate an assuredly correct response, one has to "understand" the semantics of a sentence. To achieve this "understanding", logic-based (commonsense) reasoning methods such as Answer Set Programming (ASP) are arguably needed. In this paper, we describe the AutoConcierge system that leverages LLMs and ASP to develop a conversational agent that can truly "understand" human dialogs in restricted domains. AutoConcierge is focused on a specific domain: advising users about restaurants in their local area based on their preferences. AutoConcierge will interactively understand a user's utterances, identify the missing information in them, and request the user via a natural language sentence to provide it. Once AutoConcierge has determined that all the information has been received, it computes a restaurant recommendation based on the user preferences it has acquired from the human user. AutoConcierge is based on our STAR framework developed earlier, which uses GPT-3 to convert human dialogs into predicates that capture the deep structure of the dialog's sentence. These predicates are then input into the goal-directed s(CASP) ASP system for performing commonsense reasoning. To the best of our knowledge, AutoConcierge is the first automated conversational agent that can realistically converse like a human and provide help to humans based on truly understanding human utterances. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 351,831 |
2007.11164 | Time-aware Graph Embedding: A temporal smoothness and task-oriented
approach | Knowledge graph embedding, which aims to learn low-dimensional representations of entities and relationships, has attracted considerable research effort recently. However, most knowledge graph embedding methods focus on the structural relationships in fixed triples while ignoring temporal information. Existing time-aware graph embedding methods only focus on factual plausibility, while ignoring temporal smoothness, which models the interactions between a fact and its contexts and can thus capture fine-granularity temporal relationships. This leads to limited performance in embedding-related applications. To solve this problem, this paper presents a Robustly Time-aware Graph Embedding (RTGE) method that incorporates temporal smoothness. Two major innovations of our paper are presented here. First, RTGE integrates a measure of temporal smoothness in the learning process of the time-aware graph embedding. Via the proposed additional smoothing factor, RTGE can preserve both structural information and evolutionary patterns of a given graph. Second, RTGE provides a general task-oriented negative sampling strategy associated with temporally-aware information, which further improves the adaptive ability of the proposed algorithm and plays an essential role in obtaining superior performance in various tasks. Extensive experiments conducted on multiple benchmark tasks show that RTGE can increase performance in entity/relationship/temporal scoping prediction tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 188,482 |
2501.18618 | Vision Aided Channel Prediction for Vehicular Communications: A Case
Study of Received Power Prediction Using RGB Images | The communication scenarios and channel characteristics of 6G will be more complex and difficult to characterize. Conventional methods for channel prediction face challenges in achieving an optimal balance between accuracy, practicality, and generalizability. Additionally, they often fail to effectively leverage environmental features. Within the framework of integrating communication and artificial intelligence, a pivotal development vision for 6G, it is imperative to achieve intelligent prediction of channel characteristics. Vision-aided methods have been employed in various wireless communication tasks, excluding channel prediction, and have demonstrated enhanced efficiency and performance. In this paper, we propose a vision-aided two-stage model for channel prediction in millimeter wave vehicular communication scenarios, realizing accurate received power prediction utilizing solely RGB images. Firstly, we obtain original images of the propagation environment through an RGB camera. Secondly, three typical computer vision methods, including object detection, instance segmentation and binary masking, are employed for environmental information extraction from the original images in stage 1, and prediction of received power based on the processed images is implemented in stage 2. Pre-trained YOLOv8 and ResNets are used in stages 1 and 2, respectively, and fine-tuned on the datasets. Finally, we conduct five experiments to evaluate the performance of the proposed model, demonstrating its feasibility, accuracy and generalization capabilities. The model proposed in this paper offers novel solutions for achieving intelligent channel prediction in vehicular communications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 528,777 |
2008.07965 | Analysis of Social Robotic Navigation approaches: CNN Encoder and
Incremental Learning as an alternative to Deep Reinforcement Learning | Dealing with social tasks in robotic scenarios is difficult, as having humans in the learning loop is incompatible with most of the state-of-the-art machine learning algorithms. This is the case when exploring Incremental learning models, in particular the ones involving reinforcement learning. In this work, we discuss this problem and possible solutions by analysing a previous study on adaptive convolutional encoders for a social navigation task. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 192,271 |
1312.7447 | Containment Control of Linear Multi-Agent Systems with Multiple Leaders
of Bounded Inputs Using Distributed Continuous Controllers | This paper considers the containment control problem for multi-agent systems with general linear dynamics and multiple leaders whose control inputs are possibly nonzero and time varying. Based on the relative states of neighboring agents, a distributed static continuous controller is designed, under which the containment error is uniformly ultimately bounded and the upper bound of the containment error can be made arbitrarily small, if the subgraph associated with the followers is undirected and for each follower there exists at least one leader that has a directed path to that follower. It is noted that the design of the static controller requires the knowledge of the eigenvalues of the Laplacian matrix and the upper bounds of the leaders' control inputs. In order to remove these requirements, a distributed adaptive continuous controller is further proposed, which can be designed and implemented by each follower in a fully distributed fashion. Extensions to the case where only local output information is available are discussed. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 29,480 |
2005.05469 | Causal Estimation of Stay-at-Home Orders on SARS-CoV-2 Transmission | Accurately estimating the effectiveness of stay-at-home orders (SHOs) on reducing social contact and disease spread is crucial for mitigating pandemics. Leveraging individual-level location data for 10 million smartphones, we observe that by April 30th---when nine in ten Americans were under a SHO---daily movement had fallen 70% from pre-COVID levels. One-quarter of this decline is causally attributable to SHOs, with wide demographic differences in compliance, most notably by political affiliation. Likely Trump voters reduce movement by 9% following a local SHO, compared to a 21% reduction among their Clinton-voting neighbors, who face similar exposure risks and identical government orders. Linking social distancing behavior with an epidemic model, we estimate that reductions in movement have causally reduced SARS-CoV-2 transmission rates by 49%. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 176,737 |
1710.08691 | BENGAL: An Automatic Benchmark Generator for Entity Recognition and
Linking | The manual creation of gold standards for named entity recognition and entity linking is time- and resource-intensive. Moreover, recent works show that such gold standards contain a large proportion of mistakes in addition to being difficult to maintain. We hence present BENGAL, a novel approach for the automatic generation of such gold standards as a complement to manually created benchmarks. The main advantage of our benchmarks is that they can be readily generated at any time. They are also cost-effective while being guaranteed to be free of annotation errors. We compare the performance of 11 tools on benchmarks in English generated by BENGAL and on 16 benchmarks created manually. We show that our approach can be ported easily across languages by presenting results achieved by 4 tools on both Brazilian Portuguese and Spanish. Overall, our results suggest that our automatic benchmark generation approach can create varied benchmarks that have characteristics similar to those of existing benchmarks. Our approach is open-source. Our experimental results are available at http://faturl.com/bengalexpinlg and the code at https://github.com/dice-group/BENGAL. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 83,114 |
1804.04583 | A New Generative Statistical Model for Graphs: The Latent Order Logistic
(LOLOG) Model | Full probability models are critical for the statistical modeling of complex networks, and yet there are few general, flexible and widely applicable generative methods. We propose a new family of probability models motivated by the idea of network growth, which we call the Latent Order Logistic (LOLOG) model. LOLOG is a fully general framework capable of describing any probability distribution over graph configurations, though not all distributions are easily expressible or estimable as a LOLOG. We develop inferential procedures based on Monte Carlo Method of Moments, Generalized Method of Moments and variational inference. To show the flexibility of the model framework, we show how so-called scale-free networks can be modeled as LOLOGs via preferential attachment. The advantages of LOLOG in terms of avoidance of degeneracy, ease of sampling, and model flexibility are illustrated. Connections with the popular Exponential-family Random Graph model (ERGM) are also explored, and we find that they are identical in the case of dyadic independence. Finally, we apply the model to a social network of collaboration within a corporate law firm, a friendship network among adolescent students, and the friendship relations in an online social network. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 94,890 |
2501.09048 | Anthropomorphic Features for On-Line Signatures | Many features have been proposed in on-line signature verification. Generally, these features rely on the position of the on-line signature samples and their dynamic properties, as recorded by a tablet. This paper proposes a novel feature space to efficiently describe on-line signatures. Since producing a signature requires a skeletal arm system and its associated muscles, the new feature space is based on characterizing the movement of the shoulder, the elbow and the wrist joints when signing. As this motion is not directly obtained from a digital tablet, the new features are calculated by means of a virtual skeletal arm (VSA) model, which simulates the architecture of a real arm and forearm. Specifically, the VSA motion is described by its 3D joint positions and its joint angles. These anthropomorphic features are worked out from both pen position and orientation through the VSA forward and direct kinematic model. The anthropomorphic features' robustness is demonstrated by achieving state-of-the-art performance with several verifiers and multiple benchmarks on third-party signature databases, which were collected with different devices and in different languages and scripts. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 524,998 |
2104.08666 | Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language
Models | Numerous works have analyzed biases in vision and pre-trained language models individually - however, less attention has been paid to how these biases interact in multimodal settings. This work extends text-based bias analysis methods to investigate multimodal language models, and analyzes intra- and inter-modality associations and biases learned by these models. Specifically, we demonstrate that VL-BERT (Su et al., 2020) exhibits gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene. We demonstrate these findings on a controlled case-study and extend them for a larger set of stereotypically gendered entities. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 230,904 |
2310.16248 | GlotLID: Language Identification for Low-Resource Languages | Several recent papers have published good solutions for language identification (LID) for about 300 high-resource and medium-resource languages. However, there is no LID available that (i) covers a wide range of low-resource languages, (ii) is rigorously evaluated and reliable and (iii) is efficient and easy to use. Here, we publish GlotLID-M, an LID model that satisfies the desiderata of wide coverage, reliability and efficiency. It identifies 1665 languages, a large increase in coverage compared to prior work. In our experiments, GlotLID-M outperforms four baselines (CLD3, FT176, OpenLID and NLLB) when balancing F1 and false positive rate (FPR). We analyze the unique challenges that low-resource LID poses: incorrect corpus metadata, leakage from high-resource languages, difficulty separating closely related languages, handling of macrolanguage vs varieties, and generally noisy data. We hope that integrating GlotLID-M into dataset creation pipelines will improve quality and enhance accessibility of NLP technology for low-resource languages and cultures. GlotLID-M model (including future versions), code, and list of data sources are available: https://github.com/cisnlp/GlotLID. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 402,638 |
2106.06847 | Video Super-Resolution Transformer | Video super-resolution (VSR), with the aim to restore a high-resolution video from its corresponding low-resolution version, is a spatial-temporal sequence prediction problem. Recently, Transformer has been gaining popularity due to its parallel computing ability for sequence-to-sequence modeling. Thus, it seems to be straightforward to apply the vision Transformer to solve VSR. However, the typical block design of Transformer with a fully connected self-attention layer and a token-wise feed-forward layer does not fit well for VSR due to the following two reasons. First, the fully connected self-attention layer neglects to exploit the data locality because this layer relies on linear layers to compute attention maps. Second, the token-wise feed-forward layer lacks the feature alignment which is important for VSR since this layer independently processes each of the input token embeddings without any interaction among them. In this paper, we make the first attempt to adapt Transformer for VSR. Specifically, to tackle the first issue, we present a spatial-temporal convolutional self-attention layer with a theoretical understanding to exploit the locality information. For the second issue, we design a bidirectional optical flow-based feed-forward layer to discover the correlations across different video frames and also align features. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our proposed method. The code will be available at https://github.com/caojiezhang/VSR-Transformer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 240,648 |
2201.06964 | There is a Singularity in the Loss Landscape | Despite the widespread adoption of neural networks, their training dynamics remain poorly understood. We show experimentally that as the size of the dataset increases, a point forms where the magnitude of the gradient of the loss becomes unbounded. Gradient descent rapidly brings the network close to this singularity in parameter space, and further training takes place near it. This singularity explains a variety of phenomena recently observed in the Hessian of neural network loss functions, such as training on the edge of stability and the concentration of the gradient in a top subspace. Once the network approaches the singularity, the top subspace contributes little to learning, even though it constitutes the majority of the gradient. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 275,891 |
1503.00013 | Multilevel Diversity Coding with Regeneration | Digital contents in large scale distributed storage systems may have different reliability and access delay requirements, and for this reason, erasure codes with different strengths need to be utilized to achieve the best storage efficiency. At the same time, in such large scale distributed storage systems, nodes fail on a regular basis, and the contents stored on them need to be regenerated and stored on other healthy nodes, the efficiency of which is an important factor affecting the overall quality of service. In this work, we formulate the problem of multilevel diversity coding with regeneration to address these considerations, for which the storage vs. repair-bandwidth tradeoff is investigated. We show that the extreme point on this tradeoff corresponding to the minimum possible storage can be achieved by a simple coding scheme, where contents with different reliability requirements are encoded separately with individual regenerating codes without any mixing. On the other hand, we establish the complete storage-repair-bandwidth tradeoff for the case of four storage nodes, which reveals that codes mixing different contents can strictly improve this tradeoff over the separate coding solution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,647 |
2011.01880 | Intrinsic Robotic Introspection: Learning Internal States From Neuron
Activations | We present an introspective framework inspired by the process of how humans perform introspection. Our working assumption is that neural network activations encode information, and that building internal states from these activations can improve the performance of an actor-critic model. We perform experiments where we first train a Variational Autoencoder model to reconstruct the activations of a feature extraction network and use the latent space to improve the performance of an actor-critic when deciding which low-level robotic behaviour to execute. We show that internal states reduce the number of episodes needed to train an actor-critic by about 1300 episodes, indicating faster convergence to a high success rate on the robotic task. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 204,743 |
2102.06772 | Channel Estimation and Hybrid Combining for Wideband Terahertz Massive
MIMO Systems | Terahertz (THz) communication is widely considered as a key enabler for future 6G wireless systems. However, THz links are subject to high propagation losses and inter-symbol interference due to the frequency selectivity of the channel. Massive multiple-input multiple-output (MIMO) along with orthogonal frequency division multiplexing (OFDM) can be used to deal with these problems. Nevertheless, when the propagation delay across the base station (BS) antenna array exceeds the symbol period, the spatial response of the BS array varies across the OFDM subcarriers. This phenomenon, known as beam squint, renders narrowband combining approaches ineffective. Additionally, channel estimation becomes challenging in the absence of combining gain during the training stage. In this work, we address the channel estimation and hybrid combining problems in wideband THz massive MIMO with uniform planar arrays. Specifically, we first introduce a low-complexity beam squint mitigation scheme based on true-time-delay. Next, we propose a novel variant of the popular orthogonal matching pursuit (OMP) algorithm to accurately estimate the channel with low training overhead. Our channel estimation and hybrid combining schemes are analyzed both theoretically and numerically. Moreover, the proposed schemes are extended to the multi-antenna user case. Simulation results are provided showcasing the performance gains offered by our design compared to standard narrowband combining and OMP-based channel estimation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 219,869 |
2005.14690 | Investigating Deep Learning Approaches for Hate Speech Detection in
Social Media | The phenomenal growth of the internet has helped in empowering individuals' expression, but the misuse of freedom of expression has also led to an increase in various cyber crimes and anti-social activities. Hate speech is one such issue that needs to be addressed very seriously, as otherwise it could pose threats to the integrity of the social fabric. In this paper, we propose deep learning approaches utilizing various embeddings for detecting various types of hate speech in social media. Detecting hate speech from a large volume of text, especially tweets, which contain limited contextual information, also poses several practical challenges. Moreover, the varieties in user-generated data and the presence of various forms of hate speech make it very challenging to identify the degree and intention of the message. Our experiments on three publicly available datasets of different domains show a significant improvement in accuracy and F1-score. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 179,330 |
1904.07044 | Rapid Signalling of Queue Dynamics | Rather than directly considering the queuing delay of data, this memo focuses on reducing the delay that congestion signals experience within a queue management algorithm, which can be greater than the delay that the data itself experiences within the queue. Once the congestion signals are delayed, regulation of the load becomes more sloppy, and the queue tends to overshoot and undershoot more as a result, leading the data itself to experience greater peaks in queuing delay as well as intermittent under-utilization. Where the service rate of a queue varies, it is preferable to measure the queue in units of time not bytes. However, the size of the queued backlog can be measured in bytes and signalled at the last instant as data leaves the queue, whereas measuring queuing delay introduces inherent delay. This paper proposes 'scaled sojourn time', which scales queuing delay by the ratio between the backlogs at dequeue and enqueue. This is equivalent to scaling the sojourn time by the ratio of the arrival and departure rates averaged over the time spent in the queue. The paper also proposes the removal of delays due to randomness in the signal encoding. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 127,697 |
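The statistic proposed above is simple to compute at dequeue time: multiply the packet's sojourn time by the ratio of the backlog now to the backlog recorded when it arrived. A sketch with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    enqueue_time: float
    enqueue_backlog: int       # queued bytes when this packet arrived

def scaled_sojourn(pkt: Packet, now: float, dequeue_backlog: int) -> float:
    """Sojourn time scaled by the dequeue/enqueue backlog ratio.

    Per the memo, this equals the sojourn time scaled by the ratio of arrival
    to departure rates averaged over the packet's time in the queue, so it
    signals a draining or filling queue sooner than the raw sojourn time.
    """
    sojourn = now - pkt.enqueue_time
    return sojourn * dequeue_backlog / max(pkt.enqueue_backlog, 1)
```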
2111.07737 | Progress in Self-Certified Neural Networks | A learning method is self-certified if it uses all available data to simultaneously learn a predictor and certify its quality with a tight statistical certificate that is valid on unseen data. Recent work has shown that neural network models trained by optimising PAC-Bayes bounds lead not only to accurate predictors, but also to tight risk certificates, bearing promise towards achieving self-certified learning. In this context, learning and certification strategies based on PAC-Bayes bounds are especially attractive due to their ability to leverage all data to learn a posterior and simultaneously certify its risk with a tight numerical certificate. In this paper, we assess the progress towards self-certification in probabilistic neural networks learnt by PAC-Bayes inspired objectives. We empirically compare (on 4 classification datasets) classical test set bounds for deterministic predictors and a PAC-Bayes bound for randomised self-certified predictors. We first show that both of these generalisation bounds are not too far from out-of-sample test set errors. We then show that in data starvation regimes, holding out data for the test set bounds adversely affects generalisation performance, while self-certified strategies based on PAC-Bayes bounds do not suffer from this drawback, proving that they might be a suitable choice for the small data regime. We also find that probabilistic neural networks learnt by PAC-Bayes inspired objectives lead to certificates that can be surprisingly competitive with commonly used test set bounds. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 266,467 |
2005.01452 | Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware? | While machine-learning algorithms have demonstrated a strong ability in detecting Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising intrusive functionality. Previous work has shown that, to improve robustness against such attacks, learning algorithms should avoid overemphasizing few discriminant features, providing instead decisions that rely upon a large subset of components. In this work, we investigate whether gradient-based attribution methods, used to explain classifiers' decisions by identifying the most relevant features, can be used to help identify and select more robust algorithms. To this end, we propose to exploit two different metrics that represent the evenness of explanations, and a new compact security measure called Adversarial Robustness Metric. Our experiments conducted on two different datasets and five classification algorithms for Android malware detection show that a strong connection exists between the uniformity of explanations and adversarial robustness. In particular, we found that popular techniques like Gradient*Input and Integrated Gradients are strongly correlated to security when applied to both linear and nonlinear detectors, while more elementary explanation techniques like the simple Gradient do not provide reliable information about the robustness of such classifiers. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 175,577
2407.07802 | ROSA: Random Subspace Adaptation for Efficient Fine-Tuning | Model training requires significantly more memory compared with inference. Parameter efficient fine-tuning (PEFT) methods provide a means of adapting large models to downstream tasks using less memory. However, existing methods such as adapters, prompt tuning or low-rank adaptation (LoRA) either introduce latency overhead at inference time or achieve subpar downstream performance compared with full fine-tuning. In this work we propose Random Subspace Adaptation (ROSA), a method that outperforms previous PEFT methods by a significant margin, while maintaining a zero latency overhead during inference time. In contrast to previous methods, ROSA is able to adapt subspaces of arbitrarily large dimension, better approximating full fine-tuning. We demonstrate both theoretically and experimentally that this makes ROSA strictly more expressive than LoRA, without consuming additional memory during runtime. As PEFT methods are especially useful in the natural language processing domain, where models operate on scales that make full fine-tuning very expensive, we evaluate ROSA in two common NLP scenarios: natural language generation (NLG) and natural language understanding (NLU) with GPT-2 and RoBERTa, respectively. We show that on almost every GLUE task ROSA outperforms LoRA by a significant margin, while also outperforming LoRA on NLG tasks. Our code is available at https://github.com/rosa-paper/rosa | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 471,910
2009.13727 | Graph-based methods for analyzing orchard tree structure using noisy point cloud data | Digitisation of fruit trees using LiDAR enables analysis which can be used to better growing practices to improve yield. Sophisticated analysis requires geometric and semantic understanding of the data, including the ability to discern individual trees as well as identifying leafy and structural matter. Extraction of this information should be rapid, as should data capture, so that entire orchards can be processed, but existing methods for classification and segmentation rely on high-quality data or additional data sources like cameras. We present a method for analysis of LiDAR data specifically for individual tree location, segmentation and matter classification, which can operate on low-quality data captured by handheld or mobile LiDAR. Our methods for tree location and segmentation improved on existing methods with an F1 score of 0.774 and a v-measure of 0.915 respectively, while trunk matter classification performed poorly in absolute terms with an average F1 score of 0.490 on real data, though consistently outperformed existing methods and displayed a significantly shorter runtime. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 197,822
1712.08212 | Submodular Optimization for Consensus Networks with Noise-Corrupted Leaders | We consider the leader selection problem in a network with consensus dynamics where both leader and follower agents are subject to stochastic external disturbances. The performance of the system is quantified by the total steady-state variance of the node states, and the goal is to identify the set of leaders that minimizes this variance. We first show that this performance measure can be expressed as a submodular set function over the nodes in the network. We then use this result to analyze the performance of two greedy, polynomial-time algorithms for leader selection, showing that the leader sets produced by the greedy algorithms are within provable bounds of optimal. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 87,152
1910.02240 | Attention-based Fault-tolerant Approach for Multi-agent Reinforcement Learning Systems | The aim of multi-agent reinforcement learning systems is to provide interacting agents with the ability to collaboratively learn and adapt to the behavior of other agents. In many real-world applications, the agents can only acquire a partial view of the world. However, in realistic settings, one or more agents that show arbitrarily faulty or malicious behavior may suffice to let the current coordination mechanisms fail. In this paper, we study a practical scenario considering the security issues in the presence of agents with arbitrarily faulty or malicious behavior. Under these circumstances, learning an optimal policy becomes particularly challenging, even in the unrealistic case that an agent's policy can be made conditional upon all other agents' observations. To overcome these difficulties, we present an Attention-based Fault-Tolerant (FT-Attn) algorithm which selects correct and relevant information for each agent at every time-step. The multi-head attention mechanism enables the agents to learn effective communication policies through experience, concurrently with the action policies. Empirical results have shown that FT-Attn beats previous state-of-the-art methods in some complex environments and can adapt to various kinds of noisy environments without tuning the complexity of the algorithm. Furthermore, FT-Attn can effectively deal with the complex situation where an agent needs to reach multiple agents' correct observations at the same time. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 148,185
2101.12347 | The Analysis of Discrete-Event System in Autonomous Package Delivery using Legged Robot and Conveyor Belt | In this paper, the supervisory control of a Discrete Event System (DES) analyses states and events to construct an autonomous package delivery system. The delivery system includes a legged robot, which autonomously navigates uneven indoor terrain, and a conveyor belt for transporting the package to the legged robot. The aim of the paper is to use the theory of supervisory control of DES to supervise and control each machine's states and events and to ensure that the robots collaborate autonomously. By applying the theory, we show the collaboration of two individual robots to deliver goods in a multi-floor environment. The results obtained from the theory of supervisory control are implemented and verified in a simulation environment. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 217,539
2104.00138 | Rapid quantification of COVID-19 pneumonia burden from computed tomography with convolutional LSTM networks | Quantitative lung measures derived from computed tomography (CT) have been demonstrated to improve prognostication in coronavirus disease (COVID-19) patients, but are not part of the clinical routine since the required manual segmentation of lung lesions is prohibitively time-consuming. We propose a new fully automated deep learning framework for rapid quantification and differentiation between lung lesions in COVID-19 pneumonia from both contrast and non-contrast CT images using convolutional Long Short-Term Memory (ConvLSTM) networks. Utilizing the expert annotations, model training was performed 5 times with separate hold-out sets using 5-fold cross-validation to segment ground-glass opacity and high opacity (including consolidation and pleural effusion). The performance of the method was evaluated on CT data sets from 197 patients with a positive reverse transcription polymerase chain reaction test result for SARS-CoV-2. Strong agreement between expert manual and automatic segmentation was obtained for lung lesions, with a Dice score coefficient of 0.876 $\pm$ 0.005 and excellent correlations of 0.978 and 0.981 for ground-glass opacity and high opacity volumes. In the external validation set of 67 patients, the Dice score coefficient was 0.767 $\pm$ 0.009, with excellent correlations of 0.989 and 0.996 for ground-glass opacity and high opacity volumes. Computations for a CT scan comprising 120 slices were performed in under 2 seconds on a personal computer equipped with an NVIDIA Titan RTX graphics processing unit. Therefore, our deep learning-based method allows rapid, fully automated quantitative measurement of pneumonia burden from CT and may generate results with an accuracy similar to that of expert readers. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 227,888
2208.05321 | A Frequency-aware Software Cache for Large Recommendation System Embeddings | Deep learning recommendation models (DLRMs) have been widely applied in Internet companies. The embedding tables of DLRMs are too large to fit on GPU memory entirely. We propose a GPU-based software cache approach to dynamically manage the embedding table in the CPU and GPU memory space by leveraging the ID frequency statistics of the target dataset. Our proposed software cache is efficient in training entire DLRMs on GPU in a synchronized update manner. It also scales to multiple GPUs in combination with the widely used hybrid parallel training approaches. Evaluating our prototype system shows that we can keep only 1.5% of the embedding parameters in the GPU to obtain a decent end-to-end training speed. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | true | 312,374
1509.00123 | Mode Selection, Resource Allocation and Power Control for D2D-Enabled Two-Tier Cellular Network | This paper proposes a centralized decision making framework at the macro base station (MBS) for device to device (D2D) communication underlaying a two-tier cellular network. We consider a D2D pair in the presence of an MBS and a femto access point, each serving a user, with quality of service constraints for all users. Our proposed solution encompasses mode selection (choosing between cellular or reuse or dedicated mode), resource allocation (in cellular and dedicated mode) and power control (in reuse mode) within a single framework. The framework prioritizes D2D dedicated mode if the D2D pair are close to each other and orthogonal resources are available. Otherwise, it allows D2D reuse mode if the pair satisfies both the maximum-distance criterion and an additional interference criterion. For reuse mode, we present a geometric vertex search approach to solve the power allocation problem. We analytically prove the validity of this approach and show that it achieves near optimal performance. For cellular and dedicated modes, we show that frequency sharing maximizes sum rate and solve the resource allocation problem in closed form. Our simulations demonstrate the advantages of the proposed framework in terms of the performance gains achieved in D2D mode. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 46,466
2406.06886 | Enabling Data Dependency-based Query Optimization | Data dependency-based query optimization techniques can considerably improve database system performance: we apply three such optimization techniques to five database management systems (DBMSs) and observe throughput improvements between 5 % and 33 %. We address two key challenges to achieve these results: (i) efficiently identifying and extracting relevant dependencies from the data, and (ii) making use of the dependencies through SQL rewrites or as transformation rules in the optimizer. First, the schema does not provide all relevant dependencies. We present a workload-driven dependency discovery approach to find additional dependencies within milliseconds. Second, the throughput improvement of a state-of-the-art DBMS is 13 % using only SQL rewrites, but 20 % when we integrate dependency-based optimization into the optimizer and execution engine, e.g., by employing dependency propagation and subquery handling. Using all relevant dependencies, the runtime of four standard benchmarks improves by up to 10 % compared to using only primary and foreign keys, and up to 22 % compared to not using dependencies. The dependency discovery overhead amortizes after a single workload execution. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 462,806
1801.05399 | A networked voting rule for democratic representation | We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of a few individuals with very high connectivity, which can have a marginally negative effect in the committee selection process. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 88,450
2111.10115 | IRONWAN: Increasing Reliability of Overlapping Networks in LoRaWAN | LoRaWAN deployments follow an ad-hoc deployment model that has organically led to overlapping communication networks, sharing the wireless spectrum, and completely unaware of each other. LoRaWAN uses ALOHA-style communication where it is almost impossible to properly schedule transmissions between networks belonging to different owners. The inability to schedule overlapping networks will cause inter-network interference, which will increase node-to-gateway message losses and gateway-to-node acknowledgement failures. This problem is likely to get worse as the number of LoRaWAN networks increases. In response to this problem, we propose IRONWAN, a wireless overlay network that shares communication resources without modifications to underlying protocols. It utilises the broadcast nature of radio communication and enables gateway-to-gateway communication to facilitate the search for failed messages and transmit failed acknowledgements already received and cached in an overlapping network's gateways. IRONWAN uses two novel algorithms: a Real-time Message Inter-arrival Predictor highlights when a server has not received an expected uplink message, and an Interference Predictor ensures that extra gateway-to-gateway communication does not negatively impact communication bandwidth. We evaluate IRONWAN on a 1000-node simulator with up to ten gateways and a 10-node testbed with 2 gateways. Results show that IRONWAN can achieve up to 12\% higher packet delivery ratio (PDR) and total messages received per node while increasing the minimum PDR by up to 28\%. These improvements save up to 50\% of a node's energy. Finally, we demonstrate that IRONWAN has comparable performance to an optimal solution (wired, centralised) but with 2-32 times lower communication costs. IRONWAN also has up to 14\% better PDR when compared to FLIP, a wired-distributed gateway-to-gateway protocol, in certain scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 267,214
2411.14164 | FoPru: Focal Pruning for Efficient Large Vision-Language Models | Large Vision-Language Models (LVLMs) represent a significant advancement toward achieving superior multimodal capabilities by enabling powerful Large Language Models (LLMs) to understand visual input. Typically, LVLMs utilize visual encoders, such as CLIP, to transform images into visual tokens, which are then aligned with textual tokens through projection layers before being input into the LLM for inference. Although existing LVLMs have achieved significant success, their inference efficiency is still limited by the substantial number of visual tokens and the potential redundancy among them. To mitigate this issue, we propose Focal Pruning (FoPru), a training-free method that prunes visual tokens based on the attention-based token significance derived from the vision encoder. Specifically, we introduce two alternative pruning strategies: 1) the rank strategy, which leverages all token significance scores to retain more critical tokens in a global view; 2) the row strategy, which focuses on preserving continuous key information in images from a local perspective. Finally, the selected tokens are reordered to maintain their original positional relationships. Extensive experiments across various LVLMs and multimodal datasets demonstrate that our method can prune a large number of redundant tokens while maintaining high accuracy, leading to significant improvements in inference efficiency. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,058 |
2110.12484 | Enabling Large Batch Size Training for DNN Models Beyond the Memory Limit While Maintaining Performance | Recent deep learning models are difficult to train using a large batch size, because commodity machines may not have enough memory to accommodate both the model and a large data batch. The batch size is one of the hyper-parameters of model training, and it is dependent on and limited by the target machine's memory capacity, because the batch can only fit into the memory remaining after the model is uploaded. Moreover, the data item size is also an important factor: if each data item is larger, then the batch size that can fit into the remaining memory becomes smaller. This paper proposes a method called Micro-Batch Processing (MBP) to address this problem. This method helps deep learning models train by providing a batch processing method that splits a batch into sizes that can fit in the remaining memory and processes them sequentially. After processing the small batches individually, a loss normalization algorithm based on gradient accumulation is used to maintain performance. The purpose of our method is to allow deep learning models to train using larger batch sizes that exceed the memory capacity of a system without increasing the memory size or using multiple devices (GPUs). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 262,858
2302.10280 | Deepfake Detection Analyzing Hybrid Dataset Utilizing CNN and SVM | Social media is currently used by many individuals online as a major source of information. However, not all information shared online is true; even photos and videos can be doctored. Deepfakes have recently emerged alongside technological advances and allow nefarious online users to replace one face with a computer-generated face of anyone they would like, including important political and cultural figures. Deepfakes are now a tool for spreading mass misinformation. There is now an immense need to create models that are able to detect deepfakes and keep them from being spread as seemingly real images or videos. In this paper, we propose a new deepfake detection schema using two popular machine learning algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 346,737
2010.08784 | DIFER: Differentiable Automated Feature Engineering | Feature engineering, a crucial step of machine learning, aims to extract useful features from raw data to improve data quality. In recent years, great efforts have been devoted to Automated Feature Engineering (AutoFE) to replace expensive human labor. However, existing methods are computationally demanding due to treating AutoFE as a coarse-grained black-box optimization problem over a discrete space. In this work, we propose an efficient gradient-based method called DIFER to perform differentiable automated feature engineering in a continuous vector space. DIFER selects potential features based on an evolutionary algorithm and leverages an encoder-predictor-decoder controller to optimize existing features. We map features into the continuous vector space via the encoder, optimize the embedding along the gradient direction induced by the predicted score, and recover better features from the optimized embedding by the decoder. Extensive experiments on classification and regression datasets demonstrate that DIFER can significantly improve the performance of various machine learning algorithms and outperform current state-of-the-art AutoFE methods in terms of both efficiency and performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 201,306
2204.04330 | Improved Object Pose Estimation via Deep Pre-touch Sensing | For certain manipulation tasks, object pose estimation from head-mounted cameras may not be sufficiently accurate. This is at least in part due to our inability to perfectly calibrate the coordinate frames of today's high degree of freedom robot arms that link the head to the end-effectors. We present a novel framework combining pre-touch sensing and deep learning to more accurately estimate pose in an efficient manner. The use of pre-touch sensing allows our method to localize the object directly with respect to the robot's end effector, thereby avoiding error caused by miscalibration of the arms. Instead of requiring the robot to scan the entire object with its pre-touch sensor, we use a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information necessary to adjust its original pose estimate. Our region detection network was trained using a new dataset containing objects of widely varying geometries and has been labeled in a scalable fashion that is free from human bias. This dataset is applicable to any task that involves a pre-touch sensor gathering geometric information, and has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. Compared to two simpler region proposal methods, we find that our deep neural network performs significantly better. In addition, we find that after a sequence of scans, objects can typically be localized to within 0.5 cm of their true position. We also observe that the original pose estimate can often be significantly improved after collecting a single quick scan. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 290,612 |
2010.11151 | Logistic Q-Learning | We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs. The method is closely related to the classic Relative Entropy Policy Search (REPS) algorithm of Peters et al. (2010), with the key difference that our method introduces a Q-function that enables efficient exact model-free implementation. The main feature of our algorithm (called QREPS) is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error. We provide a practical saddle-point optimization method for minimizing this loss function and provide an error-propagation analysis that relates the quality of the individual updates to the performance of the output policy. Finally, we demonstrate the effectiveness of our method on a range of benchmark problems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 202,140 |
2301.09443 | Probabilistic Machine Learning to Improve Generalisation of Data-Driven Turbulence Modelling | A probabilistic machine learning model is introduced to augment the $k-\omega\ SST$ turbulence model in order to improve the modelling of separated flows and the generalisability of learnt corrections. Increasingly, machine learning methods have been used to leverage experimental and high-fidelity data, improving the accuracy of the Reynolds Averaged Navier Stokes (RANS) turbulence models widely used in industry. A significant challenge for such methods is their ability to generalise to unseen geometries and flow conditions. Furthermore, heterogeneous datasets containing a mix of experimental and simulation data must be efficiently handled. In this work, field inversion and an ensemble of Gaussian Process Emulators (GPEs) are employed to address both of these challenges. The ensemble model is applied to a range of benchmark test cases, demonstrating improved turbulence modelling for cases with separated flows with adverse pressure gradients, where RANS simulations are understood to be unreliable. Perhaps more significantly, the simulation reverted to the uncorrected model in regions of the flow exhibiting physics outside of the training data. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 341,498
2212.00832 | Applications of Lattice Gauge Equivariant Neural Networks | The introduction of relevant physical information into neural network architectures has become a widely used and successful strategy for improving their performance. In lattice gauge theories, such information can be identified with gauge symmetries, which are incorporated into the network layers of our recently proposed Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs). L-CNNs can generalize better to differently sized lattices than traditional neural networks and are by construction equivariant under lattice gauge transformations. In these proceedings, we present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow. Our methods are based on neural ordinary differential equations which allow us to modify link configurations in a gauge equivariant manner. For simplicity, we focus on simple toy models to test these ideas in practice. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 334,207 |
2303.06202 | A POV-based Highway Vehicle Trajectory Dataset and Prediction Architecture | Vehicle Trajectory datasets that provide multiple point-of-views (POVs) can be valuable for various traffic safety and management applications. Despite the abundance of trajectory datasets, few offer a comprehensive and diverse range of driving scenes, capturing multiple viewpoints of various highway layouts, merging lanes, and configurations. This limits their ability to capture the nuanced interactions between drivers, vehicles, and the roadway infrastructure. We introduce the \emph{Carolinas Highway Dataset (CHD\footnote{\emph{CHD} available at: \url{https://github.com/TeCSAR-UNCC/Carolinas\_Dataset}})}, a vehicle trajectory, detection, and tracking dataset. \emph{CHD} is a collection of 1.6 million frames captured in highway-based videos from eye-level and high-angle POVs at eight locations across Carolinas with 338,000 vehicle trajectories. The locations, timing of recordings, and camera angles were carefully selected to capture various road geometries, traffic patterns, lighting conditions, and driving behaviors. We also present \emph{PishguVe}\footnote{\emph{PishguVe} code available at: \url{https://github.com/TeCSAR-UNCC/PishguVe}}, a novel vehicle trajectory prediction architecture that uses attention-based graph isomorphism and convolutional neural networks. The results demonstrate that \emph{PishguVe} outperforms existing algorithms to become the new state-of-the-art (SotA) in bird's-eye, eye-level, and high-angle POV trajectory datasets. Specifically, it achieves a 12.50\% and 10.20\% improvement in ADE and FDE, respectively, over the current SotA on NGSIM dataset. Compared to best-performing models on CHD, \emph{PishguVe} achieves lower ADE and FDE on eye-level data by 14.58\% and 27.38\%, respectively, and improves ADE and FDE on high-angle data by 8.3\% and 6.9\%, respectively. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 350,735
2008.11006 | Millimeter Wave Channel Modeling via Generative Neural Networks | Statistical channel models are instrumental in the design and evaluation of wireless communication systems. In the millimeter wave bands, such models become acutely challenging; they must capture the delay, directions, and path gains for each link and with high resolution. This paper presents a general modeling methodology based on training generative neural networks from data. The proposed generative model consists of a two-stage structure that first predicts the state of each link (line-of-sight, non-line-of-sight, or outage), and subsequently feeds this state into a conditional variational autoencoder that generates the path losses, delays, and angles of arrival and departure for all its propagation paths. Importantly, minimal prior assumptions are made, enabling the model to capture complex relationships within the data. The methodology is demonstrated for 28GHz air-to-ground channels in an urban environment, with training datasets produced by means of ray tracing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 193,156