id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2105.00310 | MARL: Multimodal Attentional Representation Learning for Disease Prediction | Existing learning models often utilise CT-scan images to predict lung diseases. These models are affected by high uncertainties in lung segmentation and visual feature learning. We introduce MARL, a novel Multimodal Attentional Representation Learning model architecture that learns useful features from multimodal data under uncertainty. We feed the proposed model with both the lung CT-scan images and the respective patients' historical biological records collected over time. Such rich data makes it possible to analyse both spatial and temporal aspects of the disease. MARL employs fuzzy-based image spatial segmentation to overcome uncertainties in CT-scan images. We then utilise a pre-trained Convolutional Neural Network (CNN) to learn visual representation vectors from the images. We augment patients' data with statistical features from the segmented images. We develop a Long Short-Term Memory (LSTM) network to represent the augmented data and learn sequential patterns of disease progression. Finally, we feed both the CNN and LSTM feature vectors into an attention layer to help focus on the best learning features. We evaluated MARL on regression of lung disease progression and status classification. MARL outperforms state-of-the-art CNN architectures, such as EfficientNet and DenseNet, and baseline prediction models. It achieves a 91% R^2 score, which is higher than that of the other models by 8% to 27%. Also, MARL achieves 97% and 92% accuracy for binary and multi-class classification, respectively. MARL improves on the accuracy of state-of-the-art CNN models by 19% to 57%. The results show that combining spatial and sequential temporal features produces better discriminative features. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 233,169 |
2011.09567 | Predicting metrical patterns in Spanish poetry with language models | In this paper, we compare automated metrical pattern identification systems available for Spanish against extensive experiments done by fine-tuning language models trained on the same task. Despite being initially conceived as a model suitable for semantic tasks, our results suggest that BERT-based models retain enough structural information to perform reasonably well for Spanish scansion. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 207,226 |
1603.02514 | Variational Autoencoders for Semi-supervised Text Classification | Although the semi-supervised variational autoencoder (SemiVAE) works in image classification tasks, it fails in text classification if using a vanilla LSTM as its decoder. From the perspective of reinforcement learning, it is verified that the decoder's capability to distinguish between different categorical labels is essential. Therefore, the Semi-supervised Sequential Variational Autoencoder (SSVAE) is proposed, which increases this capability by feeding the label into its decoder RNN at each time-step. Two specific decoder structures are investigated and both of them are verified to be effective. Besides, in order to reduce the computational complexity of training, a novel optimization method is proposed, which estimates the gradient of the unlabeled objective function by sampling, along with two variance reduction techniques. Experimental results on the Large Movie Review Dataset (IMDB) and AG's News corpus show that the proposed approach significantly improves the classification accuracy compared with purely supervised classifiers, and achieves competitive performance against previous advanced methods. State-of-the-art results can be obtained by integrating other pretraining-based methods. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 53,019 |
2403.16205 | Blur2Blur: Blur Conversion for Unsupervised Image Deblurring on Unknown Domains | This paper presents an innovative framework designed to train an image deblurring algorithm tailored to a specific camera device. This algorithm works by transforming a blurry input image, which is challenging to deblur, into another blurry image that is more amenable to deblurring. The transformation process, from one blurry state to another, leverages unpaired data consisting of sharp and blurry images captured by the target camera device. Learning this blur-to-blur transformation is inherently simpler than direct blur-to-sharp conversion, as it primarily involves modifying blur patterns rather than the intricate task of reconstructing fine image details. The efficacy of the proposed approach has been demonstrated through comprehensive experiments on various benchmarks, where it significantly outperforms state-of-the-art methods both quantitatively and qualitatively. Our code and data are available at https://zero1778.github.io/blur2blur/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 440,924 |
2306.08424 | Selective Concept Models: Permitting Stakeholder Customisation at Test-Time | Concept-based models perform prediction using a set of concepts that are interpretable to stakeholders. However, such models often involve a fixed, large number of concepts, which may place a substantial cognitive load on stakeholders. We propose Selective COncept Models (SCOMs) which make predictions using only a subset of concepts and can be customised by stakeholders at test-time according to their preferences. We show that SCOMs only require a fraction of the total concepts to achieve optimal accuracy on multiple real-world datasets. Further, we collect and release a new dataset, CUB-Sel, consisting of human concept set selections for 900 bird images from the popular CUB dataset. Using CUB-Sel, we show that humans have unique individual preferences for the choice of concepts they prefer to reason about, and struggle to identify the most theoretically informative concepts. The customisation and concept selection provided by SCOM improves the efficiency of interpretation and intervention for stakeholders. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 373,410 |
2406.14594 | Age of Information Versions: a Semantic View of Markov Source Monitoring | We consider the problem of real-time remote monitoring of a two-state Markov process, where a sensor observes the state of the source and makes a decision on whether to transmit the status updates over an unreliable channel or not. We introduce a modified randomized stationary sampling and transmission policy where the decision to perform sampling occurs probabilistically depending on the current state of the source and whether the system was in a sync state during the previous time slot or not. We then propose two new performance metrics, coined the Version Innovation Age (VIA) and the Age of Incorrect Version (AoIV) and analyze their performance under the modified randomized stationary and other state-of-the-art sampling and transmission policies. Specifically, we derive closed-form expressions for the distribution and the average of VIA, AoIV, and Age of Incorrect Information (AoII) under these policies. Furthermore, we formulate and solve three constrained optimization problems. The first optimization problem aims to minimize the average VIA subject to constraints on the time-averaged sampling cost and time-averaged reconstruction error. In the second and third problems, the objective is to minimize the average AoIV and AoII, respectively, while considering a constraint on the time-averaged sampling cost. Finally, we compare the performance of various sampling and transmission policies and identify the conditions under which each policy outperforms the others in optimizing the proposed metrics. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | true | 466,389 |
2411.07503 | A Novel Automatic Real-time Motion Tracking Method for Magnetic Resonance Imaging-guided Radiotherapy: Leveraging the Enhanced Tracking-Learning-Detection Framework with Automatic Segmentation | Background and Purpose: Accurate motion tracking in MRI-guided Radiotherapy (MRIgRT) is essential for effective treatment delivery. This study aimed to enhance motion tracking precision in MRIgRT through an automatic real-time markerless tracking method using an enhanced Tracking-Learning-Detection (ETLD) framework with automatic segmentation. Materials and Methods: We developed a novel MRIgRT motion tracking and segmentation method by integrating the ETLD framework with an improved Chan-Vese model (ICV), named ETLD+ICV. The ETLD framework was upgraded for real-time cine MRI, including advanced image preprocessing, no-reference image quality assessment, an enhanced median-flow tracker, and a refined detector with dynamic search region adjustments. ICV was used for precise target volume coverage, refining the segmented region frame by frame using tracking results, with key parameters optimized. The method was tested on 3.5D MRI scans from 10 patients with liver metastases. Results: Evaluation of 106,000 frames across 77 treatment fractions showed sub-millimeter tracking errors of less than 0.8mm, with over 99% precision and 98% recall for all subjects in the Beam Eye View (BEV)/Beam Path View (BPV) orientation. The ETLD+ICV method achieved a dice global score of more than 82% for all subjects, demonstrating the method's extensibility and precise target volume coverage. Conclusion: This study successfully developed an automatic real-time markerless motion tracking method for MRIgRT that significantly outperforms current methods. The novel method not only delivers exceptional precision in tracking and segmentation but also shows enhanced adaptability to clinical demands, making it an indispensable asset in improving the efficacy of radiotherapy treatments. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 507,553 |
2010.14492 | Asymptotic Bounds on the Rate of Locally Repairable Codes | New asymptotic upper bounds are presented on the rate of sequences of locally repairable codes (LRCs) with a prescribed relative minimum distance and locality over a finite field $F$. The bounds apply to LRCs in which the recovery functions are linear; in particular, the bounds apply to linear LRCs over $F$. The new bounds are shown to improve on previously published results, especially when the repair groups are disjoint, namely, they form a partition of the set of coordinates. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 203,462 |
2306.04299 | Timing Process Interventions with Causal Inference and Reinforcement Learning | The shift from the understanding and prediction of processes to their optimization offers great benefits to businesses and other organizations. Precisely timed process interventions are the cornerstones of effective optimization. Prescriptive process monitoring (PresPM) is the sub-field of process mining that concentrates on process optimization. The emerging PresPM literature identifies state-of-the-art methods, causal inference (CI) and reinforcement learning (RL), without presenting a quantitative comparison. Most experiments are carried out using historical data, causing problems with the accuracy of the methods' evaluations and preempting online RL. Our contribution consists of experiments on timed process interventions with synthetic data that renders genuine online RL and the comparison to CI possible, and allows for an accurate evaluation of the results. Our experiments reveal that RL's policies outperform those from CI and are more robust at the same time. Indeed, the RL policies approach perfect policies. Unlike CI, the unaltered online RL approach can be applied to other, more generic PresPM problems such as next best activity recommendations. Nonetheless, CI has its merits in settings where online learning is not an option. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,687 |
1603.09429 | Ordinal Conditional Functions for Nearly Counterfactual Revision | We are interested in belief revision involving conditional statements where the antecedent is almost certainly false. To represent such problems, we use Ordinal Conditional Functions that may take infinite values. We model belief change in this context through simple arithmetical operations that allow us to capture the intuition that certain antecedents can not be validated by any number of observations. We frame our approach as a form of finite belief improvement, and we propose a model of conditional belief revision in which only the "right" hypothetical levels of implausibility are revised. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 53,915 |
2211.15081 | Mitigating Overfitting in Graph Neural Networks via Feature and Hyperplane Perturbation | Graph neural networks (GNNs) are commonly used in semi-supervised settings. Previous research has primarily focused on finding appropriate graph filters (e.g. aggregation methods) to perform well on both homophilic and heterophilic graphs. While these methods are effective, they can still suffer from the sparsity of node features, where the initial data contain few non-zero elements. This can lead to overfitting in certain dimensions in the first projection matrix, as training samples may not cover the entire range of graph filters (hyperplanes). To address this, we propose a novel data augmentation strategy. Specifically, by flipping both the initial features and hyperplane, we create additional space for training, which leads to more precise updates of the learnable parameters and improved robustness for unseen features during inference. To the best of our knowledge, this is the first attempt to mitigate the overfitting caused by the initial features. Extensive experiments on real-world datasets show that our proposed technique increases node classification accuracy by up to 46.5% relatively. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 333,112 |
1910.00838 | Data-Driven Identification of Rayleigh-Damped Second-Order Systems | In this paper, we present a data-driven approach to identify second-order systems, having internal Rayleigh damping. This means that the damping matrix is given as a linear combination of the mass and stiffness matrices. These systems typically appear when performing various engineering studies, e.g., vibrational and structural analysis. In an experimental setup, the frequency response of a system can be measured via various approaches, for instance, by measuring the vibrations using an accelerometer. As a consequence, given frequency samples, the identification of the underlying system relies on rational approximation. To that aim, we propose an identification of the corresponding second-order system, extending the Loewner framework for this class of systems. The efficiency of the proposed method is demonstrated by means of various numerical benchmarks. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 147,776 |
2308.08148 | Hierarchical Topological Ordering with Conditional Independence Test for Limited Time Series | Learning directed acyclic graphs (DAGs) to identify causal relations underlying observational data is crucial but also poses significant challenges. Recently, topology-based methods have emerged as a two-step approach to discovering DAGs by first learning the topological ordering of variables and then eliminating redundant edges, while ensuring that the graph remains acyclic. However, one limitation is that these methods would generate numerous spurious edges that require subsequent pruning. To overcome this limitation, in this paper, we propose an improvement to topology-based methods by introducing limited time series data, consisting of only two cross-sectional records that need not be adjacent in time and are subject to flexible timing. By incorporating conditional instrumental variables as exogenous interventions, we aim to identify descendant nodes for each variable. Following this line, we propose a hierarchical topological ordering algorithm with conditional independence test (HT-CIT), which enables the efficient learning of sparse DAGs with a smaller search space compared to other popular approaches. The HT-CIT algorithm greatly reduces the number of edges that need to be pruned. Empirical results from synthetic and real-world datasets demonstrate the superiority of the proposed HT-CIT algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 385,781 |
0903.0735 | Modeling the Experience of Emotion | Affective computing has proven to be a viable field of research comprised of a large number of multidisciplinary researchers resulting in work that is widely published. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory and models of emergent emotions. Part of the motivation for affective computing as a field is to better understand emotional processes through computational modeling. One of the four major topics in affective computing is computers that have emotions (the others are recognizing, expressing and understanding emotions). A critical and neglected aspect of having emotions is the experience of emotion (Barrett, Mesquita, Ochsner, and Gross, 2007): what does the content of an emotional episode look like, how does this content change over time and when do we call the episode emotional. Few modeling efforts have these topics as primary focus. The launch of a journal on synthetic emotions should motivate research initiatives in this direction, and this research should have a measurable impact on emotion research in psychology. I show that a good way to do so is to investigate the psychological core of what an emotion is: an experience. I present ideas on how the experience of emotion could be modeled and provide evidence that several computational models of emotion are already addressing the issue. | true | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 3,285 |
1908.02435 | Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations | Adversarial examples contain small perturbations that can remain imperceptible to human observers but alter the behavior of even the best performing deep learning models and yield incorrect outputs. Since their discovery, adversarial examples have drawn significant attention in machine learning: researchers try to reveal the reasons for their existence and improve the robustness of machine learning models to adversarial perturbations. The state-of-the-art defense is the computationally expensive and very time consuming adversarial training via projected gradient descent (PGD). We hypothesize that adversarial attacks exploit the open space risk of classic monotonic activation functions. This paper introduces the tent activation function with bounded open space risk and shows that tents make deep learning models more robust to adversarial attacks. We demonstrate on the MNIST dataset that a classifier with tents yields an average accuracy of 91.8% against six white-box adversarial attacks, which is more than 15 percentage points above the state of the art. On the CIFAR-10 dataset, our approach improves the average accuracy against the six white-box adversarial attacks to 73.5% from 41.8% achieved by adversarial training via PGD. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 141,002 |
2310.12544 | Neural Likelihood Approximation for Integer Valued Time Series Data | Stochastic processes defined on integer valued state spaces are popular within the physical and biological sciences. These models are necessary for capturing the dynamics of small systems where the individual nature of the populations cannot be ignored and stochastic effects are important. The inference of the parameters of such models, from time series data, is challenging due to intractability of the likelihood. To work at all, current simulation based inference methods require the generation of realisations of the model conditional on the data, which can be both tricky to implement and computationally expensive. In this paper we instead construct a neural likelihood approximation that can be trained using unconditional simulation of the underlying model, which is much simpler. We demonstrate our method by performing inference on a number of ecological and epidemiological models, showing that we can accurately approximate the true posterior while achieving significant computational speed ups compared to current best methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 401,063 |
1905.11293 | Underactuation Design for Tendon-driven Hands via Optimization of Mechanically Realizable Manifolds in Posture and Torque Spaces | Grasp synergies represent a useful idea to reduce grasping complexity without compromising versatility. Synergies describe coordination patterns between joints, either in terms of position (joint angles) or effort (joint torques). In both of these cases, a grasp synergy can be represented as a low-dimensional manifold lying in the high-dimensional joint posture or torque space. In this paper, we use the term \textit{Mechanically Realizable Manifolds} to refer to the subset of such manifolds (in either posture or torque space) that can be achieved via mechanical coupling of the joints in underactuated hands. We present a method to optimize the design parameters of an underactuated hand in order to shape the Mechanically Realizable Manifolds to fit a pre-defined set of desired grasps. Our method guarantees that the resulting synergies can be physically implemented in an underactuated hand, and will enable the resulting hand to both reach the desired grasp postures and achieve quasistatic equilibrium while loading the grasps. We demonstrate this method on three concrete design examples motivated by a real use case, and evaluate and compare their performance in practice. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 132,385 |
1912.07195 | Fingerprint Synthesis: Search with 100 Million Prints | Evaluation of large-scale fingerprint search algorithms has been limited due to lack of publicly available datasets. To address this problem, we utilize a Generative Adversarial Network (GAN) to synthesize a fingerprint dataset consisting of 100 million fingerprint images. In contrast to existing fingerprint synthesis algorithms, we incorporate an identity loss which guides the generator to synthesize fingerprints corresponding to more distinct identities. The characteristics of our synthesized fingerprints are shown to be more similar to real fingerprints than existing methods via eight different metrics (minutiae count - block and template, minutiae direction - block and template, minutiae convex hull area, minutiae spatial distribution, block minutiae quality distribution, and NFIQ 2.0 scores). Additionally, the synthetic fingerprints based on our approach are shown to be more distinct than synthetic fingerprints based on published methods through search results and imposter distribution statistics. Finally, we report for the first time in open literature, search accuracy against a gallery of 100 million fingerprint images (NIST SD4 Rank-1 accuracy of 89.7%). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 157,541 |
2408.03626 | On the choice of the non-trainable internal weights in random feature maps | The computationally cheap machine learning architecture of random feature maps can be viewed as a single-layer feedforward network in which the weights of the hidden layer are random but fixed and only the outer weights are learned via linear regression. The internal weights are typically chosen from a prescribed distribution. The choice of the internal weights significantly impacts the accuracy of random feature maps. We address here the task of how to best select the internal weights. In particular, we consider the forecasting problem whereby random feature maps are used to learn a one-step propagator map for a dynamical system. We provide a computationally cheap hit-and-run algorithm to select good internal weights which lead to good forecasting skill. We show that the number of good features is the main factor controlling the forecasting skill of random feature maps and acts as an effective feature dimension. Lastly, we compare random feature maps with single-layer feedforward neural networks in which the internal weights are now learned using gradient descent. We find that random feature maps have superior forecasting capabilities whilst having several orders of magnitude lower computational cost. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 479,095 |
2412.15224 | Multi-Branch Mutual-Distillation Transformer for EEG-Based Seizure Subtype Classification | Cross-subject electroencephalogram (EEG) based seizure subtype classification is very important in precise epilepsy diagnostics. Deep learning is a promising solution, due to its ability to automatically extract latent patterns. However, it usually requires a large amount of training data, which may not always be available in clinical practice. This paper proposes Multi-Branch Mutual-Distillation (MBMD) Transformer for cross-subject EEG-based seizure subtype classification, which can be effectively trained from small labeled data. MBMD Transformer replaces all even-numbered encoder blocks of the vanilla Vision Transformer by our designed multi-branch encoder blocks. A mutual-distillation strategy is proposed to transfer knowledge between the raw EEG data and its wavelets of different frequency bands. Experiments on two public EEG datasets demonstrated that our proposed MBMD Transformer outperformed several traditional machine learning and state-of-the-art deep learning approaches. To our knowledge, this is the first work on knowledge distillation for EEG-based seizure subtype classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 518,993 |
2311.08933 | Design and Implementation of a Hybrid Wireless Power and Communication System for Medical Implants | Data collection and analysis from multiple implant nodes in humans can provide targeted medicine and treatment strategies that can prevent many chronic diseases. This data can be collected for a long time and processed using artificial intelligence (AI) techniques in a medical network for early detection and prevention of diseases. Additionally, machine learning (ML) algorithms can be applied for the analysis of big data for health monitoring of the population. Wireless powering, sensing, and communication are essential parts of future wireless implants that aim to achieve the aforementioned goals. In this paper, we present the technical development of a wireless implant that is powered by radio frequency (RF) at 401 MHz, with the sensor data being communicated to an on-body reader. The implant communication is based on two simultaneous wireless links: RF backscatter for implant-to-on-body communication and a galvanic link for intra-body implant-to-implant connectivity. It is demonstrated that RF powering, using the proposed compact antennas, can provide an efficient and integrable system for powering up to an 8 cm depth inside body tissues. Furthermore, the same antennas are utilized for backscatter and galvanic communication. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 407,922 |
2206.12291 | A Design of A Simple Yet Effective Exercise Recommendation System in K-12 Online Learning | We propose a simple but effective method to recommend exercises with high quality and diversity for students. Our method is made up of three key components: (1) candidate generation module; (2) diversity-promoting module; and (3) scope restriction module. The proposed method improves the overall recommendation performance in terms of recall, and increases the diversity of the recommended candidates by 0.81% compared to the baselines. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 304,543 |
2401.00393 | Generative Model-Driven Synthetic Training Image Generation: An Approach to Cognition in Rail Defect Detection | Recent advancements in cognitive computing, with the integration of deep learning techniques, have facilitated the development of intelligent cognitive systems (ICS). This is particularly beneficial in the context of rail defect detection, where the ICS would emulate human-like analysis of image data for defect patterns. Despite the success of Convolutional Neural Networks (CNN) in visual defect classification, the scarcity of large datasets for rail defect detection remains a challenge due to infrequent accident events that would result in defective parts and images. Contemporary researchers have addressed this data scarcity challenge by exploring rule-based and generative data augmentation models. Among these, Variational Autoencoder (VAE) models can generate realistic data without extensive baseline datasets for noise modeling. This study proposes a VAE-based synthetic image generation technique for rail defects, incorporating weight decay regularization and image reconstruction loss to prevent overfitting. The proposed method is applied to create a synthetic dataset for the Canadian Pacific Railway (CPR) with just 50 real samples across five classes. Remarkably, 500 synthetic samples are generated with a minimal reconstruction loss of 0.021. A Visual Transformer (ViT) model underwent fine-tuning using this synthetic CPR dataset, achieving high accuracy rates (98%-99%) in classifying the five defect classes. This research offers a promising solution to the data scarcity challenge in rail defect detection, showcasing the potential for robust ICS development in this domain. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 418,973 |
1501.04232 | Maximum Entropy Models of Shortest Path and Outbreak Distributions in
Networks | Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 39,339 |
2408.05676 | A Decoding Acceleration Framework for Industrial Deployable LLM-based
Recommender Systems | Recently, increasing attention has been paid to LLM-based recommender systems, but their deployment is still under exploration in the industry. Most deployments utilize LLMs as feature enhancers, generating augmentation knowledge in the offline stage. However, in recommendation scenarios, involving numerous users and items, even offline generation with LLMs consumes considerable time and resources. This generation inefficiency stems from the autoregressive nature of LLMs, and a promising direction for acceleration is speculative decoding, a Draft-then-Verify paradigm that increases the number of generated tokens per decoding step. In this paper, we first identify that recommendation knowledge generation is suitable for retrieval-based speculative decoding. Then, we discern two characteristics: (1) extensive items and users in RSs bring retrieval inefficiency, and (2) RSs exhibit high diversity tolerance for text generated by LLMs. Based on the above insights, we propose a Decoding Acceleration Framework for LLM-based Recommendation (dubbed DARE), with Customized Retrieval Pool to improve retrieval efficiency and Relaxed Verification to increase the acceptance rate of draft tokens, respectively. Extensive experiments demonstrate that DARE achieves a 3-5x speedup and is compatible with various frameworks and backbone LLMs. DARE has also been deployed to online advertising scenarios within a large-scale commercial environment, achieving a 3.45x speedup while maintaining the downstream performance. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 479,885 |
1910.07860 | Can I teach a robot to replicate a line art | Line art is arguably one of the fundamental and versatile modes of expression. We propose a pipeline for a robot to look at a grayscale line art and redraw it. The key novel elements of our pipeline are: a) we propose a novel task of mimicking line drawings, b) to solve the pipeline we modify the Quick-draw dataset to obtain supervised training for converting a line drawing into a series of strokes, and c) we propose a multi-stage segmentation and graph interpretation pipeline for solving the problem. The resultant method has also been deployed on a CNC plotter as well as a robotic arm. We have trained several variations of the proposed methods and evaluate these on a dataset obtained from Quick-draw. Through the best methods we observe an accuracy of around 98% for this task, which is a significant improvement over the baseline architecture we adapted from. This therefore allows for deployment of the method on robots for replicating line art in a reliable manner. We also show that while the rule-based vectorization methods do suffice for simple drawings, they fail for more complicated sketches, unlike our method, which generalizes well to more complicated distributions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 149,725 |
2412.14615 | Additive codes attaining the Griesmer bound | Additive codes may have better parameters than linear codes. However, still very few cases are known and the explicit construction of such codes is a challenging problem. Here we show that a Griesmer type bound for the length of additive codes can always be attained with equality if the minimum distance is sufficiently large. This solves the problem for the optimal parameters of additive codes when the minimum distance is large and yields many infinite series of additive codes that outperform linear codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 518,790 |
2305.12534 | BertRLFuzzer: A BERT and Reinforcement Learning Based Fuzzer | We present a novel tool BertRLFuzzer, a BERT and Reinforcement Learning (RL) based fuzzer aimed at finding security vulnerabilities for Web applications. BertRLFuzzer works as follows: given a set of seed inputs, the fuzzer performs grammar-adhering and attack-provoking mutation operations on them to generate candidate attack vectors. The key insight of BertRLFuzzer is the use of RL with a BERT model as an agent to guide the fuzzer to efficiently learn grammar-adhering and attack-provoking mutation operators. In order to establish the efficacy of BertRLFuzzer we compare it against a total of 13 black box and white box fuzzers over a benchmark of 9 victim websites with over 16K LOC. We observed a significant improvement relative to the nearest competing tool in terms of time to first attack (54% less), new vulnerabilities found (17 new vulnerabilities), and attack rate (4.4% more attack vectors generated). | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 366,047 |
2410.00432 | Scalable Multi-Task Transfer Learning for Molecular Property Prediction | Molecules have a number of distinct properties whose importance and application vary. Often, in reality, labels for some properties are hard to obtain despite their practical importance. A common solution to such data scarcity is to use models of good generalization with transfer learning. This involves domain experts for designing source and target tasks whose features are shared. However, this approach has limitations: i) difficulty in accurately designing source-target task pairs due to the large number of tasks, ii) the corresponding computational burden of verifying many trials and errors of transfer learning design, thereby iii) constraining the potential of foundation modeling of multi-task molecular property prediction. We address the limitations of the manual design of transfer learning via data-driven bi-level optimization. The proposed method enables scalable multi-task transfer learning for molecular property prediction by automatically obtaining the optimal transfer ratios. Empirically, the proposed method improved the prediction performance of 40 molecular properties and accelerated training convergence. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 493,377 |
2305.10167 | Pragmatic Reasoning in Structured Signaling Games | In this work we introduce a structured signaling game, an extension of the classical signaling game with a similarity structure between meanings in the context, along with a variant of the Rational Speech Act (RSA) framework which we call structured-RSA (sRSA) for pragmatic reasoning in structured domains. We explore the behavior of the sRSA in the domain of color and show that pragmatic agents using sRSA on top of semantic representations, derived from the World Color Survey, attain efficiency very close to the information theoretic limit after only 1 or 2 levels of recursion. We also explore the interaction between pragmatic reasoning and learning in a multi-agent reinforcement learning framework. Our results illustrate that artificial agents using sRSA develop communication closer to the information theoretic frontier compared to agents using RSA and just reinforcement learning. We also find that the ambiguity of the semantic representation increases as the pragmatic agents are allowed to perform deeper reasoning about each other during learning. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 364,939 |
2206.14255 | Target alignment in truncated kernel ridge regression | Kernel ridge regression (KRR) has recently attracted renewed interest due to its potential for explaining the transient effects, such as double descent, that emerge during neural network training. In this work, we study how the alignment between the target function and the kernel affects the performance of the KRR. We focus on the truncated KRR (TKRR) which utilizes an additional parameter that controls the spectral truncation of the kernel matrix. We show that for polynomial alignment, there is an \emph{over-aligned} regime, in which TKRR can achieve a faster rate than what is achievable by full KRR. The rate of TKRR can improve all the way to the parametric rate, while that of full KRR is capped at a sub-optimal value. This shows that target alignment can be better leveraged by utilizing spectral truncation in kernel methods. We also consider the bandlimited alignment setting and show that the regularization surface of TKRR can exhibit transient effects including multiple descent and non-monotonic behavior. Our results show that there is a strong and quantifiable relation between the shape of the \emph{alignment spectrum} and the generalization performance of kernel methods, both in terms of rates and in finite samples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 305,218 |
0812.1560 | Achievable Rates and Training Optimization for Fading Relay Channels
with Memory | In this paper, transmission over time-selective, flat fading relay channels is studied. It is assumed that channel fading coefficients are not known a priori. Transmission takes place in two phases: network training phase and data transmission phase. In the training phase, pilot symbols are sent and the receivers employ single-pilot MMSE estimation or noncausal Wiener filter to learn the channel. Amplify-and-Forward (AF) and Decode-and-Forward (DF) techniques are considered in the data transmission phase and achievable rate expressions are obtained. The training period, and data and training power allocations are jointly optimized by using the achievable rate expressions. Numerical results are obtained considering Gauss-Markov and lowpass fading models. Achievable rates are computed and energy-per-bit requirements are investigated. The optimal power distributions among pilot and data symbols are provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,765 |
2110.14895 | Pipeline Parallelism for Inference on Heterogeneous Edge Computing | Deep neural networks with large model sizes achieve state-of-the-art results for tasks in computer vision (CV) and natural language processing (NLP). However, these large-scale models are too compute- or memory-intensive for resource-constrained edge devices. Prior works on parallel and distributed execution primarily focus on training -- rather than inference -- using homogeneous accelerators in data centers. We propose EdgePipe, a distributed framework for edge systems that uses pipeline parallelism to both speed up inference and enable running larger (and more accurate) models that otherwise cannot fit on single edge devices. EdgePipe achieves these results by using an optimal partition strategy that considers heterogeneity in compute, memory, and network bandwidth. Our empirical evaluation demonstrates that EdgePipe achieves $10.59\times$ and $11.88\times$ speedup using 16 edge devices for the ViT-Large and ViT-Huge models, respectively, with no accuracy loss. Similarly, EdgePipe improves ViT-Huge throughput by $3.93\times$ over a 4-node baseline using 16 edge devices, which independently cannot fit the model in memory. Finally, we show up to $4.16\times$ throughput improvement over the state-of-the-art PipeDream when using a heterogeneous set of devices. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 263,685 |
2410.11924 | A Prompt-Guided Spatio-Temporal Transformer Model for National-Wide
Nuclear Radiation Forecasting | Nuclear radiation (NR), which refers to the energy emitted from atomic nuclei during decay, poses substantial risks to human health and environmental safety. Accurate forecasting of nuclear radiation levels is crucial for informed decision-making by both individuals and governments. However, this task is challenging due to the imbalanced distribution of monitoring stations over a wide spatial range and the non-stationary radiation variation patterns. In this study, we introduce NRFormer, an innovative framework tailored for nationwide prediction of nuclear radiation variations. By integrating a non-stationary temporal attention module, an imbalance-aware spatial attention module, and a radiation propagation prompting module, NRFormer collectively captures complex spatio-temporal dynamics of nuclear radiation. Extensive experiments on two real-world datasets demonstrate the superiority of our proposed framework against seven baselines. This research not only enhances the accuracy and reliability in nuclear radiation forecasting but also contributes to advancing emergency response strategies and monitoring systems, thereby safeguarding environmental and public health. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 498,782 |
1608.05204 | Refining Geometry from Depth Sensors using IR Shading Images | We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses on the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on a time-of-flight (ToF) technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 59,946 |
1310.0307 | Using the Random Sprays Retinex Algorithm for Global Illumination
Estimation | In this paper the use of the Random Sprays Retinex (RSR) algorithm for global illumination estimation is proposed and its feasibility tested. Like other algorithms based on the Retinex model, RSR also provides local illumination estimation and brightness adjustment for each pixel, and it is faster than other path-wise Retinex algorithms. As the assumption of uniform illumination holds in many cases, it should be possible to use the mean of the local illumination estimations of RSR as a global illumination estimation for images with (assumed) uniform illumination, also allowing the accuracy to be easily measured. Therefore we propose a method for global illumination estimation based on local RSR results. To the best of our knowledge this is the first time that the RSR algorithm is used to obtain a global illumination estimation. For our tests we use a publicly available color constancy image database. The results are presented and discussed, and it turns out that the proposed method outperforms many existing unsupervised color constancy algorithms. The source code is available at http://www.fer.unizg.hr/ipg/resources/color_constancy/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 27,472 |
2203.00826 | Using Geographic Load Shifting to Reduce Carbon Emissions | An increasing focus on the electricity use and carbon emissions associated with computing has led to pledges by major cloud computing companies to lower their carbon footprint. Data centers have a unique ability to shift computing load between different geographical locations, giving rise to geographic load flexibility that can be employed to reduce carbon emissions. In this paper, we present a model where data centers shift load independently of the ISOs. We first consider the impact of load shifting guided by locational marginal carbon emissions, denoted by $\lambda_{\text{CO}_2}$, a sensitivity metric that measures the impact of incremental load shifts. Relative to previous models for data center load shifting, the presented model improves accuracy and includes more realistic assumptions regarding the operation of both data centers and the electricity market. Further, we introduce a new benchmark model in which data centers have access to the full information about the power system and can identify optimal shifts for the current time period. We demonstrate the efficacy of our model on the IEEE RTS GMLC system using 5 minute load and generation data for an entire year. Our results show that the proposed accuracy improvements for the shifting model based on $\lambda_{\text{CO}_2}$ are highly effective, leading to results that outperform the benchmark model. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 283,135 |
2006.10864 | PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier | Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications: both unanticipated as well as adversarial misclassifications can result in fatal outcomes. As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs -- i.e. polytopic specifications on the input and output of the network. Like some other approaches, ours uses a relaxed convex program to mitigate the combinatorial complexity of the problem. However, unique in our approach is the way we use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions. In particular, we encode each ReLU by means of the usual linear constraints, and combine this with a convex objective function that penalizes the discrepancy between the output of each neuron and its relaxation. This convex function is further structured to force the largest relaxations to appear closest to the input layer; this provides the further benefit that the most problematic neurons are conditioned as early as possible, when conditioning layer by layer. This paradigm can be leveraged to create a verification algorithm that is not only faster in general than competing approaches, but is also able to verify considerably more safety properties; we evaluated PEREGRiNN on a standard MNIST robustness verification suite to substantiate these claims. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 183,024 |
2406.16568 | Star+: A New Multi-Domain Model for CTR Prediction | In this paper, we introduce Star+, a novel multi-domain model for click-through rate (CTR) prediction inspired by the Star model. Traditional single-domain approaches and existing multi-task learning techniques face challenges in multi-domain environments due to their inability to capture domain-specific data distributions and complex inter-domain relationships. Star+ addresses these limitations by enhancing the interaction between shared and domain-specific information through various fusion strategies, such as add, adaptive add, concatenation, and gating fusions, to find the optimal balance between domain-specific and shared information. We also investigate the impact of different normalization techniques, including layer normalization, batch normalization, and partition normalization, on the performance of our model. Our extensive experiments on both industrial and public datasets demonstrate that Star+ significantly improves prediction accuracy and efficiency. This work contributes to the advancement of recommendation systems by providing a robust, scalable, and adaptive solution for multi-domain environments. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 467,186 |
1801.09031 | Improving Word Vector with Prior Knowledge in Semantic Dictionary | Using low dimensional vector space to represent words has been very effective in many NLP tasks. However, it doesn't work well when faced with the problem of rare and unseen words. In this paper, we propose to leverage the knowledge in a semantic dictionary in combination with some morphological information to build an enhanced vector space. We get an improvement of 2.3% over the state-of-the-art HeidelTime system in temporal expression recognition, and obtain a large gain in other named entity recognition (NER) tasks. The semantic dictionary HowNet alone also shows promising results in computing lexical similarity. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 89,031 |
2309.10546 | Mean Absolute Directional Loss as a New Loss Function for Machine
Learning Problems in Algorithmic Investment Strategies | This paper investigates the issue of an adequate loss function in the optimization of machine learning models used in the forecasting of financial time series for the purpose of algorithmic investment strategies (AIS) construction. We propose the Mean Absolute Directional Loss (MADL) function, solving important problems of classical forecast error functions in extracting information from forecasts to create efficient buy/sell signals in algorithmic investment strategies. Finally, based on the data from two different asset classes (cryptocurrencies: Bitcoin and commodities: Crude Oil), we show that the new loss function enables us to select better hyperparameters for the LSTM model and obtain more efficient investment strategies, with regard to risk-adjusted return metrics on the out-of-sample data. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 393,056 |
2309.10772 | Interactive Distillation of Large Single-Topic Corpora of Scientific
Papers | Highly specific datasets of scientific literature are important for both research and education. However, it is difficult to build such datasets at scale. A common approach is to build these datasets reductively by applying topic modeling on an established corpus and selecting specific topics. A more robust but time-consuming approach is to build the dataset constructively in which a subject matter expert (SME) handpicks documents. This method does not scale and is prone to error as the dataset grows. Here we showcase a new tool, based on machine learning, for constructively generating targeted datasets of scientific literature. Given a small initial "core" corpus of papers, we build a citation network of documents. At each step of the citation network, we generate text embeddings and visualize the embeddings through dimensionality reduction. Papers are kept in the dataset if they are "similar" to the core or are otherwise pruned through human-in-the-loop selection. Additional insight into the papers is gained through sub-topic modeling using SeNMFk. We demonstrate our new tool for literature review by applying it to two different fields in machine learning. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | true | 393,145 |
1707.08951 | Handwritten character recognition using some (anti)-diagonal structural
features | In this paper, we present a methodology for off-line handwritten character recognition. The proposed methodology relies on a new feature extraction technique based on structural characteristics, histograms and profiles. As novelty, we propose the extraction of eight new histograms and four profiles from the $32\times 32$ matrices that represent the characters, creating 256-dimension feature vectors. These feature vectors are then employed in a classification step that uses a $k$-means algorithm. We performed experiments using the NIST database to evaluate our proposal. Namely, the recognition system was trained using 1000 samples and 64 classes for each symbol and was tested on 500 samples for each symbol. We obtain promising accuracy results that vary from 81.74\% to 93.75\%, depending on the difficulty of the character category, showing better accuracy results than other methods from the state of the art also based on structural characteristics. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 77,924 |
2408.07648 | See It All: Contextualized Late Aggregation for 3D Dense Captioning | 3D dense captioning is a task to localize objects in a 3D scene and generate descriptive sentences for each object. Recent approaches in 3D dense captioning have adopted transformer encoder-decoder frameworks from object detection to build an end-to-end pipeline without hand-crafted components. However, these approaches struggle with contradicting objectives where a single query attention has to simultaneously view both the tightly localized object regions and contextual environment. To overcome this challenge, we introduce SIA (See-It-All), a transformer pipeline that engages in 3D dense captioning with a novel paradigm called late aggregation. SIA simultaneously decodes two sets of queries-context query and instance query. The instance query focuses on localization and object attribute descriptions, while the context query versatilely captures the region-of-interest of relationships between multiple objects or with the global scene, then aggregated afterwards (i.e., late aggregation) via simple distance-based measures. To further enhance the quality of contextualized caption generation, we design a novel aggregator to generate a fully informed caption based on the surrounding context, the global environment, and object instances. Extensive experiments on two of the most widely-used 3D dense captioning datasets demonstrate that our proposed method achieves a significant improvement over prior methods. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 480,674 |
2409.01389 | CV-Probes: Studying the interplay of lexical and world knowledge in
visually grounded verb understanding | This study investigates the ability of various vision-language (VL) models to ground context-dependent and non-context-dependent verb phrases. To do that, we introduce the CV-Probes dataset, designed explicitly for studying context understanding, containing image-caption pairs with context-dependent verbs (e.g., "beg") and non-context-dependent verbs (e.g., "sit"). We employ the MM-SHAP evaluation to assess the contribution of verb tokens towards model predictions. Our results indicate that VL models struggle to ground context-dependent verb phrases effectively. These findings highlight the challenges in training VL models to integrate context accurately, suggesting a need for improved methodologies in VL model training and evaluation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 485,319 |
1809.05361 | Advanced Soccer Skills and Team Play of RoboCup 2017 TeenSize Winner
NimbRo | In order to pursue the vision of the RoboCup Humanoid League of beating the soccer world champion by 2050, new rules and competitions are added or modified each year, fostering novel technological advances. In 2017, the number of players in the TeenSize class soccer games was increased to 3 vs. 3, which allowed for more team play strategies. Improvements in individual skills were also demanded through a set of technical challenges. This paper presents the latest individual skills and team play developments used in RoboCup 2017 that led our team NimbRo to win the 2017 TeenSize soccer tournament, the technical challenges, and the drop-in games. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 107,779 |
2209.15368 | Inharmonious Region Localization by Magnifying Domain Discrepancy | Inharmonious region localization aims to localize the region in a synthetic image which is incompatible with the surrounding background. The inharmony issue is mainly attributed to the color and illumination inconsistency produced by image editing techniques. In this work, we propose to transform the input image to another color space to magnify the domain discrepancy between the inharmonious region and the background, so that the model can identify the inharmonious region more easily. To this end, we present a novel framework consisting of a color mapping module and an inharmonious region localization network, in which the former is equipped with a novel domain discrepancy magnification loss and the latter could be an arbitrary localization network. Extensive experiments on an image harmonization dataset show the superiority of our designed framework. Our code is available at https://github.com/bcmi/MadisNet-Inharmonious-Region-Localization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 320,580 |
2212.06905 | Query Time Optimized Deep Learning Based Video Inference System | This is a project report about how we tune Focus[1], a video inference system that provides low cost and low latency, through two phases. In this report, we will decrease the query time by saving the middle layer output of the neural network. This is a trade-off strategy that involves using more space to save time. We show how this scheme works using prototype systems, and it saves 20% of the time. The code repository URL is here, https://github.com/iphyer/CS744 FocousIngestOpt. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 336,243 |
0905.4163 | Cyclic Codes over Some Finite Rings | In this paper cyclic codes are established with respect to the Mannheim metric over some finite rings by using Gaussian integers and the decoding algorithm for these codes is given. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,771 |
2207.07915 | On Curating Responsible and Representative Healthcare Video Recommendations for Patient Education and Health Literacy: An Augmented Intelligence Approach | Studies suggest that one in three US adults use the Internet to diagnose or learn about a health concern. However, such access to health information online could exacerbate the disparities in health information availability and use. Health information seeking behavior (HISB) refers to the ways in which individuals seek information about their health, risks, illnesses, and health-protective behaviors. For patients engaging in searches for health information on digital media platforms, health literacy divides can be exacerbated both by their own lack of knowledge and by algorithmic recommendations, with results that disproportionately impact disadvantaged populations, minorities, and low health literacy users. This study reports on an exploratory investigation of the above challenges by examining whether responsible and representative recommendations can be generated using advanced analytic methods applied to a large corpus of videos and their metadata on a chronic condition (diabetes) from the YouTube social media platform. The paper focusses on biases associated with demographic characteristics of actors using videos on diabetes that were retrieved and curated for multiple criteria such as encoded medical content and their understandability to address patient education and population health literacy needs. This approach offers an immense opportunity for innovation in human-in-the-loop, augmented-intelligence, bias-aware and responsible algorithmic recommendations by combining the perspectives of health professionals and patients into a scalable and generalizable machine learning framework for patient empowerment and improved health outcomes. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 308,372 |
2501.18965 | The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training | We show that learning-rate schedules for large model training behave surprisingly similar to a performance bound from non-smooth convex optimization theory. We provide a bound for the constant schedule with linear cooldown; in particular, the practical benefit of cooldown is reflected in the bound due to the absence of logarithmic terms. Further, we show that this surprisingly close match between optimization theory and practice can be exploited for learning-rate tuning: we achieve noticeable improvements for training 124M and 210M Llama-type models by (i) extending the schedule for continued training with optimal learning-rate, and (ii) transferring the optimal learning-rate across schedules. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 528,943 |
2206.00772 | On the reversibility of adversarial attacks | Adversarial attacks modify images with perturbations that change the prediction of classifiers. These modified images, known as adversarial examples, expose the vulnerabilities of deep neural network classifiers. In this paper, we investigate the predictability of the mapping between the classes predicted for original images and for their corresponding adversarial examples. This predictability relates to the possibility of retrieving the original predictions and hence reversing the induced misclassification. We refer to this property as the reversibility of an adversarial attack, and quantify reversibility as the accuracy in retrieving the original class or the true class of an adversarial example. We present an approach that reverses the effect of an adversarial attack on a classifier using a prior set of classification results. We analyse the reversibility of state-of-the-art adversarial attacks on benchmark classifiers and discuss the factors that affect the reversibility. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 300,242 |
1607.02613 | New approach to Bayesian high-dimensional linear regression | Consider the problem of estimating parameters $X^n \in \mathbb{R}^n $, generated by a stationary process, from $m$ response variables $Y^m = AX^n+Z^m$, under the assumption that the distribution of $X^n$ is known. This is the most general version of the Bayesian linear regression problem. The lack of computationally feasible algorithms that can employ generic prior distributions and provide a good estimate of $X^n$ has limited the set of distributions researchers use to model the data. In this paper, a new scheme called Q-MAP is proposed. The new method has the following properties: (i) It has similarities to the popular MAP estimation under the noiseless setting. (ii) In the noiseless setting, it achieves the "asymptotically optimal performance" when $X^n$ has independent and identically distributed components. (iii) It scales favorably with the dimensions of the problem and therefore is applicable to high-dimensional setups. (iv) The solution of the Q-MAP optimization can be found via a proposed iterative algorithm which is provably robust to the error (noise) in the response variables. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 58,382 |
1805.08594 | Neural Generative Models for Global Optimization with Gradients | The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones. In this paper we focus on the subproblem of global optimization for differentiable functions and we propose an Evolutionary Search-inspired solution where we model point search distributions via Generative Neural Networks. This approach enables us to model diverse and complex search distributions based on which we can efficiently explore complicated objective landscapes. In our experiments we show the practical superiority of our algorithm versus classical Evolutionary Search and gradient-based solutions on a benchmark set of multimodal functions, and demonstrate how it can be used to accelerate Bayesian Optimization with Gaussian Processes. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 98,185 |
0705.4485 | Mixed membership stochastic blockmodels | Observations consisting of measurements on relationships for pairs of objects arise in many settings, such as protein interaction and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing such data with probabilistic models can be delicate because the simple exchangeability assumptions underlying many boilerplate models no longer hold. In this paper, we describe a latent variable model of such data called the mixed membership stochastic blockmodel. This model extends blockmodels for relational data to ones which capture mixed membership latent relational structure, thus providing an object-specific low-dimensional representation. We develop a general variational inference algorithm for fast approximate posterior inference. We explore applications to social and protein interaction networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 294 |
2004.05155 | Learning to Explore using Active Neural SLAM | This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called `Active Neural SLAM'. Our approach leverages the strengths of both classical and learning-based methods, by using analytical path planners with a learned SLAM module, and global and local policies. The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies). Such use of learning within each module retains its benefits, while at the same time, hierarchical decomposition and modular training allow us to sidestep the high sample complexities associated with training end-to-end policies. Our experiments in visually and physically realistic simulated 3D environments demonstrate the effectiveness of our approach over past learning and geometry-based approaches. The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 172,106 |
2309.03904 | Exploring Sparse MoE in GANs for Text-conditioned Image Synthesis | Due to the difficulty in scaling up, generative adversarial networks (GANs) seem to be falling from grace on the task of text-conditioned image synthesis. Sparsely-activated mixture-of-experts (MoE) has recently been demonstrated as a valid solution to training large-scale models with limited computational resources. Inspired by such a philosophy, we present Aurora, a GAN-based text-to-image generator that employs a collection of experts to learn feature processing, together with a sparse router to help select the most suitable expert for each feature point. To faithfully decode the sampling stochasticity and the text condition to the final synthesis, our router adaptively makes its decision by taking into account the text-integrated global latent code. At 64x64 image resolution, our model trained on LAION2B-en and COYO-700M achieves 6.2 zero-shot FID on MS COCO. We release the code and checkpoints to facilitate the community for further development. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 390,555 |
1904.10699 | The VIA Annotation Software for Images, Audio and Video | In this paper, we introduce a simple and standalone manual annotation tool for images, audio and video: the VGG Image Annotator (VIA). This is a light weight, standalone and offline software package that does not require any installation or setup and runs solely in a web browser. The VIA software allows human annotators to define and describe spatial regions in images or video frames, and temporal segments in audio or video. These manual annotations can be exported to plain text data formats such as JSON and CSV and therefore are amenable to further processing by other software tools. VIA also supports collaborative annotation of a large dataset by a group of human annotators. The BSD open source license of this software allows it to be used in any academic project or commercial application. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,692 |
2106.08905 | Structure First Detail Next: Image Inpainting with Pyramid Generator | Recent deep generative models have achieved promising performance in image inpainting. However, it is still very challenging for a neural network to generate realistic image details and textures, due to its inherent spectral bias. By our understanding of how artists work, we suggest to adopt a `structure first detail next' workflow for image inpainting. To this end, we propose to build a Pyramid Generator by stacking several sub-generators, where lower-layer sub-generators focus on restoring image structures while the higher-layer sub-generators emphasize image details. Given an input image, it will be gradually restored by going through the entire pyramid in a bottom-up fashion. Particularly, our approach has a learning scheme of progressively increasing hole size, which allows it to restore large-hole images. In addition, our method could fully exploit the benefits of learning with high-resolution images, and hence is suitable for high-resolution image inpainting. Extensive experimental results on benchmark datasets have validated the effectiveness of our approach compared with state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 241,474 |
1912.11464 | Attack-Resistant Federated Learning with Residual-based Reweighting | Federated learning has a variety of applications in multiple domains by utilizing private training data stored on different devices. However, the aggregation process in federated learning is highly vulnerable to adversarial attacks so that the global model may behave abnormally under attacks. To tackle this challenge, we present a novel aggregation algorithm with residual-based reweighting to defend federated learning. Our aggregation algorithm combines repeated median regression with the reweighting scheme in iteratively reweighted least squares. Our experiments show that our aggregation algorithm outperforms other alternative algorithms in the presence of label-flipping and backdoor attacks. We also provide theoretical analysis for our aggregation algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 158,571 |
2301.05494 | Multilingual Detection of Check-Worthy Claims using World Languages and Adapter Fusion | Check-worthiness detection is the task of identifying claims, worthy to be investigated by fact-checkers. Resource scarcity for non-world languages and model learning costs remain major challenges for the creation of models supporting multilingual check-worthiness detection. This paper proposes cross-training adapters on a subset of world languages, combined by adapter fusion, to detect claims emerging globally in multiple languages. (1) With a vast number of annotators available for world languages and the storage-efficient adapter models, this approach is more cost efficient. Models can be updated more frequently and thus stay up-to-date. (2) Adapter fusion provides insights and allows for interpretation regarding the influence of each adapter model on a particular language. The proposed solution often outperformed the top multilingual approaches in our benchmark tasks. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 340,362 |
2309.10886 | GelSight Svelte Hand: A Three-finger, Two-DoF, Tactile-rich, Low-cost Robot Hand for Dexterous Manipulation | This paper presents GelSight Svelte Hand, a novel 3-finger 2-DoF tactile robotic hand that is capable of performing precision grasps, power grasps, and intermediate grasps. Rich tactile signals are obtained from one camera on each finger, with an extended sensing area similar to the full length of a human finger. Each finger of GelSight Svelte Hand is supported by a semi-rigid endoskeleton and covered with soft silicone materials, which provide both rigidity and compliance. We describe the design, fabrication, functionalities, and tactile sensing capability of GelSight Svelte Hand in this paper. More information is available on our website: \url{https://gelsight-svelte.alanz.info}. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 393,186 |
2110.15444 | 10 Security and Privacy Problems in Large Foundation Models | Foundation models--such as GPT, CLIP, and DINO--have achieved revolutionary progress in the past several years and are commonly believed to be a promising approach for general-purpose AI. In particular, self-supervised learning is adopted to pre-train a foundation model using a large amount of unlabeled data. A pre-trained foundation model is like an ``operating system'' of the AI ecosystem. Specifically, a foundation model can be used as a feature extractor for many downstream tasks with little or no labeled training data. Existing studies on foundation models mainly focused on pre-training a better foundation model to improve its performance on downstream tasks in non-adversarial settings, leaving its security and privacy in adversarial settings largely unexplored. A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem. In this book chapter, we discuss 10 basic security and privacy problems for the pre-trained foundation models, including six confidentiality problems, three integrity problems, and one availability problem. For each problem, we discuss potential opportunities and challenges. We hope our book chapter will inspire future research on the security and privacy of foundation models. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 263,881 |
2206.03183 | Risk Measures and Upper Probabilities: Coherence and Stratification | Machine learning typically presupposes classical probability theory which implies that aggregation is built upon expectation. There are now multiple reasons to motivate looking at richer alternatives to classical probability theory as a mathematical foundation for machine learning. We systematically examine a powerful and rich class of alternative aggregation functionals, known variously as spectral risk measures, Choquet integrals or Lorentz norms. We present a range of characterization results, and demonstrate what makes this spectral family so special. In doing so we arrive at a natural stratification of all coherent risk measures in terms of the upper probabilities that they induce by exploiting results from the theory of rearrangement invariant Banach spaces. We empirically demonstrate how this new approach to uncertainty helps tackling practical machine learning problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,173 |
2104.06022 | Lessons on Parameter Sharing across Layers in Transformers | We propose a parameter sharing method for Transformers (Vaswani et al., 2017). The proposed approach relaxes a widely used technique, which shares parameters for one layer with all layers such as Universal Transformers (Dehghani et al., 2019), to increase the efficiency in the computational time. We propose three strategies: Sequence, Cycle, and Cycle (rev) to assign parameters to each layer. Experimental results show that the proposed strategies are efficient in the parameter size and computational time. Moreover, we indicate that the proposed strategies are also effective in the configuration where we use many training data such as the recent WMT competition. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 229,937 |
1909.10031 | LuNet: A Deep Neural Network for Network Intrusion Detection | Network attack is a significant security issue for modern society. From small mobile devices to large cloud platforms, almost all computing products, used in our daily life, are networked and potentially under the threat of network intrusion. With the fast-growing network users, network intrusions become more and more frequent, volatile and advanced. Being able to capture intrusions in time for such a large scale network is critical and very challenging. To this end, the machine learning (or AI) based network intrusion detection (NID), due to its intelligent capability, has drawn increasing attention in recent years. Compared to the traditional signature-based approaches, the AI-based solutions are more capable of detecting variants of advanced network attacks. However, the high detection rate achieved by the existing designs is usually accompanied by a high rate of false alarms, which may significantly discount the overall effectiveness of the intrusion detection system. In this paper, we consider the existence of spatial and temporal features in the network traffic data and propose a hierarchical CNN+RNN neural network, LuNet. In LuNet, the convolutional neural network (CNN) and the recurrent neural network (RNN) learn input traffic data in sync with a gradually increasing granularity such that both spatial and temporal features of the data can be effectively extracted. Our experiments on two network traffic datasets show that compared to the state-of-the-art network intrusion detection techniques, LuNet not only offers a high level of detection capability but also has a much lower rate of false-positive alarms. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 146,429 |
2305.20065 | Latent Exploration for Reinforcement Learning | In Reinforcement Learning, agents learn policies by exploring and interacting with the environment. Due to the curse of dimensionality, learning policies that map high-dimensional sensory input to motor output is particularly challenging. During training, state of the art methods (SAC, PPO, etc.) explore the environment by perturbing the actuation with independent Gaussian noise. While this unstructured exploration has proven successful in numerous tasks, it can be suboptimal for overactuated systems. When multiple actuators, such as motors or muscles, drive behavior, uncorrelated perturbations risk diminishing each other's effect, or modifying the behavior in a task-irrelevant way. While solutions to introduce time correlation across action perturbations exist, introducing correlation across actuators has been largely ignored. Here, we propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network, which can be seamlessly integrated with on- and off-policy algorithms. We demonstrate that the noisy actions generated by perturbing the network's activations can be modeled as a multivariate Gaussian distribution with a full covariance matrix. In the PyBullet locomotion tasks, Lattice-SAC achieves state of the art results, and reaches 18% higher reward than unstructured exploration in the Humanoid environment. In the musculoskeletal control environments of MyoSuite, Lattice-PPO achieves higher reward in most reaching and object manipulation tasks, while also finding more energy-efficient policies with reductions of 20-60%. Overall, we demonstrate the effectiveness of structured action noise in time and actuator space for complex motor control tasks. The code is available at: https://github.com/amathislab/lattice. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 369,802 |
1011.5124 | Delay Constrained Utility Maximization in Multihop Random Access Networks | Multi-hop random access networks have received much attention due to their distributed nature, which facilitates deploying many new applications over sensor and computer networks. Recently, the utility maximization framework has been applied to optimize the performance of such networks; however, the proposed algorithms result in large transmission delays. In this paper, we will analyze delay in random access multi-hop networks and solve the delay-constrained utility maximization problem. We define the network utility as a combination of rate utility and energy cost functions and solve the following two problems: 'optimal medium access control with link delay constraint' and 'optimal congestion and contention control with end-to-end delay constraint'. The optimal tradeoff between delay, rate, and energy is achieved for different values of delay constraint and the scaling factors between rate and energy. Eventually, linear and super-linear distributed optimization solutions are proposed for each problem and their performance is compared in terms of convergence and complexity. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | true | 8,314 |
2310.16960 | Privately Aligning Language Models with Reinforcement Learning | Positioned between pre-training and user deployment, aligning large language models (LLMs) through reinforcement learning (RL) has emerged as a prevailing strategy for training instruction following-models such as ChatGPT. In this work, we initiate the study of privacy-preserving alignment of LLMs through Differential Privacy (DP) in conjunction with RL. Following the influential work of Ziegler et al. (2020), we study two dominant paradigms: (i) alignment via RL without human in the loop (e.g., positive review generation) and (ii) alignment via RL from human feedback (RLHF) (e.g., summarization in a human-preferred way). We give a new DP framework to achieve alignment via RL, and prove its correctness. Our experimental results validate the effectiveness of our approach, offering competitive utility while ensuring strong privacy protections. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 402,935 |
2203.01652 | Informative Path Planning for Active Learning in Aerial Semantic Mapping | Semantic segmentation of aerial imagery is an important tool for mapping and earth observation. However, supervised deep learning models for segmentation rely on large amounts of high-quality labelled data, which is labour-intensive and time-consuming to generate. To address this, we propose a new approach for using unmanned aerial vehicles (UAVs) to autonomously collect useful data for model training. We exploit a Bayesian approach to estimate model uncertainty in semantic segmentation. During a mission, the semantic predictions and model uncertainty are used as input for terrain mapping. A key aspect of our pipeline is to link the mapped model uncertainty to a robotic planning objective based on active learning. This enables us to adaptively guide a UAV to gather the most informative terrain images to be labelled by a human for model training. Our experimental evaluation on real-world data shows the benefit of using our informative planning approach in comparison to static coverage paths in terms of maximising model performance and reducing labelling efforts. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 283,466 |
2204.09437 | Search-based Methods for Multi-Cloud Configuration | Multi-cloud computing has become increasingly popular with enterprises looking to avoid vendor lock-in. While most cloud providers offer similar functionality, they may differ significantly in terms of performance and/or cost. A customer looking to benefit from such differences will naturally want to solve the multi-cloud configuration problem: given a workload, which cloud provider should be chosen and how should its nodes be configured in order to minimize runtime or cost? In this work, we consider solutions to this optimization problem. We develop and evaluate possible adaptations of state-of-the-art cloud configuration solutions to the multi-cloud domain. Furthermore, we identify an analogy between multi-cloud configuration and the selection-configuration problems commonly studied in the automated machine learning (AutoML) field. Inspired by this connection, we utilize popular optimizers from AutoML to solve multi-cloud configuration. Finally, we propose a new algorithm for solving multi-cloud configuration, CloudBandit (CB). It treats the outer problem of cloud provider selection as a best-arm identification problem, in which each arm pull corresponds to running an arbitrary black-box optimizer on the inner problem of node configuration. Our experiments indicate that (a) many state-of-the-art cloud configuration solutions can be adapted to multi-cloud, with best results obtained for adaptations which utilize the hierarchical structure of the multi-cloud configuration domain, (b) hierarchical methods from AutoML can be used for the multi-cloud configuration task and can outperform state-of-the-art cloud configuration solutions and (c) CB achieves competitive or lower regret relative to other tested algorithms, whilst also identifying configurations that have 65% lower median cost and 20% lower median time in production, compared to choosing a random provider and configuration. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 292,441 |
2402.01481 | Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains | In recent years, there has been a surge in the development of 3D structure-based pre-trained protein models, representing a significant advancement over pre-trained protein language models in various downstream tasks. However, most existing structure-based pre-trained models primarily focus on the residue level, i.e., alpha carbon atoms, while ignoring other atoms like side chain atoms. We argue that modeling proteins at both residue and atom levels is important since the side chain atoms can also be crucial for numerous downstream tasks, for example, molecular docking. Nevertheless, we find that naively combining residue and atom information during pre-training typically fails. We identify a key reason is the information leakage caused by the inclusion of atom structure in the input, which renders residue-level pre-training tasks trivial and results in insufficiently expressive residue representations. To address this issue, we introduce a span mask pre-training strategy on 3D protein chains to learn meaningful representations of both residues and atoms. This leads to a simple yet effective approach to learning protein representation suitable for diverse downstream tasks. Extensive experimental results on binding site prediction and function prediction tasks demonstrate our proposed pre-training approach significantly outperforms other methods. Our code will be made public. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 426,042 |
2306.14237 | A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks | Federated Learning (FL) has emerged as a decentralized technique, where contrary to traditional centralized approaches, devices perform a model training in a collaborative manner, while preserving data privacy. Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified. Towards mitigating the carbon footprint of FL, the current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization, by orchestrating the computational and communication resources of the involved devices, while guaranteeing a certain FL model performance target. A penalty function is introduced in the offline phase of the GA that penalizes the strategies that violate the constraints of the environment, ensuring a safe GA process. Evaluation results show the effectiveness of the proposed scheme compared to two state-of-the-art baseline solutions, achieving a decrease of up to 83% in the total energy consumption. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 375,601 |
2401.09471 | Brain Tumor Radiogenomic Classification | The RSNA-MICCAI brain tumor radiogenomic classification challenge aimed to predict MGMT biomarker status in glioblastoma through binary classification on multi-parameter mpMRI scans: T1w, T1wCE, T2w and FLAIR. The dataset is split into three main cohorts: a training set and a validation set, which were used during training, and a testing set, which was used only during the final evaluation. Images were in either DICOM or PNG format. Different architectures were used to investigate the problem, including the 3D version of the Vision Transformer (ViT3D), ResNet50, Xception and EfficientNet-B3. AUC was used as the main evaluation metric, and the results showed an advantage for both the ViT3D and Xception models, achieving 0.6015 and 0.61745 respectively on the testing set. Compared to other results, our results proved to be valid given the complexity of the task. Further improvements can be made by exploring different strategies, different architectures and more diverse datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 422,275 |
2001.03573 | Should Artificial Intelligence Governance be Centralised? Design Lessons
from History | Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking-in an inadequate structure may pose a fate worse than fragmentation. Second, for now fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 160,014 |
0903.5054 | Flow of Activity in the Ouroboros Model | The Ouroboros Model is a new conceptual proposal for an algorithmic structure for efficient data processing in living beings as well as for artificial agents. Its central feature is a general repetitive loop where one iteration cycle sets the stage for the next. Sensory input activates data structures (schemata) with similar constituents encountered before, thus expectations are kindled. This corresponds to the highlighting of empty slots in the selected schema, and these expectations are compared with the actually encountered input. Depending on the outcome of this consumption analysis different next steps like search for further data or a reset, i.e. a new attempt employing another schema, are triggered. Monitoring of the whole process, and in particular of the flow of activation directed by the consumption analysis, yields valuable feedback for the optimum allocation of attention and resources including the selective establishment of useful new memory entries. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 3,435 |
2309.05380 | Collective PV-RCNN: A Novel Fusion Technique using Collective Detections
for Enhanced Local LiDAR-Based Perception | Comprehensive perception of the environment is crucial for the safe operation of autonomous vehicles. However, the perception capabilities of autonomous vehicles are limited due to occlusions, limited sensor ranges, or environmental influences. Collective Perception (CP) aims to mitigate these problems by enabling the exchange of information between vehicles. A major challenge in CP is the fusion of the exchanged information. Due to the enormous bandwidth requirement of early fusion approaches and the interchangeability issues of intermediate fusion approaches, only the late fusion of shared detections is practical. Current late fusion approaches neglect valuable information for local detection, which is why we propose a novel fusion method to fuse the detections of cooperative vehicles within the local LiDAR-based detection pipeline. Therefore, we present Collective PV-RCNN (CPV-RCNN), which extends the PV-RCNN++ framework to fuse collective detections. Code is available at https://github.com/ekut-es | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 391,065 |
2407.19435 | ASI-Seg: Audio-Driven Surgical Instrument Segmentation with Surgeon
Intention Understanding | Surgical instrument segmentation is crucial in surgical scene understanding, thereby facilitating surgical safety. Existing algorithms directly detect all instruments of pre-defined categories in the input image, lacking the capability to segment specific instruments according to the surgeon's intention. During different stages of surgery, surgeons exhibit varying preferences and focus toward different surgical instruments. Therefore, an instrument segmentation algorithm that adheres to the surgeon's intention can minimize distractions from irrelevant instruments and assist surgeons to a great extent. The recent Segment Anything Model (SAM) reveals the capability to segment objects following prompts, but the manual annotations for prompts are impractical during the surgery. To address these limitations in operating rooms, we propose an audio-driven surgical instrument segmentation framework, named ASI-Seg, to accurately segment the required surgical instruments by parsing the audio commands of surgeons. Specifically, we propose an intention-oriented multimodal fusion to interpret the segmentation intention from audio commands and retrieve relevant instrument details to facilitate segmentation. Moreover, to guide our ASI-Seg to segment the required surgical instruments, we devise a contrastive learning prompt encoder to effectively distinguish the required instruments from the irrelevant ones. Therefore, our ASI-Seg promotes the workflow in the operating rooms, thereby providing targeted support and reducing the cognitive load on surgeons. Extensive experiments are performed to validate the ASI-Seg framework, which reveals remarkable advantages over classical state-of-the-art and medical SAMs in both semantic segmentation and intention-oriented segmentation. The source code is available at https://github.com/Zonmgin-Zhang/ASI-Seg. | true | false | false | false | true | false | false | true | true | false | false | true | false | false | false | false | false | false | 476,785 |
1901.07871 | Analysis of the $(\mu/\mu_I,\lambda)$-CSA-ES with Repair by Projection
Applied to a Conically Constrained Problem | Theoretical analyses of evolution strategies are indispensable for gaining a deep understanding of their inner workings. For constrained problems, rather simple problems are of interest in the current research. This work presents a theoretical analysis of a multi-recombinative evolution strategy with cumulative step size adaptation applied to a conically constrained linear optimization problem. The state of the strategy is modeled by random variables and a stochastic iterative mapping is introduced. For the analytical treatment, fluctuations are neglected and the mean value iterative system is considered. Non-linear difference equations are derived based on one-generation progress rates. Based on that, expressions for the steady state of the mean value iterative system are derived. By comparison with real algorithm runs, it is shown that for the considered assumptions, the theoretical derivations are able to predict the dynamics and the steady state values of the real runs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 119,324 |
1903.11891 | AED-Net: An Abnormal Event Detection Network | Detecting anomalies in crowded scenes has been challenging for quite a long time. In this paper, a self-supervised framework, abnormal event detection network (AED-Net), which is composed of PCAnet and kernel principal component analysis (kPCA), is proposed to address this problem. Using surveillance video sequences of different scenes as raw data, PCAnet is trained to extract high-level semantics of the crowd's situation. Next, kPCA, a one-class classifier, is trained to determine the anomaly of the scene. In contrast to some prevailing deep learning methods, the framework is completely self-supervised because it utilizes only video sequences in a normal situation. Experiments of global and local abnormal event detection are carried out on UMN and UCSD datasets, and competitive results with higher EER and AUC compared to other state-of-the-art methods are observed. Furthermore, by adding a local response normalization (LRN) layer, we propose an improvement to the original AED-Net, which is proved to perform better by promoting the framework's generalization capacity according to the experiments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,604 |
1312.3986 | Correlations between user voting data, budget, and box office for films
in the Internet Movie Database | The Internet Movie Database (IMDb) is one of the most-visited websites in the world and the premier source for information on films. Like Wikipedia, much of IMDb's information is user contributed. IMDb also allows users to voice their opinion on the quality of films through voting. We investigate whether there is a connection between this user voting data and certain economic film characteristics. To this end, we perform distribution and correlation analysis on a set of films chosen to mitigate effects of bias due to the language and country of origin of films. We show that production budget, box office gross, and total number of user votes for films are consistent with double-log normal distributions for certain time periods. Both total gross and user votes are consistent with a double-log normal distribution from the late 1980s onward, while for budget, it extends from 1935 to 1979. In addition, we find a strong correlation between number of user votes and the economic statistics, particularly budget. Remarkably, we find no evidence for a correlation between number of votes and average user rating. As previous studies have found a strong correlation between production budget and marketing expenses, our results suggest that total user votes is an indicator of a film's prominence or notability, which can be quantified by its promotional costs. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,086 |
1708.01654 | Better Together: Joint Reasoning for Non-rigid 3D Reconstruction with
Specularities and Shading | We demonstrate the use of shape-from-shading (SfS) to improve both the quality and the robustness of 3D reconstruction of dynamic objects captured by a single camera. Unlike previous approaches that made use of SfS as a post-processing step, we offer a principled integrated approach that solves dynamic object tracking and reconstruction and SfS as a single unified cost function. Moving beyond Lambertian SfS, we propose a general approach that models both specularities and shading while simultaneously tracking and reconstructing general dynamic objects. Solving these problems jointly prevents the kinds of tracking failures which can not be recovered from by pipeline approaches. We show state-of-the-art results both qualitatively and quantitatively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 78,422 |
2112.06953 | Controlled Cue Generation for Play Scripts | In this paper, we use a large-scale play scripts dataset to propose the novel task of theatrical cue generation from dialogues. Using over one million lines of dialogue and cues, we approach the problem of cue generation as a controlled text generation task, and show how cues can be used to enhance the impact of dialogue using a language model conditioned on a dialogue/cue discriminator. In addition, we explore the use of topic keywords and emotions for controlled text generation. Extensive quantitative and qualitative experiments show that language models can be successfully used to generate plausible and attribute-controlled texts in highly specialised domains such as play scripts. Supporting materials can be found at: https://catlab-team.github.io/cuegen. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 271,334 |
2403.09380 | Impact of Synthetic Images on Morphing Attack Detection Using a Siamese
Network | This paper evaluated the impact of synthetic images on Morphing Attack Detection (MAD) using a Siamese network with a semi-hard-loss function. Intra- and cross-dataset evaluations were performed to measure the generalisation capability of synthetic images. Three different pre-trained networks were used as feature extractors: MobileNetV2, MobileNetV3 and EfficientNetB0. Our results show that MAD trained on EfficientNetB0 from FERET, FRGCv2, and FRLL can reach a lower error rate in comparison with the SOTA. Conversely, worse performance was reached when the system was trained only with synthetic images. A mixed (synthetic + digital) database may help to improve MAD and reduce the error rate. This shows that we still need to continue our efforts to include synthetic images in the training process. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 437,743 |
2110.00669 | Expanding the Design Space for Electrically-Driven Soft Robots through
Handed Shearing Auxetics | Handed Shearing Auxetics (HSA) are a promising structure for making electrically driven robots with distributed compliance that convert a motor's rotation and torque into extension and force. We overcame past limitations on the range of actuation, blocked force, and stiffness by focusing on two key design parameters: the point of an HSA's auxetic trajectory that is energetically preferred, and the number of cells along the HSA's length. Modeling the HSA as a programmable spring, we characterize the effect of both on blocked force, minimum energy length, spring constant, angle range and holding torque. We also examined the effect viscoelasticity has on actuation forces over time. By varying the auxetic trajectory point, we were able to make actuators that can push, pull, or do both. We expanded the range of forces possible from 5N to 150N, and the range of stiffness from 2 N/mm to 89 N/mm. For a fixed point on the auxetic trajectory, we found that decreasing length can improve force output, at the expense of needing higher torques and having a shorter throw. We also found that the viscoelastic effects can limit the amount of force a 3D printed HSA can apply over time. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 258,480 |
1901.04167 | Age-Delay Tradeoffs in Single Server Systems | Information freshness and low latency communication is important to many emerging applications. While Age of Information (AoI) serves as a metric of information freshness, packet delay is a traditional metric of communication latency. We prove that there is a natural tradeoff between the AoI and packet delay. We consider a single server system, in which at most one update packet can be serviced at a time. The system designer controls the order in which the packets get serviced and the service time distribution, with a given service rate. We analyze two tradeoff problems that minimize packet delay and the variance in packet delay, respectively, subject to an average age constraint. We prove a strong age-delay and age-delay variance tradeoff, wherein, as the average age approaches its minimum, the delay and its variance approach infinity. We show that the service time distribution that minimizes average age must necessarily have an unbounded second moment. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 118,558 |
1205.5024 | Analytical Study of Hexapod miRNAs using Phylogenetic Methods | MicroRNAs (miRNAs) are a class of non-coding RNAs that regulate gene expression. Identification of the total number of miRNAs even in completely sequenced organisms is still an open problem. However, researchers have been using techniques that can predict a limited number of miRNAs in an organism. In this paper, we have used a homology-based approach for comparative analysis of miRNAs of the hexapoda group. We have used Apis mellifera, Bombyx mori, Anopholes gambiae and Drosophila melanogaster miRNA datasets from the miRBase repository. We have done pairwise as well as multiple alignments for the available miRNAs in the repository to identify and analyse conserved regions among related species. Unfortunately, to the best of our knowledge, miRNA-related literature does not provide an in-depth analysis of hexapods. We have made an attempt to derive the commonality among the miRNAs and to identify the conserved regions which are still not available in miRNA repositories. The results are a good approximation with a small number of mismatches. However, they are encouraging and may facilitate miRNA biogenesis for | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 16,138 |
2306.09855 | Runtime Construction of Large-Scale Spiking Neuronal Network Models on
GPU Devices | Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses, but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code, at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for a greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky-integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable or shorter than those obtained with other state-of-the-art simulation technologies, while still meeting the flexibility demands of explorative network modeling. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 373,994 |
1602.03936 | Study of Interference Cancellation and Relay Selection Algorithms Using
Greedy Techniques for Cooperative DS-CDMA Systems | In this work, we study interference cancellation techniques and a multi-relay selection algorithm based on greedy methods for the uplink of cooperative direct-sequence code-division multiple access (DS-CDMA) systems. We first devise low-cost list-based successive interference cancellation (GL-SIC) and parallel interference cancellation (GL-PIC) algorithms with RAKE receivers as the front-end that can approach the maximum likelihood detector performance and be used at both the relays and the destination of cooperative systems. Unlike prior art, the proposed GL-SIC and GL-PIC algorithms exploit the Euclidean distance between users of interest and the potential nearest constellation point with a chosen threshold in order to build an effective list of detection candidates. A low-complexity multi-relay selection algorithm based on greedy techniques that can approach the performance of an exhaustive search is also proposed. A cross-layer design strategy that brings together the proposed multiuser detection algorithms and the greedy relay selection is then developed along with an analysis of the proposed techniques. Simulations show an excellent bit error rate performance of the proposed detection and relay selection algorithms as compared to existing techniques. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 52,066 |
2412.13179 | A Pipeline and NIR-Enhanced Dataset for Parking Lot Segmentation | Discussions of minimum parking requirement policies often include maps of parking lots, which are time-consuming to construct manually. Open source datasets for such parking lots are scarce, particularly for US cities. This paper introduces the idea of using Near-Infrared (NIR) channels as input and several post-processing techniques to improve the prediction of off-street surface parking lots using satellite imagery. We constructed two datasets with 12,617 image-mask pairs each: one with 3-channel (RGB) and another with 4-channel (RGB + NIR). The datasets were used to train five deep learning models (OneFormer, Mask2Former, SegFormer, DeepLabV3, and FCN) for semantic segmentation, classifying images to differentiate between parking and non-parking pixels. Our results demonstrate that the NIR channel improved accuracy because parking lots are often surrounded by grass, even though the NIR channel needed to be upsampled from a lower resolution. Post-processing, including eliminating erroneous holes, simplifying edges, and removing road and building footprints, further improved the accuracy. The best model, OneFormer trained on 4-channel input and paired with post-processing techniques, achieves a mean Intersection over Union (mIoU) of 84.9 percent and a pixel-wise accuracy of 96.3 percent. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 518,196 |
1803.10136 | Comprehending Real Numbers: Development of Bengali Real Number Speech
Corpus | Speech recognition has received less attention in the Bengali literature due to the lack of a comprehensive dataset. In this paper, we describe the development process of the first comprehensive Bengali speech dataset on real numbers. It covers all the possible words that may arise in uttering any Bengali real number. The corpus has ten native Bengali speakers from different regions. It comprises more than two thousand speech samples with a total duration of close to four hours. We also provide a deep analysis of our corpus, highlight some of its notable features, and finally evaluate the performance of two notable Bengali speech recognizers on it. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 93,650 |
2412.20903 | WalkVLM:Aid Visually Impaired People Walking by Vision Language Model | Approximately 200 million individuals around the world suffer from varying degrees of visual impairment, making it crucial to leverage AI technology to offer walking assistance for these people. With the recent progress of vision-language models (VLMs), employing VLMs to improve this field has emerged as a popular research topic. However, most existing methods are studied on self-built question-answering datasets, lacking a unified training and testing benchmark for walk guidance. Moreover, in blind walking task, it is necessary to perform real-time streaming video parsing and generate concise yet informative reminders, which poses a great challenge for VLMs that suffer from redundant responses and low inference efficiency. In this paper, we firstly release a diverse, extensive, and unbiased walking awareness dataset, containing 12k video-manual annotation pairs from Europe and Asia to provide a fair training and testing benchmark for blind walking task. Furthermore, a WalkVLM model is proposed, which employs chain of thought for hierarchical planning to generate concise but informative reminders and utilizes temporal-aware adaptive prediction to reduce the temporal redundancy of reminders. Finally, we have established a solid benchmark for blind walking task and verified the advantages of WalkVLM in stream video processing for this task compared to other VLMs. Our dataset and code will be released at anonymous link https://walkvlm2024.github.io. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 521,397 |
2408.04840 | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal
Large Language Models | Multi-modal Large Language Models (MLLMs) have demonstrated remarkable capabilities in executing instructions for a variety of single-image tasks. Despite this progress, significant challenges remain in modeling long image sequences. In this work, we introduce the versatile multi-modal large language model, mPLUG-Owl3, which enhances the capability for long image-sequence understanding in scenarios that incorporate retrieved image-text knowledge, interleaved image-text, and lengthy videos. Specifically, we propose novel hyper attention blocks to efficiently integrate vision and language into a common language-guided semantic space, thereby facilitating the processing of extended multi-image scenarios. Extensive experimental results suggest that mPLUG-Owl3 achieves state-of-the-art performance among models with a similar size on single-image, multi-image, and video benchmarks. Moreover, we propose a challenging long visual sequence evaluation named Distractor Resistance to assess the ability of models to maintain focus amidst distractions. Finally, with the proposed architecture, mPLUG-Owl3 demonstrates outstanding performance on ultra-long visual sequence inputs. We hope that mPLUG-Owl3 can contribute to the development of more efficient and powerful multimodal large language models. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 479,551 |
1308.2572 | Achieving Speedup in Aggregate Risk Analysis using Multiple GPUs | Stochastic simulation techniques employed for the analysis of portfolios of insurance/reinsurance risk, often referred to as `Aggregate Risk Analysis', can benefit from exploiting state-of-the-art high-performance computing platforms. In this paper, parallel methods to speed-up aggregate risk analysis for supporting real-time pricing are explored. An algorithm for analysing aggregate risk is proposed and implemented for multi-core CPUs and for many-core GPUs. Experimental studies indicate that GPUs offer a feasible alternative solution over traditional high-performance computing systems. A simulation of 1,000,000 trials with 1,000 catastrophic events per trial on a typical exposure set and contract structure is performed in less than 5 seconds on a multiple GPU platform. The key result is that the multiple GPU implementation can be used in real-time pricing scenarios as it is approximately 77x faster than the sequential counterpart implemented on a CPU. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 26,393 |
1901.04654 | Reducing Age-of-Information for Computation-Intensive Messages via
Packet Replacement | Freshness of data is an important performance metric for real-time applications, which can be measured by age-of-information. For computation-intensive messages, the embedded information is not available until being computed. In this paper, we study the age-of-information for computation-intensive messages, which are firstly transmitted to a mobile edge server, and then processed in the edge server to extract the embedded information. The packet generation follows zero-wait policy, by which a new packet is generated when the last one is just delivered to the edge server. The queue in front of the edge server adopts one-packet-buffer replacement policy, meaning that only the latest received packet is preserved. We derive the expression of average age-of-information for exponentially distributed transmission time and computing time. With packet replacement, the average age is reduced compared with the case without packet replacement, especially when the transmission rate is close to or greater than the computing rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 118,637 |
2002.12041 | Attention-guided Chained Context Aggregation for Semantic Segmentation | The way features propagate in Fully Convolutional Networks is of momentous importance to capture multi-scale contexts for obtaining precise segmentation masks. This paper proposes a novel series-parallel hybrid paradigm called the Chained Context Aggregation Module (CAM) to diversify feature propagation. CAM gains features of various spatial scales through chain-connected ladder-style information flows and fuses them in a two-stage process, namely pre-fusion and re-fusion. The serial flow continuously increases receptive fields of output neurons and those in parallel encode different region-based contexts. Each information flow is a shallow encoder-decoder with appropriate down-sampling scales to sufficiently capture contextual information. We further adopt an attention model in CAM to guide feature re-fusion. Based on these developments, we construct the Chained Context Aggregation Network (CANet), which employs an asymmetric decoder to recover precise spatial details of prediction maps. We conduct extensive experiments on six challenging datasets, including Pascal VOC 2012, Pascal Context, Cityscapes, CamVid, SUN-RGBD and GATECH. Results evidence that CANet achieves state-of-the-art performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 165,912 |
1807.11182 | End-to-End Deep Kronecker-Product Matching for Person Re-identification | Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self-residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,126 |
1906.04043 | GLTR: Statistical Detection and Visualization of Generated Text | The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, a tool to support humans in detecting whether a text was generated by a model. GLTR applies a suite of baseline statistical methods that can detect generation artifacts across common sampling schemes. In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs. | true | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 134,585 |
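The GLTR row above is built on one core statistic: the rank of each observed token under a language model's predicted next-token distribution (machine-generated text tends to sit at consistently low ranks). A minimal, self-contained sketch of that statistic, substituting a toy unigram frequency model for the real autoregressive LM GLTR uses (`token_ranks` is an illustrative name, not GLTR's API):

```python
from collections import Counter

def token_ranks(tokens, corpus):
    """Rank of each token under a toy unigram 'language model'.

    GLTR computes ranks under a real autoregressive LM's next-token
    distribution; a unigram frequency model stands in here so the
    sketch is self-contained. Rank 1 = the model's most likely token;
    uniformly low ranks are a hint of machine-generated text.
    """
    counts = Counter(corpus)
    vocab = [w for w, _ in counts.most_common()]       # most probable first
    rank = {w: i + 1 for i, w in enumerate(vocab)}
    # Out-of-vocabulary tokens get the worst possible rank.
    return [rank.get(t, len(vocab) + 1) for t in tokens]

corpus = "the cat sat on the mat the cat".split()
print(token_ranks("the cat".split(), corpus))  # -> [1, 2]
```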
2407.11138 | Lessons from a human-in-the-loop machine learning approach for identifying vacant, abandoned, and deteriorated properties in Savannah, Georgia | Addressing strategies for managing vacant, abandoned, and deteriorated (VAD) properties is important for maintaining healthy communities. Yet, the process of identifying these properties can be difficult. Here, we create a human-in-the-loop machine learning (HITLML) model called VADecide and apply it to a parcel-level case study in Savannah, Georgia. The results show a higher prediction accuracy than was achieved when using a machine learning model without human input in the training. The HITLML approach also reveals differences between machine- vs. human-generated results. Our findings contribute to knowledge about the advantages and challenges of HITLML in urban planning. [Accepted for Publication at a Peer-Reviewed Journal] | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 473,319 |
2103.16493 | Enabling Data Diversity: Efficient Automatic Augmentation via Regularized Adversarial Training | Data augmentation has proved extremely useful by increasing training data variance to alleviate overfitting and improve deep neural networks' generalization performance. In medical image analysis, a well-designed augmentation policy usually requires much expert knowledge and is difficult to generalize to multiple tasks due to the vast discrepancies among pixel intensities, image appearances, and object shapes in different medical tasks. To automate medical data augmentation, we propose a regularized adversarial training framework via two min-max objectives and three differentiable augmentation models covering affine transformation, deformation, and appearance changes. Our method is more automatic and efficient than previous automatic augmentation methods, which still rely on pre-defined operations with human-specified ranges and costly bi-level optimization. Extensive experiments demonstrated that our approach, with less training overhead, achieves superior performance over state-of-the-art auto-augmentation methods on both tasks of 2D skin cancer classification and 3D organs-at-risk segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 227,600 |