Dataset schema (arXiv paper classification, multi-label):
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- 18 label columns, each bool (2 classes), in order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (range 0–541k)
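Each record stores its categories as 18 separate true/false flags in the column order listed in the schema above. A minimal sketch (pure Python; the sample flags below mirror the first record in this dump, where only cs.LG is set) of collapsing those one-hot booleans back into a list of category names:

```python
# The 18 boolean label columns, in the order they appear in the schema.
LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(flags):
    """Map a record's 18 true/false flags to the category names that are set."""
    return [name for name, flag in zip(LABEL_COLS, flags) if flag]

# First record in the dump: only the cs.LG flag is true.
flags = [False] * 18
flags[LABEL_COLS.index("cs.LG")] = True
print(decode_labels(flags))  # ['cs.LG']
```

Records with several true flags (e.g. cs.AI, cs.LG, and cs.CL together) decode to multi-element lists, which is what makes this a multi-label rather than single-label task.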
2011.05369
Predicting Water Temperature Dynamics of Unmonitored Lakes with Meta Transfer Learning
Most environmental data come from a minority of well-monitored sites. An ongoing challenge in the environmental sciences is transferring knowledge from monitored sites to unmonitored sites. Here, we demonstrate a novel transfer learning framework that accurately predicts depth-specific temperature in unmonitored lakes (targets) by borrowing models from well-monitored lakes (sources). This method, Meta Transfer Learning (MTL), builds a meta-learning model to predict transfer performance from candidate source models to targets using lake attributes and candidates' past performance. We constructed source models at 145 well-monitored lakes using calibrated process-based modeling (PB) and a recently developed approach called process-guided deep learning (PGDL). We applied MTL to either PB or PGDL source models (PB-MTL or PGDL-MTL, respectively) to predict temperatures in 305 target lakes treated as unmonitored in the Upper Midwestern United States. We show significantly improved performance relative to the uncalibrated process-based General Lake Model, where the median RMSE for the target lakes is $2.52^{\circ}C$. PB-MTL yielded a median RMSE of $2.43^{\circ}C$; PGDL-MTL yielded $2.16^{\circ}C$; and a PGDL-MTL ensemble of nine sources per target yielded $1.88^{\circ}C$. For sparsely monitored target lakes, PGDL-MTL often outperformed PGDL models trained on the target lakes themselves. Differences in maximum depth between the source and target were consistently the most important predictors. Our approach readily scales to thousands of lakes in the Midwestern United States, demonstrating that MTL with meaningful predictor variables and high-quality source models is a promising approach for many kinds of unmonitored systems and environmental variables.
labels: cs.LG
__index_level_0__: 205,889
2501.08037
Enhanced SPS Velocity-adaptive Scheme: Access Fairness in 5G NR V2I Networks
Vehicle-to-Infrastructure (V2I) technology enables information exchange between vehicles and road infrastructure. Specifically, when a vehicle approaches a roadside unit (RSU), it can exchange information with the RSU to obtain accurate data that assists in driving. With the release of the 3rd Generation Partnership Project (3GPP) Release 16, which includes the 5G New Radio (NR) Vehicle-to-Everything (V2X) standards, vehicles typically adopt mode-2 communication using sensing-based semi-persistent scheduling (SPS) for resource allocation. In this approach, vehicles identify candidate resources within a selection window and exclude ineligible resources based on information from a sensing window. However, vehicles often drive at different speeds, resulting in varying amounts of data transmission with RSUs as they pass by, which leads to unfair access. Therefore, it is essential to design an access scheme that accounts for different vehicle speeds to achieve fair access across the network. This paper formulates an optimization problem for vehicular networks and proposes a multi-objective optimization scheme to address it by adjusting the selection window in the SPS mechanism of 5G NR V2I mode-2. Simulation results demonstrate the effectiveness of the proposed scheme.
labels: cs.LG, Other
__index_level_0__: 524,602
2302.01068
FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations
Conventional gradient-sharing approaches for federated learning (FL), such as FedAvg, rely on aggregation of local models and often face performance degradation under differential privacy (DP) mechanisms or data heterogeneity, which can be attributed to the inconsistency between the local and global objectives. To address this issue, we propose FedLAP-DP, a novel privacy-preserving approach for FL. Our formulation involves clients synthesizing a small set of samples that approximate local loss landscapes by simulating the gradients of real images within a local region. Acting as loss surrogates, these synthetic samples are aggregated on the server side to uncover the global loss landscape and enable global optimization. Building upon these insights, we offer a new perspective to enforce record-level differential privacy in FL. A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes while achieving an improved trade-off between privacy and utility. Extensive experiments validate the superiority of our approach across various datasets with highly skewed distributions in both DP and non-DP settings. Beyond the promising performance, our approach presents a faster convergence speed compared to typical gradient-sharing methods and opens up the possibility of trading communication costs for better performance by sending a larger set of synthetic images. The source is available at \url{https://github.com/hui-po-wang/FedLAP-DP}.
labels: cs.LG
__index_level_0__: 343,471
2502.08452
Learning to Group and Grasp Multiple Objects
Simultaneously grasping and transporting multiple objects can significantly enhance robotic work efficiency and has been a key research focus for decades. The primary challenge lies in determining how to push objects, group them, and execute simultaneous grasping for respective groups while considering object distribution and the hardware constraints of the robot. Traditional rule-based methods struggle to flexibly adapt to diverse scenarios. To address this challenge, this paper proposes an imitation learning-based approach. We collect a series of expert demonstrations through teleoperation and train a diffusion policy network, enabling the robot to dynamically generate action sequences for pushing, grouping, and grasping, thereby facilitating efficient multi-object grasping and transportation. We conducted experiments to evaluate the method under different training dataset sizes, varying object quantities, and real-world object scenarios. The results demonstrate that the proposed approach can effectively and adaptively generate multi-object grouping and grasping strategies. With the support of more training data, imitation learning is expected to be an effective approach for solving the multi-object grasping problem.
labels: cs.RO
__index_level_0__: 533,019
1407.3543
Intermittent Control in Man and Machine
Intermittent control has a long history in the physiological literature and there is strong experimental evidence that some human control systems are intermittent. Intermittent control has also appeared in various forms in the engineering literature. This article discusses a particular mathematical model of Event-driven Intermittent Control which brings together engineering and physiological insights and builds on and extends previous work in this area. Illustrative examples of the properties of Intermittent Control in a physiological context are given together with suggestions for future research directions in both physiology and engineering.
labels: cs.SY
__index_level_0__: 34,631
1808.06865
Machine Learning for Spatiotemporal Sequence Forecasting: A Survey
Spatiotemporal systems are common in the real world. Forecasting the multi-step future of these spatiotemporal systems based on past observations, i.e., Spatiotemporal Sequence Forecasting (STSF), is a significant and challenging problem. Although many real-world problems can be viewed as STSF and many research works have proposed machine learning based methods for them, no existing work has summarized and compared these methods from a unified perspective. This survey aims to provide a systematic review of machine learning for STSF. In this survey, we define the STSF problem and classify it into three subcategories: Trajectory Forecasting of Moving Point Cloud (TF-MPC), STSF on Regular Grid (STSF-RG) and STSF on Irregular Grid (STSF-IG). We then introduce the two major challenges of STSF: 1) how to learn a model for multi-step forecasting and 2) how to adequately model the spatial and temporal structures. After that, we review the existing works for solving these challenges, including the general learning strategies for multi-step forecasting, the classical machine learning based methods for STSF, and the deep learning based methods for STSF. We also compare these methods and point out some potential research directions.
labels: cs.LG
__index_level_0__: 105,626
2209.08790
D&D: Learning Human Dynamics from Dynamic Camera
3D human pose estimation from a monocular video has recently seen significant improvements. However, most state-of-the-art methods are kinematics-based, which are prone to physically implausible motions with pronounced artifacts. Current dynamics-based methods can predict physically plausible motion but are restricted to simple scenarios with a static camera view. In this work, we present D&D (Learning Human Dynamics from Dynamic Camera), which leverages the laws of physics to reconstruct 3D human motion from in-the-wild videos with a moving camera. D&D introduces inertial force control (IFC) to explain the 3D human motion in the non-inertial local frame by considering the inertial forces of the dynamic camera. To learn the ground contact with limited annotations, we develop probabilistic contact torque (PCT), which is computed by differentiable sampling from contact probabilities and used to generate motions. The contact state can be weakly supervised by encouraging the model to generate correct motions. Furthermore, we propose an attentive PD controller that adjusts target pose states using temporal information to obtain smooth and accurate pose control. Our approach is entirely neural-based and runs without offline optimization or simulation in physics engines. Experiments on large-scale 3D human motion benchmarks demonstrate the effectiveness of D&D, where we exhibit superior performance against both state-of-the-art kinematics-based and dynamics-based methods. Code is available at https://github.com/Jeffsjtu/DnD
labels: cs.LG, cs.CV
__index_level_0__: 318,271
2410.10553
SLaNC: Static LayerNorm Calibration
The ever increasing sizes of Large Language Models (LLMs) beyond hundreds of billions of parameters have generated enormous pressure on the manufacturers of dedicated hardware accelerators and made the innovative design of the latter one of the most rapidly expanding fields of the AI industry. Various approaches have been explored to enable efficient and accurate processing of LLMs on the available accelerators given their computational and storage limitations. Among these, various quantization techniques have become the main focus of the community as a means of reducing the compute, communication and storage requirements. Quantization to lower precision formats naturally poses a number of challenges caused by the limited range of the available value representations. When it comes to processing the popular Transformer models on hardware, one of the main issues becomes calculation of the LayerNorm simply because accumulation of the variance requires a much wider dynamic range than the hardware enables. In this article, we address this matter and propose a computationally-efficient scaling technique that can be easily applied to Transformer models during inference. Our method suggests a straightforward way of scaling the LayerNorm inputs based on the static weights of the immediately preceding linear layers. The scaling factors are computed offline, based solely on the linear layer weights, hence no latency or computational overhead is added during inference. Most importantly, our technique ensures that no numerical issues such as overflow or underflow could happen during the compute. This approach offers smooth, accurate and resource-effective inference across a wide range of hardware architectures. The article provides theoretical justification as well as supporting numerical simulations.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 498,130
1710.09933
SEGMENT3D: A Web-based Application for Collaborative Segmentation of 3D images used in the Shoot Apical Meristem
The quantitative analysis of 3D confocal microscopy images of the shoot apical meristem helps in understanding the growth process of some plants. Cell segmentation in these images is crucial for computational plant analysis and many automated methods have been proposed. However, variations in signal intensity across the image limit the effectiveness of those approaches, with no easy way for user correction. We propose a web-based collaborative 3D image segmentation application, SEGMENT3D, to leverage automatic segmentation results. The image is divided into 3D tiles that can be either segmented interactively from scratch or corrected from a pre-existing segmentation. Individual segmentation results per tile are then automatically merged via consensus analysis and stitched to complete the segmentation for the entire image stack. SEGMENT3D is a comprehensive application that can be applied to other 3D imaging modalities and general objects. It also provides an easy way to create supervised data to advance segmentation using machine learning models.
labels: cs.CV
__index_level_0__: 83,283
1807.03528
Deep Underwater Image Enhancement
In an underwater scene, wavelength-dependent light absorption and scattering degrade the visibility of images, causing low contrast and distorted color casts. To address this problem, we propose a convolutional neural network based image enhancement model, i.e., UWCNN, which is trained efficiently using a synthetic underwater image database. Unlike the existing works that require estimation of underwater imaging model parameters or impose inflexible frameworks applicable only for specific scenes, our model directly reconstructs the clear latent underwater image by leveraging an automatic end-to-end and data-driven training mechanism. Compliant with underwater imaging models and optical properties of underwater scenes, we first synthesize ten different marine image databases. Then, we separately train multiple UWCNN models for each underwater image formation type. Experimental results on real-world and synthetic underwater images demonstrate that the presented method generalizes well on different underwater scenes and outperforms the existing methods both qualitatively and quantitatively. In addition, we conduct an ablation study to demonstrate the effect of each component in our network.
labels: cs.CV
__index_level_0__: 102,551
2210.08506
ResAttUNet: Detecting Marine Debris using an Attention activated Residual UNet
Currently, a significant amount of research has been done in the field of remote sensing using deep learning techniques. The introduction of the Marine Debris Archive (MARIDA), an open-source dataset with benchmark results for marine debris detection, opened new pathways to use deep learning techniques for the task of debris detection and segmentation. This paper introduces a novel attention based segmentation technique that outperforms the existing state-of-the-art results introduced with MARIDA. The paper presents a novel spatial aware encoder and decoder architecture to maintain the contextual information and structure of sparse ground truth patches present in the images. The attained results are expected to pave the way for further research involving deep learning using remote sensing images. The code is available at https://github.com/sheikhazhanmohammed/SADMA.git
labels: cs.CV
__index_level_0__: 324,182
1808.09801
PS-Sim: A Framework for Scalable Simulation of Participatory Sensing Data
The emergence of smartphones and the participatory sensing (PS) paradigm has paved the way for a new variant of pervasive computing. In PS, human users perform sensing tasks and generate notifications, typically in return for incentives. These notifications are real-time, large-volume, and multi-modal, and are eventually fused by the PS platform to generate a summary. One major limitation of PS is the sparsity of notifications owing to lack of active participation, thus inhibiting large-scale real-life experiments for the research community. On the flip side, the research community always needs ground truth to validate the efficacy of proposed models and algorithms. Most PS applications involve human mobility and report generation following sensing of any event of interest in the adjacent environment. This work is an attempt to study and empirically model human participation behavior and event occurrence distributions through development of a location-sensitive data simulation framework, called PS-Sim. From extensive experiments it has been observed that the synthetic data generated by PS-Sim replicates real participation and event occurrence behaviors in PS applications, which may be used for validation purposes in the absence of ground truth. As a proof-of-concept, we have used a real-life dataset from a vehicular traffic management application to train the models in PS-Sim and cross-validated the simulated data with other parts of the same dataset.
labels: cs.SI, Other
__index_level_0__: 106,268
2308.00887
Factor Graph Neural Networks
In recent years, we have witnessed a surge of Graph Neural Networks (GNNs), most of which can learn powerful representations in an end-to-end fashion with great success in many real-world applications. They have resemblance to Probabilistic Graphical Models (PGMs), but break free from some limitations of PGMs. By aiming to provide expressive methods for representation learning instead of computing marginals or most likely configurations, GNNs provide flexibility in the choice of information flowing rules while maintaining good performance. Despite their success and inspirations, they lack efficient ways to represent and learn higher-order relations among variables/nodes. More expressive higher-order GNNs which operate on k-tuples of nodes need increased computational resources in order to process higher-order tensors. We propose Factor Graph Neural Networks (FGNNs) to effectively capture higher-order relations for inference and learning. To do so, we first derive an efficient approximate Sum-Product loopy belief propagation inference algorithm for discrete higher-order PGMs. We then neuralize the novel message passing scheme into a Factor Graph Neural Network (FGNN) module by allowing richer representations of the message update rules; this facilitates both efficient inference and powerful end-to-end learning. We further show that with a suitable choice of message aggregation operators, our FGNN is also able to represent Max-Product belief propagation, providing a single family of architecture that can represent both Max and Sum-Product loopy belief propagation. Our extensive experimental evaluation on synthetic as well as real datasets demonstrates the potential of the proposed model.
labels: cs.LG
__index_level_0__: 383,065
2501.08549
The Devil is in Temporal Token: High Quality Video Reasoning Segmentation
Existing methods for Video Reasoning Segmentation rely heavily on a single special token to represent the object in the keyframe or the entire video, inadequately capturing spatial complexity and inter-frame motion. To overcome these challenges, we propose VRS-HQ, an end-to-end video reasoning segmentation approach that leverages Multimodal Large Language Models (MLLMs) to inject rich spatiotemporal features into hierarchical tokens. Our key innovations include Temporal Dynamic Aggregation (TDA) and Token-driven Keyframe Selection (TKS). Specifically, we design frame-level <SEG> and temporal-level <TAK> tokens that utilize the MLLM's autoregressive learning to effectively capture both local and global information. Subsequently, we apply a similarity-based weighted fusion and frame selection strategy, then utilize SAM2 to perform keyframe segmentation and propagation. To enhance keyframe localization accuracy, TKS filters keyframes based on SAM2's occlusion scores during inference. VRS-HQ achieves state-of-the-art performance on ReVOS, surpassing VISA by 5.9%/12.5%/9.1% in J&F scores across the three subsets. These results highlight the strong temporal reasoning and segmentation capabilities of our method. Code and model weights will be released at VRS-HQ.
labels: cs.AI, cs.CV
__index_level_0__: 524,809
2110.11601
Multimodal Semi-Supervised Learning for 3D Objects
In recent years, semi-supervised learning has been widely explored and shows excellent data efficiency for 2D data. There is an emerging need to improve data efficiency for 3D tasks due to the scarcity of labeled 3D data. This paper explores how the coherence of different modalities of 3D data (e.g., point cloud, image, and mesh) can be used to improve data efficiency for both 3D classification and retrieval tasks. We propose a novel multimodal semi-supervised learning framework by introducing an instance-level consistency constraint and a novel multimodal contrastive prototype (M2CP) loss. The instance-level consistency enforces the network to generate consistent representations for multimodal data of the same object regardless of its modality. The M2CP maintains a multimodal prototype for each class and learns features with small intra-class variations by minimizing the feature distance of each object to its prototype while maximizing the distance to the others. Our proposed framework significantly outperforms all the state-of-the-art counterparts for both classification and retrieval tasks by a large margin on the ModelNet10 and ModelNet40 datasets.
labels: cs.CV
__index_level_0__: 262,545
2311.14281
Multi-modal Instance Refinement for Cross-domain Action Recognition
Unsupervised cross-domain action recognition aims at adapting the model trained on an existing labeled source domain to a new unlabeled target domain. Most existing methods solve the task by directly aligning the feature distributions of source and target domains. However, this would cause negative transfer during domain adaptation due to some negative training samples in both domains. In the source domain, some training samples are of low relevance to the target domain due to differences in viewpoints, action styles, etc. In the target domain, there are some ambiguous training samples that can easily be classified as another type of action under the source domain. The problem of negative transfer has been explored in cross-domain object detection, while it remains under-explored in cross-domain action recognition. Therefore, we propose a Multi-modal Instance Refinement (MMIR) method to alleviate the negative transfer based on reinforcement learning. Specifically, a reinforcement learning agent is trained in both domains for every modality to refine the training data by filtering out negative samples from each domain. Our method finally outperforms several other state-of-the-art baselines in cross-domain action recognition on the benchmark EPIC-Kitchens dataset, which demonstrates the advantage of MMIR in reducing negative transfer.
labels: cs.CV
__index_level_0__: 410,053
1711.05482
Efficient Estimation of Generalization Error and Bias-Variance Components of Ensembles
For many applications, an ensemble of base classifiers is an effective solution. The tuning of its parameters (number of classifiers, amount of data on which each classifier is to be trained, etc.) requires G, the generalization error of a given ensemble. The efficient estimation of G is the focus of this paper. The key idea is to approximate the variance of the class scores/probabilities of the base classifiers over the randomness imposed by the training subset by a normal/beta distribution at each point x in the input feature space. We estimate the parameters of the distribution using a small set of randomly chosen base classifiers and use those parameters to give efficient estimation schemes for G. We give empirical evidence for the quality of the various estimators. We also demonstrate their usefulness in making design choices such as the number of classifiers in the ensemble and the size of a subset of data used for training that is needed to achieve a certain value of generalization error. Our approach also has great potential for designing distributed ensemble classifiers.
labels: cs.LG
__index_level_0__: 84,585
2006.06431
Complementary Visual Neuronal Systems Model for Collision Sensing
Inspired by insects' visual brains, this paper presents an original complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons, i.e., the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies, have been studied intensively. The LGMDs have specific selectivity to approaching objects in depth that threaten collision; whilst the LPTCs are only sensitive to translating objects in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done on investigating their complementary functionality and selectivity when functioning together. To fill this gap, we introduce a hybrid model combining two LGMDs (LGMD-1 and LGMD-2) with horizontally (rightward and leftward) sensitive LPTCs (LPTC-R and LPTC-L) specialising in fast collision perception. With coordination and competition between different activated neurons, the proximity feature of frontal approaching stimuli can be largely sharpened up by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-type neuron computation methods against translating interference.
labels: cs.AI, cs.RO, cs.NE
__index_level_0__: 181,424
2408.06212
Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization
The unwavering success of deep learning in the past decade led to the increasing prevalence of deep learning methods in various application fields. However, the downsides of deep learning, most prominently its lack of trustworthiness, may not be compatible with safety-critical or high-responsibility applications requiring stricter performance guarantees. Recently, several instances of deep learning applications have been shown to be subject to theoretical limitations of computability, undermining the feasibility of performance guarantees when employed on real-world computers. We extend the findings by studying computability in the deep learning framework from two perspectives: From an application viewpoint in the context of classification problems and a general limitation viewpoint in the context of training neural networks. In particular, we show restrictions on the algorithmic solvability of classification problems that also render the algorithmic detection of failure in computations in a general setting infeasible. Subsequently, we prove algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved. Finally, we end with a positive observation, showing that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
labels: cs.LG, Other
__index_level_0__: 480,115
1701.05982
Observations on Factors Affecting Performance of MapReduce based Apriori on Hadoop Cluster
Designing fast and scalable algorithms for mining frequent itemsets has long been one of the most prominent problems in data mining. Apriori is one of the most widely used and popular algorithms for frequent itemset mining. Designing efficient algorithms on the MapReduce framework to process and analyze big datasets is an active area of contemporary research. In this paper, we focus on the performance of MapReduce based Apriori on homogeneous as well as heterogeneous Hadoop clusters. We investigate a number of factors that significantly affect the execution time of MapReduce based Apriori running on homogeneous and heterogeneous Hadoop clusters. Factors are specific to both algorithmic and non-algorithmic improvements. Considered factors specific to algorithmic improvements are filtered transactions and data structures. Experimental results show how an appropriate data structure and the filtered transactions technique drastically reduce the execution time. The non-algorithmic factors include speculative execution, nodes with poor performance, data locality & distribution of data blocks, and parallelism control with input split size. We have applied strategies against these factors and fine-tuned the relevant parameters in our particular application. Experimental results show that if cluster-specific parameters are taken care of, there is a significant reduction in execution time. We also discuss issues regarding the MapReduce implementation of Apriori that may significantly influence performance.
labels: cs.DB, Other
__index_level_0__: 67,058
quant-ph/0603098
Quantum broadcast channels
We consider quantum channels with one sender and two receivers, used in several different ways for the simultaneous transmission of independent messages. We begin by extending the technique of superposition coding to quantum channels with a classical input to give a general achievable region. We also give outer bounds to the capacity regions for various special cases from the classical literature and prove that superposition coding is optimal for a class of channels. We then consider extensions of superposition coding for channels with a quantum input, where some of the messages transmitted are quantum instead of classical, in the sense that the parties establish bipartite or tripartite GHZ entanglement. We conclude by using state merging to give achievable rates for establishing bipartite entanglement between different pairs of parties with the assistance of free classical communication.
categories: cs.IT
__index_level_0__: 540,894
2309.16374
MHG-GNN: Combination of Molecular Hypergraph Grammar with Graph Neural Network
Property prediction plays an important role in material discovery. As an initial step to eventually develop a foundation model for material science, we introduce a new autoencoder called the MHG-GNN, which combines graph neural network (GNN) with Molecular Hypergraph Grammar (MHG). Results on a variety of property prediction tasks with diverse materials show that MHG-GNN is promising.
categories: cs.LG
__index_level_0__: 395,327
2405.11640
Inquire, Interact, and Integrate: A Proactive Agent Collaborative Framework for Zero-Shot Multimodal Medical Reasoning
The adoption of large language models (LLMs) in healthcare has attracted significant research interest. However, their performance in healthcare remains under-investigated and potentially limited, due to i) they lack rich domain-specific knowledge and medical reasoning skills; and ii) most state-of-the-art LLMs are unimodal, text-only models that cannot directly process multimodal inputs. To this end, we propose a multimodal medical collaborative reasoning framework \textbf{MultiMedRes}, which incorporates a learner agent to proactively gain essential information from domain-specific expert models, to solve medical multimodal reasoning problems. Our method includes three steps: i) \textbf{Inquire}: The learner agent first decomposes given complex medical reasoning problems into multiple domain-specific sub-problems; ii) \textbf{Interact}: The agent then interacts with domain-specific expert models by repeating the ``ask-answer'' process to progressively obtain different domain-specific knowledge; iii) \textbf{Integrate}: The agent finally integrates all the acquired domain-specific knowledge to accurately address the medical reasoning problem. We validate the effectiveness of our method on the task of difference visual question answering for X-ray images. The experiments demonstrate that our zero-shot prediction achieves state-of-the-art performance, and even outperforms the fully supervised methods. Besides, our approach can be incorporated into various LLMs and multimodal LLMs to significantly boost their performance.
categories: cs.AI, cs.CL, cs.CV
__index_level_0__: 455,222
2207.00739
Deep Learning for Systemic Risk Measures
The aim of this paper is to study a new methodological framework for systemic risk measures by applying deep learning methods as a tool to compute the optimal strategy of capital allocations. Under this new framework, systemic risk measures can be interpreted as the minimal amount of cash that secures the aggregated system by allocating capital to the single institutions before aggregating the individual risks. This problem has no explicit solution except in very limited situations. Deep learning is increasingly receiving attention in financial modeling and risk management, and we propose deep learning based algorithms to solve both the primal and dual problems of the risk measures, and thus to learn the fair risk allocations. In particular, our method for the dual problem involves a training philosophy inspired by the well-known Generative Adversarial Networks (GAN) approach and a newly designed direct estimation of the Radon-Nikodym derivative. We close the paper with substantial numerical studies of the subject and provide interpretations of the risk allocations associated with the systemic risk measures. In the particular case of exponential preferences, numerical experiments demonstrate excellent performance of the proposed algorithm when compared with the optimal explicit solution as a benchmark.
categories: cs.LG
__index_level_0__: 305,871
1704.07242
Supervised Adversarial Networks for Image Saliency Detection
In the past few years, the Generative Adversarial Network (GAN) has become a prevalent research topic. By defining two convolutional neural networks (G-Network and D-Network) and introducing an adversarial procedure between them during the training process, a GAN has the ability to generate good-quality images that look like natural images from a random vector. Besides image generation, GANs may have the potential to deal with a wide range of real-world problems. In this paper, we follow the basic idea of GAN and propose a novel model for image saliency detection, called Supervised Adversarial Networks (SAN). Specifically, SAN also trains two models simultaneously: the G-Network takes natural images as inputs and generates corresponding saliency maps (synthetic saliency maps), and the D-Network is trained to determine whether a sample is a synthetic saliency map or a ground-truth saliency map. However, different from GAN, the proposed method uses fully supervised learning to train both the G-Network and the D-Network by applying the class labels of the training set. Moreover, a novel kind of layer, called the conv-comparison layer, is introduced into the D-Network to further improve the saliency performance by forcing the high-level features of synthetic saliency maps and ground-truth maps to be as similar as possible. Experimental results on the Pascal VOC 2012 database show that the SAN model can generate high-quality saliency maps for many complicated natural images.
categories: cs.CV
__index_level_0__: 72,327
1906.07599
LTG-Oslo Hierarchical Multi-task Network: The importance of negation for document-level sentiment in Spanish
This paper details LTG-Oslo team's participation in the sentiment track of the NEGES 2019 evaluation campaign. We participated in the task with a hierarchical multi-task network, which used shared lower-layers in a deep BiLSTM to predict negation, while the higher layers were dedicated to predicting document-level sentiment. The multi-task component shows promise as a way to incorporate information on negation into deep neural sentiment classifiers, despite the fact that the absolute results on the test set were relatively low for a binary classification task.
categories: cs.CL
__index_level_0__: 135,640
1203.4238
Do Linguistic Style and Readability of Scientific Abstracts affect their Virality?
Reactions to textual content posted in an online social network show different dynamics depending on the linguistic style and readability of the submitted content. Do similar dynamics exist for responses to scientific articles? Our intuition, supported by previous research, suggests that the success of a scientific article depends on its content, rather than on its linguistic style. In this article, we examine a corpus of scientific abstracts and three forms of associated reactions: article downloads, citations, and bookmarks. Through a class-based psycholinguistic analysis and readability indices tests, we show that certain stylistic and readability features of abstracts clearly concur in determining the success and viral capability of a scientific article.
categories: cs.SI, cs.CL, Other
__index_level_0__: 15,026
2107.11013
Beamforming Design and Power Allocation for Transmissive RMS-based Transmitter Architectures
This letter investigates a downlink multiple-input single-output (MISO) system based on a transmissive reconfigurable metasurface (RMS) transmitter. Specifically, a transmitter design based on a transmissive RMS equipped with a feed antenna is first proposed. Then, in order to maximize the achievable sum-rate of the system, the beamforming design and power allocation are jointly optimized. Since the optimization variables are coupled, the formulated optimization problem is non-convex and difficult to solve directly. To address this, we propose an alternating optimization (AO) technique based on difference-of-convex (DC) programming and successive convex approximation (SCA). Simulation results verify that the proposed algorithm can achieve convergence and improve the achievable sum-rate of the system.
categories: cs.IT
__index_level_0__: 247,471
2103.05741
Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. Therefore, OPE is a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tends to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimation, when applying OPE to make high stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al.(2019) and a new martingale concentration inequality of KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimum assumptions on the data and the function class of the Q-function, and works for the behavior-agnostic settings where the data is collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.
categories: cs.LG
__index_level_0__: 224,069
1306.4071
A Microcontroller Based Device to Reduce Phantom Power
In this paper we concern ourselves with the problem of minimizing the standby power consumption of some household appliances. Here we propose a remote-controlled device through which the amount of standby power consumed by the electrical appliances connected to it can be reduced. This device provides an option of controlling each of the appliances connected to it individually, or all of them together when required. The device has a number of plug points, each of which can be controlled through the remote, and also has a provision for switching off all the points at once.
categories: cs.SY
__index_level_0__: 25,283
2403.11964
Probabilistic Calibration by Design for Neural Network Regression
Generating calibrated and sharp neural network predictive distributions for regression problems is essential for optimal decision-making in many real-world applications. To address the miscalibration issue of neural networks, various methods have been proposed to improve calibration, including post-hoc methods that adjust predictions after training and regularization methods that act during training. While post-hoc methods have shown better improvement in calibration compared to regularization methods, the post-hoc step is completely independent of model training. We introduce a novel end-to-end model training procedure called Quantile Recalibration Training, integrating post-hoc calibration directly into the training process without additional parameters. We also present a unified algorithm that includes our method and other post-hoc and regularization methods, as particular cases. We demonstrate the performance of our method in a large-scale experiment involving 57 tabular regression datasets, showcasing improved predictive accuracy while maintaining calibration. We also conduct an ablation study to evaluate the significance of different components within our proposed method, as well as an in-depth analysis of the impact of the base model and different hyperparameters on predictive accuracy.
categories: cs.LG
__index_level_0__: 438,954
2106.14033
BiX-NAS: Searching Efficient Bi-directional Architecture for Medical Image Segmentation
The recurrent mechanism has recently been introduced into U-Net in various medical image segmentation tasks. Existing studies have focused on promoting network recursion via reusing building blocks. Although network parameters could be greatly saved, computational costs still increase inevitably in accordance with the pre-set iteration time. In this work, we study a multi-scale upgrade of a bi-directional skip connected network and then automatically discover an efficient architecture by a novel two-phase Neural Architecture Search (NAS) algorithm, namely BiX-NAS. Our proposed method reduces the network computational cost by sifting out ineffective multi-scale features at different levels and iterations. We evaluate BiX-NAS on two segmentation tasks using three different medical image datasets, and the experimental results show that our BiX-NAS searched architecture achieves the state-of-the-art performance with significantly lower computational cost.
categories: cs.CV
__index_level_0__: 243,269
2012.08977
Visually Grounding Language Instruction for History-Dependent Manipulation
This paper emphasizes the importance of a robot's ability to refer to its task history, especially when it executes a series of pick-and-place manipulations by following language instructions given one by one. The advantage of referring to the manipulation history is twofold: (1) language instructions that omit details but use expressions referring to the past can be interpreted, and (2) the visual information of objects occluded by previous manipulations can be inferred. For this, we introduce a history-dependent manipulation task whose objective is to visually ground a series of language instructions for proper pick-and-place manipulations by referring to the past. We also provide a relevant dataset and a model that can serve as a baseline, and show that our model trained with the proposed dataset can also be applied to the real world based on CycleGAN. Our dataset and code are publicly available on the project website: https://sites.google.com/view/history-dependent-manipulation.
categories: cs.LG, cs.RO
__index_level_0__: 211,927
1907.02188
Classifying Multi-Gas Spectrums using Monte Carlo KNN and Multi-Resolution CNN
A Monte Carlo k-nearest neighbours (KNN) method and a multi-resolution convolutional neural network (CNN) were developed to detect the presence of multiple gases in near infrared (IR) spectrums. The High Resolution Transmission database was used to synthesize the near IR spectrums. The Monte Carlo KNN determined the optimal kernel sizes and the optimal number of channels. The multi-resolution CNN, composed of multiple different kernels, was created using the optimal kernel sizes and the optimal number of channels. The multi-resolution CNN outperforms the multilayer perceptron and the partial least squares.
categories: cs.LG
__index_level_0__: 137,547
1902.11015
Mobile Formation Coordination and Tracking Control for Multiple Non-holonomic Vehicles
This paper addresses forward motion control for trajectory tracking and mobile formation coordination for a group of non-holonomic vehicles on SE(2). Firstly, by constructing an intermediate attitude variable which involves vehicles' position information and desired attitude, the translational and rotational control inputs are designed in two stages to solve the trajectory tracking problem. Secondly, the coordination relationships of relative positions and headings are explored thoroughly for a group of non-holonomic vehicles to maintain a mobile formation with rigid body motion constraints. We prove that, except for the cases of parallel formation and translational straight line formation, a mobile formation with strict rigid-body motion can be achieved if and only if the ratios of linear speed to angular speed for each individual vehicle are constants. Motion properties for mobile formation with weak rigid-body motion are also demonstrated. Thereafter, based on the proposed trajectory tracking approach, a distributed mobile formation control law is designed under a directed tree graph. The performance of the proposed controllers is validated by both numerical simulations and experiments.
categories: cs.RO, cs.SY, cs.MA
__index_level_0__: 122,826
2111.15242
ConDA: Unsupervised Domain Adaptation for LiDAR Segmentation via Regularized Domain Concatenation
Transferring knowledge learned from the labeled source domain to the raw target domain for unsupervised domain adaptation (UDA) is essential to the scalable deployment of autonomous driving systems. State-of-the-art methods in UDA often employ a key idea: utilizing joint supervision signals from both source and target domains for self-training. In this work, we improve and extend this aspect. We present ConDA, a concatenation-based domain adaptation framework for LiDAR segmentation that: 1) constructs an intermediate domain consisting of fine-grained interchange signals from both source and target domains without destabilizing the semantic coherency of objects and background around the ego-vehicle; and 2) utilizes the intermediate domain for self-training. To improve the network training on the source domain and self-training on the intermediate domain, we propose an anti-aliasing regularizer and an entropy aggregator to reduce the negative effect caused by the aliasing artifacts and noisy pseudo labels. Through extensive studies, we demonstrate that ConDA significantly outperforms prior arts in mitigating domain gaps.
categories: cs.LG, cs.RO, cs.CV
__index_level_0__: 268,866
0909.1817
Cooperative Transmission for a Vector Gaussian Parallel Relay Network
In this paper, we consider a parallel relay network where two relays cooperatively help a source transmit to a destination. We assume the source and the destination nodes are equipped with multiple antennas. Three basic schemes and their achievable rates are studied: Decode-and-Forward (DF), Amplify-and-Forward (AF), and Compress-and-Forward (CF). For the DF scheme, the source transmits two private signals, one for each relay, where dirty paper coding (DPC) is used between the two private streams, and a common signal for both relays. The relays make efficient use of the common information to introduce a proper amount of correlation in the transmission to the destination. We show that the DF scheme achieves the capacity under certain conditions. We also show that the CF scheme is asymptotically optimal in the high relay power limit, regardless of channel ranks. It turns out that the AF scheme also achieves the asymptotic optimality but only when the relays-to-destination channel is full rank. The relative advantages of the three schemes are discussed with numerical results.
categories: cs.IT
__index_level_0__: 4,465
2107.03936
Graph Neural Pre-training for Enhancing Recommendations using Side Information
Leveraging the side information associated with entities (i.e. users and items) to enhance the performance of recommendation systems has been widely recognized as an important modelling dimension. While many existing approaches focus on the integration scheme to incorporate entity side information -- by combining the recommendation loss function with an extra side information-aware loss -- in this paper, we propose instead a novel pre-training scheme for leveraging the side information. In particular, we first pre-train a representation model using the side information of the entities, and then fine-tune it using an existing general representation-based recommendation model. Specifically, we propose two pre-training models, named GCN-P and COM-P, by considering the entities and their relations constructed from side information as two different types of graphs respectively, to pre-train entity embeddings. For the GCN-P model, two single-relational graphs are constructed from all the users' and items' side information respectively, to pre-train entity representations by using the Graph Convolutional Networks. For the COM-P model, two multi-relational graphs are constructed to pre-train the entity representations by using the Composition-based Graph Convolutional Networks. An extensive evaluation of our pre-training models fine-tuned under four general representation-based recommender models, i.e. MF, NCF, NGCF and LightGCN, shows that effectively pre-training embeddings with both the user's and item's side information can significantly improve these original models in terms of both effectiveness and stability.
categories: cs.IR
__index_level_0__: 245,296
2110.12536
Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels
The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at Apple and find that conventional confusion matrices do not support more complex data-structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo's utility with three model evaluation scenarios that help people better understand model performance and reveal hidden confusions.
categories: cs.HC, cs.AI, cs.LG
__index_level_0__: 262,875
2501.02792
Gaming on Coincident Peak Shaving: Equilibrium and Strategic Behavior
Coincident peak demand charges are imposed by power system operators or electric utilities when the overall system demand, aggregated across multiple consumers, reaches its peak. These charges incentivize consumers to reduce their demand during peak periods, a practice known as coincident peak shaving. In this paper, we analyze the coincident peak shaving problem through the lens of game theory, developing a theoretical model to examine the impact of strategic consumer behavior on system efficiency. We demonstrate that the game structure exhibits varying characteristics - concave, quasiconcave/discontinuous, or non-concave/discontinuous - depending on the extent of consumers demand-shifting capabilities. For a two-agent, two-period setting, we derive closed-form Nash equilibrium solutions under each condition and generalize our findings to cases with multiple agents. We prove the stability of the equilibrium points and present an algorithm for computing equilibrium outcomes across all game scenarios. We also show that the peak-shaving effectiveness of the game model matches that of the centralized peak-shaving model but with increased levels of anarchy. In the cases of quasiconcave and non-concave game conditions, we analytically demonstrate in the two-agent setting that anarchy increases with consumers' flexibility and inequity, as measured by their marginal shifting costs, and we also analyze the influence of the number of agents on anarchy. Finally, we provide numerical simulations to validate our theoretical results.
categories: cs.SY
__index_level_0__: 522,634
2108.12530
Combining chest X-rays and electronic health record (EHR) data using machine learning to diagnose acute respiratory failure
Objective: When patients develop acute respiratory failure, accurately identifying the underlying etiology is essential for determining the best treatment. However, differentiating between common medical diagnoses can be challenging in clinical practice. Machine learning models could improve medical diagnosis by aiding in the diagnostic evaluation of these patients. Materials and Methods: Machine learning models were trained to predict the common causes of acute respiratory failure (pneumonia, heart failure, and/or COPD). Models were trained using chest radiographs and clinical data from the electronic health record (EHR) and applied to an internal and external cohort. Results: The internal cohort of 1,618 patients included 508 (31%) with pneumonia, 363 (22%) with heart failure, and 137 (8%) with COPD based on physician chart review. A model combining chest radiographs and EHR data outperformed models based on each modality alone. Models had similar or better performance compared to a randomly selected physician reviewer. For pneumonia, the combined model area under the receiver operating characteristic curve (AUROC) was 0.79 (0.77-0.79), image model AUROC was 0.74 (0.72-0.75), and EHR model AUROC was 0.74 (0.70-0.76). For heart failure, combined: 0.83 (0.77-0.84), image: 0.80 (0.71-0.81), and EHR: 0.79 (0.75-0.82). For COPD, combined: AUROC = 0.88 (0.83-0.91), image: 0.83 (0.77-0.89), and EHR: 0.80 (0.76-0.84). In the external cohort, performance was consistent for heart failure and increased for COPD, but declined slightly for pneumonia. Conclusions: Machine learning models combining chest radiographs and EHR data can accurately differentiate between common causes of acute respiratory failure. Further work is needed to determine how these models could act as a diagnostic aid to clinicians in clinical settings.
categories: cs.AI, cs.LG, cs.CV
__index_level_0__: 252,520
2501.08888
A Partial Initialization Strategy to Mitigate the Overfitting Problem in CATE Estimation with Hidden Confounding
Estimating the conditional average treatment effect (CATE) from observational data plays a crucial role in areas such as e-commerce, healthcare, and economics. Existing studies mainly rely on the strong ignorability assumption that there are no hidden confounders, whose existence cannot be tested from observational data and can invalidate any causal conclusion. In contrast, data collected from randomized controlled trials (RCT) do not suffer from confounding but are usually limited by a small sample size. To avoid overfitting caused by the small-scale RCT data, we propose a novel two-stage pretraining-finetuning (TSPF) framework with a partial parameter initialization strategy to estimate the CATE in the presence of hidden confounding. In the first stage, a foundational representation of covariates is trained to estimate counterfactual outcomes through large-scale observational data. In the second stage, we propose to train an augmented representation of the covariates, which is concatenated with the foundational representation obtained in the first stage to adjust for the hidden confounding. Rather than training a separate network from scratch, part of the prediction heads are initialized from the first stage. The superiority of our approach is validated on two datasets with extensive experiments.
categories: cs.LG
__index_level_0__: 524,930
1207.0206
Alternative Restart Strategies for CMA-ES
This paper focuses on the restart strategy of CMA-ES on multi-modal functions. A first alternative strategy proceeds by decreasing the initial step-size of the mutation while doubling the population size at each restart. A second strategy adaptively allocates the computational budget among the restart settings in the BIPOP scheme. Both restart strategies are validated on the BBOB benchmark; their generality is also demonstrated on an independent real-world problem suite related to spacecraft trajectory optimization.
categories: cs.AI
__index_level_0__: 17,145
2008.13335
Quaternion-Based Self-Attentive Long Short-Term User Preference Encoding for Recommendation
Quaternion space has brought several benefits over the traditional Euclidean space: Quaternions (i) consist of a real and three imaginary components, encouraging richer representations; (ii) utilize Hamilton product which better encodes the inter-latent interactions across multiple Quaternion components; and (iii) result in a model with smaller degrees of freedom and less prone to overfitting. Unfortunately, most of the current recommender systems rely on real-valued representations in Euclidean space to model either user's long-term or short-term interests. In this paper, we fully utilize Quaternion space to model both user's long-term and short-term preferences. We first propose a QUaternion-based self-Attentive Long term user Encoding (QUALE) to study the user's long-term intents. Then, we propose a QUaternion-based self-Attentive Short term user Encoding (QUASE) to learn the user's short-term interests. To enhance our models' capability, we propose to fuse QUALE and QUASE into one model, namely QUALSE, by using a Quaternion-based gating mechanism. We further develop Quaternion-based Adversarial learning along with the Bayesian Personalized Ranking (QABPR) to improve our model's robustness. Extensive experiments on six real-world datasets show that our fused QUALSE model outperformed 11 state-of-the-art baselines, improving 8.43% at HIT@1 and 10.27% at NDCG@1 on average compared with the best baseline.
categories: cs.HC, cs.AI, cs.IR, cs.NE
__index_level_0__: 193,819
2302.14168
Signal Propagation in Double Edged Relays
A discrete signal propagation model blending characteristics of linear wave propagation and finite state automata is developed. We show this model obeys a limited form of superposition and is capable of displaying a wide variety of interesting behaviors. We show how the model's superposition properties permit information to be encoded and retained by signals that pass through discrete networks. We outline a SPIDER model replacement for Dijkstra's algorithm.
categories: cs.IT, Other
__index_level_0__: 348,182
2303.02995
HiCLIP: Contrastive Language-Image Pretraining with Hierarchy-aware Attention
The success of large-scale contrastive vision-language pretraining (CLIP) has benefited both visual recognition and multimodal content understanding. The concise design brings CLIP the advantage in inference efficiency against other vision-language models with heavier cross-attention fusion layers, making it a popular choice for a wide spectrum of downstream tasks. However, CLIP does not explicitly capture the hierarchical nature of high-level and fine-grained semantics conveyed in images and texts, which is arguably critical to vision-language understanding and reasoning. To this end, we equip both the visual and language branches in CLIP with hierarchy-aware attentions, namely Hierarchy-aware CLIP (HiCLIP), to progressively discover semantic hierarchies layer-by-layer from both images and texts in an unsupervised manner. As a result, such hierarchical aggregation significantly improves the cross-modal alignment. To demonstrate the advantages of HiCLIP, we conduct qualitative analysis on its unsupervised hierarchy induction during inference, as well as extensive quantitative experiments on both visual recognition and vision-language downstream tasks.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
349,569
2303.13590
Une comparaison des algorithmes d'apprentissage pour la survie avec donn\'ees manquantes
Survival analysis is an essential tool for the study of health data. An inherent component of such data is the presence of missing values. In recent years, researchers proposed new learning algorithms for survival tasks based on neural networks. Here, we studied the predictive performance of such algorithms coupled with different methods for handling missing values on simulated data that reflect a realistic situation, i.e., when individuals belong to unobserved clusters. We investigated different patterns of missing data. The results show that, without further feature engineering, no single imputation method is better than the others in all cases. The proposed methodology can be used to compare other missing data patterns and/or survival models. The Python code is accessible via the package survivalsim.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
353,744
1906.02124
PatentBERT: Patent Classification with Fine-Tuning a pre-trained BERT Model
In this work we focus on fine-tuning a pre-trained BERT model and applying it to patent classification. When applied to large datasets of over two million patents, our approach outperforms the previous state of the art, an approach using a CNN with word embeddings. In addition, we focus on patent claims without other parts of patent documents. Our contributions include: (1) a new state-of-the-art method based on fine-tuning a pre-trained BERT model for patent classification, (2) a large dataset, USPTO-3M, at the CPC subclass level with SQL statements that can be used by future researchers, and (3) showing that patent claims alone are sufficient for the classification task, in contrast to conventional wisdom.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
133,948
2001.04766
Self-reciprocal and self-conjugate-reciprocal irreducible factors of $x^n-\lambda$ and their applications
In this paper, we present some necessary and sufficient conditions under which an irreducible polynomial is self-reciprocal (SR) or self-conjugate-reciprocal (SCR). By these characterizations, we obtain some enumeration formulas of SR and SCR irreducible factors of $x^n-\lambda$, $\lambda\in \Bbb F_q^*$, over $\Bbb F_q$, which are just open questions posed by Boripan {\em et al} (2019). We also count the numbers of Euclidean and Hermitian LCD constacyclic codes and show some well-known results on Euclidean and Hermitian self-dual constacyclic codes in a simple and direct way.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
160,347
2305.13137
EMNS /Imz/ Corpus: An emotive single-speaker dataset for narrative storytelling in games, television and graphic novels
The increasing adoption of text-to-speech technologies has led to a growing demand for natural and emotive voices that adapt to a conversation's context and emotional tone. The Emotive Narrative Storytelling (EMNS) corpus is a unique speech dataset created to enhance conversations' expressiveness and emotive quality in interactive narrative-driven systems. The corpus consists of a 2.3-hour recording featuring a female speaker delivering labelled utterances. It encompasses eight acted emotional states, evenly distributed with a variance of 0.68%, along with expressiveness levels and natural language descriptions with word emphasis labels. The evaluation of audio samples from different datasets revealed that the EMNS corpus achieved the highest average scores in accurately conveying emotions and demonstrating expressiveness. It outperformed other datasets in conveying shared emotions and achieved comparable levels of genuineness. A classification task confirmed the accurate representation of intended emotions in the corpus, with participants recognising the recordings as genuine and expressive. Additionally, the availability of the dataset collection tool under the Apache 2.0 License simplifies remote speech data collection for researchers.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
366,364
2211.08742
Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN
Auditing machine learning-based (ML) healthcare tools for bias is critical to preventing patient harm, especially in communities that disproportionately face health inequities. General frameworks are becoming increasingly available to measure ML fairness gaps between groups. However, ML for health (ML4H) auditing principles call for a contextual, patient-centered approach to model assessment. Therefore, ML auditing tools must be (1) better aligned with ML4H auditing principles and (2) able to illuminate and characterize communities vulnerable to the most harm. To address this gap, we propose supplementing ML4H auditing frameworks with SLOGAN (patient Severity-based LOcal Group biAs detectioN), an automatic tool for capturing local biases in a clinical prediction task. SLOGAN adapts an existing tool, LOGAN (LOcal Group biAs detectioN), by contextualizing group bias detection in patient illness severity and past medical history. We investigate and compare SLOGAN's bias detection capabilities to LOGAN and other clustering techniques across patient subgroups in the MIMIC-III dataset. On average, SLOGAN identifies larger fairness disparities in over 75% of patient groups than LOGAN while maintaining clustering quality. Furthermore, in a diabetes case study, health disparity literature corroborates the characterizations of the most biased clusters identified by SLOGAN. Our results contribute to the broader discussion of how machine learning biases may perpetuate existing healthcare disparities.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
330,758
2312.04062
A Low-Overhead Incorporation-Extrapolation based Few-Shot CSI Feedback Framework for Massive MIMO Systems
Accurate channel state information (CSI) is essential for downlink precoding in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems with orthogonal frequency-division multiplexing (OFDM). However, obtaining CSI through feedback from the user equipment (UE) becomes challenging with the increasing scale of antennas and subcarriers and leads to extremely high CSI feedback overhead. Deep learning-based methods have emerged for compressing CSI but these methods generally require substantial collected samples and thus pose practical challenges. Moreover, existing deep learning methods also suffer from dramatically growing feedback overhead owing to their focus on full-dimensional CSI feedback. To address these issues, we propose a low-overhead Incorporation-Extrapolation based Few-Shot CSI feedback Framework (IEFSF) for massive MIMO systems. An incorporation-extrapolation scheme for eigenvector-based CSI feedback is proposed to reduce the feedback overhead. Then, to alleviate the necessity of extensive collected samples and enable few-shot CSI feedback, we further propose a knowledge-driven data augmentation (KDDA) method and an artificial intelligence-generated content (AIGC)-based data augmentation method by exploiting the domain knowledge of wireless channels and by exploiting a novel generative model, respectively. Experimental results based on the DeepMIMO dataset demonstrate that the proposed IEFSF significantly reduces CSI feedback overhead by 64 times compared with existing methods while maintaining higher feedback accuracy using only several hundred collected samples.
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
413,533
2306.09478
Understanding and Mitigating Extrapolation Failures in Physics-Informed Neural Networks
Physics-informed Neural Networks (PINNs) have recently gained popularity due to their effective approximation of partial differential equations (PDEs) using deep neural networks (DNNs). However, their out-of-domain behavior is not well understood, with previous work speculating that the presence of high frequency components in the solution function might be to blame for poor extrapolation performance. In this paper, we study the extrapolation behavior of PINNs on a representative set of PDEs of different types, including high-dimensional PDEs. We find that failure to extrapolate is not caused by high frequencies in the solution function, but rather by shifts in the support of the Fourier spectrum over time. We term these spectral shifts and quantify them by introducing a Weighted Wasserstein-Fourier distance (WWF). We show that the WWF can be used to predict PINN extrapolation performance, and that in the absence of significant spectral shifts, PINN predictions stay close to the true solution even in extrapolation. Finally, we propose a transfer learning-based strategy to mitigate the effects of larger spectral shifts, which decreases extrapolation errors by up to 82%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
373,845
2001.09251
Deep Reinforcement Learning based Blind mmWave MIMO Beam Alignment
Directional beamforming is a crucial component for realizing robust wireless communication systems using millimeter wave (mmWave) technology. Beam alignment using brute-force search of the space introduces time overhead while location aided blind beam alignment adds additional hardware requirements to the system. In this paper, we introduce a method for blind beam alignment based on the RF fingerprints of user equipment obtained by the base stations. The proposed system performs blind beam alignment on a multiple base station cellular environment with multiple mobile users using deep reinforcement learning. We present a novel neural network architecture that can handle a mix of both continuous and discrete actions and use policy gradient methods to train the model. Our results show that the proposed method can achieve a data rate of up to four times the traditional method without any overheads.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
161,516
2312.03026
Uni3DL: Unified Model for 3D and Language Understanding
In this work, we present Uni3DL, a unified model for 3D and Language understanding. Distinct from existing unified vision-language models in 3D which are limited in task variety and predominantly dependent on projected multi-view images, Uni3DL operates directly on point clouds. This approach significantly expands the range of supported tasks in 3D, encompassing both vision and vision-language tasks in 3D. At the core of Uni3DL, a query transformer is designed to learn task-agnostic semantic and mask outputs by attending to 3D visual features, and a task router is employed to selectively generate task-specific outputs required for diverse tasks. With a unified architecture, our Uni3DL model enjoys seamless task decomposition and substantial parameter sharing across tasks. Uni3DL has been rigorously evaluated across diverse 3D vision-language understanding tasks, including semantic segmentation, object detection, instance segmentation, visual grounding, 3D captioning, and text-3D cross-modal retrieval. It demonstrates performance on par with or surpassing state-of-the-art (SOTA) task-specific models. We hope our benchmark and Uni3DL model will serve as a solid step to ease future research in unified models in the realm of 3D and language understanding. Project page: https://uni3dl.github.io.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
413,102
2411.00347
An Untethered Bioinspired Robotic Tensegrity Dolphin with Multi-Flexibility Design for Aquatic Locomotion
This paper presents the first steps toward a soft dolphin robot using a bio-inspired approach to mimic dolphin flexibility. The current dolphin robot uses a minimalist approach, with only two cable-driven degrees of freedom actuated by a pair of motors. The actuated tail moves up and down in a swimming motion, but this first proof of concept does not permit controlled turns of the robot. While existing robotic dolphins typically use revolute joints to articulate rigid bodies, our design -- which will be made open-source -- incorporates a flexible tail with tunable silicone skin and actuation flexibility via a cable-driven system, which mimics muscle dynamics and design flexibility with a tunable skeleton structure. The design is also tunable since the backbone can be easily printed in various geometries. The paper provides insights into how a few such variations affect robot motion and efficiency, measured by speed and cost of transport (COT). This approach demonstrates the potential of achieving dolphin-like motion through enhanced flexibility in bio-inspired robotics.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
504,551
2011.06236
Adaptive Force-based Control for Legged Robots
Adaptive control can address model uncertainty in control systems. However, it was primarily designed for tracking control. Recent advancements in the control of quadruped robots show that force control can effectively realize agile and robust locomotion. In this paper, we present a novel adaptive force-based control framework for legged robots. We introduce a new architecture in our proposed approach to incorporate adaptive control into quadratic programming (QP) force control. Since our approach is based on force control, it also retains the advantages of the baseline framework, such as robustness to uneven terrain, controllable friction constraints, or soft impacts. Our method is successfully validated in both simulation and hardware experiments. While the baseline QP control has shown a significant degradation in the body tracking error with a small load, our proposed adaptive force-based control can enable the 12-kg Unitree A1 robot to walk on rough terrains while carrying a heavy load of up to 6 kg (50% of the robot weight). When standing with four legs, our proposed adaptive control can even allow the robot to carry up to 11 kg of load (92% of the robot weight) with less than 5-cm tracking error in the robot height.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
206,188
2501.13743
GPT-HTree: A Decision Tree Framework Integrating Hierarchical Clustering and Large Language Models for Explainable Classification
This paper introduces GPT-HTree, a framework combining hierarchical clustering, decision trees, and large language models (LLMs) to address the challenge of explainable classification. By leveraging hierarchical clustering to segment individuals based on salient features, resampling techniques to balance class distributions, and decision trees to tailor classification paths within each cluster, GPT-HTree ensures both accuracy and interpretability. LLMs enhance the framework by generating human-readable cluster descriptions, bridging quantitative analysis with actionable insights.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
526,803
2410.10567
When Precedents Clash
Consistency of case bases is a way to avoid the problem of retrieving conflicting constraining precedents for new cases to be decided. However, in legal practice the consistency requirements for case bases may not be satisfied. As pointed out in (Broughton 2019), a model of precedential constraint should take into account the hierarchical structure of the specific legal system under consideration and the temporal dimension of cases. This article continues the research initiated in (Liu et al. 2022; Di Florio et al. 2023), which established a connection between Boolean classifiers and legal case-based reasoning. On this basis, we enrich the classifier models with an organisational structure that takes into account both the hierarchy of courts and which courts issue decisions that are binding/constraining on subsequent cases. We focus on common law systems. We also introduce a temporal relation between cases. Within this enriched framework, we can formalise the notions of overruled cases and cases decided per incuriam: such cases are not to be considered binding on later cases. Finally, we show under which condition principles based on the hierarchical structure and on the temporal dimension can provide an unambiguous decision-making process for new cases in the presence of conflicting binding precedents.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
498,135
1512.00210
Quantized Message Passing for LDPC Codes
We propose a quantized decoding algorithm for low-density parity-check codes where the variable node update rule of the standard min-sum algorithm is replaced with a look-up table (LUT) that is designed using an information-theoretic criterion. We show that even with message resolutions as low as 3 bits, the proposed algorithm can achieve better error rates than a floating-point min-sum decoder. Moreover, we study in detail the effect of different decoder design parameters, like the design SNR and the LUT tree structure on the performance of our decoder, and we propose some complexity reduction techniques, such as LUT re-use and message alphabet downsizing.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
49,689
2112.11824
Binary Image Skeletonization Using 2-Stage U-Net
Object Skeletonization is the process of extracting skeletal, line-like representations of shapes. It provides a very useful tool for geometric shape understanding and minimal shape representation. It also has a wide variety of applications, most notably in anatomical research and activity detection. Several mathematical algorithmic approaches have been developed to solve this problem, and some of them have been proven quite robust. However, deep learning solutions to this problem have received comparatively little attention. In this paper, we use a 2-stage variant of the famous U-Net architecture to split the problem space into two sub-problems: shape minimization and corrective skeleton thinning. Our model produces results that are visually much better than the baseline SkelNetOn model. We propose a new metric, M-CCORR, based on normalized correlation coefficients as an alternative to F1 for this challenge as it solves the problem of class imbalance, managing to recognize skeleton similarity without suffering from F1's over-sensitivity to pixel-shifts.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
272,812
1705.05933
Sub-sampled Cubic Regularization for Non-convex Optimization
We consider the minimization of non-convex functions that typically arise in machine learning. Specifically, we focus our attention on a variant of trust region methods known as cubic regularization. This approach is particularly attractive because it escapes strict saddle points and it provides stronger convergence guarantees than first- and second-order as well as classical trust region methods. However, it suffers from a high computational complexity that makes it impractical for large-scale learning. Here, we propose a novel method that uses sub-sampling to lower this computational cost. By the use of concentration inequalities we provide a sampling scheme that gives sufficiently accurate gradient and Hessian approximations to retain the strong global and local convergence guarantees of cubically regularized methods. To the best of our knowledge this is the first work that gives global convergence guarantees for a sub-sampled variant of cubic regularization on non-convex functions. Furthermore, we provide experimental results supporting our theory.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
73,567
2412.14744
A parametric algorithm is optimal for non-parametric regression of smooth functions
We address the regression problem for a general function $f:[-1,1]^d\to \mathbb R$ when the learner selects the training points $\{x_i\}_{i=1}^n$ to achieve a uniform error bound across the entire domain. In this setting, known historically as nonparametric regression, we aim to establish a sample complexity bound that depends solely on the function's degree of smoothness. Assuming periodicity at the domain boundaries, we introduce PADUA, an algorithm that, with high probability, provides performance guarantees optimal up to constant or logarithmic factors across all problem parameters. Notably, PADUA is the first parametric algorithm with optimal sample complexity for this setting. Due to this feature, we prove that, differently from the non-parametric state of the art, PADUA enjoys optimal space complexity in the prediction phase. To validate these results, we perform numerical experiments over functions coming from real audio data, where PADUA shows comparable performance to state-of-the-art methods, while requiring only a fraction of the computational time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
518,847
nlin/0411066
Self-Organizing Traffic Lights
Steering traffic in cities is a very complex task, since improving efficiency involves the coordination of many actors. Traditional approaches attempt to optimize traffic lights for a particular density and configuration of traffic. The disadvantage of this lies in the fact that traffic densities and configurations change constantly. Traffic seems to be an adaptation problem rather than an optimization problem. We propose a simple and feasible alternative, in which traffic lights self-organize to improve traffic flow. We use a multi-agent simulation to study three self-organizing methods, which are able to outperform traditional rigid and adaptive methods. Using simple rules and no direct communication, traffic lights are able to self-organize and adapt to changing traffic conditions, reducing waiting times, number of stopped cars, and increasing average speeds.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
540,784
2009.12747
Smart Irrigation IoT Solution using Transfer Learning for Neural Networks
In this paper we develop a reliable system for smart irrigation of greenhouses using artificial neural networks and an IoT architecture. Our solution uses four sensors in different layers of soil to predict future moisture. Using a dataset we collected by running experiments on different soils, we show the high performance of neural networks compared to the existing alternative method of support vector regression. To reduce the processing power required by the neural network on IoT edge devices, we propose using transfer learning. Transfer learning also speeds up training with a small amount of training data and allows integrating climate sensors into a pre-trained model, addressing the other two challenges of smart irrigation of greenhouses. Our proposed IoT architecture shows a complete solution for smart irrigation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
197,524
2003.13260
TapLab: A Fast Framework for Semantic Video Segmentation Tapping into Compressed-Domain Knowledge
Real-time semantic video segmentation is a challenging task due to the strict requirements of inference speed. Recent approaches mainly devote great efforts to reducing the model size for high efficiency. In this paper, we rethink this problem from a different viewpoint: using knowledge contained in compressed videos. We propose a simple and effective framework, dubbed TapLab, to tap into resources from the compressed domain. Specifically, we design a fast feature warping module using motion vectors for acceleration. To reduce the noise introduced by motion vectors, we design a residual-guided correction module and a residual-guided frame selection module using residuals. TapLab significantly reduces redundant computations of the state-of-the-art fast semantic image segmentation models, running 3 to 10 times faster with controllable accuracy degradation. The experimental results show that TapLab achieves 70.6% mIoU on the Cityscapes dataset at 99.8 FPS with a single GPU card for the 1024x2048 videos. A high-speed version even reaches the speed of 160+ FPS. Codes will be available soon at https://github.com/Sixkplus/TapLab.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
170,157
2002.07953
Universal Domain Adaptation through Self Supervision
Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. Implementation is available at https://github.com/VisionLearningGroup/DANCE.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
164,614
2104.08717
Do We Really Need Dice? The Hidden Region-Size Biases of Segmentation Losses
Most segmentation losses are arguably variants of the Cross-Entropy (CE) or Dice losses. On the surface, these two categories of losses seem unrelated, and there is no clear consensus as to which category is a better choice, with varying performances for each across different benchmarks and applications. Furthermore, it is widely argued within the medical-imaging community that Dice and CE are complementary, which has motivated the use of compound CE-Dice losses. In this work, we provide a theoretical analysis, which shows that CE and Dice share a much deeper connection than previously thought. First, we show that, from a constrained-optimization perspective, they both decompose into two components, i.e., a similar ground-truth matching term, which pushes the predicted foreground regions towards the ground-truth, and a region-size penalty term imposing different biases on the size of the predicted regions. Then, we provide bound relationships and an information-theoretic analysis, which uncover hidden region-size biases: Dice has an intrinsic bias towards specific extremely imbalanced solutions, whereas CE implicitly encourages the ground-truth region proportions. Our theoretical results explain the wide experimental evidence in the medical-imaging literature, whereby Dice losses bring improvements for imbalanced segmentation. Based on our theoretical analysis, we propose a principled and simple solution, which enables explicit control of the region-size bias. The proposed method integrates CE with explicit terms based on L1 or the KL divergence, which encourage segmenting region proportions to match target class proportions, thereby mitigating class imbalance but without losing generality. Comprehensive experiments and ablation studies over different losses and applications validate our theoretical analysis, as well as the effectiveness of explicit and simple region-size terms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
230,935
2203.06911
Non-Parametric Modeling of Spatio-Temporal Human Activity Based on Mobile Robot Observations
This work presents a non-parametric spatio-temporal model for mapping human activity by mobile autonomous robots in a long-term context. Based on Variational Gaussian Process Regression, the model incorporates prior information of spatial and temporal-periodic dependencies to create a continuous representation of human occurrences. The inhomogeneous data distribution resulting from movements of the robot is included in the model via a heteroscedastic likelihood function and can be accounted for as predictive uncertainty. Using a sparse formulation, data sets over multiple weeks and several hundred square meters can be used for model creation. The experimental evaluation, based on multi-week data sets, demonstrates that the proposed approach outperforms the state of the art both in terms of predictive quality and subsequent path planning.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
285,268
2012.01403
Empirical Study on the Software Engineering Practices in Open Source ML Package Repositories
Recent advances in Artificial Intelligence (AI), especially in Machine Learning (ML), have introduced various practical applications (e.g., virtual personal assistants and autonomous cars) that enhance the experience of everyday users. However, modern ML technologies like Deep Learning require considerable technical expertise and resources to develop, train and deploy such models, making effective reuse of the ML models a necessity. Such discovery and reuse by practitioners and researchers are being addressed by public ML package repositories, which bundle up pre-trained models into packages for publication. Since such repositories are a recent phenomenon, there is no empirical data on their current state and challenges. Hence, this paper conducts an exploratory study that analyzes the structure and contents of two popular ML package repositories, TFHub and PyTorch Hub, comparing their information elements (features and policies), package organization, package manager functionalities and usage contexts against popular software package repositories (npm, PyPI, and CRAN). Through these studies, we have identified unique SE practices and challenges for sharing ML packages. These findings and implications would be useful for data scientists, researchers and software developers who intend to use these shared ML packages.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
209,408
1611.04218
Preference Completion from Partial Rankings
We propose a novel and efficient algorithm for the collaborative preference completion problem, which involves jointly estimating individualized rankings for a set of entities over a shared set of items, based on a limited number of observed affinity values. Our approach exploits the observation that while preferences are often recorded as numerical scores, the predictive quantity of interest is the underlying rankings. Thus, attempts to closely match the recorded scores may lead to overfitting and impair generalization performance. Instead, we propose an estimator that directly fits the underlying preference order, combined with nuclear norm constraints to encourage low-rank parameters. Besides (approximate) correctness of the ranking order, the proposed estimator makes no generative assumption on the numerical scores of the observations. One consequence is that the proposed estimator can fit any consistent partial ranking over a subset of the items represented as a directed acyclic graph (DAG), generalizing standard techniques that can only fit preference scores. Despite this generality, for supervision representing total or blockwise total orders, the computational complexity of our algorithm is within a $\log$ factor of the standard algorithms for nuclear norm regularization based estimates for matrix completion. We further show promising empirical results for a novel and challenging application of collaboratively ranking the associations between brain regions and cognitive neuroscience terms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
63,809
1907.02886
Design and Characterization of Superconducting Nanowire-Based Processors for Acceleration of Deep Neural Network Training
Training of deep neural networks (DNNs) is a computationally intensive task and requires massive volumes of data transfer. Performing these operations with the conventional von Neumann architectures creates unmanageable time and power costs. Recent studies have shown that mixed-signal designs involving crossbar architectures are capable of achieving acceleration factors as high as 30,000x over state-of-the-art digital processors. These approaches involve utilization of non-volatile memory (NVM) elements as local processors. However, no technology has been developed to date that can satisfy the strict device requirements for the unit cell. This paper presents the superconducting nanowire-based processing element as a cross-point device. The unit cell has many programmable non-volatile states that can be used to perform analog multiplication. Importantly, these states are intrinsically discrete due to quantization of flux, which provides symmetric switching characteristics. Operation of these devices in a crossbar is described and verified with electro-thermal circuit simulations. Finally, validation of the concept in an actual DNN training task is shown using an emulator.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
137,712
1812.04998
Neural Processes Mixed-Effect Models for Deep Normative Modeling of Clinical Neuroimaging Data
Normative modeling has recently been introduced as a promising approach for modeling variation of neuroimaging measures across individuals in order to derive biomarkers of psychiatric disorders. Current implementations rely on Gaussian process regression, which provides coherent estimates of uncertainty needed for the method but also suffers from drawbacks including poor scaling to large datasets and a reliance on fixed parametric kernels. In this paper, we propose a deep normative modeling framework based on neural processes (NPs) to solve these problems. To achieve this, we define a stochastic process formulation for mixed-effect models and show how NPs can be adopted for spatially structured mixed-effect modeling of neuroimaging data. This enables us to learn optimal feature representations and covariance structure for the random-effect and noise via global latent variables. In this scheme, predictive uncertainty can be approximated by sampling from the distribution of these global latent variables. On a publicly available clinical fMRI dataset, we compare the novelty detection performance of multivariate normative models estimated by the proposed NP approach to a baseline multi-task Gaussian process regression approach and show substantial improvements for certain diagnostic problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
116,330
2012.03093
Semantic Segmentation of Medium-Resolution Satellite Imagery using Conditional Generative Adversarial Networks
Semantic segmentation of satellite imagery is a common approach to identify patterns and detect changes around the planet. Most of the state-of-the-art semantic segmentation models are trained in a fully supervised way using Convolutional Neural Network (CNN). The generalization property of CNN is poor for satellite imagery because the data can be very diverse in terms of landscape types, image resolutions, and scarcity of labels for different geographies and seasons. Hence, the performance of CNN doesn't translate well to images from unseen regions or seasons. Inspired by Conditional Generative Adversarial Networks (CGAN) based approach of image-to-image translation for high-resolution satellite imagery, we propose a CGAN framework for land cover classification using medium-resolution Sentinel-2 imagery. We find that the CGAN model outperforms the CNN model of similar complexity by a significant margin on an unseen imbalanced test dataset.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
209,985
2310.05753
Large-Scale OD Matrix Estimation with A Deep Learning Method
The estimation of origin-destination (OD) matrices is a crucial aspect of Intelligent Transport Systems (ITS). It involves adjusting an initial OD matrix by regressing the current observations like traffic counts of road sections (e.g., using least squares). However, the OD estimation problem lacks sufficient constraints and is mathematically underdetermined. To alleviate this problem, some researchers incorporate a prior OD matrix as a target in the regression to provide more structural constraints. However, this approach is highly dependent on the existing prior matrix, which may be outdated. Others add structural constraints through sensor data, such as vehicle trajectory and speed, which can reflect more current structural constraints in real-time. Our proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization. This approach combines the advantages of both deep learning and numerical optimization algorithms. The neural network (NN) learns to infer structural constraints from probe traffic flows, eliminating dependence on prior information and providing real-time performance. Additionally, due to the generalization capability of NN, this method is economical in engineering. We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset. Subsequently, we verified the stability of our method on real traffic data. Our experiments provided confirmation of the benefits of combining NN and numerical optimization.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
398,277
2203.08945
Provable Adversarial Robustness for Fractional Lp Threat Models
In recent years, researchers have extensively studied adversarial robustness in a variety of threat models, including L_0, L_1, L_2, and L_infinity-norm bounded adversarial attacks. However, attacks bounded by fractional L_p "norms" (quasi-norms defined by the L_p distance with 0<p<1) have yet to be thoroughly considered. We proactively propose a defense with several desirable properties: it provides provable (certified) robustness, scales to ImageNet, and yields deterministic (rather than high-probability) certified guarantees when applied to quantized data (e.g., images). Our technique for fractional L_p robustness constructs expressive, deep classifiers that are globally Lipschitz with respect to the L_p^p metric, for any 0<p<1. However, our method is even more general: we can construct classifiers which are globally Lipschitz with respect to any metric defined as the sum of concave functions of components. Our approach builds on a recent work, Levine and Feizi (2021), which provides a provable defense against L_1 attacks. However, we demonstrate that our proposed guarantees are highly non-vacuous, compared to the trivial solution of using (Levine and Feizi, 2021) directly and applying norm inequalities. Code is available at https://github.com/alevine0/fractionalLpRobustness.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
285,964
2111.14094
Topic Driven Adaptive Network for Cross-Domain Sentiment Classification
Cross-domain sentiment classification has been a hot topic in recent years, which aims to learn a reliable classifier using labeled data from a source domain and evaluate it on a target domain. In this vein, most approaches utilized domain adaptation that maps data from different domains into a common feature space. To further improve the model performance, several methods targeted to mine domain-specific information were proposed. However, most of them only utilized a limited part of domain-specific information. In this study, we first develop a method of extracting domain-specific words based on the topic information derived from topic models. Then, we propose a Topic Driven Adaptive Network (TDAN) for cross-domain sentiment classification. The network consists of two sub-networks: a semantics attention network and a domain-specific word attention network, the structures of which are based on transformers. These sub-networks take different forms of input and their outputs are fused as the feature vector. Experiments validate the effectiveness of our TDAN on sentiment classification across domains. Case studies also indicate that topic models have the potential to add value to cross-domain sentiment classification by discovering interpretable and low-dimensional subspaces.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
268,492
2405.03429
ReCycle: Fast and Efficient Long Time Series Forecasting with Residual Cyclic Transformers
Transformers have recently gained prominence in long time series forecasting by elevating accuracies in a variety of use cases. Regrettably, in the race for better predictive performance the overhead of model architectures has grown onerous, leading to models with computational demand infeasible for most practical applications. To bridge the gap between high method complexity and realistic computational resources, we introduce the Residual Cyclic Transformer, ReCycle. ReCycle utilizes primary cycle compression to address the computational complexity of the attention mechanism in long time series. By learning residuals from refined smoothing average techniques, ReCycle surpasses state-of-the-art accuracy in a variety of application use cases. The reliable and explainable fallback behavior ensured by simple, yet robust, smoothing average techniques additionally lowers the barrier for user acceptance. At the same time, our approach reduces the run time and energy consumption by more than an order of magnitude, making both training and inference feasible on low-performance, low-power and edge computing devices. Code is available at https://github.com/Helmholtz-AI-Energy/ReCycle
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
452,183
2109.14828
Uncertainty Estimation of Dense Optical-Flow for Robust Visual Navigation
This paper presents a novel dense optical-flow algorithm to solve the monocular simultaneous localization and mapping (SLAM) problem for ground or aerial robots. Dense optical flow can effectively provide the ego-motion of the vehicle while enabling collision avoidance with the potential obstacles. Existing work has not fully utilized the uncertainty of the optical flow -- at most an isotropic Gaussian density model. We estimate the full uncertainty of the optical flow and propose a new eight-point algorithm based on the statistical Mahalanobis distance. Combined with the pose-graph optimization, the proposed method demonstrates enhanced robustness and accuracy for the public autonomous car dataset (KITTI) and aerial monocular dataset.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
258,083
2211.10173
How Do Input Attributes Impact the Privacy Loss in Differential Privacy?
Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database. More recently, extensions to individual subjects or their attributes have been introduced. Under the individual/per-instance DP interpretation, we study the connection between the per-subject gradient norm in DP neural networks and individual privacy loss and introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes. We experimentally show how this enables the identification of sensitive attributes and of subjects at high risk of data reconstruction.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
331,232
2305.09447
Multi-Level Global Context Cross Consistency Model for Semi-Supervised Ultrasound Image Segmentation with Diffusion Model
Medical image segmentation is a critical step in computer-aided diagnosis, and convolutional neural networks are popular segmentation networks nowadays. However, the inherent local operation characteristics make it difficult to focus on the global contextual information of lesions with different positions, shapes, and sizes. Semi-supervised learning can be used to learn from both labeled and unlabeled samples, alleviating the burden of manual labeling. However, obtaining a large number of unlabeled images in medical scenarios remains challenging. To address these issues, we propose a Multi-level Global Context Cross-consistency (MGCC) framework that uses images generated by a Latent Diffusion Model (LDM) as unlabeled images for semi-supervised learning. The framework consists of two stages. In the first stage, an LDM is used to generate synthetic medical images, which reduces the workload of data annotation and addresses privacy concerns associated with collecting medical data. In the second stage, varying levels of global context noise perturbation are added to the input of the auxiliary decoder, and output consistency is maintained between decoders to improve the representation ability. Experiments conducted on open-source breast ultrasound and private thyroid ultrasound datasets demonstrate the effectiveness of our framework in bridging the probability distribution and the semantic representation of the medical image. Our approach enables the effective transfer of probability distribution knowledge to the segmentation network, resulting in improved segmentation accuracy. The code is available at https://github.com/FengheTan9/Multi-Level-Global-Context-Cross-Consistency.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
364,635
2201.03871
Combining Learning-based Locomotion Policy with Model-based Manipulation for Legged Mobile Manipulators
Deep reinforcement learning produces robust locomotion policies for legged robots over challenging terrains. To date, few studies have leveraged model-based methods to combine these locomotion skills with the precise control of manipulators. Here, we incorporate external dynamics plans into learning-based locomotion policies for mobile manipulation. We train the base policy by applying a random wrench sequence on the robot base in simulation and adding the noisified wrench sequence prediction to the policy observations. The policy then learns to counteract the partially-known future disturbance. The random wrench sequences are replaced with the wrench prediction generated with the dynamics plans from model predictive control to enable deployment. We show zero-shot adaptation for manipulators unseen during training. On the hardware, we demonstrate stable locomotion of legged robots with the prediction of the external wrench.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
274,960
1103.3095
A note on active learning for smooth problems
We show that the disagreement coefficient of certain smooth hypothesis classes is $O(m)$, where $m$ is the dimension of the hypothesis space, thereby answering a question posed in \cite{friedman09}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
9,628
1701.05524
Synthetic to Real Adaptation with Generative Correlation Alignment Networks
Synthetic images rendered from 3D CAD models are useful for augmenting training data for object recognition algorithms. However, the generated images are non-photorealistic and do not match real image statistics. This leads to a large domain discrepancy, causing models trained on synthetic data to perform poorly on real domains. Recent work has shown the great potential of deep convolutional neural networks to generate realistic images, but has not utilized generative models to address synthetic-to-real domain adaptation. In this work, we propose a Deep Generative Correlation Alignment Network (DGCAN) to synthesize images using a novel domain adaptation algorithm. DGCAN leverages a shape preserving loss and a low level statistic matching loss to minimize the domain discrepancy between synthetic and real images in deep feature space. Experimentally, we show training off-the-shelf classifiers on the newly generated data can significantly boost performance when testing on the real image domains (PASCAL VOC 2007 benchmark and Office dataset), improving upon several existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
66,995
2310.13073
Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
Within the realm of deep learning, the interpretability of Convolutional Neural Networks (CNNs), particularly in the context of image classification tasks, remains a formidable challenge. To this end we present a neurosymbolic framework, NeSyFOLD-G that generates a symbolic rule-set using the last layer kernels of the CNN to make its underlying knowledge interpretable. What makes NeSyFOLD-G different from other similar frameworks is that we first find groups of similar kernels in the CNN (kernel-grouping) using the cosine-similarity between the feature maps generated by various kernels. Once such kernel groups are found, we binarize each kernel group's output in the CNN and use it to generate a binarization table which serves as input data to FOLD-SE-M which is a Rule Based Machine Learning (RBML) algorithm. FOLD-SE-M then generates a rule-set that can be used to make predictions. We present a novel kernel grouping algorithm and show that grouping similar kernels leads to a significant reduction in the size of the rule-set generated by FOLD-SE-M, consequently, improving the interpretability. This rule-set symbolically encapsulates the connectionist knowledge of the trained CNN. The rule-set can be viewed as a normal logic program wherein each predicate's truth value depends on a kernel group in the CNN. Each predicate in the rule-set is mapped to a concept using a few semantic segmentation masks of the images used for training, to make it human-understandable. The last layers of the CNN can then be replaced by this rule-set to obtain the NeSy-G model which can then be used for the image classification task. The goal directed ASP system s(CASP) can be used to obtain the justification of any prediction made using the NeSy-G model. We also propose a novel algorithm for labeling each predicate in the rule-set with the semantic concept(s) that its corresponding kernel group represents.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
401,280
2307.06118
TreeFormer: a Semi-Supervised Transformer-based Framework for Tree Counting from a Single High Resolution Image
Automatic tree density estimation and counting using single aerial and satellite images is a challenging task in photogrammetry and remote sensing, yet has an important role in forest management. In this paper, we propose the first semi-supervised transformer-based framework for tree counting which reduces the expensive tree annotations for remote sensing images. Our method, termed TreeFormer, first develops a pyramid tree representation module based on transformer blocks to extract multi-scale features during the encoding stage. Contextual attention-based feature fusion and tree density regressor modules are further designed to utilize the robust features from the encoder to estimate tree density maps in the decoder. Moreover, we propose a pyramid learning strategy that includes local tree density consistency and local tree count ranking losses to utilize unlabeled images into the training process. Finally, the tree counter token is introduced to regulate the network by computing the global tree counts for both labeled and unlabeled images. Our model was evaluated on two benchmark tree counting datasets, Jiangsu, and Yosemite, as well as a new dataset, KCL-London, created by ourselves. Our TreeFormer outperforms the state-of-the-art semi-supervised methods under the same setting and exceeds the fully-supervised methods using the same number of labeled images. The codes and datasets are available at https://github.com/HAAClassic/TreeFormer.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
378,971
1905.08772
A Text Classification Framework for Simple and Effective Early Depression Detection Over Social Media Streams
With the rise of the Internet, there is a growing need to build intelligent systems that are capable of efficiently dealing with early risk detection (ERD) problems on social media, such as early depression detection, early rumor detection or identification of sexual predators. These systems, nowadays mostly based on machine learning techniques, must be able to deal with data streams since users provide their data over time. In addition, these systems must be able to decide when the processed data is sufficient to actually classify users. Moreover, since ERD tasks involve risky decisions by which people's lives could be affected, such systems must also be able to justify their decisions. However, most standard and state-of-the-art supervised machine learning models are not well suited to deal with this scenario. This is due to the fact that they either act as black boxes or do not support incremental classification/learning. In this paper we introduce SS3, a novel supervised learning model for text classification that naturally supports these aspects. SS3 was designed to be used as a general framework to deal with ERD problems. We evaluated our model on the CLEF's eRisk2017 pilot task on early depression detection. Most of the 30 contributions submitted to this competition used state-of-the-art methods. Experimental results show that our classifier was able to outperform these models and standard classifiers, despite being less computationally expensive and having the ability to explain its rationale.
false
false
false
true
false
true
true
false
true
false
false
false
false
true
false
false
false
false
131,561
2409.05900
Memory-Optimized Once-For-All Network
Deploying Deep Neural Networks (DNNs) on different hardware platforms is challenging due to varying resource constraints. Besides handcrafted approaches aiming at making deep models hardware-friendly, Neural Architecture Search is rising as a toolbox to craft more efficient DNNs without sacrificing performance. Among these, the Once-For-All (OFA) approach offers a solution by allowing the sampling of well-performing sub-networks from a single supernet -- this leads to evident advantages in terms of computation. However, OFA does not fully utilize the potential memory capacity of the target device, focusing instead on limiting maximum memory usage per layer. This leaves room for an unexploited potential in terms of model generalizability. In this paper, we introduce a Memory-Optimized OFA (MOOFA) supernet, designed to enhance DNN deployment on resource-limited devices by maximizing memory usage (and, for instance, feature diversity) across different configurations. Tested on ImageNet, our MOOFA supernet demonstrates improvements in memory exploitation and model accuracy compared to the original OFA supernet. Our code is available at https://github.com/MaximeGirard/memory-optimized-once-for-all.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
486,933
2501.17883
Explainable and Robust Millimeter Wave Beam Alignment for AI-Native 6G Networks
Integrated artificial intelligence (AI) and communication has been recognized as a key pillar of 6G and beyond networks. In line with the AI-native 6G vision, explainability and robustness in AI-driven systems are critical for establishing trust and ensuring reliable performance in diverse and evolving environments. This paper addresses these challenges by developing a robust and explainable deep learning (DL)-based beam alignment engine (BAE) for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. The proposed convolutional neural network (CNN)-based BAE utilizes received signal strength indicator (RSSI) measurements over a set of wide beams to accurately predict the best narrow beam for each UE, significantly reducing the overhead associated with exhaustive codebook-based narrow beam sweeping for initial access (IA) and data transmission. To ensure transparency and resilience, the Deep k-Nearest Neighbors (DkNN) algorithm is employed to assess the internal representations of the network via a nearest neighbor approach, providing human-interpretable explanations and confidence metrics for detecting out-of-distribution inputs. Experimental results demonstrate that the proposed DL-based BAE exhibits robustness to measurement noise, reduces beam training overhead by 75% compared to the exhaustive search while maintaining near-optimal performance in terms of spectral efficiency. Moreover, the proposed framework improves outlier detection robustness by up to 5x and offers clearer insights into beam prediction decisions compared to traditional softmax-based classifiers.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
528,495
1805.11793
Infinite Arms Bandit: Optimality via Confidence Bounds
Berry et al. (1997) initiated the development of the infinite arms bandit problem. They derived a regret lower bound of all allocation strategies for Bernoulli rewards with uniform priors, and proposed strategies based on success runs. Bonald and Proutière (2013) proposed a two-target algorithm that achieves the regret lower bound, and extended optimality to Bernoulli rewards with general priors. We present here a confidence bound target (CBT) algorithm that achieves optimality for rewards that are bounded above. For each arm we construct a confidence bound and compare it against each other and a target value to determine if the arm should be sampled further. The target value depends on the assumed priors of the arm means. In the absence of information on the prior, the target value is determined empirically. Numerical studies here show that CBT is versatile and outperforms its competitors.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
99,016
1906.03361
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences
We introduce a temperature into the exponential function and replace the softmax output layer of neural nets by a high temperature generalization. Similarly, the logarithm in the log loss we use for training is replaced by a low temperature logarithm. By tuning the two temperatures we create loss functions that are non-convex already in the single layer case. When replacing the last layer of the neural nets by our bi-temperature generalization of logistic loss, the training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large data sets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method using the Tsallis divergence.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
134,354
1905.01861
The Missing Data Encoder: Cross-Channel Image Completion with Hide-And-Seek Adversarial Network
Image completion is the problem of generating whole images from fragments only. It encompasses inpainting (generating a patch given its surrounding), reverse inpainting/extrapolation (generating the periphery given the central patch) as well as colorization (generating one or several channels given other ones). In this paper, we employ a deep network to perform image completion, with adversarial training as well as perceptual and completion losses, and call it the ``missing data encoder'' (MDE). We consider several configurations based on how the seed fragments are chosen. We show that training MDE for ``random extrapolation and colorization'' (MDE-REC), i.e. using random channel-independent fragments, allows a better capture of the image semantics and geometry. MDE training makes use of a novel ``hide-and-seek'' adversarial loss, where the discriminator seeks the original non-masked regions, while the generator tries to hide them. We validate our models both qualitatively and quantitatively on several datasets, showing their interest for image completion, unsupervised representation learning as well as face occlusion handling.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
129,829
2306.15350
CellViT: Vision Transformers for Precise Cell Segmentation and Classification
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, it is a challenging task due to nuclei variances in staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks in this domain. Therefore, we introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformer called CellViT. CellViT is trained and evaluated on the PanNuke dataset, which is one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes in 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT-encoder pre-trained on 104 million histological image patches - achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1-detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
375,996
2003.00585
Online Hierarchical Forecasting for Power Consumption Data
We study the forecasting of the power consumption of a population of households and of subpopulations thereof. These subpopulations are built according to location, to exogenous information and/or to profiles we determined from historical household consumption time series. Thus, we aim to forecast the electricity consumption time series at several levels of household aggregation. These time series are linked through summation constraints which induce a hierarchy. Our approach consists of three steps: feature generation, aggregation and projection. First (feature generation step), we build, for each considered group of households, benchmark forecasts (called features), using random forests or generalized additive models. Second (aggregation step), aggregation algorithms, run in parallel, combine these forecasts and provide new predictions. Finally (projection step), we use the summation constraints induced by the hierarchy underlying the time series to reconcile the forecasts by projecting them onto a well-chosen linear subspace. We provide theoretical guarantees on the average prediction error of this methodology, through the minimization of a quantity called regret. We also test our approach on household power consumption data collected in Great Britain by multiple energy providers in the context of the Energy Demand Research Project. We build and compare various population segmentations to evaluate the performance of our approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
166,337
2309.14404
pLMFPPred: a novel approach for accurate prediction of functional peptides integrating embedding from pre-trained protein language model and imbalanced learning
Functional peptides have the potential to treat a variety of diseases. Their good therapeutic efficacy and low toxicity make them ideal therapeutic agents. Artificial intelligence-based computational strategies can help quickly identify new functional peptides from collections of protein sequences and discover their different functions. Using protein language model-based embeddings (ESM-2), we developed a tool called pLMFPPred (Protein Language Model-based Functional Peptide Predictor) for predicting functional peptides and identifying toxic peptides. We also introduced SMOTE-TOMEK data synthesis sampling and Shapley value-based feature selection techniques to relieve data imbalance issues and reduce computational costs. On a validated independent test set, pLMFPPred achieved accuracy, area under the receiver operating characteristic curve (AUROC), and F1-score values of 0.974, 0.99, and 0.974, respectively. Comparative experiments show that pLMFPPred outperforms existing methods for predicting functional peptides in terms of accuracy, AUROC, and F1-score, and represents a new computational method for this task.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
394,604
2502.03365
A Match Made in Heaven? Matching Test Cases and Vulnerabilities With the VUTECO Approach
Software vulnerabilities are commonly detected via static analysis, penetration testing, and fuzzing. They can also be found by running unit tests - so-called vulnerability-witnessing tests - that stimulate the security-sensitive behavior with crafted inputs. Developing such tests is difficult and time-consuming; thus, automated data-driven approaches could help developers intercept vulnerabilities earlier. However, training and validating such approaches require a lot of data, which is currently scarce. This paper introduces VUTECO, a deep learning-based approach for collecting instances of vulnerability-witnessing tests from Java repositories. VUTECO carries out two tasks: (1) the "Finding" task to determine whether a test case is security-related, and (2) the "Matching" task to relate a test case to the exact vulnerability it is witnessing. VUTECO successfully addresses the Finding task, achieving perfect precision and 0.83 F0.5 score on validated test cases in VUL4J and returning 102 out of 145 (70%) correct security-related test cases from 244 open-source Java projects. Despite showing sufficiently good performance for the Matching task - i.e., 0.86 precision and 0.68 F0.5 score - VUTECO failed to retrieve any valid match in the wild. Nevertheless, we observed that in almost all of the matches, the test case was still security-related despite being matched to the wrong vulnerability. In the end, VUTECO can help find vulnerability-witnessing tests, though matching them to the right vulnerability is yet to be solved; these findings lay the groundwork for future research on the matter.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
530,689
1510.00477
Learning a Discriminative Model for the Perception of Realism in Composite Images
What makes an image appear realistic? In this work, we answer this question from a data-driven perspective by learning the perception of visual realism directly from large amounts of data. In particular, we train a Convolutional Neural Network (CNN) model that distinguishes natural photographs from automatically generated composite images. The model learns to predict visual realism of a scene in terms of color, lighting and texture compatibility, without any human annotations pertaining to it. Our model outperforms previous works that rely on hand-crafted heuristics for the task of classifying realistic vs. unrealistic photos. Furthermore, we apply our learned model to compute optimal parameters of a compositing method, to maximize the visual realism score predicted by our CNN model. We demonstrate its advantage against existing methods via a human perception study.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
47,518
2306.07921
Continuous Cost Aggregation for Dual-Pixel Disparity Extraction
Recent works have shown that depth information can be obtained from Dual-Pixel (DP) sensors. A DP arrangement provides two views in a single shot, thus resembling a stereo image pair with a tiny baseline. However, the different point spread function (PSF) per view, as well as the small disparity range, makes the use of typical stereo matching algorithms problematic. To address the above shortcomings, we propose a Continuous Cost Aggregation (CCA) scheme within a semi-global matching framework that is able to provide accurate continuous disparities from DP images. The proposed algorithm fits parabolas to matching costs and aggregates parabola coefficients along image paths. The aggregation step is performed subject to a quadratic constraint that not only enforces the disparity smoothness but also maintains the quadratic form of the total costs. This gives rise to an inherently efficient disparity propagation scheme with a pixel-wise minimization in closed-form. Furthermore, the continuous form allows for a robust multi-scale aggregation that better compensates for the varying PSF. Experiments on DP data from both DSLR and phone cameras show that the proposed scheme attains state-of-the-art performance in DP disparity estimation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
373,193
1906.05208
Sorted Top-k in Rounds
We consider the sorted top-$k$ problem whose goal is to recover the top-$k$ items with the correct order out of $n$ items using pairwise comparisons. In many applications, multiple rounds of interaction can be costly. We restrict our attention to algorithms with a constant number of rounds $r$ and try to minimize the sample complexity, i.e. the number of comparisons. When the comparisons are noiseless, we characterize how the optimal sample complexity depends on the number of rounds (up to a polylogarithmic factor for general $r$ and up to a constant factor for $r=1$ or 2). In particular, the sample complexity is $\Theta(n^2)$ for $r=1$, $\Theta(n\sqrt{k} + n^{4/3})$ for $r=2$ and $\tilde{\Theta}\left(n^{2/r} k^{(r-1)/r} + n\right)$ for $r \geq 3$. We extend our results of sorted top-$k$ to the noisy case where each comparison is correct with probability $2/3$. When $r=1$ or 2, we show that the sample complexity gets an extra $\Theta(\log(k))$ factor when we transition from the noiseless case to the noisy case. We also prove new results for top-$k$ and sorting in the noisy case. We believe our techniques can be generally useful for understanding the trade-off between round complexities and sample complexities of rank aggregation problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
134,958
2104.11332
Backup Control Barrier Functions: Formulation and Comparative Study
The backup control barrier function (CBF) was recently proposed as a tractable formulation that guarantees the feasibility of the CBF quadratic programming (QP) via an implicitly defined control invariant set. The control invariant set is based on a fixed backup policy and evaluated online by forward integrating the dynamics under the backup policy. This paper is intended as a tutorial on the backup CBF approach and a comparative study against some benchmarks. First, the backup CBF approach is presented step by step with the underlying math explained in detail. Second, we prove that the backup CBF always has a relative degree 1 under mild assumptions. Third, the backup CBF approach is compared with benchmarks such as Hamilton-Jacobi PDE and Sum-of-Squares methods on the computation of control invariant sets, which shows that one can obtain a control invariant set close to the maximal control invariant set under a good backup policy for many practical problems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
231,879