id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1911.02322 | Melanoma detection with electrical impedance spectroscopy and dermoscopy using joint deep learning models | The initial assessment of skin lesions is typically based on dermoscopic images. As this is a difficult and time-consuming task, machine learning methods using dermoscopic images have been proposed to assist human experts. Other approaches have studied electrical impedance spectroscopy (EIS) as a basis for clinical decision support systems. Both methods represent different ways of measuring skin lesion properties as dermoscopy relies on visible light and EIS uses electric currents. Thus, the two methods might carry complementary features for lesion classification. Therefore, we propose joint deep learning models considering both EIS and dermoscopy for melanoma detection. For this purpose, we first study machine learning methods for EIS that incorporate domain knowledge and previously used heuristics into the design process. As a result, we propose a recurrent model with state-max-pooling which automatically learns the relevance of different EIS measurements. Second, we combine this new model with different convolutional neural networks that process dermoscopic images. We study ensembling approaches and also propose a cross-attention module guiding information exchange between the EIS and dermoscopy model. In general, combinations of EIS and dermoscopy clearly outperform models that only use either EIS or dermoscopy. We show that our attention-based, combined model outperforms other models with specificities of 34.4% (CI 31.3-38.4), 34.7% (CI 31.0-38.8) and 53.7% (CI 50.1-57.6) for dermoscopy, EIS and the combined model, respectively, at a clinically relevant sensitivity of 98%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 152,337 |
2004.01299 | IVFS: Simple and Efficient Feature Selection for High Dimensional Topology Preservation | Feature selection is an important tool to deal with high dimensional data. In the unsupervised case, many popular algorithms aim at maintaining the structure of the original data. In this paper, we propose a simple and effective feature selection algorithm to enhance sample similarity preservation through a new perspective, topology preservation, which is represented by persistence diagrams from the context of computational topology. This method is designed upon a unified feature selection framework called IVFS, which is inspired by the random subset method. The scheme is flexible and can handle cases where the problem is analytically intractable. The proposed algorithm is able to preserve the pairwise distances, as well as the topological patterns, of the full data. We demonstrate that our algorithm can provide satisfactory performance under a sharp sub-sampling rate, which supports efficient implementation of our proposed method on large-scale datasets. Extensive experiments validate the effectiveness of the proposed feature selection scheme. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 170,873 |
2107.06195 | Transfer Learning in Multi-Agent Reinforcement Learning with Double Q-Networks for Distributed Resource Sharing in V2X Communication | This paper addresses the problem of decentralized spectrum sharing in vehicle-to-everything (V2X) communication networks. The aim is to provide resource-efficient coexistence of vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) links. A recent work on the topic proposes a multi-agent reinforcement learning (MARL) approach based on deep Q-learning, which leverages a fingerprint-based deep Q-network (DQN) architecture. This work considers an extension of this framework by combining Double Q-learning (via Double DQN) and transfer learning. The motivation is that Double Q-learning can alleviate the problem of overestimation of the action values present in conventional Q-learning, while transfer learning can leverage knowledge acquired by an expert model to accelerate learning in the MARL setting. The proposed algorithm is evaluated in a realistic V2X setting, with synthetic data generated based on a geometry-based propagation model that incorporates location-specific geographical descriptors of the simulated environment (outlines of buildings, foliage, and vehicles). The advantages of the proposed approach are demonstrated via numerical simulations. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 246,018 |
2004.04467 | Adversarial Latent Autoencoders | Autoencoder networks are unsupervised approaches that aim to combine generative and representational properties by simultaneously learning an encoder-generator map. Although studied extensively, the questions of whether they have the same generative power as GANs, or learn disentangled representations, have not been fully addressed. We introduce an autoencoder that tackles these issues jointly, which we call the Adversarial Latent Autoencoder (ALAE). It is a general architecture that can leverage recent improvements in GAN training procedures. We designed two autoencoders: one based on an MLP encoder, and another based on a StyleGAN generator, which we call StyleALAE. We verify the disentanglement properties of both architectures. We show that StyleALAE can not only generate 1024x1024 face images with quality comparable to StyleGAN, but at the same resolution can also produce face reconstructions and manipulations based on real images. This makes ALAE the first autoencoder able to match, and go beyond the capabilities of, a generator-only type of architecture. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 171,893 |
2501.03279 | Revolutionizing Encrypted Traffic Classification with MH-Net: A Multi-View Heterogeneous Graph Model | With the growing significance of network security, the classification of encrypted traffic has emerged as an urgent challenge. Traditional byte-based traffic analysis methods are constrained by the rigid granularity of information and fail to fully exploit the diverse correlations between bytes. To address these limitations, this paper introduces MH-Net, a novel approach for classifying network traffic that leverages multi-view heterogeneous traffic graphs to model the intricate relationships between traffic bytes. The essence of MH-Net lies in aggregating varying numbers of traffic bits into multiple types of traffic units, thereby constructing multi-view traffic graphs with diverse information granularities. By accounting for different types of byte correlations, such as header-payload relationships, MH-Net further endows the traffic graph with heterogeneity, significantly enhancing model performance. Notably, we employ contrastive learning in a multi-task manner to strengthen the robustness of the learned traffic unit representations. Experiments conducted on the ISCX and CIC-IoT datasets for both the packet-level and flow-level traffic classification tasks demonstrate that MH-Net achieves the best overall performance compared to dozens of SOTA methods. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 522,825 |
2008.13485 | ROS-Neuro Integration of Deep Convolutional Autoencoders for EEG Signal Compression in Real-time BCIs | Typical EEG-based BCI applications require the computation of complex functions over the noisy EEG channels to be carried out in an efficient way. Deep learning algorithms are capable of learning flexible nonlinear functions directly from data, and their constant processing latency is perfect for their deployment into online BCI systems. However, it is crucial for the jitter of the processing system to be as low as possible, in order to avoid unpredictable behaviour that can ruin the system's overall usability. In this paper, we present a novel encoding method, based on deep convolutional autoencoders, that is able to perform efficient compression of the raw EEG inputs. We deploy our model in a ROS-Neuro node, thus making it suitable for integration in ROS-based BCI and robotic systems in real-world scenarios. The experimental results show that our system is capable of generating meaningful compressed encodings that preserve the original information contained in the raw input. They also show that the ROS-Neuro node is able to produce such encodings at a steady rate, with minimal jitter. We believe that our system can represent an important step towards the development of an effective BCI processing pipeline fully standardized in the ROS-Neuro framework. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 193,855 |
2408.00955 | Aggregation Models with Optimal Weights for Distributed Gaussian Processes | Gaussian process (GP) models have received increasing attention in recent years due to their superb prediction accuracy and modeling flexibility. To address the computational burden of GP models for large-scale datasets, distributed learning for GPs is often adopted. Current aggregation models for distributed GPs are not time-efficient when incorporating correlations between GP experts. In this work, we propose a novel approach for aggregated prediction in distributed GPs. The technique is suitable for both exact and sparse variational GPs. The proposed method incorporates correlations among experts, leading to better prediction accuracy with manageable computational requirements. As demonstrated by empirical studies, the proposed approach results in more stable predictions in less time than state-of-the-art consistent aggregation models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 478,044 |
1907.03827 | FairST: Equitable Spatial and Temporal Demand Prediction for New Mobility Systems | Emerging transportation modes, including car-sharing, bike-sharing, and ride-hailing, are transforming urban mobility but have been shown to reinforce socioeconomic inequities. Spatiotemporal demand prediction models for these new mobility regimes must therefore consider fairness as a first-class design requirement. We present FairST, a fairness-aware model for predicting demand for new mobility systems. Our approach utilizes 1D, 2D and 3D convolutions to integrate various urban features and learn the spatial-temporal dynamics of a mobility system, but we include fairness metrics as a form of regularization to make the predictions more equitable across demographic groups. We propose two novel spatiotemporal fairness metrics, a region-based fairness gap (RFG) and an individual-based fairness gap (IFG). Both quantify equity in a spatiotemporal context, but vary by whether demographics are labeled at the region level (RFG) or whether population distribution information is available (IFG). Experimental results on real bike share and ride share datasets demonstrate the effectiveness of the proposed model: FairST not only reduces the fairness gap by more than 80%, but can surprisingly achieve better accuracy than state-of-the-art yet fairness-oblivious methods including LSTMs, ConvLSTMs, and 3D CNN. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 137,945 |
2403.00289 | Optimization of array encoding for ultrasound imaging | Objective: The transmit encoding model for synthetic aperture imaging is a robust and flexible framework for understanding the effects of acoustic transmission on ultrasound image reconstruction. Our objective is to use machine learning (ML) to construct scanning sequences, parameterized by time delays and apodization weights, that produce high-quality B-mode images. Approach: We use a custom ML model in PyTorch with simulated RF data from Field II to probe the space of possible encoding sequences for those that minimize a loss function that describes image quality. This approach is made computationally feasible by a novel formulation of the derivative for delay-and-sum beamforming. Main Results: When trained for a specified experimental setting (imaging domain, hardware restrictions, etc.), our ML model produces optimized encoding sequences that, when deployed in the REFoCUS imaging framework, improve a number of standard quality metrics over conventional sequences including resolution, field of view, and contrast. We demonstrate these results experimentally on both wire targets and a tissue-mimicking phantom. Significance: This work demonstrates that the set of commonly used encoding schemes represent only a narrow subset of those available. Additionally, it demonstrates the value for ML tasks in synthetic transmit aperture imaging to consider the beamformer within the model, instead of purely as a post-processing step. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 433,931 |
2407.04127 | Biometric Authentication Based on Enhanced Remote Photoplethysmography Signal Morphology | Remote photoplethysmography (rPPG) is a non-contact method for measuring cardiac signals from facial videos, offering a convenient alternative to contact photoplethysmography (cPPG) obtained from contact sensors. Recent studies have shown that each individual possesses a unique cPPG signal morphology that can be utilized as a biometric identifier, which has inspired us to utilize the morphology of rPPG signals extracted from facial videos for person authentication. Since the facial appearance and rPPG are mixed in the facial videos, we first de-identify facial videos to remove facial appearance while preserving the rPPG information, which protects facial privacy and guarantees that only rPPG is used for authentication. The de-identified videos are fed into an rPPG model to get the rPPG signal morphology for authentication. In the first training stage, unsupervised rPPG training is performed to get coarse rPPG signals. In the second training stage, an rPPG-cPPG hybrid training is performed by incorporating external cPPG datasets to achieve rPPG biometric authentication and enhance rPPG signal morphology. Our approach needs only de-identified facial videos with subject IDs to train rPPG authentication models. The experimental results demonstrate that rPPG signal morphology hidden in facial videos can be used for biometric authentication. The code is available at https://github.com/zhaodongsun/rppg_biometrics. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,439 |
2410.05050 | FreSh: Frequency Shifting for Accelerated Neural Representation Learning | Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs). However, MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately. This limitation is typically addressed by incorporating high-frequency input embeddings or specialized activation layers. In this work, we demonstrate that these embeddings and activations are often configured with hyperparameters that perform well on average but are suboptimal for specific input signals under consideration, necessitating a costly grid search to identify optimal settings. Our key observation is that the initial frequency spectrum of an untrained model's output correlates strongly with the model's eventual performance on a given target signal. Leveraging this insight, we propose frequency shifting (or FreSh), a method that selects embedding hyperparameters to align the frequency spectrum of the model's initial output with that of the target signal. We show that this simple initialization technique improves performance across various neural representation methods and tasks, achieving results comparable to extensive hyperparameter sweeps but with only marginal computational overhead compared to training a single model with default hyperparameters. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 495,544 |
2002.11247 | Collision Avoidance Based on Robust Lexicographic Task Assignment | Traditional task assignment approaches for multi-agent motion control do not take the possibility of collisions into account. This can lead to challenging requirements for path planning. We derive an assignment method that not only minimises the largest distance between an agent and its assigned destination but also provides local constraints for guaranteed collision avoidance. To this end, we introduce a sequential bottleneck optimisation problem and define a notion of robustness of an optimising assignment to changes of individual assignment costs. Conditioned on a sufficient level of robustness in relation to the size of the agents, we construct time-varying position bounds for every individual agent. These local constraints are a direct byproduct of the assignment procedure and only depend on the initial agent positions, the destinations that are to be visited, and a timing parameter. We prove that no agent that is assigned to move to one of the target locations collides with any other agent if all agents satisfy their local position constraints. We demonstrate the method in an illustrative case study. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 165,647 |
1912.00238 | Visualizing structural balance in signed networks | Network visualization has become established as a key complement to network analysis, since the large variety of existing network layouts are able to graphically highlight different properties of networks. However, signed networks, i.e., networks whose edges are labeled as friendly (positive) or antagonistic (negative), are the target of few such layouts and none, to our knowledge, is able to show structural balance, i.e., the tendency of cycles towards including an even number of negative edges, which is a well-known theory for studying friction and polarization. In this work we present Structural-balance-viz: a novel visualization method showing whether a connected signed network is balanced or not and, in the latter case, how close the network is to being balanced. Structural-balance-viz exploits spectral computations of the signed Laplacian matrix to place a network's nodes in a Cartesian coordinate system resembling a balance (a scale). Moreover, it uses edge coloring and bundling to distinguish positive and negative interactions. The proposed visualization method has characteristics desirable in a variety of network analysis tasks: Structural-balance-viz is able to provide indications of balance/polarization of the whole network and of each node, to identify two factions of nodes on the basis of their polarization, and to show their cumulative characteristics. Moreover, the layout is reproducible and easy to compare. Structural-balance-viz is validated on synthetically generated networks and applied to a real-world dataset about political debates, confirming that it is able to provide meaningful interpretations. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 155,710 |
2104.10856 | Frequency Domain Loss Function for Deep Exposure Correction of Dark Images | We address the problem of exposure correction of dark, blurry and noisy images captured in low-light conditions in the wild. Classical image-denoising filters work well in the frequency space but are constrained by several factors such as the correct choice of thresholds, frequency estimates, etc. On the other hand, traditional deep networks are trained end-to-end in the RGB space by formulating this task as an image-translation problem. However, that is done without any explicit constraints on the inherent noise of the dark images and thus produces noisy and blurry outputs. To this end, we propose a DCT/FFT based multi-scale loss function, which, when combined with traditional losses, trains a network to translate the important features for visually pleasing output. Our loss function is end-to-end differentiable, scale-agnostic, and generic; i.e., it can be applied to both RAW and JPEG images in most existing frameworks without additional overhead. Using this loss function, we report significant improvements over the state-of-the-art using quantitative metrics and subjective tests. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 231,740 |
2311.02903 | HDGL: A hierarchical dynamic graph representation learning model for brain disorder classification | The human brain can be considered a complex network, composed of various regions that continuously exchange information with each other, forming the brain network graph, from which nodes and edges are extracted using resting-state functional magnetic resonance imaging (rs-fMRI). This graph can therefore potentially depict abnormal patterns that have emerged under the influence of brain disorders. So far, numerous studies have attempted to find embeddings for brain network graphs and subsequently classify samples with brain disorders from healthy ones, but they suffer from limitations such as: not considering the relationship between samples, not utilizing phenotype information, lacking temporal analysis, using static functional connectivity (FC) instead of dynamic FC, and using a fixed graph structure. We propose a hierarchical dynamic graph representation learning (HDGL) model, the first model designed to address all the aforementioned challenges. HDGL consists of two levels: at the first level, it constructs brain network graphs and learns their spatial and temporal embeddings; at the second level, it forms population graphs and performs classification after embedding learning. Furthermore, based on how these two levels are trained, four methods are introduced, some of which are suggested for reducing memory complexity. We evaluated the performance of the proposed model on the ABIDE and ADHD-200 datasets, and the results indicate the improvement of this model over several state-of-the-art models in terms of various evaluation metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 405,636 |
2308.04265 | FLIRT: Feedback Loop In-context Red Teaming | Warning: this paper contains content that may be inappropriate or offensive. As generative models become available for public use in various applications, testing and analyzing vulnerabilities of these models has become a priority. In this work, we propose an automatic red teaming framework that evaluates a given black-box model and exposes its vulnerabilities against unsafe and inappropriate content generation. Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation. In particular, taking text-to-image models as target models, we explore different feedback mechanisms to automatically learn effective and diverse adversarial prompts. Our experiments demonstrate that even with enhanced safety features, Stable Diffusion (SD) models are vulnerable to our adversarial prompts, raising concerns on their robustness in practical uses. Furthermore, we demonstrate that the proposed framework is effective for red teaming text-to-text models. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 384,349 |
2407.14892 | Latent Pollution Model: The Hidden Carbon Footprint in 3D Image Synthesis | Contemporary developments in generative AI are rapidly transforming the field of medical AI. These developments have been predominantly driven by the availability of large datasets and high computing power, which have facilitated a significant increase in model capacity. Despite their considerable potential, these models demand substantial power, leading to high carbon dioxide (CO2) emissions. Yet despite the harm such models cause to the environment, their carbon footprints have received little attention. This study analyzes carbon emissions from 2D and 3D latent diffusion models (LDMs) during training and data generation phases, revealing a surprising finding: the synthesis of large images contributes most significantly to these emissions. We assess different scenarios including model sizes, image dimensions, distributed training, and data generation steps. Our findings reveal substantial carbon emissions from these models, with training 2D and 3D models comparable to driving a car for 10 km and 90 km, respectively. The process of data generation is even more significant, with CO2 emissions equivalent to driving 160 km for 2D models and up to 3345 km for 3D synthesis. Additionally, we found that the location of the experiment can increase carbon emissions by up to 94 times, and even the time of year can influence emissions by up to 50%. These figures are alarming, considering they represent only a single training and data generation phase for each model. Our results emphasize the urgent need for developing environmentally sustainable strategies in generative AI. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 474,948 |
2210.07410 | Identification of quantum entanglement with Siamese convolutional neural networks and semi-supervised learning | Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of identifying entanglement has still not reached a general solution for systems larger than $2\times3$. In this study, we use deep convolutional NNs, a type of supervised machine learning, to identify quantum entanglement for any bipartition in a 3-qubit system. We demonstrate that training the model on synthetically generated datasets of random density matrices excluding challenging positive-under-partial-transposition entangled states (PPTES), which cannot be identified (and correctly labeled) in general, leads to good model accuracy even for PPTES states, which were outside the training data. Our aim is to enhance the model's generalization on PPTES. By applying entanglement-preserving symmetry operations through a triple Siamese network trained in a semi-supervised manner, we improve the model's accuracy and ability to recognize PPTES. Moreover, by constructing an ensemble of Siamese models, even better generalization is observed, in analogy with the idea of finding separate types of entanglement witnesses for different classes of states. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 323,689 |
1906.05226 | Continual and Multi-Task Architecture Search | Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task. Recently, this approach has shown promising performance improvements (on language modeling and image classification) with reasonable training speed, using a weight sharing strategy called Efficient Neural Architecture Search (ENAS). In our work, we first introduce a novel continual architecture search (CAS) approach, so as to continually evolve the model parameters during the sequential training of several tasks, without losing performance on previously learned tasks (via block-sparsity and orthogonality constraints), thus enabling life-long learning. Next, we explore a multi-task architecture search (MAS) approach over ENAS for finding a unified, single cell structure that performs well across multiple tasks (via joint controller rewards), and hence allows more generalizable transfer of the cell structure knowledge to an unseen new task. We empirically show the effectiveness of our sequential continual learning and parallel multi-task learning based architecture search approaches on diverse sentence-pair classification tasks (GLUE) and multimodal-generation based video captioning tasks. Further, we present several ablations and analyses on the learned cell structures. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 134,963 |
2009.07351 | Federated Dynamic GNN with Secure Aggregation | Given video data from multiple personal devices or street cameras, can we exploit the structural and dynamic information to learn dynamic representations of objects for applications such as distributed surveillance, without storing data at a central server, which leads to a violation of user privacy? In this work, we introduce Federated Dynamic Graph Neural Network (Feddy), a distributed and secured framework to learn the object representations from multi-user graph sequences: i) It aggregates structural information from nearby objects in the current graph as well as dynamic information from those in the previous graph. It uses a self-supervised loss of predicting the trajectories of objects. ii) It is trained in a federated learning manner. The centrally located server sends the model to user devices. Local models on the respective user devices learn and periodically send their learning to the central server without ever exposing the user's data to the server. iii) Studies showed that the aggregated parameters could be inspected once decrypted when broadcast to clients for model synchronization, after the server performed a weighted average. We design an appropriate aggregation mechanism of secure aggregation primitives that can protect security and privacy in federated learning with scalability. Experiments on four video camera datasets (in four different scenes) as well as simulation demonstrate that Feddy achieves great effectiveness and security. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 195,886 |
2210.04637 | Association Graph Learning for Multi-Task Classification with Category Shifts | In this paper, we focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously. In particular, we tackle a new setting, which is more realistic than currently addressed in the literature, where categories shift from training to test data. Hence, individual tasks do not contain complete training data for the categories in the test set. To generalize to such test data, it is crucial for individual tasks to leverage knowledge from related tasks. To this end, we propose learning an association graph to transfer knowledge among tasks for missing classes. We construct the association graph with nodes representing tasks, classes and instances, and encode the relationships among the nodes in the edges to guide their mutual knowledge transfer. By message passing on the association graph, our model enhances the categorical information of each instance, making it more discriminative. To avoid spurious correlations between task and class nodes in the graph, we introduce an assignment entropy maximization that encourages each class node to balance its edge weights. This enables all tasks to fully utilize the categorical information from related tasks. An extensive evaluation on three general benchmarks and a medical dataset for skin lesion classification reveals that our method consistently performs better than representative baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 322,538 |
2205.02743 | Rethinking Classifier and Adversarial Attack | Various defense models have been proposed to resist adversarial attack algorithms, but existing adversarial robustness evaluation methods always overestimate the adversarial robustness of these models (i.e., not approaching the lower bound of robustness). To solve this problem, this paper uses the proposed decouple space method to divide the classifier into two parts: non-linear and linear. Then, this paper defines the representation vector of the original example (and its space, i.e., the representation space) and uses the iterative optimization of Absolute Classification Boundaries Initialization (ACBI) to obtain a better attack starting point. Particularly, this paper applies ACBI to nearly 50 widely-used defense models (including 8 architectures). Experimental results show that ACBI achieves lower robust accuracy in all cases. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 295,045 |
2105.05212 | Two novel feature selection algorithms based on crowding distance | In this paper, two novel algorithms for feature selection are proposed. The first is a filter method, while the second is a wrapper method. Both proposed algorithms use the crowding distance from multiobjective optimization as a metric to sort the features. The less crowded features have a greater effect on the target attribute (class). The experimental results show the effectiveness and robustness of the proposed algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 234,748
2207.07612 | Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution | This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape. We consider a robust and over-parameterized setting, where a subset of measurements are grossly corrupted with noise and the true linear model is captured via an $N$-layer linear neural network. On the negative side, we show that this problem \textit{does not} have a benign landscape: given any $N\geq 1$, with constant probability, there exists a solution corresponding to the ground truth that is neither a local nor a global minimum. However, on the positive side, we prove that, for any $N$-layer model with $N\geq 2$, a simple sub-gradient method becomes oblivious to such ``problematic'' solutions; instead, it converges to a balanced solution that is not only close to the ground truth but also enjoys a flat local landscape, thereby eschewing the need for "early stopping". Lastly, we empirically verify that the desirable optimization landscape of deeper models extends to other robust learning tasks, including deep matrix recovery and deep ReLU networks with $\ell_1$-loss. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 308,254
1901.00844 | Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air | We study federated machine learning (ML) at the wireless edge, where power- and bandwidth-limited wireless devices with local datasets carry out distributed stochastic gradient descent (DSGD) with the help of a remote parameter server (PS). Standard approaches assume separate computation and communication, where local gradient estimates are compressed and transmitted to the PS over orthogonal links. Following this digital approach, we introduce D-DSGD, in which the wireless devices employ gradient quantization and error accumulation, and transmit their gradient estimates to the PS over a multiple access channel (MAC). We then introduce a novel analog scheme, called A-DSGD, which exploits the additive nature of the wireless MAC for over-the-air gradient computation, and provide convergence analysis for this approach. In A-DSGD, the devices first sparsify their gradient estimates, and then project them to a lower dimensional space imposed by the available channel bandwidth. These projections are sent directly over the MAC without employing any digital code. Numerical results show that A-DSGD converges faster than D-DSGD thanks to its more efficient use of the limited bandwidth and the natural alignment of the gradient estimates over the channel. The improvement is particularly compelling at low power and low bandwidth regimes. We also illustrate for a classification problem that A-DSGD is more robust to bias in data distribution across devices, while D-DSGD significantly outperforms other digital schemes in the literature. We also observe that both D-DSGD and A-DSGD perform better by increasing the number of devices (while keeping the total dataset size constant), showing their ability in harnessing the computation power of edge devices. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 117,864
2209.07863 | Expanding continual few-shot learning benchmarks to include recognition of specific instances | Continual learning and few-shot learning are important frontiers in progress toward broader Machine Learning (ML) capabilities. Recently, there has been intense interest in combining both. One of the first examples to do so was the Continual few-shot Learning (CFSL) framework of Antoniou et al. arXiv:2004.11967. In this study, we extend CFSL in two ways that capture a broader range of challenges, important for intelligent agent behaviour in real-world conditions. First, we increased the number of classes by an order of magnitude, making the results more comparable to standard continual learning experiments. Second, we introduced an 'instance test' which requires recognition of specific instances of classes -- a capability of animal cognition that is usually neglected in ML. For an initial exploration of ML model performance under these conditions, we selected representative baseline models from the original CFSL work and added a model variant with replay. As expected, learning more classes is more difficult than the original CFSL experiments, and interestingly, the way in which image instances and classes are presented affects classification performance. Surprisingly, accuracy in the baseline instance test is comparable to other classification tasks, but poor given significant occlusion and noise. The use of replay for consolidation substantially improves performance for both types of tasks, but particularly for the instance test. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 317,922
2306.08041 | Data Poisoning to Fake a Nash Equilibrium in Markov Games | We characterize offline data poisoning attacks on Multi-Agent Reinforcement Learning (MARL), where an attacker may change a data set in an attempt to install a (potentially fictitious) unique Markov-perfect Nash equilibrium for a two-player zero-sum Markov game. We propose the unique Nash set, namely the set of games, specified by their Q functions, with a specific joint policy being the unique Nash equilibrium. The unique Nash set is central to poisoning attacks because the attack is successful if and only if data poisoning pushes all plausible games inside the set. The unique Nash set generalizes the reward polytope commonly used in inverse reinforcement learning to MARL. For zero-sum Markov games, both the unique Nash set and the set of plausible games induced by the data are polytopes in the Q function space. We exhibit a linear program to efficiently compute the optimal poisoning attack. Our work sheds light on the structure of data poisoning attacks on offline MARL, a necessary step before one can design more robust MARL algorithms. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | true | false | false | true | 373,262
1904.12201 | Human-Centered Emotion Recognition in Animated GIFs | As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIFs' unique properties, and this potentially limits the recognition performance. In this study, we demonstrate the importance of human-related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual features with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 129,041
2108.04698 | Active Learning for Saddle Point Calculation | The saddle point (SP) calculation is a grand challenge for computationally intensive energy functions in computational chemistry, where the saddle point may represent the transition state (TS). Traditional methods need to evaluate the gradients of the energy function at a very large number of locations. To reduce the number of expensive computations of the true gradients, we propose an active learning framework consisting of a statistical surrogate model, Gaussian process regression (GPR) for the energy function, and a single-walker dynamics method, gentlest ascent dynamics (GAD), for the saddle-type transition states. The SP is detected by the GAD applied to the GPR surrogate for the gradient vector and the Hessian matrix. Our key ingredient for efficiency improvements is an active learning method which sequentially designs the most informative locations and takes evaluations of the original model at these locations to train the GPR. We formulate this active learning task as an optimal experimental design problem and propose a very efficient sample-based sub-optimal criterion to construct the optimal locations. We show that the new method significantly decreases the required number of energy or force evaluations of the original model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 250,089
2409.06560 | A Primer on Variational Inference for Physics-Informed Deep Generative Modelling | Variational inference (VI) is a computationally efficient and scalable methodology for approximate Bayesian inference. It strikes a balance between accuracy of uncertainty quantification and practical tractability. It excels at generative modelling and inversion tasks due to its built-in Bayesian regularisation and flexibility, essential qualities for physics related problems. Deriving the central learning objective for VI must often be tailored to new learning tasks where the nature of the problems dictates the conditional dependence between variables of interest, such as arising in physics problems. In this paper, we provide an accessible and thorough technical introduction to VI for forward and inverse problems, guiding the reader through standard derivations of the VI framework and how it can best be realized through deep learning. We then review and unify recent literature exemplifying the creative flexibility allowed by VI. This paper is designed for a general scientific audience looking to solve physics-based problems with an emphasis on uncertainty quantification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 487,172
2502.08059 | On Mechanistic Circuits for Extractive Question-Answering | Large language models are increasingly used to process documents and facilitate question-answering on them. In our paper, we extract mechanistic circuits for this real-world language modeling task: context-augmented language modeling for extractive question-answering (QA) tasks and understand the potential benefits of circuits towards downstream applications such as data attribution to context information. We extract circuits as a function of internal model components (e.g., attention heads, MLPs) using causal mediation analysis techniques. Leveraging the extracted circuits, we first understand the interplay between the model's usage of parametric memory and retrieved context towards a better mechanistic understanding of context-augmented language models. We then identify a small set of attention heads in our circuit which performs reliable data attribution by default, thereby obtaining attribution for free in just the model's forward pass. Using this insight, we then introduce ATTNATTRIB, a fast data attribution algorithm which obtains state-of-the-art attribution results across various extractive QA benchmarks. Finally, we show the possibility to steer the language model towards answering from the context, instead of the parametric memory by using the attribution from ATTNATTRIB as an additional signal during the forward pass. Beyond mechanistic understanding, our paper provides tangible applications of circuits in the form of reliable data attribution and model steering. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 532,877 |
2405.08704 | Full Line Code Completion: Bringing AI to Desktop | In recent years, several industrial solutions for the problem of multi-token code completion have appeared, each making a great advance in the area but mostly focusing on cloud-based runtime and avoiding working on the end user's device. In this work, we describe our approach for building a multi-token code completion feature for the JetBrains' IntelliJ Platform, which we call Full Line Code Completion. The feature suggests only syntactically correct code and works fully locally, i.e., data querying and the generation of suggestions happen on the end user's machine. We share important time and memory-consumption restrictions, as well as design principles that a code completion engine should satisfy. Working entirely on the end user's device, our code completion engine enriches user experience while being not only fast and compact but also secure. We share a number of useful techniques to meet the stated development constraints and also describe offline and online evaluation pipelines that allowed us to make better decisions. Our online evaluation shows that the usage of the tool leads to 1.3 times more Python code in the IDE being produced by code completion. The described solution was initially started with the help of researchers and was then bundled into all JetBrains IDEs, where it is now used by millions of users. Thus, we believe that this work is useful for bridging academia and industry, providing researchers with the knowledge of what happens when complex research-based solutions are integrated into real products. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 454,185
1808.03782 | Epidemic spreading on time-varying multiplex networks | Social interactions are stratified in multiple contexts and are subject to complex temporal dynamics. The systematic study of these two features of social systems has started only very recently, mainly thanks to the development of multiplex and time-varying networks. However, these two advancements have progressed almost in parallel with very little overlap. Thus, the interplay between multiplexity and the temporal nature of connectivity patterns is poorly understood. Here, we aim to tackle this limitation by introducing a time-varying model of multiplex networks. We are interested in characterizing how these two properties affect contagion processes. To this end, we study SIS epidemic models unfolding at a time-scale comparable to the evolution of the multiplex network. We study both analytically and numerically the epidemic threshold as a function of the overlap between, and the features of, each layer. We found that the overlap between layers significantly reduces the epidemic threshold, especially when the temporal activation patterns of overlapping nodes are positively correlated. Furthermore, when the average connectivity across layers is very different, the contagion dynamics are driven by the features of the more densely connected layer. Here, the epidemic threshold is equivalent to that of a single-layered graph and the impact of the disease, in the layer driving the contagion, is independent of the overlap. However, this is not the case in the other layers, where the spreading dynamics are sharply influenced by it. The results presented provide another step towards the characterization of the properties of real networks and their effects on contagion phenomena. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 104,996
2403.13039 | Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition | Facial Expression Recognition (FER) is a critical task within computer vision with diverse applications across various domains. Addressing the challenge of limited FER datasets, which hampers the generalization capability of expression recognition models, is imperative for enhancing performance. Our paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and multi-view Fusion Attention mechanism for expression classification, particularly showcased in the 6th Affective Behavior Analysis in-the-wild (ABAW) competition. By utilizing low-level feature information from the ipsilateral view (auxiliary view) before learning the high-level feature that emphasizes the shift in the human facial expression, our work seeks to provide a straightforward yet innovative way to improve the examined view (main view). We also suggest easy-to-implement and no-training frameworks aimed at highlighting key facial features to determine if such features can serve as guides for the model, focusing on pivotal local elements. The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset, as observed in both training and validation contexts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 439,452
2311.09706 | Towards Autonomous Hypothesis Verification via Language Models with Minimal Guidance | Research automation efforts usually employ AI as a tool to automate specific tasks within the research process. To create an AI that truly conducts research itself, it must independently generate hypotheses, design verification plans, and execute verification. Therefore, we investigated whether an AI could autonomously generate and verify hypotheses for a toy machine learning research problem. We prompted GPT-4 to generate hypotheses and Python code for hypothesis verification with limited methodological guidance. Our findings suggest that, in some instances, GPT-4 can autonomously generate and validate hypotheses without detailed guidance. While this is a promising result, we also found that none of the verifications were flawless, and there remain significant challenges in achieving autonomous, human-level research using only generic instructions. These findings underscore the need for continued exploration to develop a general and autonomous AI researcher. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 408,252
2302.14021 | Quantifying Valence and Arousal in Text with Multilingual Pre-trained Transformers | The analysis of emotions expressed in text has numerous applications. In contrast to categorical analysis, focused on classifying emotions according to a pre-defined set of common classes, dimensional approaches can offer a more nuanced way to distinguish between different emotions. Still, dimensional methods have been less studied in the literature. Considering a valence-arousal dimensional space, this work assesses the use of pre-trained Transformers to predict these two dimensions on a continuous scale, with input texts from multiple languages and domains. We specifically combined multiple annotated datasets from previous studies, corresponding to either emotional lexica or short text documents, and evaluated models of multiple sizes and trained under different settings. Our results show that model size can have a significant impact on the quality of predictions, and that by fine-tuning a large model we can confidently predict valence and arousal in multiple languages. We make available the code, models, and supporting data. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 348,127
1803.03145 | Physical Layer Communications System Design Over-the-Air Using Adversarial Networks | This paper presents a novel method for synthesizing new physical layer modulation and coding schemes for communications systems using a learning-based approach which does not require an analytic model of the impairments in the channel. It extends prior work published on the channel autoencoder to consider the case where the channel response is not known or can not be easily modeled in a closed form analytic expression. By adopting an adversarial approach for channel response approximation and information encoding, we can jointly learn a good solution to both tasks over a wide range of channel environments. We describe the operation of the proposed adversarial system, share results for its training and validation over-the-air, and discuss implications and future work in the area. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 92,195
0805.0034 | Diversity Order Gain with Noisy Feedback in Multiple Access Channels | In this paper, we study the effect of feedback channel noise on the diversity-multiplexing tradeoff in multiuser MIMO systems using quantized feedback, where each user has m transmit antennas and the base-station receiver has n antennas. We derive an achievable tradeoff and use it to show that in SNR-symmetric channels, a single bit of imperfect feedback is sufficient to double the maximum diversity order to 2mn compared to when there is no feedback (maximum is mn at multiplexing gain of zero). Further, additional feedback bits do not increase this maximum diversity order beyond 2mn. Finally, the above diversity order gain of mn over non-feedback systems can also be achieved for higher multiplexing gains, albeit requiring more than one bit of feedback. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,685 |
2211.04264 | Note on generalized group testing | In this note, we present a new adaptive algorithm for generalized group testing, which is asymptotically optimal if $d=o(\log_2|E|)$, where $E$ is the set of potentially contaminated sets and $d$ is the maximal size of the elements of $E$. We also design a 3-stage algorithm that is asymptotically optimal for $d=2$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 329,193
2102.02502 | 3D Surface Reconstruction From Multi-Date Satellite Images | The reconstruction of accurate three-dimensional environment models is one of the most fundamental goals in the field of photogrammetry. Since satellite images provide suitable properties for obtaining large-scale environment reconstructions, there exist a variety of Stereo Matching based methods to reconstruct point clouds for satellite image pairs. Recently, the first Structure from Motion (SfM) based approach has been proposed, which allows to reconstruct point clouds from multiple satellite images. In this work, we propose an extension of this SfM based pipeline that allows us to reconstruct not only point clouds but watertight meshes including texture information. We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery. This includes a decomposition of finite projective camera calibration matrices, a skew correction of corresponding depth maps and input images as well as the recovery of real-world depth maps from reparameterized depth values. The paper presents an extensive quantitative evaluation on multi-date satellite images demonstrating that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error. We make the source code of our pipeline publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 218,427 |
cs/0612084 | Achievable Rates for the General Gaussian Multiple Access Wire-Tap Channel with Collective Secrecy | We consider the General Gaussian Multiple Access Wire-Tap Channel (GGMAC-WT). In this scenario, multiple users communicate with an intended receiver in the presence of an intelligent and informed eavesdropper who is as capable as the intended receiver, but has different channel parameters. We aim to provide perfect secrecy for the transmitters in this multi-access environment. Using Gaussian codebooks, an achievable secrecy region is determined and the power allocation that maximizes the achievable sum-rate is found. Numerical results showing the new rate region are presented. It is shown that the multiple-access nature of the channel may be utilized to allow users with zero single-user secrecy capacity to be able to transmit in perfect secrecy. In addition, a new collaborative scheme is shown that may increase the achievable sum-rate. In this scheme, a user who would not transmit to maximize the sum rate can help another user who (i) has positive secrecy capacity to increase its rate, or (ii) has zero secrecy capacity to achieve a positive secrecy capacity. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 539,972
2307.11364 | Photo2Relief: Let Human in the Photograph Stand Out | In this paper, we propose a technique for making humans in photographs protrude like reliefs. Unlike previous methods, which mostly focus on the face and head, our method aims to generate artworks that describe the whole-body activity of the character. One challenge is that there is no ground truth for supervised deep learning. We introduce a sigmoid variant function to manipulate gradients tactfully and train our neural networks with a loss function defined in the gradient domain. The second challenge is that actual photographs are often taken under different lighting conditions. We use an image-based rendering technique to address this challenge and acquire rendered images and depth data under different lighting conditions. To achieve a clear division of labor among network modules, a two-scale architecture is proposed to create high-quality reliefs from a single photograph. Extensive experimental results on a variety of scenes show that our method is a highly effective solution for generating digital 2.5D artwork from photographs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 380,885
2311.08615 | Non-Uniform Smoothness for Gradient Descent | The analysis of gradient descent-type methods typically relies on the Lipschitz continuity of the objective gradient. This generally requires an expensive hyperparameter tuning process to appropriately calibrate a stepsize for a given problem. In this work we introduce a local first-order smoothness oracle (LFSO) which generalizes the Lipschitz continuous gradients smoothness condition and is applicable to any twice-differentiable function. We show that this oracle can encode all relevant problem information for tuning stepsizes for a suitably modified gradient descent method and give global and local convergence results. We also show that LFSOs in this modified first-order method can yield global linear convergence rates for non-strongly convex problems with extremely flat minima, and thus improve over the lower bound on rates achievable by general (accelerated) first-order methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 407,800 |
1610.03554 | Review of Inferring Latent Attributes from Twitter | This paper reviews literature from 2011 to 2013 on how latent attributes like gender, political leaning, etc. can be inferred from a person's Twitter and neighborhood data. Prediction of demographic data can bring value to businesses and can prove instrumental in legal investigations. Moreover, political leanings can be inferred from the wide variety of user data available online. The motive of this review is to understand how large data sets can be made from available Twitter data. The tweeting and retweeting behavior of a user can be used to infer attributes like gender, age, etc. We explore in this text how this field can be expanded in the future and possible avenues for future research. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 62,260
1610.05465 | Real-time analysis of cataract surgery videos using statistical models | The automatic analysis of the surgical process, from videos recorded during surgeries, could be very useful to surgeons, both for training and for acquiring new techniques. The training process could be optimized by automatically providing targeted recommendations or warnings, similar to an expert surgeon's guidance. In this paper, we propose to reuse videos recorded and stored during cataract surgeries to perform this analysis. The proposed system automatically recognizes, in real time, what the surgeon is doing: what surgical phase or, more precisely, what surgical step he or she is performing. This recognition relies on the inference of a multilevel statistical model which uses 1) the conditional relations between levels of description (steps and phases) and 2) the temporal relations among steps and among phases. The model accepts two types of inputs: 1) the presence of surgical tools, manually provided by the surgeons, or 2) motion in videos, automatically analyzed through the Content-Based Video Retrieval (CBVR) paradigm. Different data-driven statistical models are evaluated in this paper. For this project, a dataset of 30 cataract surgery videos was collected at Brest University Hospital. The system was evaluated in terms of area under the ROC curve. Promising results were obtained using either the presence of surgical tools ($A_z$ = 0.983) or motion analysis ($A_z$ = 0.759). The generality of the method allows it to be adapted to any kind of surgery. The proposed solution could be used in a computer-assisted surgery tool to support surgeons during surgery. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 62,517
2407.13779 | Leveraging Latent Evolutionary Optimization for Targeted Molecule Generation | Lead optimization is a pivotal task in the drug design phase within the drug discovery lifecycle. The primary objective is to refine the lead compound to meet specific molecular properties for progression to the subsequent phase of development. In this work, we present an innovative approach, Latent Evolutionary Optimization for Molecule Generation (LEOMol), a generative modeling framework for the efficient generation of optimized molecules. LEOMol leverages Evolutionary Algorithms, such as Genetic Algorithm and Differential Evolution, to search the latent space of a Variational AutoEncoder (VAE). This search facilitates the identification of the target molecule distribution within the latent space. Our approach consistently demonstrates superior performance compared to previous state-of-the-art models across a range of constrained molecule generation tasks, outperforming existing models in all four sub-tasks related to property targeting. Additionally, we suggest the importance of including toxicity in the evaluation of generative models. Furthermore, an ablation study underscores the improvements that our approach provides over gradient-based latent space optimization methods. This underscores the effectiveness and superiority of LEOMol in addressing the inherent challenges in constrained molecule generation while emphasizing its potential to propel advancements in drug discovery. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 474,522
2107.12958 | Adaptive Verifiable Coded Computing: Towards Fast, Secure and Private Distributed Machine Learning | Stragglers, Byzantine workers, and data privacy are the main bottlenecks in distributed cloud computing. Some prior works proposed coded computing strategies to jointly address all three challenges. They require either a large number of workers, a significant communication cost or a significant computational complexity to tolerate Byzantine workers. Much of the overhead in prior schemes comes from the fact that they tightly couple coding for all three problems into a single framework. In this paper, we propose Adaptive Verifiable Coded Computing (AVCC) framework that decouples the Byzantine node detection challenge from the straggler tolerance. AVCC leverages coded computing just for handling stragglers and privacy, and then uses an orthogonal approach that leverages verifiable computing to mitigate Byzantine workers. Furthermore, AVCC dynamically adapts its coding scheme to trade-off straggler tolerance with Byzantine protection. We evaluate AVCC on a compute-intensive distributed logistic regression application. Our experiments show that AVCC achieves up to $4.2\times$ speedup and up to $5.1\%$ accuracy improvement over the state-of-the-art Lagrange coded computing approach (LCC). AVCC also speeds up the conventional uncoded implementation of distributed logistic regression by up to $7.6\times$, and improves the test accuracy by up to $12.1\%$. | false | false | false | false | false | false | true | false | false | true | false | false | true | false | false | false | false | true | 248,059
2405.11422 | Large Language Models are Biased Reinforcement Learners | In-context learning enables large language models (LLMs) to perform a variety of tasks, including learning to make reward-maximizing choices in simple bandit tasks. Given their potential use as (autonomous) decision-making agents, it is important to understand how these models perform such reinforcement learning (RL) tasks and the extent to which they are susceptible to biases. Motivated by the fact that, in humans, it has been widely documented that the value of an outcome depends on how it compares to other local outcomes, the present study focuses on whether similar value encoding biases apply to how LLMs encode rewarding outcomes. Results from experiments with multiple bandit tasks and models show that LLMs exhibit behavioral signatures of a relative value bias. Adding explicit outcome comparisons to the prompt produces opposing effects on performance, enhancing maximization in trained choice sets but impairing generalization to new choice sets. Computational cognitive modeling reveals that LLM behavior is well-described by a simple RL algorithm that incorporates relative values at the outcome encoding stage. Lastly, we present preliminary evidence that the observed biases are not limited to fine-tuned LLMs, and that relative value processing is detectable in the final hidden layer activations of a raw, pretrained model. These findings have important implications for the use of LLMs in decision-making applications. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 455,133 |
2208.02881 | Improving Fuzzy-Logic based Map-Matching Method with Trajectory Stay-Point Detection | The requirement to trace and process moving objects in the contemporary era gradually increases since numerous applications quickly demand precise moving object locations. The Map-matching method is employed as a preprocessing technique, which matches a moving object point on a corresponding road. However, most of the GPS trajectory datasets include stay-points irregularity, which makes map-matching algorithms mismatch trajectories to irrelevant streets. Therefore, determining the stay-point region in GPS trajectory datasets results in better accurate matching and more rapid approaches. In this work, we cluster stay-points in a trajectory dataset with DBSCAN and eliminate redundant data to improve the efficiency of the map-matching algorithm by lowering processing time. We reckoned our proposed method's performance and exactness with a ground truth dataset compared to a fuzzy-logic based map-matching algorithm. Fortunately, our approach yields 27.39% data size reduction and 8.9% processing time reduction with the same accurate results as the previous fuzzy-logic based map-matching approach. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 311,602
1806.09104 | Accuracy Analysis for Distributed Weighted Least-Squares Estimation in Finite Steps and Loopy Networks | Distributed parameter estimation for large-scale systems is an active research problem. The goal is to derive a distributed algorithm in which each agent obtains a local estimate of its own subset of the global parameter vector, based on local measurements as well as information received from its neighbours. A recent algorithm has been proposed, which yields the optimal solution (i.e., the one that would be obtained using a centralized method) in finite time, provided the communication network forms an acyclic graph. If instead, the graph is cyclic, the only available alternative algorithm, which is based on iterative matrix inversion, achieving the optimal solution, does so asymptotically. However, it is also known that, in the cyclic case, the algorithm designed for acyclic graphs produces a solution which, although non optimal, is highly accurate. In this paper we do a theoretical study of the accuracy of this algorithm, in communication networks forming cyclic graphs. To this end, we provide bounds for the sub-optimality of the estimation error and the estimation error covariance, for a class of systems whose topological sparsity and signal-to-noise ratio satisfy certain condition. Our results show that, at each node, the accuracy improves exponentially with the so-called loop-free depth. Also, although the algorithm no longer converges in finite time in the case of cyclic graphs, simulation results show that the convergence is significantly faster than that of methods based on iterative matrix inversion. Our results suggest that, depending on the loop-free depth, the studied algorithm may be the preferred option even in applications with cyclic communication graphs. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 101,300
2405.15982 | Automated Assessment and Adaptive Multimodal Formative Feedback Improves Psychomotor Skills Training Outcomes in Quadrotor Teleoperation | The workforce will need to continually upskill in order to meet the evolving demands of industry, especially working with robotic and autonomous systems. Current training methods are not scalable and do not adapt to the skills that learners already possess. In this work, we develop a system that automatically assesses learner skill in a quadrotor teleoperation task using temporal logic task specifications. This assessment is used to generate multimodal feedback based on the principles of effective formative feedback. Participants perceived the feedback positively. Those receiving formative feedback viewed the feedback as more actionable compared to receiving summary statistics. Participants in the multimodal feedback condition were more likely to achieve a safe landing and increased their safe landings more over the experiment compared to other feedback conditions. Finally, we identify themes to improve adaptive feedback and discuss how training for complex psychomotor tasks can be integrated with learning theories. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 457,191
2408.11517 | Imagining from Images with an AI Storytelling Tool | A method for generating narratives by analyzing single images or image sequences is presented, inspired by the time immemorial tradition of Narrative Art. The proposed method explores the multimodal capabilities of GPT-4o to interpret visual content and create engaging stories, which are illustrated by a Stable Diffusion XL model. The method is supported by a fully implemented tool, called ImageTeller, which accepts images from diverse sources as input. Users can guide the narrative's development according to the conventions of fundamental genres - such as Comedy, Romance, Tragedy, Satire or Mystery -, opt to generate data-driven stories, or to leave the prototype free to decide how to handle the narrative structure. User interaction is provided along the generation process, allowing the user to request alternative chapters or illustrations, and even reject and restart the story generation based on the same input. Additionally, users can attach captions to the input images, influencing the system's interpretation of the visual content. Examples of generated stories are provided, along with details on how to access the prototype. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 482,326 |
1806.11187 | Neural-net-induced Gaussian process regression for function approximation and PDE solution | Neural-net-induced Gaussian process (NNGP) regression inherits both the high expressivity of deep neural networks (deep NNs) as well as the uncertainty quantification property of Gaussian processes (GPs). We generalize the current NNGP to first include a larger number of hyperparameters and subsequently train the model by maximum likelihood estimation. Unlike previous works on NNGP that targeted classification, here we apply the generalized NNGP to function approximation and to solving partial differential equations (PDEs). Specifically, we develop an analytical iteration formula to compute the covariance function of GP induced by deep NN with an error-function nonlinearity. We compare the performance of the generalized NNGP for function approximations and PDE solutions with those of GPs and fully-connected NNs. We observe that for smooth functions the generalized NNGP can yield the same order of accuracy with GP, while both NNGP and GP outperform deep NN. For non-smooth functions, the generalized NNGP is superior to GP and comparable or superior to deep NN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 101,657
2311.13959 | RankFeat&RankWeight: Rank-1 Feature/Weight Removal for Out-of-distribution Detection | The task of out-of-distribution (OOD) detection is crucial for deploying machine learning models in real-world settings. In this paper, we observe that the singular value distributions of the in-distribution (ID) and OOD features are quite different: the OOD feature matrix tends to have a larger dominant singular value than the ID feature, and the class predictions of OOD samples are largely determined by it. This observation motivates us to propose \texttt{RankFeat}, a simple yet effective \emph{post hoc} approach for OOD detection by removing the rank-1 matrix composed of the largest singular value and the associated singular vectors from the high-level feature. \texttt{RankFeat} achieves \emph{state-of-the-art} performance and reduces the average false positive rate (FPR95) by 17.90\% compared with the previous best method. The success of \texttt{RankFeat} motivates us to investigate whether a similar phenomenon would exist in the parameter matrices of neural networks. We thus propose \texttt{RankWeight} which removes the rank-1 weight from the parameter matrices of a single deep layer. Our \texttt{RankWeight} is also \emph{post hoc} and only requires computing the rank-1 matrix once. As a standalone approach, \texttt{RankWeight} has very competitive performance against other methods across various backbones. Moreover, \texttt{RankWeight} enjoys flexible compatibility with a wide range of OOD detection methods. The combination of \texttt{RankWeight} and \texttt{RankFeat} refreshes the new \emph{state-of-the-art} performance, achieving the FPR95 as low as 16.13\% on the ImageNet-1k benchmark. Extensive ablation studies and comprehensive theoretical analyses are presented to support the empirical results. Code is publicly available via \url{https://github.com/KingJamesSong/RankFeat}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 409,942
1911.03941 | Using LSTMs for climate change assessment studies on droughts and floods | Climate change affects occurrences of floods and droughts worldwide. However, predicting climate impacts over individual watersheds is difficult, primarily because accurate hydrological forecasts require models that are calibrated to past data. In this work we present a large-scale LSTM-based modeling approach that -- by training on large data sets -- learns a diversity of hydrological behaviors. Previous work shows that this model is more accurate than current state-of-the-art models, even when the LSTM-based approach operates out-of-sample and the latter in-sample. In this work, we show how this model can assess the sensitivity of the underlying systems with regard to extreme (high and low) flows in individual watersheds over the continental US. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 152,839 |
1910.02224 | Transductive Episodic-Wise Adaptive Metric for Few-Shot Learning | Few-shot learning, which aims at extracting new concepts rapidly from extremely few examples of novel classes, has been featured into the meta-learning paradigm recently. Yet, the key challenge of how to learn a generalizable classifier with the capability of adapting to specific tasks with severely limited data still remains in this domain. To this end, we propose a Transductive Episodic-wise Adaptive Metric (TEAM) framework for few-shot learning, by integrating the meta-learning paradigm with both deep metric learning and transductive inference. With exploring the pairwise constraints and regularization prior within each task, we explicitly formulate the adaptation procedure into a standard semi-definite programming problem. By solving the problem with its closed-form solution on the fly with the setup of transduction, our approach efficiently tailors an episodic-wise metric for each task to adapt all features from a shared task-agnostic embedding space into a more discriminative task-specific metric space. Moreover, we further leverage an attention-based bi-directional similarity strategy for extracting the more robust relationship between queries and prototypes. Extensive experiments on three benchmark datasets show that our framework is superior to other existing approaches and achieves the state-of-the-art performance in the few-shot literature. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 148,179 |
1910.12901 | Waypoint Optimization Using Bayesian Optimization: A Case Study in Airborne Wind Energy Systems | We present a data-driven optimization framework that aims to address online adaptation of the flight path shape for an airborne wind energy system (AWE) that follows a repetitive path to generate power. Specifically, Bayesian optimization, which is a data-driven algorithm for finding the optimum of an unknown objective function, is utilized to solve the waypoint adaptation. To form a computationally efficient optimization framework, we describe each figure-$8$ flight via a compact set of parameters, termed as basis parameters. We model the underlying objective function by a Gaussian Process (GP). Bayesian optimization utilizes the predictive uncertainty information from the GP to determine the best subsequent basis parameters. Once a path is generated using Bayesian optimization, a path following mechanism is used to track the generated figure-$8$ flight. The proposed framework is validated on a simplified $2$-dimensional model that mimics the key behaviors of a $3$-dimensional AWE system. We demonstrate the capability of the proposed framework in a simulation environment for a simplified $2$-dimensional AWE system model. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 151,220
2104.02409 | Learning to Estimate Hidden Motions with Global Motion Aggregation | Occlusions pose a significant challenge to optical flow algorithms that rely on local evidences. We consider an occluded point to be one that is imaged in the first frame but not in the next, a slight overloading of the standard definition since it also includes points that move out-of-frame. Estimating the motion of these points is extremely difficult, particularly in the two-frame setting. Previous work relies on CNNs to learn occlusions, without much success, or requires multiple frames to reason about occlusions using temporal smoothness. In this paper, we argue that the occlusion problem can be better solved in the two-frame case by modelling image self-similarities. We introduce a global motion aggregation module, a transformer-based approach to find long-range dependencies between pixels in the first image, and perform global aggregation on the corresponding motion features. We demonstrate that the optical flow estimates in the occluded regions can be significantly improved without damaging the performance in non-occluded regions. This approach obtains new state-of-the-art results on the challenging Sintel dataset, improving the average end-point error by 13.6% on Sintel Final and 13.7% on Sintel Clean. At the time of submission, our method ranks first on these benchmarks among all published and unpublished approaches. Code is available at https://github.com/zacjiang/GMA | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 228,709 |
2112.05222 | High Voltage Shore Connection Systems: Grounding Resistance Selection and Short Circuit Currents Evaluation | Cold ironing represents an effective solution to remove air polluting emissions from ports. The high voltage shore connection system is the key enabling facility that allows to provide power from the shore side electrical system to the ship. The design of the shore connection needs a comprehensive assessment of the fault currents in different operating scenarios. International standards require the neutral point of the shore connection transformer be equipped with a neutral grounding resistor. Its value has to be defined to guarantee safety and protection of equipment and personnel in case of single phase-to-ground faults. Moreover, three-phase short circuits need to be considered to size equipment and protection devices. A crucial role is played by the frequency converter control system, required to adapt the mains frequency to the frequency of the ship. In this work, a complete electro-magnetic dynamic model of the high voltage shore connection and of the on-board power system has been developed, including frequency converter, shore-side transformer, connection MV cables and power system of the ship, to analyze in detail the behavior of the system in case of single phase-to-ground fault and three-phase short circuit, taking into account relevant standards and best practices. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 270,773
2307.07603 | Gastrointestinal Disease Classification through Explainable and Cost-Sensitive Deep Neural Networks with Supervised Contrastive Learning | Gastrointestinal diseases pose significant healthcare challenges as they manifest in diverse ways and can lead to potential complications. Ensuring precise and timely classification of these diseases is pivotal in guiding treatment choices and enhancing patient outcomes. This paper introduces a novel approach on classifying gastrointestinal diseases by leveraging cost-sensitive pre-trained deep convolutional neural network (CNN) architectures with supervised contrastive learning. Our approach enables the network to learn representations that capture vital disease-related features, while also considering the relationships of similarity between samples. To tackle the challenges posed by imbalanced datasets and the cost-sensitive nature of misclassification errors in healthcare, we incorporate cost-sensitive learning. By assigning distinct costs to misclassifications based on the disease class, we prioritize accurate classification of critical conditions. Furthermore, we enhance the interpretability of our model by integrating gradient-based techniques from explainable artificial intelligence (AI). This inclusion provides valuable insights into the decision-making process of the network, aiding in understanding the features that contribute to disease classification. To assess the effectiveness of our proposed approach, we perform extensive experiments on a comprehensive gastrointestinal disease dataset, such as the Hyper-Kvasir dataset. Through thorough comparisons with existing works, we demonstrate the strong classification accuracy, robustness and interpretability of our model. We have made the implementation of our proposed approach publicly available at https://github.com/dibya404/Gastrointestinal-Disease-Classification-through-Explainable-and-Cost-Sensitive-DNN-with-SCL | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 379,468
2303.11937 | High Probability Bounds for Stochastic Continuous Submodular Maximization | We consider maximization of stochastic monotone continuous submodular functions (CSF) with a diminishing return property. Existing algorithms only guarantee the performance \textit{in expectation}, and do not bound the probability of getting a bad solution. This implies that for a particular run of the algorithms, the solution may be much worse than the provided guarantee in expectation. In this paper, we first empirically verify that this is indeed the case. Then, we provide the first \textit{high-probability} analysis of the existing methods for stochastic CSF maximization, namely PGA, boosted PGA, SCG, and SCG++. Finally, we provide an improved high-probability bound for SCG, under slightly stronger assumptions, with a better convergence rate than that of the expected solution. Through extensive experiments on non-concave quadratic programming (NQP) and optimal budget allocation, we confirm the validity of our bounds and show that even in the worst-case, PGA converges to $OPT/2$, and boosted PGA, SCG, SCG++ converge to $(1 - 1/e)OPT$, but at a slower rate than that of the expected solution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 353,073
1606.07968 | Generalized Wishart processes for interpolation over diffusion tensor fields | Diffusion Magnetic Resonance Imaging (dMRI) is a non-invasive tool for watching the microstructure of fibrous nerve and muscle tissue. From dMRI, it is possible to estimate 2-rank diffusion tensors imaging (DTI) fields, that are widely used in clinical applications: tissue segmentation, fiber tractography, brain atlas construction, brain conductivity models, among others. Due to hardware limitations of MRI scanners, DTI has the difficult compromise between spatial resolution and signal noise ratio (SNR) during acquisition. For this reason, the data are often acquired with very low resolution. To enhance DTI data resolution, interpolation provides an interesting software solution. The aim of this work is to develop a methodology for DTI interpolation that enhance the spatial resolution of DTI fields. We assume that a DTI field follows a recently introduced stochastic process known as a generalized Wishart process (GWP), which we use as a prior over the diffusion tensor field. For posterior inference, we use Markov Chain Monte Carlo methods. We perform experiments in toy and real data. Results of GWP outperform other methods in the literature, when compared in different validation protocols. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 57,809
2404.10359 | Stampede Alert Clustering Algorithmic System Based on Tiny-Scale Strengthened DETR | A novel crowd stampede detection and prediction algorithm based on Deformable DETR is proposed to address the challenges of detecting a large number of small targets and target occlusion in crowded airport and train station environments. In terms of model design, the algorithm incorporates a multi-scale feature fusion module to enlarge the receptive field and enhance the detection capability of small targets. Furthermore, the deformable attention mechanism is improved to reduce missed detections and false alarms for critical targets. Additionally, a new algorithm is innovatively introduced for stampede event prediction and visualization. Experimental evaluations on the PKX-LHR dataset demonstrate that the enhanced algorithm achieves a 34% performance in small target detection accuracy while maintaining the original detection speed. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 447,071
2404.01291 | Evaluating Text-to-Visual Generation with Image-to-Text Generation | Despite significant progress in generative AI, comprehensive evaluation remains challenging because of the lack of effective metrics and standardized benchmarks. For instance, the widely-used CLIPScore measures the alignment between a (generated) image and text prompt, but it fails to produce reliable scores for complex prompts involving compositions of objects, attributes, and relations. One reason is that text encoders of CLIP can notoriously act as a "bag of words", conflating prompts such as "the horse is eating the grass" with "the grass is eating the horse". To address this, we introduce the VQAScore, which uses a visual-question-answering (VQA) model to produce an alignment score by computing the probability of a "Yes" answer to a simple "Does this figure show '{text}'?" question. Though simpler than prior art, VQAScore computed with off-the-shelf models produces state-of-the-art results across many (8) image-text alignment benchmarks. We also compute VQAScore with an in-house model that follows best practices in the literature. For example, we use a bidirectional image-question encoder that allows image embeddings to depend on the question being asked (and vice versa). Our in-house model, CLIP-FlanT5, outperforms even the strongest baselines that make use of the proprietary GPT-4V. Interestingly, although we train with only images, VQAScore can also align text with video and 3D models. VQAScore allows researchers to benchmark text-to-visual generation using complex texts that capture the compositional structure of real-world prompts. We introduce GenAI-Bench, a more challenging benchmark with 1,600 compositional text prompts that require parsing scenes, objects, attributes, relationships, and high-order reasoning like comparison and logic. GenAI-Bench also offers over 15,000 human ratings for leading image and video generation models such as Stable Diffusion, DALL-E 3, and Gen2. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | true | 443,356
2001.05568 | Notes on Communication and Computation in Secure Distributed Matrix Multiplication | We consider the problem of secure distributed matrix multiplication in which a user wishes to compute the product of two matrices with the assistance of honest but curious servers. In this paper, we answer the following question: Is it beneficial to offload the computations if security is a concern? We answer this question in the affirmative by showing that by adjusting the parameters in a polynomial code we can obtain a trade-off between the user's and the servers' computational time. Indeed, we show that if the computational time complexity of an operation in $\mathbb{F}_q$ is at most $\mathcal{Z}_q$ and the computational time complexity of multiplying two $n\times n$ matrices is $\mathcal{O}(n^\omega \mathcal{Z}_q)$ then, by optimizing the trade-off, the user together with the servers can compute the multiplication in $\mathcal{O}(n^{4-\frac{6}{\omega+1}} \mathcal{Z}_q)$ time. We also show that if the user is only concerned in optimizing the download rate, a common assumption in the literature, then the problem can be converted into a simple private information retrieval problem by means of a scheme we call Private Oracle Querying. However, this comes at large upload and computational costs for both the user and the servers. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 160,575
1006.0991 | Variational Program Inference | We introduce a framework for representing a variety of interesting problems as inference over the execution of probabilistic model programs. We represent a "solution" to such a problem as a guide program which runs alongside the model program and influences the model program's random choices, leading the model program to sample from a different distribution than from its priors. Ideally the guide program influences the model program to sample from the posteriors given the evidence. We show how the KL- divergence between the true posterior distribution and the distribution induced by the guided model program can be efficiently estimated (up to an additive constant) by sampling multiple executions of the guided model program. In addition, we show how to use the guide program as a proposal distribution in importance sampling to statistically prove lower bounds on the probability of the evidence and on the probability of a hypothesis and the evidence. We can use the quotient of these two bounds as an estimate of the conditional probability of the hypothesis given the evidence. We thus turn the inference problem into a heuristic search for better guide programs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 6,674 |
2305.10353 | An Ensemble Learning Approach for Exercise Detection in Type 1 Diabetes Patients | Type 1 diabetes is a serious disease in which individuals are unable to regulate their blood glucose levels, leading to various medical complications. Artificial pancreas (AP) systems have been developed as a solution for type 1 diabetic patients to mimic the behavior of the pancreas and regulate blood glucose levels. However, current AP systems lack detection capabilities for exercise-induced glucose intake, which can last up to 4 to 8 hours. This incapability can lead to hypoglycemia, which if left untreated, could have serious consequences, including death. Existing exercise detection methods are either limited to single sensor data or use inaccurate models for exercise detection, making them less effective in practice. In this work, we propose an ensemble learning framework that combines a data-driven physiological model and a Siamese network to leverage multiple physiological signal streams for exercise detection with high accuracy. To evaluate the effectiveness of our proposed approach, we utilized a public dataset with 12 diabetic patients collected from an 8-week clinical trial. Our approach achieves a true positive rate for exercise detection of 86.4% and a true negative rate of 99.1%, outperforming state-of-the-art solutions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 365,017
2310.09401 | CROWN: A Novel Approach to Comprehending Users' Preferences for Accurate Personalized News Recommendation | Personalized news recommendation aims to assist users in finding news articles that align with their interests, which plays a pivotal role in mitigating users' information overload problem. Although many recent works have been studied for better personalized news recommendation, the following challenges should be explored more: (C1) Comprehending manifold intents coupled within a news article, (C2) Differentiating varying post-read preferences of news articles, and (C3) Addressing the cold-start user problem. To tackle the aforementioned challenges together, in this paper, we propose a novel personalized news recommendation framework (CROWN) that employs (1) category-guided intent disentanglement for (C1), (2) consistency-based news representation for (C2), and (3) GNN-enhanced hybrid user representation for (C3). Furthermore, we incorporate a category prediction into the training process of CROWN as an auxiliary task, which provides supplementary supervisory signals to enhance intent disentanglement. Extensive experiments on two real-world datasets reveal that (1) CROWN provides consistent performance improvements over ten state-of-the-art news recommendation methods and (2) the proposed strategies significantly improve the accuracy of CROWN. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 399,756
2307.04052 | Learning to Group Auxiliary Datasets for Molecule | The limited availability of annotations in small molecule datasets presents a challenge to machine learning models. To address this, one common strategy is to collaborate with additional auxiliary datasets. However, having more data does not always guarantee improvements. Negative transfer can occur when the knowledge in the target dataset differs or contradicts that of the auxiliary molecule datasets. In light of this, identifying the auxiliary molecule datasets that can benefit the target dataset when jointly trained remains a critical and unresolved problem. Through an empirical analysis, we observe that combining graph structure similarity and task similarity can serve as a more reliable indicator for identifying high-affinity auxiliary datasets. Motivated by this insight, we propose MolGroup, which separates the dataset affinity into task and structure affinity to predict the potential benefits of each auxiliary molecule dataset. MolGroup achieves this by utilizing a routing mechanism optimized through a bi-level optimization framework. Empowered by the meta gradient, the routing mechanism is optimized toward maximizing the target dataset's performance and quantifies the affinity as the gating score. As a result, MolGroup is capable of predicting the optimal combination of auxiliary datasets for each target dataset. Our extensive experiments demonstrate the efficiency and effectiveness of MolGroup, showing an average improvement of 4.41%/3.47% for GIN/Graphormer trained with the group of molecule datasets selected by MolGroup on 11 target molecule datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 378,263 |
2303.14817 | Frame Flexible Network | Existing video recognition algorithms always conduct different training pipelines for inputs with different frame numbers, which requires repetitive training operations and multiplying storage costs. If we evaluate the model using other frames which are not used in training, we observe the performance will drop significantly (see Fig.1), which is summarized as Temporal Frequency Deviation phenomenon. To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly. Concretely, FFN integrates several sets of training sequences, involves Multi-Frequency Alignment (MFAL) to learn temporal frequency invariant representations, and leverages Multi-Frequency Adaptation (MFAD) to further strengthen the representation abilities. Comprehensive empirical validations using various architectures and popular benchmarks solidly demonstrate the effectiveness and generalization of FFN (e.g., 7.08/5.15/2.17% performance gain at Frame 4/8/16 on Something-Something V1 dataset over Uniformer). Code is available at https://github.com/BeSpontaneous/FFN. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 354,255 |
1512.06050 | Pricing the Ramping Reserve and Capacity Reserve in Real Time Markets | The increasing penetration of renewable energy in recent years has led to more uncertainties in power systems. In order to maintain system reliability and security, electricity market operators need to keep certain reserves in the Security-Constrained Economic Dispatch (SCED) problems. A new concept, deliverable generation ramping reserve, is proposed in this paper. The prices of generation ramping reserves and generation capacity reserves are derived in the Affine Adjustable Robust Optimization framework. With the help of these prices, the valuable reserves can be identified among the available reserves. These prices provide crucial information on the values of reserve resources, which are critical for the long-term flexibility investment. The market equilibrium based on these prices is analyzed. Simulations on a 3-bus system and the IEEE 118-bus system are performed to illustrate the concept of ramping reserve price and capacity reserve price. The impacts of the reserve credit on market participants are discussed. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 50,278 |
2006.02804 | Exploring the Potential of Low-bit Training of Convolutional Neural Networks | In this work, we propose a low-bit training framework for convolutional neural networks, which is built around a novel multi-level scaling (MLS) tensor format. Our framework focuses on reducing the energy consumption of convolution operations by quantizing all the convolution operands to low bit-width format. Specifically, we propose the MLS tensor format, in which the element-wise bit-width can be largely reduced. Then, we describe the dynamic quantization and the low-bit tensor convolution arithmetic to leverage the MLS tensor format efficiently. Experiments show that our framework achieves a superior trade-off between the accuracy and the bit-width than previous low-bit training frameworks. For training a variety of models on CIFAR-10, using 1-bit mantissa and 2-bit exponent is adequate to keep the accuracy loss within $1\%$. And on larger datasets like ImageNet, using 4-bit mantissa and 2-bit exponent is adequate to keep the accuracy loss within $1\%$. Through the energy consumption simulation of the computing units, we can estimate that training a variety of models with our framework could achieve $8.3\sim10.2\times$ and $1.9\sim2.3\times$ higher energy efficiency than training with full-precision and 8-bit floating-point arithmetic, respectively. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,147
1501.00892 | On the trade-off between control performance and communication cost in event-triggered control | We consider a stochastic system where the communication between the controller and the actuator is triggered by a threshold-based rule. The communication is performed across an unreliable link that stochastically erases transmitted packets. To decrease the communication burden, and as a partial protection against dropped packets, the controller sends a sequence of control commands to the actuator in each packet. These commands are stored in a buffer and applied sequentially until the next control packet arrives. In this context, we study dead-beat control laws and compute the expected linear-quadratic loss of the closed-loop system for any given event-threshold. Furthermore, we provide analytical expressions that quantify the trade-off between the communication cost and the control performance of event-triggered control systems. Numerical examples demonstrate the effectiveness of the proposed framework. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 39,033
2502.02033 | On Iso-Dual MDS Codes From Elliptic Curves | For a linear code $C$ over a finite field, if its dual code $C^{\perp}$ is equivalent to itself, then the code $C$ is said to be {\it isometry-dual}. In this paper, we first confirm a conjecture about the isometry-dual MDS elliptic codes proposed by Han and Ren. Subsequently, two constructions of isometry-dual maximum distance separable (MDS) codes from elliptic curves are presented. The new code length $n$ satisfies $n\le\frac{q+\lfloor2\sqrt{q}\rfloor-1}{2}$ when $q$ is even and $n\le\frac{q+\lfloor2\sqrt{q}\rfloor-3}{2}$ when $q$ is odd. Additionally, we consider the hull dimension of both constructions. In the case of finite fields with even characteristics, an isometry-dual MDS code is equivalent to a self-dual MDS code and a linear complementary dual MDS code. Finally, we apply our results to entanglement-assisted quantum error correcting codes (EAQECCs) and obtain two new families of MDS EAQECCs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 530,152 |
2308.10515 | QD-BEV : Quantization-aware View-guided Distillation for Multi-view 3D Object Detection | Multi-view 3D detection based on BEV (bird-eye-view) has recently achieved significant improvements. However, the huge memory consumption of state-of-the-art models makes it hard to deploy them on vehicles, and the non-trivial latency will affect the real-time perception of streaming applications. Despite the wide application of quantization to lighten models, we show in our paper that directly applying quantization in BEV tasks will 1) make the training unstable, and 2) lead to intolerable performance degradation. To solve these issues, our method QD-BEV enables a novel view-guided distillation (VGD) objective, which can stabilize the quantization-aware training (QAT) while enhancing the model performance by leveraging both image features and BEV features. Our experiments show that QD-BEV achieves similar or even better accuracy than previous methods with significant efficiency gains. On the nuScenes datasets, the 4-bit weight and 6-bit activation quantized QD-BEV-Tiny model achieves 37.2% NDS with only 15.8 MB model size, outperforming BevFormer-Tiny by 1.8% with an 8x model compression. On the Small and Base variants, QD-BEV models also perform superbly and achieve 47.9% NDS (28.2 MB) and 50.9% NDS (32.9 MB), respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 386,767
2006.04976 | Physics Informed Deep Kernel Learning | Deep kernel learning is a promising combination of deep neural networks and nonparametric function learning. However, as a data driven approach, the performance of deep kernel learning can still be restricted by scarce or insufficient data, especially in extrapolation tasks. To address these limitations, we propose Physics Informed Deep Kernel Learning (PI-DKL) that exploits physics knowledge represented by differential equations with latent sources. Specifically, we use the posterior function sample of the Gaussian process as the surrogate for the solution of the differential equation, and construct a generative component to integrate the equation in a principled Bayesian hybrid framework. For efficient and effective inference, we marginalize out the latent variables in the joint probability and derive a collapsed model evidence lower bound (ELBO), based on which we develop a stochastic model estimation algorithm. Our ELBO can be viewed as a nice, interpretable posterior regularization objective. On synthetic datasets and real-world applications, we show the advantage of our approach in both prediction accuracy and uncertainty quantification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,879 |
2406.10848 | A transparency-based action model implemented in a robotic physical trainer for improved HRI | Transparency is an important aspect of human-robot interaction (HRI), as it can improve system trust and usability leading to improved communication and performance. However, most transparency models focus only on the amount of information given to users. In this paper, we propose a bidirectional transparency model, termed a transparency-based action (TBA) model, which in addition to providing transparency information (robot-to-human), allows the robot to take actions based on transparency information received from the human (robot-of-human and human-to-robot). We implemented a three-level (High, Medium and Low) TBA model on a robotic system trainer in two pilot studies (with students as participants) to examine its impact on acceptance and HRI. Based on the pilot studies results, the Medium TBA level was not included in the main experiment, which was conducted with older adults (aged 75-85). In that experiment, two TBA levels were compared: Low (basic information including only robot-to-human transparency) and High (including additional information relating to predicted outcomes with robot-of-human and human-to-robot transparency). The results revealed a significant difference between the two TBA levels of the model in terms of perceived usefulness, ease of use, and attitude. The High TBA level resulted in improved user acceptance and was preferred by the users. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 464,585
1911.07183 | Scale- and Context-Aware Convolutional Non-intrusive Load Monitoring | Non-intrusive load monitoring addresses the challenging task of decomposing the aggregate signal of a household's electricity consumption into appliance-level data without installing dedicated meters. By detecting load malfunction and recommending energy reduction programs, cost-effective non-intrusive load monitoring provides intelligent demand-side management for utilities and end users. In this paper, we boost the accuracy of energy disaggregation with a novel neural network structure named scale- and context-aware network, which exploits multi-scale features and contextual information. Specifically, we develop a multi-branch architecture with multiple receptive field sizes and branch-wise gates that connect the branches in the sub-networks. We build a self-attention module to facilitate the integration of global context, and we incorporate an adversarial loss and on-state augmentation to further improve the model's performance. Extensive simulation results tested on open datasets corroborate the merits of the proposed approach, which significantly outperforms state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 153,755 |
1911.06644 | You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization | Spatiotemporal action localization requires the incorporation of two sources of information into the designed architecture: (1) temporal information from the previous frames and (2) spatial information from the key frame. Current state-of-the-art approaches usually extract these information with separate networks and use an extra mechanism for fusion to get detections. In this work, we present YOWO, a unified CNN architecture for real-time spatiotemporal action localization in video streams. YOWO is a single-stage architecture with two branches to extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation. Since the whole architecture is unified, it can be optimized end-to-end. The YOWO architecture is fast providing 34 frames-per-second on 16-frames input clips and 62 frames-per-second on 8-frames input clips, which is currently the fastest state-of-the-art architecture on spatiotemporal action localization task. Remarkably, YOWO outperforms the previous state-of-the art results on J-HMDB-21 and UCF101-24 with an impressive improvement of ~3% and ~12%, respectively. Moreover, YOWO is the first and only single-stage architecture that provides competitive results on AVA dataset. We make our code and pretrained models publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 153,595
2203.01665 | $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search | Neural Architecture Search~(NAS) has attracted increasingly more attention in recent years because of its capability to design deep neural networks automatically. Among them, differential NAS approaches such as DARTS, have gained popularity for the search efficiency. However, they suffer from two main issues, the weak robustness to the performance collapse and the poor generalization ability of the searched architectures. To solve these two problems, a simple-but-efficient regularization method, termed as Beta-Decay, is proposed to regularize the DARTS-based NAS searching process. Specifically, Beta-Decay regularization can impose constraints to keep the value and variance of activated architecture parameters from too large. Furthermore, we provide in-depth theoretical analysis on how it works and why it works. Experimental results on NAS-Bench-201 show that our proposed method can help to stabilize the searching process and makes the searched network more transferable across different datasets. In addition, our search scheme shows an outstanding property of being less dependent on training time and data. Comprehensive experiments on a variety of search spaces and datasets validate the effectiveness of the proposed method. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,470
2412.04876 | Dynamic Interference Prediction for In-X 6G Sub-networks | The sixth generation (6G) industrial Sub-networks (SNs) face several challenges in meeting extreme latency and reliability requirements in the order of 0.1-1 ms and 99.999-to-99.99999 percentile, respectively. Interference management (IM) plays an integral role in addressing these requirements, especially in ultra-dense SN environments with rapidly varying interference induced by channel characteristics, mobility, and resource limitations. In general, IM can be achieved using resource allocation and \textit{accurate} Link adaptation (LA). In this work, we focus on the latter, where we first model interference at SN devices using the spatially consistent 3GPP channel model. Following this, we present a discrete-time dynamic state space model (DSSM) at a SN access point (AP), where interference power values (IPVs) are modeled as latent variables incorporating underlying modeling errors as well as transmission/protocol delays. Necessary approximations are then presented to simplify the DSSM and to efficiently employ the extended Kalman filter (EKF) for interference prediction. Unlike baseline methods, our proposed approach predicts IPVs solely based on the channel quality indicator (CQI) reports available at the SN AP at every transmission time interval (TTI). Numerical results demonstrate that our proposed approach clearly outperforms the conventional baseline. Furthermore, we also show that despite predicting with limited information, our proposed approach consistently achieves a comparable performance w.r.t. the off-the-shelf supervised learning based baseline. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 514,609
2304.12991 | A new invariant for cyclic orbit flag codes | In the network coding framework, given a prime power $q$ and the vector space $\mathbb{F}_q^n$, a constant type flag code is a set of nested sequences of $\mathbb{F}_q$-subspaces (flags) with the same increasing sequence of dimensions (the type of the flag). If a flag code arises as the orbit under the action of a cyclic subgroup of the general linear group over a flag, we say that it is a cyclic orbit flag code. Among the parameters of such a family of codes, we have its best friend, that is the largest field over which all the subspaces in the generating flag are vector spaces. This object permits to compute the cardinality of the code and estimate its minimum distance. However, as it occurs with other absolute parameters of a flag code, the information given by the best friend is not complete in many cases due to the fact that it can be obtained in different ways. In this work, we present a new invariant, the best friend vector, that captures the specific way the best friend can be unfolded. Furthermore, throughout the paper we analyze the strong underlying interaction between this invariant and other parameters such as the cardinality, the flag distance, or the type vector, and how it conditions them. Finally, we investigate the realizability of a prescribed best friend vector in a vector space. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 360,414 |
1906.10529 | AMF: Aggregated Mondrian Forests for Online Learning | Random Forests (RF) is one of the algorithms of choice in many supervised learning applications, be it classification or regression. The appeal of such tree-ensemble methods comes from a combination of several characteristics: a remarkable accuracy in a variety of tasks, a small number of parameters to tune, robustness with respect to features scaling, a reasonable computational cost for training and prediction, and their suitability in high-dimensional settings. The most commonly used RF variants however are "offline" algorithms, which require the availability of the whole dataset at once. In this paper, we introduce AMF, an online random forest algorithm based on Mondrian Forests. Using a variant of the Context Tree Weighting algorithm, we show that it is possible to efficiently perform an exact aggregation over all prunings of the trees; in particular, this enables to obtain a truly online parameter-free algorithm which is competitive with the optimal pruning of the Mondrian tree, and thus adaptive to the unknown regularity of the regression function. Numerical experiments show that AMF is competitive with respect to several strong baselines on a large number of datasets for multi-class classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 136,455 |
1910.04040 | Fast Task-Adaptation for Tasks Labeled Using Natural Language in Reinforcement Learning | Over its lifetime, a reinforcement learning agent is often tasked with different tasks. How to efficiently adapt a previously learned control policy from one task to another, remains an open research question. In this paper, we investigate how instructions formulated in natural language can enable faster and more effective task adaptation. This can serve as the basis for developing language instructed skills, which can be used in a lifelong learning setting. Our method is capable of assessing, given a set of developed base control policies, which policy will adapt best to a new unseen task. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 148,659
2003.12140 | MetNet: A Neural Weather Model for Precipitation Forecasting | Weather forecasting is a long standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at the high spatial resolution of 1 km$^2$ and at the temporal resolution of 2 minutes with a latency in the order of seconds. MetNet takes as input radar and satellite data and forecast lead time and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate the global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,822 |
1706.02769 | Source Forager: A Search Engine for Similar Source Code | Developers spend a significant amount of time searching for code: e.g., to understand how to complete, correct, or adapt their own code for a new context. Unfortunately, the state of the art in code search has not evolved much beyond text search over tokenized source. Code has much richer structure and semantics than normal text, and this property can be exploited to specialize the code-search process for better querying, searching, and ranking of code-search results. We present a new code-search engine named Source Forager. Given a query in the form of a C/C++ function, Source Forager searches a pre-populated code database for similar C/C++ functions. Source Forager preprocesses the database to extract a variety of simple code features that capture different aspects of code. A search returns the $k$ functions in the database that are most similar to the query, based on the various extracted code features. We tested the usefulness of Source Forager using a variety of code-search queries from two domains. Our experiments show that the ranked results returned by Source Forager are accurate, and that query-relevant functions can be reliably retrieved even when searching through a large code database that contains very few query-relevant functions. We believe that Source Forager is a first step towards much-needed tools that provide a better code-search experience. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 75,029 |
1910.04857 | Sampling the "Inverse Set" of a Neuron: An Approach to Understanding Neural Nets | With the recent success of deep neural networks in computer vision, it is important to understand the internal working of these networks. What does a given neuron represent? The concepts captured by a neuron may be hard to understand or express in simple terms. The approach we propose in this paper is to characterize the region of input space that excites a given neuron to a certain level; we call this the inverse set. This inverse set is a complicated high dimensional object that we explore by an optimization-based sampling approach. Inspection of samples of this set by a human can reveal regularities that help to understand the neuron. This goes beyond approaches which were limited to finding an image which maximally activates the neuron or using Markov chain Monte Carlo to sample images, which is very slow, generates samples with little diversity, and lacks control over the activation value of the generated samples. Our approach also allows us to explore the intersection of inverse sets of several neurons and other variations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 148,879
2306.00950 | Differential Diffusion: Giving Each Pixel Its Strength | Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control on the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting -- the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study. Our code is available at: https://github.com/exx8/differential-diffusion | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 370,226 |
1901.07299 | SIMCom: Statistical Sniffing of Inter-Module Communications for Run-time Hardware Trojan Detection | Timely detection of Hardware Trojans (HTs) has become a major challenge for secure integrated circuits. We present a run-time methodology for HT detection that employs a multi-parameter statistical traffic modeling of the communication channel in a given System-on-Chip (SoC), named SIMCom. The main idea is to model the communication using multiple side-channel information like the Hurst exponent, the standard deviation of the injection distribution, and the hop distribution jointly to accurately identify HT-based online anomalies (that affects the communication without affecting the protocols or control signals). At design time, our methodology employs a "property specification language" to define and embed assertions in the RTL, specifying the correct communication behavior of a given SoC. At run-time, it monitors the anomalies in the communication behavior by checking the execution patterns against these assertions. For illustration, we evaluate SIMCom for three SoCs, i.e., SoC1 (four single-core MC8051 and UART modules), SoC2 (four single-core MC8051, AES, ethernet, memctrl, BasicRSA, RS232 modules), and SoC3 (four single-core LEON3 microcontrollers connected with each other and AES, ethernet, memctrl, BasicRSA, RS232 modules). The experimental results show that with the combined analysis of multiple statistical parameters, SIMCom is able to detect all the benchmark Trojans (available on trust-hub) with less than 1% area and power overhead. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 119,186
2411.15618 | Accelerated Hydration Site Localization and Thermodynamic Profiling | Water plays a fundamental role in the structure and function of proteins and other biomolecules. The thermodynamic profiles of water molecules surrounding a protein are critical for ligand binding and recognition. Therefore, identifying the location and thermodynamic behavior of relevant water molecules is important for generating and optimizing lead compounds for affinity and selectivity to a given target. Computational methods have been developed to identify these hydration sites, but are largely limited to simplified models that fail to capture multi-body interactions, or dynamics-based methods that rely on extensive sampling. Here we present a method for fast and accurate localization and thermodynamic profiling of hydration sites for protein structures. The method is based on a geometric deep neural network trained on a large, novel dataset of explicit water molecular dynamics simulations. We confirm the accuracy and robustness of our model on experimental data and demonstrate its utility on several case studies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 510,688 |
2303.05171 | RiDDLE: Reversible and Diversified De-identification with Latent Encryptor | This work presents RiDDLE, short for Reversible and Diversified De-identification with Latent Encryptor, to protect the identity information of people from being misused. Built upon a pre-learned StyleGAN2 generator, RiDDLE manages to encrypt and decrypt the facial identity within the latent space. The design of RiDDLE has three appealing properties. First, the encryption process is cipher-guided and hence allows diverse anonymization using different passwords. Second, the true identity can only be decrypted with the correct password, otherwise the system will produce another de-identified face to maintain the privacy. Third, both encryption and decryption share an efficient implementation, benefiting from a carefully tailored lightweight encryptor. Comparisons with existing alternatives confirm that our approach accomplishes the de-identification task with better quality, higher diversity, and stronger reversibility. We further demonstrate the effectiveness of RiDDLE in anonymizing videos. Code and models will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 350,372 |
2407.10687 | FRI-Net: Floorplan Reconstruction via Room-wise Implicit Representation | In this paper, we introduce a novel method called FRI-Net for 2D floorplan reconstruction from 3D point cloud. Existing methods typically rely on corner regression or box regression, which lack consideration for the global shapes of rooms. To address these issues, we propose a novel approach using a room-wise implicit representation with structural regularization to characterize the shapes of rooms in floorplans. By incorporating geometric priors of room layouts in floorplans into our training strategy, the generated room polygons are more geometrically regular. We have conducted experiments on two challenging datasets, Structured3D and SceneCAD. Our method demonstrates improved performance compared to state-of-the-art methods, validating the effectiveness of our proposed representation for floorplan reconstruction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 473,079 |
2309.07158 | Compressed Real Numbers for AI: a case-study using a RISC-V CPU | As recently demonstrated, Deep Neural Networks (DNN), usually trained using single precision IEEE 754 floating point numbers (binary32), can also work using lower precision. Therefore, 16-bit and 8-bit compressed formats have attracted considerable attention. In this paper, we focused on two families of formats that have already achieved interesting results in compressing binary32 numbers in machine learning applications, without noticeable degradation of the accuracy: bfloat and posit. Even if 16-bit and 8-bit bfloat/posit are routinely used for reducing the storage of the weights/biases of trained DNNs, the inference still often happens on the 32-bit FPU of the CPU (especially if GPUs are not available). In this paper we propose a way to decompress a tensor of bfloat/posits just before computations, i.e., after the compressed operands have been loaded within the vector registers of a vector capable CPU, in order to save bandwidth usage and increase cache efficiency. Finally, we show the architectural parameters and considerations under which this solution is advantageous with respect to the uncompressed one. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 391,680 |
2312.16794 | ZONE: Zero-Shot Instruction-Guided Local Editing | Recent advances in vision-language models like Stable Diffusion have shown remarkable power in creative image synthesis and editing. However, most existing text-to-image editing methods encounter two obstacles: First, the text prompt needs to be carefully crafted to achieve good results, which is not intuitive or user-friendly. Second, they are insensitive to local edits and can irreversibly affect non-edited regions, leaving obvious editing traces. To tackle these problems, we propose a Zero-shot instructiON-guided local image Editing approach, termed ZONE. We first convert the editing intent from the user-provided instruction (e.g., "make his tie blue") into specific image editing regions through InstructPix2Pix. We then propose a Region-IoU scheme for precise image layer extraction from an off-the-shelf segment model. We further develop an edge smoother based on FFT for seamless blending between the layer and the image. Our method allows for arbitrary manipulation of a specific region with a single instruction while preserving the rest. Extensive experiments demonstrate that our ZONE achieves remarkable local editing results and user-friendliness, outperforming state-of-the-art methods. Code is available at https://github.com/lsl001006/ZONE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 418,510 |
2301.10892 | Planning Automated Driving with Accident Experience Referencing and Common-sense Inferencing | Although a typical autopilot system far surpasses humans in terms of sensing accuracy, performance stability and response agility, such a system is still far behind humans in the wisdom of understanding an unfamiliar environment with creativity, adaptivity and resiliency. Current AD brains are basically expert systems featuring logical computations, which resemble the thinking flow of a left brain working at the tactical level. A right brain is needed to upgrade the safety of automated driving vehicles to the next generation by making intuitive strategical judgements that can supervise the tactical action planning. In this work, we present the concept of an Automated Driving Strategical Brain (ADSB): a framework of a scene perception and scene safety evaluation system that works at a higher abstraction level, incorporating experience referencing, common-sense inferring and goal-and-value judging capabilities, to provide a contextual perspective for decision making within automated driving planning. The ADSB brain architecture is made up of the Experience Referencing Engine (ERE), the Common-sense Inferring Engine (CIE) and the Goal and Value Keeper (GVK). 1,614,748 cases from the FARS/CRSS database of NHTSA in the period 1975 to 2018 are used for the training of the ERE model. The kernel of CIE is a trained model, COMET-BART by ATOMIC, which can be used to provide directional advice when tactical-level environmental perception conclusions are ambiguous; it can also use future scenario models to remind tactical-level decision systems to plan ahead of a perceived hazard scene. GVK can take in any additional expert-hand-written rules that are of qualitative nature. Moreover, we believe that with good scalability, the ADSB approach provides a potential solution to the problem of long-tail corner cases encountered in the validation of a rule-based planning algorithm. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 341,939 |
2005.05097 | Statistical learning for sensor localization in wireless networks | Indoor localization has become an important issue for wireless sensor networks. This paper presents a zoning-based localization technique that uses WiFi signals and works efficiently in indoor environments. The targeted area is composed of several zones, the objective being to determine the zone of the sensor using an observation model based on statistical learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 176,644 |
2210.10965 | IDM-Follower: A Model-Informed Deep Learning Method for Long-Sequence Car-Following Trajectory Prediction | Model-based and learning-based methods are two major types of methodologies to model car following behaviors. Model-based methods describe the car-following behaviors with explicit mathematical equations, while learning-based methods focus on getting a mapping between inputs and outputs. Both types of methods have advantages and weaknesses. Meanwhile, most car-following models are generative and only consider the inputs of the speed, position, and acceleration of the last time step. To address these issues, this study proposes a novel framework called IDM-Follower that can generate a sequence of following vehicle trajectories by a recurrent autoencoder informed by a physical car-following model, the Intelligent Driving Model (IDM). We implement a novel structure with two independent encoders and a self-attention decoder that could sequentially predict the following trajectories. A loss function considering the discrepancies between predictions and labeled data integrated with discrepancies from model-based predictions is implemented to update the neural network parameters. Numerical experiments with multiple settings on simulation and NGSIM datasets show that the IDM-Follower can improve the prediction performance compared to the model-based or learning-based methods alone. Analysis on different noise levels also shows good robustness of the model. | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 325,125 |
1104.1317 | Algorithm for Sensor Network Attitude Problem | The sensor network attitude problem consists in retrieving the attitude of each sensor of a network knowing some relative orientations between pairs of sensors. The attitude of a sensor is its orientation in an absolute axis system. We present in this paper a method for solving the sensor network attitude problem using the quaternion formalism, which allows us to apply linear algebra tools. The proposed algorithm solves the problem when all of the relative attitudes are known. A complete characterisation of the algorithm is established: spatial complexity, time complexity and robustness. Our algorithm is validated in simulations and with real experiments. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 9,905 |
2107.14597 | Text Classification and Clustering with Annealing Soft Nearest Neighbor Loss | We define disentanglement as how far class-different data points from each other are, relative to the distances among class-similar data points. When maximizing disentanglement during representation learning, we obtain a transformed feature representation where the class memberships of the data points are preserved. If the class memberships of the data points are preserved, we would have a feature representation space in which a nearest neighbour classifier or a clustering algorithm would perform well. We take advantage of this method to learn better natural language representation, and employ it on text classification and text clustering tasks. Through disentanglement, we obtain text representations with better-defined clusters and improve text classification performance. Our approach had a test classification accuracy of as high as 90.11% and test clustering accuracy of 88% on the AG News dataset, outperforming our baseline models -- without any other training tricks or regularization. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | 248,518 |
1503.05938 | On Invariance and Selectivity in Representation Learning | We discuss data representations which can be learned automatically from data, are invariant to transformations, and at the same time selective, in the sense that two points have the same representation only if one is the transformation of the other. The mathematical results here sharpen some of the key claims of i-theory -- a recent theory of feedforward processing in sensory cortex. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 41,298 |