id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2303.15234 | Prompt Tuning based Adapter for Vision-Language Model Adaption | Large pre-trained vision-language (VL) models have shown significant promise in adapting to various downstream tasks. However, fine-tuning the entire network is challenging due to the massive number of model parameters. To address this issue, efficient adaptation methods such as prompt tuning have been proposed. We explore the idea of prompt tuning with multi-task pre-trained initialization and find it can significantly improve model performance. Based on our findings, we introduce a new model, termed Prompt-Adapter, that combines pre-trained prompt tuning with an efficient adaptation network. Our approach beats the state-of-the-art methods in few-shot image classification on 11 public datasets, especially in settings with limited data instances, such as 1, 2, 4, and 8 shots. Our proposed method demonstrates the promise of combining prompt tuning and parameter-efficient networks for efficient vision-language model adaptation. The code is publicly available at: https://github.com/Jingchensun/prompt_adapter. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,421 |
2208.02931 | CIGAN: A Python Package for Handling Class Imbalance using Generative Adversarial Networks | A key challenge in Machine Learning is class imbalance, where the sample size of some classes (majority classes) is much higher than that of the other classes (minority classes). If we were to train a classifier directly on imbalanced data, it is more likely for the classifier to predict a new sample as one of the majority classes. In the extreme case, the classifier could completely ignore the minority classes. This could have serious sociological implications in healthcare, as the minority classes are usually the disease classes (e.g., death or positive clinical test result). In this paper, we introduce a software package that uses Generative Adversarial Networks to oversample the minority classes so as to improve downstream classification. To the best of our knowledge, this is the first tool that allows multi-class classification (where the target can have an arbitrary number of classes). The code of the tool is publicly available in our github repository (https://github.com/yuxiaohuang/research/tree/master/gwu/working/cigan/code). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 311,617 |
2309.09668 | DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation | We present DFormer, a novel RGB-D pretraining framework to learn transferable representations for RGB-D segmentation tasks. DFormer has two new key innovations: 1) Unlike previous works that encode RGB-D information with an RGB pretrained backbone, we pretrain the backbone using image-depth pairs from ImageNet-1K, and hence the DFormer is endowed with the capacity to encode RGB-D representations; 2) DFormer comprises a sequence of RGB-D blocks, which are tailored for encoding both RGB and depth information through a novel building block design. DFormer avoids the mismatched encoding of the 3D geometry relationships in depth maps by RGB pretrained backbones, a problem that is widespread in existing methods but has not been resolved. We finetune the pretrained DFormer on two popular RGB-D tasks, i.e., RGB-D semantic segmentation and RGB-D salient object detection, with a lightweight decoder head. Experimental results show that our DFormer achieves new state-of-the-art performance on these two tasks with less than half of the computational cost of the current best methods on two RGB-D semantic segmentation datasets and five RGB-D salient object detection datasets. Our code is available at: https://github.com/VCIP-RGBD/DFormer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 392,697 |
2412.17053 | DR-Encoder: Encode Low-rank Gradients with Random Prior for Large Language Models Differentially Privately | The emergence of Large Language Models (LLMs) has shown their superiority in a wide range of disciplines, including language understanding and translation, relational logic reasoning, and even solving partial differential equations. The transformer is the pervasive backbone architecture for foundation model construction. It is vital to research how to adjust the Transformer architecture to achieve an end-to-end privacy guarantee in LLM fine-tuning. In this paper, we investigate three potential sources of information leakage during a federated fine-tuning procedure for LLMs (FedLLM). Based on the potential information leakage, we provide an end-to-end privacy guarantee solution for FedLLM by inserting two-stage randomness. The first stage is to train a gradient auto-encoder with a Gaussian random prior based on the statistical information of the gradients generated by local clients. The second stage is to fine-tune the overall LLM with a differential privacy guarantee by adopting appropriate Gaussian noises. We show the efficiency and accuracy gains of our proposed method with several foundation models and two popular evaluation benchmarks. Furthermore, we present a comprehensive privacy analysis with Gaussian Differential Privacy (GDP) and Renyi Differential Privacy (RDP). | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 519,812 |
2207.01603 | Invariant and Transportable Representations for Anti-Causal Domain Shifts | Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data was gathered. Methods to handle such problems must specify what structure is common between the domains and what varies. A natural assumption is that causal (structural) relationships are invariant in all domains. Then, it is tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are "anti-causal" in the sense that $Y$ is a cause of the covariates $X$ -- in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and that naturally handles the "anti-causal" structure. We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and that also allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle "invariant" and "non-stable" features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm. Code is available at https://github.com/ybjiaang/ACTIR. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 306,239 |
1101.2288 | On the Degree of Freedom for Multi-Source Multi-Destination Wireless Network with Multi-layer Relays | The degree of freedom (DoF) region provides an approximation of the capacity region in the high signal-to-noise ratio (SNR) regime, while the sum DoF gives the scaling factor. In this correspondence, we analyse the DoF region and sum DoF for unicast layered multi-hop relay wireless networks with an arbitrary number of source/destination/relay nodes, an arbitrary number of hops, and an arbitrary number of antennas at each node. The result is valid for quite a few message topologies. We reveal the limitation on the capacity of multi-hop networks due to the concatenation structure and show the similarity with a capacitor network. From the analysis of the bound gap and the optimality condition, the ultimate capacity of a multi-hop network is shown to be strictly inferior to that of a single-hop network. A linear scaling law can be established when the number of hops is fixed. At the cost of channel state information at transmitters (CSIT) for each component single-hop network, our achievable scheme avoids routing and simplifies scheduling. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 8,793 |
2409.01140 | LLM-PQA: LLM-enhanced Prediction Query Answering | The advent of Large Language Models (LLMs) provides an opportunity to change the way queries are processed, moving beyond the constraints of conventional SQL-based database systems. However, using an LLM to answer a prediction query is still challenging, since an external ML model has to be employed and inference has to be performed in order to provide an answer. This paper introduces LLM-PQA, a novel tool that addresses prediction queries formulated in natural language. LLM-PQA is the first to combine the capabilities of LLMs and a retrieval-augmented mechanism for the needs of prediction queries by integrating data lakes and model zoos. This integration provides users with access to a vast spectrum of heterogeneous data and diverse ML models, facilitating dynamic prediction query answering. In addition, LLM-PQA can dynamically train models on demand, based on specific query requirements, ensuring reliable and relevant results even when no pre-trained model in a model zoo is available for the task. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 485,226 |
2006.16958 | Evaluating the Performance of Reinforcement Learning Algorithms | Performance evaluations are critical for quantifying algorithmic advances in reinforcement learning. Recent reproducibility analyses have shown that reported performance results are often inconsistent and difficult to replicate. In this work, we argue that the inconsistency of performance stems from the use of flawed evaluation metrics. Taking a step towards ensuring that reported results are consistent, we propose a new comprehensive evaluation methodology for reinforcement learning algorithms that produces reliable measurements of performance both on a single environment and when aggregated across environments. We demonstrate this method by evaluating a broad class of reinforcement learning algorithms on standard benchmark tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,961 |
2106.03791 | Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions | Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks. In this manuscript, we characterise the learning of a mixture of $K$ Gaussians with generic means and covariances via empirical risk minimisation (ERM) with any convex loss and regularisation. In particular, we prove exact asymptotics characterising the ERM estimator in high-dimensions, extending several previous results about Gaussian mixture classification in the literature. We exemplify our result in two tasks of interest in statistical learning: a) classification for a mixture with sparse means, where we study the efficiency of $\ell_1$ penalty with respect to $\ell_2$; b) max-margin multi-class classification, where we characterise the phase transition on the existence of the multi-class logistic maximum likelihood estimator for $K>2$. Finally, we discuss how our theory can be applied beyond the scope of synthetic data, showing that in different cases Gaussian mixtures capture closely the learning curve of classification tasks in real data sets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 239,454 |
2405.13394 | A theory of neural emulators | A central goal in neuroscience is to provide explanations for how animal nervous systems can generate actions and cognitive states such as consciousness, while artificial intelligence (AI) and machine learning (ML) seek to provide models that are increasingly better at prediction. Despite many decades of research, we have made limited progress on providing such neuroscience explanations, yet there is an increased use of AI and ML methods in neuroscience for the prediction of behavior and even cognitive states. Here we propose neural emulators as circuit- and scale-independent predictive models of biological brain activity, and emulator theory (ET) as an alternative research paradigm in neuroscience. ET proposes that predictive models trained solely on neural dynamics and behaviors can generate functionally indistinguishable systems from their sources. That is, compared to the biological organisms which they model, emulators may achieve indistinguishable behavior and cognitive states - including consciousness - without any mechanistic explanations. We posit ET via several conjectures, discuss the nature of endogenous and exogenous activation of neural circuits, and discuss neural causality of phenomenal states. ET provides the conceptual and empirical framework for prediction-based models of neural dynamics and behavior without explicit representations of idiosyncratically evolved nervous systems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 455,931 |
2102.11380 | Decomposition of Clifford Gates | In fault-tolerant quantum computation and quantum error-correction, one is interested in Pauli matrices that commute with a circuit/unitary. We provide a fast algorithm that decomposes any Clifford gate as a $\textit{minimal}$ product of Clifford transvections. The algorithm can be directly used for finding all Pauli matrices that commute with any given Clifford gate. To achieve this goal, we exploit the structure of the symplectic group with a novel graphical approach. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 221,394 |
2112.02612 | Training Structured Neural Networks Through Manifold Identification and Variance Reduction | This paper proposes an algorithm (RMDA) for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA does not incur computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation and dropout that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such an identification, and show that RMDA thus significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 269,905 |
1805.12301 | Rotation Equivariance and Invariance in Convolutional Neural Networks | Performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Many image classification tasks, such as those related to cellular imaging, exhibit invariance to rotation. We present a novel scheme using the magnitude response of the 2D-discrete-Fourier transform (2D-DFT) to encode rotational invariance in neural networks, along with a new, efficient convolutional scheme for encoding rotational equivariance throughout convolutional layers. We implemented this scheme for several image classification tasks and demonstrated improved performance, in terms of classification accuracy, time required to train the model, and robustness to hyperparameter selection, over a standard CNN and another state-of-the-art method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 99,136 |
1712.07108 | Improved Regularization Techniques for End-to-End Speech Recognition | Regularization is important for end-to-end speech models, since the models are highly flexible and easy to overfit. Data augmentation and dropout have been important for improving end-to-end models in other domains. However, they are relatively underexplored for end-to-end speech models. Therefore, we investigate the effectiveness of both methods for end-to-end trainable, deep speech recognition models. We augment audio data through random perturbations of tempo, pitch, volume, temporal alignment, and adding random noise. We further investigate the effect of dropout when applied to the inputs of all layers of the network. We show that the combination of data augmentation and dropout gives a relative performance improvement of over 20% on both the Wall Street Journal (WSJ) and LibriSpeech datasets. Our model performance is also competitive with other end-to-end speech models on both datasets. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 86,990 |
2407.15296 | Weak-to-Strong Compositional Learning from Generative Models for Language-based Object Detection | Vision-language (VL) models often exhibit a limited understanding of complex expressions of visual objects (e.g., attributes, shapes, and their relations), given complex and diverse language queries. Traditional approaches attempt to improve VL models using hard negative synthetic text, but their effectiveness is limited. In this paper, we harness the exceptional compositional understanding capabilities of generative foundational models. We introduce a novel method for structured synthetic data generation aimed at enhancing the compositional understanding of VL models in language-based object detection. Our framework generates densely paired positive and negative triplets (image, text descriptions, and bounding boxes) in both image and text domains. By leveraging these synthetic triplets, we transform 'weaker' VL models into 'stronger' models in terms of compositional understanding, a process we call "Weak-to-Strong Compositional Learning" (WSCL). To achieve this, we propose a new compositional contrastive learning formulation that discovers semantics and structures in complex descriptions from synthetic triplets. As a result, VL models trained with our synthetic data generation exhibit a significant performance boost of up to +5 AP on the Omnilabel benchmark and +6.9 AP on the D3 benchmark over existing baselines. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 475,119 |
1001.0597 | Inference of global clusters from locally distributed data | We consider the problem of analyzing the heterogeneity of clustering distributions for multiple groups of observed data, each of which is indexed by a covariate value, and inferring global clusters arising from observations aggregated over the covariate domain. We propose a novel Bayesian nonparametric method reposing on the formalism of spatial modeling and a nested hierarchy of Dirichlet processes. We provide an analysis of the model properties, relating and contrasting the notions of local and global clusters. We also provide an efficient inference algorithm, and demonstrate the utility of our method in several data examples, including the problem of object tracking and a global clustering analysis of functional data where the functional identity information is not available. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 5,261 |
1312.5713 | Giving the AI definition a form suitable for the engineer | Artificial Intelligence - what is this? That is the question! In earlier papers we already gave a formal definition for AI, but if one desires to build an actual AI implementation, the following issues require attention and are treated here: the data format to be used, the idea of Undef and Nothing symbols, various ways for defining the "meaning of life", and finally, a new notion of "incorrect move". These questions are of minor importance in the theoretical discussion, but we already know the answer to the question "Does AI exist?" Now we want to take the next step and create this program. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,255 |
2301.10297 | Large language models can segment narrative events similarly to humans | Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of using human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the "consensus" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotations, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 341,762 |
2410.06765 | To Preserve or To Compress: An In-Depth Study of Connector Selection in Multimodal Large Language Models | In recent years, multimodal large language models (MLLMs) have garnered significant attention from both industry and academia. However, there is still considerable debate on constructing MLLM architectures, particularly regarding the selection of appropriate connectors for perception tasks of varying granularities. This paper systematically investigates the impact of connectors on MLLM performance. Specifically, we classify connectors into feature-preserving and feature-compressing types. Utilizing a unified classification standard, we categorize sub-tasks from three comprehensive benchmarks, MMBench, MME, and SEED-Bench, into three task types: coarse-grained perception, fine-grained perception, and reasoning, and evaluate the performance. Our findings reveal that feature-preserving connectors excel in *fine-grained perception* tasks due to their ability to retain detailed visual information. In contrast, feature-compressing connectors, while less effective in fine-grained perception tasks, offer significant speed advantages and perform comparably in *coarse-grained perception* and *reasoning* tasks. These insights are crucial for guiding MLLM architecture design and advancing the optimization of MLLM architectures. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 496,340 |
2403.11775 | Minimal Ternary Linear Codes from Vectorial Functions | The study of minimal linear codes has received great attention due to their significant applications in secret sharing schemes and secure two-party computation. Until now, numerous minimal linear codes have been discovered. However, to the best of our knowledge, no infinite family of minimal ternary linear codes was found from vectorial functions. In this paper, we present a necessary and sufficient condition for a large class of ternary linear codes from vectorial functions such that those codes are minimal. Based on that, we construct several minimal ternary linear codes with three weights from vectorial regular plateaued functions, and determine their weight distributions. Moreover, we also give a necessary and sufficient condition for a large family of ternary linear codes from vectorial functions such that the codes are minimal and violate the AB condition simultaneously. According to this characterization, we find several minimal ternary linear codes violating the AB condition. Notably, our results show that our method can be applied to solve a problem on minimal linear codes proposed by Li et al. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 438,852 |
2502.03835 | Single-Domain Generalized Object Detection by Balancing Domain Diversity and Invariance | Single-domain generalization for object detection (S-DGOD) aims to transfer knowledge from a single source domain to unseen target domains. In recent years, many models have focused primarily on achieving feature invariance to enhance robustness. However, due to the inherent diversity across domains, an excessive emphasis on invariance can cause the model to overlook the actual differences between images. This overemphasis may complicate the training process and lead to a loss of valuable information. To address this issue, we propose the Diversity Invariance Detection Model (DIDM), which focuses on the balance between domain-specific diversity and cross-domain invariance. Recognizing that domain diversity introduces variations in domain-specific features, we introduce a Diversity Learning Module (DLM). The DLM is designed to preserve the diversity of domain-specific information with a proposed feature diversity loss while limiting the category semantics in the features. In addition, to maintain domain invariance, we incorporate a Weighted Aligning Module (WAM), which aligns features without compromising feature diversity. We evaluated our model on five distinct datasets, which illustrate the superior performance and effectiveness of the proposed model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 530,883 |
2210.11190 | Integration of Neuromorphic AI in Event-Driven Distributed Digitized Systems: Concepts and Research Directions | Increasing complexity and data-generation rates in cyber-physical systems and the industrial Internet of things are calling for a corresponding increase in AI capabilities at the resource-constrained edges of the Internet. Meanwhile, the resource requirements of digital computing and deep learning are growing exponentially, in an unsustainable manner. One possible way to bridge this gap is the adoption of resource-efficient brain-inspired "neuromorphic" processing and sensing devices, which use event-driven, asynchronous, dynamic neurosynaptic elements with colocated memory for distributed processing and machine learning. However, since neuromorphic systems are fundamentally different from conventional von Neumann computers and clock-driven sensor systems, several challenges are posed to large-scale adoption and integration of neuromorphic devices into the existing distributed digital-computational infrastructure. Here, we describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges. Based on this analysis, we propose a microservice-based framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy, which provides virtualization and communication capabilities required in distributed systems of systems, in combination with a declarative programming approach offering engineering-process abstraction. We also present concepts that could serve as a basis for the realization of this framework, and identify directions for further research required to enable large-scale system integration of neuromorphic devices. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 325,216 |
1208.0451 | Directed Random Markets: Connectivity determines Money | The Boltzmann-Gibbs distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this work, we implement these exchange rules on networks and we find that these stationary probability distributions are robust and not affected by the topology of the underlying network. We introduce a new family of interactions: random but directed ones. In this case, the topology is found to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 17,917 |
2009.04595 | tsBNgen: A Python Library to Generate Time Series Data from an Arbitrary Dynamic Bayesian Network Structure | Synthetic data is widely used in various domains. This is because many modern algorithms require lots of data for efficient training, and data collection and labeling are usually time-consuming processes prone to errors. Furthermore, some real-world data, due to its nature, is confidential and cannot be shared. Bayesian networks are a type of probabilistic graphical model widely used to model the uncertainties in real-world processes. Dynamic Bayesian networks are a special class of Bayesian networks that model temporal and time series data. In this paper, we introduce tsBNgen, a Python library to generate time series and sequential data based on an arbitrary dynamic Bayesian network. The package, documentation, and examples can be downloaded from https://github.com/manitadayon/tsBNgen. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 195,089 |
0807.1997 | Multi-Instance Learning by Treating Instances As Non-I.I.D. Samples | Multi-instance learning attempts to learn from a training set consisting of labeled bags each containing many unlabeled instances. Previous studies typically treat the instances in the bags as independently and identically distributed. However, the instances in a bag are rarely independent, and therefore a better performance can be expected if the instances are treated in a non-i.i.d. way that exploits the relations among instances. In this paper, we propose a simple yet effective multi-instance learning method, which regards each bag as a graph and uses a specific kernel to distinguish the graphs by considering the features of the nodes as well as the features of the edges that convey some relations among instances. The effectiveness of the proposed method is validated by experiments. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 2,054 |
2410.16658 | Adsorb-Agent: Autonomous Identification of Stable Adsorption Configurations via Large Language Model Agent | Adsorption energy is a key reactivity descriptor in catalysis, enabling efficient screening for optimal catalysts. However, determining adsorption energy typically requires evaluating numerous adsorbate-catalyst configurations. Current algorithmic approaches rely on exhaustive enumeration of adsorption sites and configurations, which makes the process computationally intensive and does not inherently guarantee the identification of the global minimum energy. In this work, we introduce Adsorb-Agent, a Large Language Model (LLM) agent designed to efficiently identify system-specific stable adsorption configurations corresponding to the global minimum adsorption energy. Adsorb-Agent leverages its built-in knowledge and emergent reasoning capabilities to strategically explore adsorption configurations likely to hold adsorption energy. By reducing the reliance on exhaustive sampling, it significantly decreases the number of initial configurations required while improving the accuracy of adsorption energy predictions. We evaluate Adsorb-Agent's performance across twenty representative systems encompassing a range of complexities. The Adsorb-Agent successfully identifies comparable adsorption energies for 83.7% of the systems and achieves lower energies, closer to the actual global minimum, for 35% of the systems, while requiring significantly fewer initial configurations than conventional methods. Its capability is particularly evident in complex systems, where it identifies lower adsorption energies for 46.7% of systems involving intermetallic surfaces and 66.7% of systems with large adsorbate molecules. These results demonstrate the potential of Adsorb-Agent to accelerate catalyst discovery by reducing computational costs and improving the reliability of adsorption energy predictions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 501,123 |
2102.05346 | The Hessigheim 3D (H3D) Benchmark on Semantic Segmentation of High-Resolution 3D Point Clouds and Textured Meshes from UAV LiDAR and Multi-View-Stereo | Automated semantic segmentation and object detection are of great importance in geospatial data analysis. However, supervised machine learning systems such as convolutional neural networks require large corpora of annotated training data. Especially in the geospatial domain, such datasets are quite scarce. Within this paper, we aim to alleviate this issue by introducing a new annotated 3D dataset that is unique in three ways: i) The dataset consists of both an Unmanned Aerial Vehicle (UAV) laser scanning point cloud and a 3D textured mesh. ii) The point cloud features a mean point density of about 800 pts/sqm and the oblique imagery used for 3D mesh texturing realizes a ground sampling distance of about 2-3 cm. This enables the identification of fine-grained structures and represents the state of the art in UAV-based mapping. iii) Both data modalities will be published for a total of three epochs allowing applications such as change detection. The dataset depicts the village of Hessigheim (Germany), henceforth referred to as H3D. It is designed to promote research in the field of 3D data analysis on one hand and to evaluate and rank existing and emerging approaches for semantic segmentation of both data modalities on the other hand. Ultimately, we hope that H3D will become a widely used benchmark dataset in company with the well-established ISPRS Vaihingen 3D Semantic Labeling Challenge benchmark (V3D). The dataset can be downloaded from https://ifpwww.ifp.uni-stuttgart.de/benchmark/hessigheim/default.aspx. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 219,408 |
2309.03899 | The Making and Breaking of Camouflage | Not all camouflages are equally effective, as even a partially visible contour or a slight color difference can make the animal stand out and break its camouflage. In this paper, we address the question of what makes a camouflage successful, by proposing three scores for automatically assessing its effectiveness. In particular, we show that camouflage can be measured by the similarity between background and foreground features and boundary visibility. We use these camouflage scores to assess and compare all available camouflage datasets. We also incorporate the proposed camouflage score into a generative model as an auxiliary loss and show that effective camouflage images or videos can be synthesised in a scalable manner. The generated synthetic dataset is used to train a transformer-based model for segmenting camouflaged animals in videos. Experimentally, we demonstrate state-of-the-art camouflage breaking performance on the public MoCA-Mask benchmark. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 390,552 |
2404.18746 | Enhancing Interactive Image Retrieval With Query Rewriting Using Large Language Models and Vision Language Models | Image search stands as a pivotal task in multimedia and computer vision, finding applications across diverse domains, ranging from internet search to medical diagnostics. Conventional image search systems operate by accepting textual or visual queries, retrieving the top-relevant candidate results from the database. However, prevalent methods often rely on single-turn procedures, introducing potential inaccuracies and limited recall. These methods also face challenges such as vocabulary mismatch and the semantic gap, constraining their overall effectiveness. To address these issues, we propose an interactive image retrieval system capable of refining queries based on user relevance feedback in a multi-turn setting. This system incorporates a vision language model (VLM) based image captioner to enhance the quality of text-based queries, resulting in more informative queries with each iteration. Moreover, we introduce a large language model (LLM) based denoiser to refine text-based query expansions, mitigating inaccuracies in image descriptions generated by captioning models. To evaluate our system, we curate a new dataset by adapting the MSR-VTT video retrieval dataset to the image retrieval task, offering multiple relevant ground truth images for each query. Through comprehensive experiments, we validate the effectiveness of our proposed system against baseline methods, achieving state-of-the-art performance with a notable 10% improvement in terms of recall. Our contributions encompass the development of an innovative interactive image retrieval system, the integration of an LLM-based denoiser, the curation of a meticulously designed evaluation dataset, and thorough experimental validation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 450,373 |
2202.08853 | Scalable approach to many-body localization via quantum data | We are interested in how quantum data can allow for practical solutions to otherwise difficult computational problems. A notoriously difficult phenomenon from quantum many-body physics is the emergence of many-body localization (MBL). So far, it has evaded a comprehensive analysis. In particular, numerical studies are challenged by the exponential growth of the Hilbert space dimension. As many of these studies rely on exact diagonalization of the system's Hamiltonian, only small system sizes are accessible. In this work, we propose a highly flexible neural network based learning approach that, once given training data, circumvents any computationally expensive step. In this way, we can efficiently estimate common indicators of MBL such as the adjacent gap ratio or entropic quantities. Our estimator can be trained on data from various system sizes at once which grants the ability to extrapolate from smaller to larger ones. Moreover, using transfer learning we show that already a two-dimensional feature vector is sufficient to obtain several different indicators at various energy densities at once. We hope that our approach can be applied to large-scale quantum experiments to provide new insights into quantum many-body physics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 281,002 |
1909.04063 | Exploratory Combinatorial Optimization with Reinforcement Learning | Many real-world problems can be reduced to combinatorial optimization on a graph, where the subset or ordering of vertices that maximize some objective function must be found. With such tasks often NP-hard and analytically intractable, reinforcement learning (RL) has shown promise as a framework with which efficient heuristic methods to tackle these problems can be learned. Previous works construct the solution subset incrementally, adding one element at a time. However, the irreversible nature of this approach prevents the agent from revising its earlier decisions, which may be necessary given the complexity of the optimization task. We instead propose that the agent should seek to continuously improve the solution by learning to explore at test time. Our approach of exploratory combinatorial optimization (ECO-DQN) is, in principle, applicable to any combinatorial problem that can be defined on a graph. Experimentally, we show our method to produce state-of-the-art RL performance on the Maximum Cut problem. Moreover, because ECO-DQN can start from any arbitrary configuration, it can be combined with other search methods to further improve performance, which we demonstrate using a simple random search. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 144,683 |
2406.07682 | Stabilization of a Quadrotor via Energy Shaping | Stabilization of a quadrotor without a controller based on a cascade structure is a challenging problem. Besides, due to the dynamics and the degree of underactuation, an energy shaping controller has not been designed in 3D for a quadrotor. This paper presents a novel solution to the potential energy shaping problem for a quadrotor utilizing the Interconnection and Damping Assignment Passivity Based Control (IDA-PBC) approach. For the first time, we extend the solution of the PDEs from the 2D case to the full 3D scenario. This advancement seems to be a significant step forward for the stabilization of underactuated aerial vehicles without a cascade controller. The results are verified via simulation on a typical quadrotor. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 463,156 |
2410.20670 | Segmenting Watermarked Texts From Language Models | Watermarking is a technique that involves embedding nearly unnoticeable statistical signals within generated content to help trace its source. This work focuses on a scenario where an untrusted third-party user sends prompts to a trusted large language model (LLM) provider, who then generates a text from their LLM with a watermark. This setup makes it possible for a detector to later identify the source of the text if the user publishes it. The user can modify the generated text by substitutions, insertions, or deletions. Our objective is to develop a statistical method to detect if a published text is LLM-generated from the perspective of a detector. We further propose a methodology to segment the published text into watermarked and non-watermarked sub-strings. The proposed approach is built upon randomization tests and change point detection techniques. We demonstrate that our method ensures Type I and Type II error control and can accurately identify watermarked sub-strings by finding the corresponding change point locations. To validate our technique, we apply it to texts generated by several language models with prompts extracted from Google's C4 dataset and obtain encouraging numerical results. We release all code publicly at https://github.com/doccstat/llm-watermark-cpd. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 502,910 |
2407.18965 | Generative AI Augmented Induction-based Formal Verification | Generative Artificial Intelligence (GenAI) has demonstrated capabilities that significantly reduce human effort. It utilizes deep learning techniques to create original and realistic content in terms of text, images, code, music, and video. Researchers have also shown that modern Large Language Models (LLMs) underlying GenAI can be used to aid hardware development. Formal verification is a mathematical proof method used to exhaustively verify the correctness of a design. In this paper, we demonstrate how GenAI can be used in induction-based formal verification to increase verification throughput. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 476,596 |
1806.03146 | Neural Message Passing with Edge Updates for Predicting Properties of Molecules and Materials | Neural message passing on molecular graphs is one of the most promising methods for predicting formation energy and other properties of molecules and materials. In this work we extend the neural message passing model with an edge update network which allows the information exchanged between atoms to depend on the hidden state of the receiving atom. We benchmark the proposed model on three publicly available datasets (QM9, The Materials Project and OQMD) and show that the proposed model yields superior prediction of formation energies and other properties on all three datasets in comparison with the best published results. Furthermore we investigate different methods for constructing the graph used to represent crystalline structures and we find that using a graph based on K-nearest neighbors achieves better prediction accuracy than using maximum distance cutoff or the Voronoi tessellation graph. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 99,932 |
2412.03702 | Asymptotics of Linear Regression with Linearly Dependent Data | In this paper we study the asymptotics of linear regression in settings with non-Gaussian covariates where the covariates exhibit a linear dependency structure, departing from the standard assumption of independence. We model the covariates using stochastic processes with spatio-temporal covariance and analyze the performance of ridge regression in the high-dimensional proportional regime, where the number of samples and feature dimensions grow proportionally. A Gaussian universality theorem is proven, demonstrating that the asymptotics are invariant under replacing the non-Gaussian covariates with Gaussian vectors preserving mean and covariance, for which tools from random matrix theory can be used to derive precise characterizations of the estimation error. The estimation error is characterized by a fixed-point equation involving the spectral properties of the spatio-temporal covariance matrices, enabling efficient computation. We then study optimal regularization, overparameterization, and the double descent phenomenon in the context of dependent data. Simulations validate our theoretical predictions, shedding light on how dependencies influence estimation error and the choice of regularization parameters. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 514,067 |
1902.10675 | High-dimensional Bayesian optimization using low-dimensional feature spaces | Bayesian optimization (BO) is a powerful approach for seeking the global optimum of expensive black-box functions and has proven successful for fine-tuning hyper-parameters of machine learning models. However, BO is practically limited to optimizing 10--20 parameters. To scale BO to high dimensions, we usually make structural assumptions on the decomposition of the objective and/or exploit the intrinsic lower dimensionality of the problem, e.g. by using linear projections. We could achieve a higher compression rate with nonlinear projections, but learning these nonlinear embeddings typically requires much data. This contradicts the BO objective of a relatively small evaluation budget. To address this challenge, we propose to learn a low-dimensional feature space jointly with (a) the response surface and (b) a reconstruction mapping. Our approach allows for optimization of BO's acquisition function in the lower-dimensional subspace, which significantly simplifies the optimization problem. We reconstruct the original parameter space from the lower-dimensional subspace for evaluating the black-box function. For meaningful exploration, we solve a constrained optimization problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,740 |
2411.07540 | Lateral String Stability in Autonomous & Connected Vehicle Platoons | This paper addresses the lateral control of Autonomous and Connected Vehicles (ACVs) in a platoon executing an Emergency Lane Change (ELC) maneuver. These maneuvers are typically triggered by emergency signals from the front or rear of the platoon in response to the need to avoid obstacles or allow other vehicles to pass. The study assumes that ACVs maintain reliable connectivity, enabling each following vehicle to access GPS position traces of both the lead and immediately preceding vehicles in the platoon. We demonstrate that lateral string stability in the ACV platoon can be achieved using communicated information solely from the lead and preceding vehicles. Additionally, we present a lateral control framework for ACVs, which helps track a discretized preview of the trajectory constructed from the communicated data. This framework involves constructing two distinct trajectories based on the preview data from the lead and preceding vehicles, calculating the associated errors and lateral control actions for each, and then integrating these to generate a steering command. Numerical results validate the effectiveness of the proposed lateral control scheme. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 507,573 |
2010.03516 | Combination of digital signal processing and assembled predictive models facilitates the rational design of proteins | Predicting the effect of mutations in proteins is one of the most critical challenges in protein engineering; by knowing the effect a substitution of one (or several) residues in the protein's sequence has on its overall properties, one could design a variant with a desirable function. New strategies and methodologies to create predictive models are continually being developed. However, those that claim to be general often do not reach adequate performance, and those that aim at a particular task improve their predictive performance at the cost of the method's generality. Moreover, these approaches typically require a particular decision on how to encode the amino acid sequence, without an explicit methodological agreement in such endeavor. To address these issues, in this work, we applied clustering, embedding, and dimensionality reduction techniques to the AAIndex database to select meaningful combinations of physicochemical properties for the encoding stage. We then used the chosen set of properties to obtain several encodings of the same sequence, to subsequently apply the Fast Fourier Transform (FFT) on them. We perform an exploratory stage of Machine-Learning models in the frequency space, using different algorithms and hyperparameters. Finally, we select the best performing predictive models in each set of properties and create an assembled model. We extensively tested the proposed methodology on different datasets and demonstrated that the generated assembled model achieved notably better performance metrics than those models based on a single encoding and, in most cases, better than those previously reported. The proposed method is available as a Python library for non-commercial use under the GNU General Public License (GPLv3) license. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 199,423 |
1208.0573 | Invariants for Homology Classes with Application to Optimal Search and Planning Problem in Robotics | We consider planning problems on punctured Euclidean spaces, $\mathbb{R}^D - \widetilde{\mathcal{O}}$, where $\widetilde{\mathcal{O}}$ is a collection of obstacles. Such spaces are of frequent occurrence as configuration spaces of robots, where $\widetilde{\mathcal{O}}$ represents either physical obstacles that the robots need to avoid (e.g., walls, other robots, etc.) or illegal states (e.g., all legs off-the-ground). As state-planning is translated to path-planning on a configuration space, we collate equivalent plannings via topologically-equivalent paths. This prompts finding or exploring the different homology classes in such environments and finding representative optimal trajectories in each such class. In this paper we start by considering the problem of finding a complete set of easily computable homology class invariants for $(N-1)$-cycles in $(\mathbb{R}^D - \widetilde{\mathcal{O}})$. We achieve this by finding explicit generators of the $(N-1)^{st}$ de Rham cohomology group of this punctured Euclidean space, and using their integrals to define cocycles. The action of those dual cocycles on $(N-1)$-cycles gives the desired complete set of invariants. We illustrate the computation through examples. We further show that, due to the integral approach, this complete set of invariants is well-suited for efficient search-based planning of optimal robot trajectories with topological constraints. Finally we extend this approach to computation of invariants in spaces derived from $(\mathbb{R}^D - \widetilde{\mathcal{O}})$ by collapsing subspaces, thereby permitting application to a wider class of non-Euclidean ambient spaces. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 17,924 |
2303.00295 | Region Prediction for Efficient Robot Localization on Large Maps | Recognizing already explored places (a.k.a. place recognition) is a fundamental task in Simultaneous Localization and Mapping (SLAM) to enable robot relocalization and loop closure detection. In topological SLAM the recognition takes place by comparing a signature (or feature vector) associated to the current node with the signatures of the nodes in the known map. However, as the number of nodes increases, matching the current node signature against all the existing ones becomes inefficient and thwarts real-time navigation. In this paper we propose a novel approach to pre-select a subset of map nodes for place recognition. The map nodes are clustered during exploration and each cluster is associated with a region. The region labels become the prediction targets of a deep neural network and, during navigation, only the nodes associated with the regions predicted with high probability are considered for matching. While the proposed technique can be integrated in different SLAM approaches, in this work we describe an effective integration with RTAB-Map (a popular framework for real-time topological SLAM) which allowed us to design and run several experiments to demonstrate its effectiveness. All the code and material from the experiments will be available online at https://github.com/MI-BioLab/region-learner. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 348,550 |
2204.01234 | Soft Threshold Ternary Networks | Large neural networks are difficult to deploy on mobile devices because of intensive computation and storage. To alleviate this, we study ternarization, a balance between efficiency and accuracy that quantizes both weights and activations into ternary values. In previous ternarized neural networks, a hard threshold $\Delta$ is introduced to determine quantization intervals. Although the selection of $\Delta$ greatly affects the training results, previous works estimate $\Delta$ via an approximation or treat it as a hyper-parameter, which is suboptimal. In this paper, we present Soft Threshold Ternary Networks (STTN), which enable the model to automatically determine quantization intervals instead of depending on a hard threshold. Concretely, we replace the original ternary kernel with the addition of two binary kernels at training time, where ternary values are determined by the combination of two corresponding binary values. At inference time, we add up the two binary kernels to obtain a single ternary kernel. Our method dramatically outperforms the current state-of-the-art, lowering the performance gap between full-precision networks and extreme low-bit networks. Experiments on ImageNet with ResNet-18 (Top-1 66.2%) achieve a new state-of-the-art. Update: In this version, we further fine-tune the experimental hyperparameters and training procedure. The latest STTN shows that ResNet-18 with ternary weights and ternary activations achieves up to 68.2% Top-1 accuracy on ImageNet. Code is available at: github.com/WeixiangXu/STTN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 289,543 |
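A toy numpy sketch of the training-time decomposition described above: two binary kernels whose sum yields a ternary kernel. The straight-through gradient machinery and any scaling factors from the paper are omitted, and all names are illustrative:

```python
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    """Sign binarization to {-1, +1}; training would pair this with a
    straight-through estimator for gradients."""
    return np.where(w >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
w1 = rng.standard_normal((3, 3))   # latent full-precision kernel 1
w2 = rng.standard_normal((3, 3))   # latent full-precision kernel 2

# Training view: two binary kernels. Inference view: fold them into a single
# ternary kernel, so no hand-tuned threshold Delta is needed.
ternary = (binarize(w1) + binarize(w2)) / 2.0
print(np.unique(ternary))          # values drawn from {-1., 0., 1.}
```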
2301.02440 | An Image captioning algorithm based on the Hybrid Deep Learning
Technique (CNN+GRU) | Image captioning with the encoder-decoder framework has shown tremendous advancement in the last decade, where a CNN is mainly used as the encoder and an LSTM as the decoder. Despite such impressive accuracy on simple images, this pairing is inefficient in terms of time and space complexity. In addition, for complex images containing much information and many objects, the performance of the CNN-LSTM pair degrades sharply due to the lack of semantic understanding of the scenes presented in the images. To address these issues, we present a CNN-GRU encoder-decoder framework with a caption-to-image reconstructor that takes the semantic context into consideration as well as the time complexity. By taking the hidden states of the decoder into consideration, the input image and its similar semantic representations are reconstructed, and reconstruction scores from a semantic reconstructor are used in conjunction with likelihood during model training to assess the quality of the generated caption. As a result, the decoder receives improved semantic information, enhancing the caption production process. During model testing, combining the reconstruction score and the log-likelihood also makes it feasible to choose the most appropriate caption. The suggested model outperforms the state-of-the-art LSTM-A5 model for image captioning in terms of time complexity and accuracy. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 339,512 |
2303.00984 | Encoding of data sets and algorithms | In many high-impact applications, it is important to ensure the quality of output of a machine learning algorithm as well as its reliability in comparison with the complexity of the algorithm used. In this paper, we have initiated a mathematically rigorous theory to decide which models (algorithms applied on data sets) are close to each other in terms of certain metrics, such as performance and the complexity level of the algorithm. This involves creating a grid on the hypothetical spaces of data sets and algorithms so as to identify a finite set of probability distributions from which the data sets are sampled and a finite set of algorithms. A given threshold metric acting on this grid will express the nearness (or statistical distance) from each algorithm and data set of interest to any given application. A technically difficult part of this project is to estimate the so-called metric entropy of a compact subset of functions of \textbf{infinitely many variables} that arise in the definition of these spaces. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 348,788 |
2404.17699 | Deep Learning for Melt Pool Depth Contour Prediction From Surface
Thermal Images via Vision Transformers | Insufficient overlap between the melt pools produced during Laser Powder Bed Fusion (L-PBF) can lead to lack-of-fusion defects and deteriorated mechanical and fatigue performance. In-situ monitoring of the melt pool subsurface morphology requires specialized equipment that may not be readily accessible or scalable. Therefore, we introduce a machine learning framework to correlate in-situ two-color thermal images observed via high-speed color imaging to the two-dimensional profile of the melt pool cross-section. Specifically, we employ a hybrid CNN-Transformer architecture to establish a correlation between single bead off-axis thermal image sequences and melt pool cross-section contours measured via optical microscopy. In this architecture, a ResNet model embeds the spatial information contained within the thermal images to a latent vector, while a Transformer model correlates the sequence of embedded vectors to extract temporal information. Our framework is able to model the curvature of the subsurface melt pool structure, with improved performance in high energy density regimes compared to analytical melt pool models. The performance of this model is evaluated through dimensional and geometric comparisons to the corresponding experimental melt pool observations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 449,956 |
2110.07735 | Continual Learning on Noisy Data Streams via Self-Purified Replay | Continual learning in the real world must overcome many challenges, among which noisy labels are a common and inevitable issue. In this work, we present a replay-based continual learning framework that simultaneously addresses both catastrophic forgetting and noisy labels for the first time. Our solution is based on two observations: (i) forgetting can be mitigated even with noisy labels via self-supervised learning, and (ii) the purity of the replay buffer is crucial. Building on these observations, we propose two key components of our method: (i) a self-supervised replay technique named Self-Replay, which can circumvent erroneous training signals arising from noisily labeled data, and (ii) the Self-Centered filter, which maintains a purified replay buffer via centrality-based stochastic graph ensembles. The empirical results on MNIST, CIFAR-10, CIFAR-100, and WebVision with real-world noise demonstrate that our framework can maintain a highly pure replay buffer amidst noisy streamed data while greatly outperforming the combinations of the state-of-the-art continual learning and noisy label learning methods. The source code is available at http://vision.snu.ac.kr/projects/SPR | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 261,106 |
2403.11511 | Sim-to-Real Grasp Detection with Global-to-Local RGB-D Adaptation | This paper focuses on the sim-to-real issue of RGB-D grasp detection and formulates it as a domain adaptation problem. In this case, we present a global-to-local method to address hybrid domain gaps in RGB and depth data and insufficient multi-modal feature alignment. First, a self-supervised rotation pre-training strategy is adopted to deliver robust initialization for RGB and depth networks. We then propose a global-to-local alignment pipeline with individual global domain classifiers for scene features of RGB and depth images as well as a local one specifically working for grasp features in the two modalities. In particular, we propose a grasp prototype adaptation module, which aims to facilitate fine-grained local feature alignment by dynamically updating and matching the grasp prototypes from the simulation and real-world scenarios throughout the training process. Due to such designs, the proposed method substantially reduces the domain shift and thus leads to consistent performance improvements. Extensive experiments are conducted on the GraspNet-Planar benchmark and physical environment, and superior results are achieved which demonstrate the effectiveness of our method. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 438,738 |
2406.09082 | Data-driven modeling and supervisory control system optimization for
plug-in hybrid electric vehicles | Learning-based intelligent energy management systems for plug-in hybrid electric vehicles (PHEVs) are crucial for achieving efficient energy utilization. However, their application faces system reliability challenges in the real world, which prevents widespread acceptance by original equipment manufacturers (OEMs). This paper begins by establishing a PHEV model based on physical and data-driven models, focusing on a high-fidelity training environment. It then proposes a real-vehicle application-oriented control framework that combines horizon-extended reinforcement learning (RL)-based energy management with the equivalent consumption minimization strategy (ECMS) to enhance practical applicability, and improves on the flawed method found in existing research of evaluating the equivalence factor from the instantaneous driving cycle and powertrain states. Finally, comprehensive simulation and hardware-in-the-loop validation are carried out, demonstrating the advantages of the proposed control framework in fuel economy over adaptive-ECMS and rule-based strategies. Compared to conventional RL architectures that directly control powertrain components, the proposed control method not only achieves similar optimality but also significantly enhances the disturbance resistance of the energy management system, providing an effective control framework for RL-based energy management strategies aimed at real-vehicle applications by OEMs. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 463,759 |
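For context, ECMS trades battery power against fuel through an equivalence factor. One common textbook form of its instantaneous cost, with symbols chosen here for illustration rather than quoted from this paper, is

```latex
\dot{m}_{\mathrm{eqv}}(t) = \dot{m}_{\mathrm{fuel}}(t)
  + \frac{s(t)}{Q_{\mathrm{lhv}}}\, P_{\mathrm{batt}}(t),
```

where $s(t)$ is the equivalence factor, $Q_{\mathrm{lhv}}$ the fuel's lower heating value, and $P_{\mathrm{batt}}(t)$ the battery power; at each instant the strategy selects the power split minimizing $\dot{m}_{\mathrm{eqv}}$.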
1705.09144 | Modeling and Simulation of the Dynamics of the Quick Return Mechanism: A
Bond Graph Approach | This paper applies the multibond graph approach for rigid multibody systems to model the dynamics of general spatial mechanisms. The commonly used quick return mechanism, which comprises revolute as well as prismatic joints, has been chosen as a representative example to demonstrate the application of this technique and its resulting advantages. In this work, the links of the quick return mechanism are modeled as rigid bodies. The rigid links are then coupled at the joints based on the nature of the constraint. This alternative method of formulating system dynamics, using Bond Graphs, offers a rich set of features that include pictorial representation of the dynamics of translation and rotation for each link of the mechanism in the inertial frame, representation and handling of constraints at the joints, depiction of causality, obtaining dynamic reaction forces and moments at various locations in the mechanism, and so on. Yet another advantage of this approach is that the coding for simulation can be carried out directly from the Bond Graph in an algorithmic manner, without deriving the system equations. In this work, the program code for simulation is written in MATLAB. The vector and tensor operations are conveniently represented in MATLAB, resulting in a compact and optimized code. The simulation results are plotted and discussed in detail. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 74,150 |
2302.11051 | Fusion of Global and Local Knowledge for Personalized Federated Learning | Personalized federated learning, as a variant of federated learning, trains customized models for clients using their heterogeneously distributed data. However, it remains unclear how to design personalized models that better represent shared global knowledge and personalized patterns. To bridge the gap, in this paper we explore personalized models with low-rank and sparse decomposition. Specifically, we employ proper regularization to extract a low-rank global knowledge representation (GKR), so as to distill global knowledge into a compact representation. Subsequently, we employ a sparse component over the obtained GKR to fuse the personalized pattern into the global knowledge. As a solution, we propose a two-stage proximal-based algorithm named \textbf{Fed}erated learning with mixed \textbf{S}parse and \textbf{L}ow-\textbf{R}ank representation (FedSLR) to efficiently search for the mixed models. Theoretically, under proper assumptions, we show that the GKR trained by FedSLR can at least sub-linearly converge to a stationary point of the regularized problem, and that the sparse component being fused can converge to its stationary point under proper settings. Extensive experiments also demonstrate the superior empirical performance of FedSLR. Moreover, FedSLR reduces the number of parameters and lowers the downlink communication complexity, which are all desirable for federated learning algorithms. Source code is available at \url{https://github.com/huangtiansheng/fedslr}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 347,066 |
2403.11070 | Controllable Relation Disentanglement for Few-Shot Class-Incremental
Learning | In this paper, we propose to tackle Few-Shot Class-Incremental Learning (FSCIL) from a new perspective, i.e., relation disentanglement, which means enhancing FSCIL via disentangling spurious relations between categories. The challenge of disentangling spurious correlations lies in the poor controllability of FSCIL. On one hand, an FSCIL model is required to be trained in an incremental manner, and thus it is very hard to directly control relationships between categories of different sessions. On the other hand, training samples per novel category are limited to the few-shot setting, which increases the difficulty of alleviating spurious relation issues as well. To overcome this challenge, in this paper we propose a new simple-yet-effective method, called ConTrollable Relation-disentangLed Few-Shot Class-Incremental Learning (CTRL-FSCIL). Specifically, during the base session, we propose to anchor base category embeddings in feature space and construct disentanglement proxies to bridge gaps between the learning for category representations in different sessions, thereby making category relations controllable. During incremental learning, the parameters of the backbone network are frozen in order to relieve the negative impact of data scarcity. Moreover, a disentanglement loss is designed to effectively guide a relation disentanglement controller to disentangle spurious correlations between the embeddings encoded by the backbone. In this way, the spurious correlation issue in FSCIL can be suppressed. Extensive experiments on CIFAR-100, mini-ImageNet, and CUB-200 datasets demonstrate the effectiveness of our CTRL-FSCIL method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,509 |
2408.10642 | Minor SFT loss for LLM fine-tune to increase performance and reduce
model deviation | Instruction tuning provides a paradigm for aligning large language models (LLMs) with human preferences. The paradigm comprises supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). It is also used in downstream scenarios to adapt LLMs to specific corpora and applications. Compared to SFT, much effort has focused on RLHF, and several algorithms have been proposed, such as PPO, DPO, IPO, KTO, and MinorDPO, while most efforts for SFT focus on how to collect, filter, and mix high-quality data. In this article, drawing on insights from DPO and MinorDPO, we propose a training metric for SFT that measures the discrepancy between the optimized model and the original model, and a loss function, MinorSFT, that can increase training effectiveness and reduce the discrepancy between the optimized LLM and the original LLM. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 481,955 |
1710.10777 | Understanding Hidden Memories of Recurrent Neural Networks | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 83,472 |
2401.17345 | Reproducibility, energy efficiency and performance of pseudorandom
number generators in machine learning: a comparative study of python, numpy,
tensorflow, and pytorch implementations | Pseudo-Random Number Generators (PRNGs) have become ubiquitous in machine learning technologies because numerous methods rely on them. The field of machine learning holds the potential for substantial advancements across various domains, as exemplified by recent breakthroughs in Large Language Models (LLMs). However, despite the growing interest, persistent concerns include issues related to reproducibility and energy consumption. Reproducibility is crucial for robust scientific inquiry and explainability, while energy efficiency underscores the imperative to conserve finite global resources. This study investigates whether the leading PRNGs employed in machine learning languages, libraries, and frameworks uphold statistical quality and numerical reproducibility when compared to the original C implementations of the respective PRNG algorithms. Additionally, we aim to evaluate the time efficiency and energy consumption of the various implementations. Our experiments encompass Python, NumPy, TensorFlow, and PyTorch, utilizing the Mersenne Twister, PCG, and Philox algorithms. Remarkably, we verified that the temporal performance of machine learning technologies closely aligns with that of the C-based implementations, with instances of even superior performance. On the other hand, it is noteworthy that ML technologies consumed only 10% more energy than their C-implementation counterparts. However, while statistical quality was found to be comparable, numerical reproducibility across different platforms was not achieved for identical seeds and algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 425,176 |
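The within-implementation reproducibility check underlying such a study can be sketched in a few lines; the seed and draw count below are arbitrary, and only Python's `random` and NumPy's legacy `RandomState` (both Mersenne Twister families) are shown:

```python
import random
import numpy as np

def streams(seed: int, n: int = 5):
    """Draw n doubles from Python's and NumPy's Mersenne Twister PRNGs."""
    py = random.Random(seed)
    np_rs = np.random.RandomState(seed)   # legacy MT19937 interface
    return [py.random() for _ in range(n)], np_rs.random_sample(n).tolist()

a1, b1 = streams(1234)
a2, b2 = streams(1234)
assert a1 == a2 and b1 == b2   # each implementation reproduces its own stream
print(a1[0], b1[0])            # same algorithm family, yet streams can differ
```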
2411.00147 | Mutual Information Preserving Neural Network Pruning | Pruning has emerged as the primary approach used to limit the resource requirements of large neural networks (NNs). Since the proposal of the lottery ticket hypothesis, researchers have focused either on pruning at initialization or after training. However, recent theoretical findings have shown that the sample efficiency of robust pruned models is proportional to the mutual information (MI) between the pruning masks and the model's training datasets, \textit{whether at initialization or after training}. In this paper, starting from these results, we introduce Mutual Information Preserving Pruning (MIPP), a structured activation-based pruning technique applicable before or after training. The core principle of MIPP is to select nodes in a way that conserves MI shared between the activations of adjacent layers, and consequently between the data and masks. Approaching the pruning problem in this manner means we can prove that there exists a function that can map the pruned upstream layer's activations to the downstream layer's, implying re-trainability. We demonstrate that MIPP consistently outperforms state-of-the-art methods, regardless of whether pruning is performed before or after training. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 504,453 |
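As a rough illustration of the quantity MIPP conserves, a plug-in mutual information estimate between two activation vectors can be written as below; this crude histogram estimator and its names are illustrative, not the authors' implementation:

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """Plug-in MI estimate (in nats) from a 2-D histogram of (x, y)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000)
print(mutual_information(a, a + 0.1 * rng.standard_normal(10_000)))  # high
print(mutual_information(a, rng.standard_normal(10_000)))            # near 0
```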
0706.1399 | Duality and Stability Regions of Multi-rate Broadcast and Multiple
Access Networks | We characterize stability regions of two-user fading Gaussian multiple access (MAC) and broadcast (BC) networks with centralized scheduling. The data to be transmitted to the users is encoded into codewords of fixed length. The rates of the codewords used are restricted to a fixed set of finite cardinality. With successive decoding and interference cancellation at the receivers, we find the set of arrival rates that can be stabilized over the MAC and BC networks. In MAC and BC networks with average power constraints, we observe that the duality property that relates the MAC and BC information theoretic capacity regions extend to their stability regions as well. In MAC and BC networks with peak power constraints, the union of stability regions of dual MAC networks is found to be strictly contained in the BC stability region. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 323 |
2003.13791 | Domain Balancing: Face Recognition on Long-Tailed Domains | The long-tailed problem has been an important topic in face recognition tasks. However, existing methods only concentrate on the long-tailed distribution of classes. Differently, we devote ourselves to the long-tailed domain distribution problem, which refers to the fact that a small number of domains appear frequently while other domains appear far less often. The key challenge of the problem is that domain labels are too complicated (related to race, age, pose, illumination, etc.) and inaccessible in real applications. In this paper, we propose a novel Domain Balancing (DB) mechanism to handle this problem. Specifically, we first propose a Domain Frequency Indicator (DFI) to judge whether a sample is from head domains or tail domains. Secondly, we formulate a light-weighted Residual Balancing Mapping (RBM) block to balance the domain distribution by adjusting the network according to the DFI. Finally, we propose a Domain Balancing Margin (DBM) in the loss function to further optimize the feature space of the tail domains to improve generalization. Extensive analysis and experiments on several face recognition benchmarks demonstrate that the proposed method effectively enhances the generalization capacity and achieves superior performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 170,304 |
2106.04279 | Staircase Attention for Recurrent Processing of Sequences | Attention mechanisms have become a standard tool for sequence modeling tasks, in particular by stacking self-attention layers over the entire input sequence as in the Transformer architecture. In this work we introduce a novel attention procedure called staircase attention that, unlike self-attention, operates across the sequence (in time), recurrently processing the input by adding another step of processing. A step in the staircase comprises backward tokens (encoding the sequence so far seen) and forward tokens (ingesting a new part of the sequence); an extreme Ladder version with a forward step of zero simply repeats the Transformer on each step of the ladder, sharing the weights. We thus describe a family of such models that can trade off performance and compute, by either increasing the amount of recurrence through time, the amount of sequential processing via recurrence in depth, or both. Due to this recurrence, staircase attention is shown to be able to solve tracking tasks that conventional Transformers cannot. Further, it is shown to provide improved modeling power for the same model size (number of parameters) compared to self-attentive Transformers on large language modeling and dialogue tasks, yielding significant perplexity gains. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 239,665 |
1609.07531 | Popular Matchings with Multiple Partners | Our input is a bipartite graph $G = (A \cup B,E)$ where each vertex in $A \cup B$ has a preference list strictly ranking its neighbors. The vertices in $A$ and in $B$ are called students and courses, respectively. Each student $a$ seeks to be matched to $\mathsf{cap}(a) \ge 1$ courses while each course $b$ seeks $\mathsf{cap}(b) \ge 1$ many students to be matched to it. The Gale-Shapley algorithm computes a pairwise-stable matching (one with no blocking edge) in $G$ in linear time. We consider the problem of computing a popular matching in $G$ -- a matching $M$ is popular if $M$ cannot lose an election to any matching where vertices cast votes for one matching versus another. Our main contribution is to show that a max-size popular matching in $G$ can be computed by the 2-level Gale-Shapley algorithm in linear time. This is an extension of the classical Gale-Shapley algorithm and we prove its correctness via linear programming. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 61,443 |
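Since the row above builds on Gale-Shapley, here is a compact sketch of the classical one-to-one version (all capacities equal to 1) on toy preference lists; the paper's 2-level variant for popular matchings is not reproduced here:

```python
def gale_shapley(student_prefs, course_prefs):
    """Classical one-to-one Gale-Shapley with complete preference lists."""
    free = list(student_prefs)                  # unmatched students
    next_pick = {s: 0 for s in student_prefs}   # next course each will propose to
    match = {}                                  # course -> student
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in course_prefs.items()}
    while free:
        s = free.pop()
        c = student_prefs[s][next_pick[s]]
        next_pick[s] += 1
        if c not in match:
            match[c] = s
        elif rank[c][s] < rank[c][match[c]]:    # course c prefers s: swap
            free.append(match[c])
            match[c] = s
        else:
            free.append(s)                      # rejected, s proposes again later
    return match

prefs_s = {"a1": ["b1", "b2"], "a2": ["b1", "b2"]}
prefs_c = {"b1": ["a2", "a1"], "b2": ["a1", "a2"]}
print(gale_shapley(prefs_s, prefs_c))  # {'b1': 'a2', 'b2': 'a1'}
```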
2310.12692 | Representation Learning via Consistent Assignment of Views over Random
Partitions | We present Consistent Assignment of Views over Random Partitions (CARP), a self-supervised clustering method for representation learning of visual features. CARP learns prototypes in an end-to-end online fashion using gradient descent, without additional non-differentiable modules, to solve the cluster assignment problem. CARP optimizes a new pretext task based on random partitions of prototypes that regularizes the model and enforces consistency between views' assignments. Additionally, our method improves training stability and prevents collapsed solutions in joint-embedding training. Through an extensive evaluation, we demonstrate that CARP's representations are suitable for learning downstream tasks. We evaluate CARP's representation capabilities in 17 datasets across many standard protocols, including linear evaluation, few-shot classification, k-NN, k-means, image retrieval, and copy detection. We compare CARP's performance to that of 11 existing self-supervised methods. We extensively ablate our method and demonstrate that our proposed random partition pretext task improves the quality of the learned representations by devising multiple random classification tasks. In transfer learning tasks, CARP achieves the best performance on average against many SSL methods trained for a longer time. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 401,116 |
2304.04580 | Matrix Factorization Based Blind Bayesian Receiver for Grant-Free Random
Access in mmWave MIMO mMTC | Grant-free random access is promising for massive connectivity with sporadic transmissions in massive machine type communications (mMTC), where the hand-shaking between the access point (AP) and users is skipped, leading to high access efficiency. In grant-free random access, the AP needs to identify the active users and perform channel estimation and signal detection. Conventionally, pilot signals are required for the AP to achieve user activity detection and channel estimation before active user signal detection, which may still result in substantial overhead and latency. In this paper, to further reduce the overhead and latency, we explore the problem of grant-free random access without the use of pilot signals in a millimeter wave (mmWave) multiple input and multiple output (MIMO) system, where the AP performs blind joint user activity detection, channel estimation and signal detection (UACESD). We show that the blind joint UACESD can be formulated as a constrained composite matrix factorization problem, which can be solved by exploiting the structures of the channel matrix and signal matrix. Leveraging our recently developed unitary approximate message passing based matrix factorization (UAMP-MF) algorithm, we design a message passing based Bayesian algorithm to solve the blind joint UACESD problem. Extensive simulation results demonstrate the effectiveness of the blind grant-free random access scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 357,272 |
0709.3262 | A software for learning Information Theory basics with emphasis on
Entropy of Spanish | In this paper, tutorial software for learning Information Theory basics in a practical way is reported. The software, called IT-tutor-UV, makes use of a modern existing Spanish corpus for modeling the source. Both source and channel coding are also included in this educational tool as part of the learning experience. Entropy values of the Spanish language obtained with the IT-tutor-UV are discussed and compared to others that were previously calculated under limited conditions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 674 |
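The zeroth-order entropy such a tool reports for a text sample reduces to a one-liner over symbol frequencies; the Spanish sample below is a placeholder, not the tool's corpus:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(text: str) -> float:
    """Zeroth-order (unigram) entropy estimate: H = -sum p(c) * log2 p(c)."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "en un lugar de la mancha de cuyo nombre no quiero acordarme"
print(round(entropy_bits_per_symbol(sample), 3))  # bits per character
```

Higher-order estimates (conditioning on preceding characters) drive the entropy of a natural language like Spanish well below this unigram figure.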
2007.04100 | A Survey of Real-Time Social-Based Traffic Detection | Online traffic news web sites do not always announce traffic events in an area in real time. Text mining and machine learning techniques can be employed on the Twitter stream to perform event detection and thereby develop a real-time traffic detection system. In this survey paper, we discuss the current state-of-the-art techniques for detecting traffic events in real time, focusing on five papers [1, 2, 3, 4, 5]. Among these, applying text mining techniques and SVM classifiers in paper [2] gave the best results (i.e., 95.75% accuracy and 95.8% F1-score). | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 186,257 |
2303.00566 | Structured Pruning for Deep Convolutional Neural Networks: A survey | The remarkable performance of deep convolutional neural networks (CNNs) is generally attributed to their deeper and wider architectures, which can come with significant computational costs. Pruning neural networks has thus gained interest since it effectively lowers storage and computational costs. In contrast to weight pruning, which results in unstructured models, structured pruning provides the benefit of realistic acceleration by producing models that are friendly to hardware implementation. The special requirements of structured pruning have led to the discovery of numerous new challenges and the development of innovative solutions. This article surveys the recent progress towards structured pruning of deep CNNs. We summarize and compare the state-of-the-art structured pruning techniques with respect to filter ranking methods, regularization methods, dynamic execution, neural architecture search, the lottery ticket hypothesis, and the applications of pruning. While discussing structured pruning algorithms, we briefly introduce the unstructured pruning counterpart to emphasize their differences. Furthermore, we provide insights into potential research opportunities in the field of structured pruning. A curated list of neural network pruning papers can be found at https://github.com/he-y/Awesome-Pruning . A dedicated website offering a more interactive comparison of structured pruning methods can be found at: https://huggingface.co/spaces/he-yang/Structured-Pruning-Survey . | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 348,642 |
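One of the simplest filter-ranking criteria covered by such surveys is L1-norm filter pruning; a minimal numpy sketch (function name and shapes are illustrative) looks like this:

```python
import numpy as np

def l1_filter_ranking(conv_weight: np.ndarray, keep_ratio: float = 0.5):
    """Rank conv filters by L1 norm and return indices of filters to keep.

    conv_weight has shape (out_channels, in_channels, kH, kW); dropping whole
    filters (rows of the output channel axis) is what makes this *structured*.
    """
    scores = np.abs(conv_weight).sum(axis=(1, 2, 3))   # one score per filter
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    return np.argsort(scores)[::-1][:n_keep]           # highest-L1 filters

w = np.random.default_rng(0).standard_normal((8, 3, 3, 3))
print(sorted(l1_filter_ranking(w, 0.25)))  # keep the 2 highest-scoring filters
```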
2108.05629 | Optimal actuator design via Brunovsky's normal form | In this paper, by using the Brunovsky normal form, we provide a reformulation of the problem consisting in finding the actuator design which minimizes the controllability cost for finite-dimensional linear systems with scalar controls. Such systems may be seen as spatially discretized linear partial differential equations with lumped controls. The change of coordinates induced by Brunovsky's normal form allows us to remove the restriction of having to work with diagonalizable system dynamics, and does not entail a randomization procedure as done in past literature on diffusion equations or waves. Instead, the optimization problem reduces to a minimization of the norm of the inverse of a change of basis matrix, and allows for an easy deduction of existence of solutions, and for a clearer picture of some of the problem's intrinsic symmetries. Numerical experiments help to visualize these artifacts, indicate further open problems, and also show a possible obstruction of using gradient-based algorithms - this is alleviated by using an evolutionary algorithm. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 250,358 |
2202.11616 | ChimeraMix: Image Classification on Small Datasets via Masked Feature
Mixing | Deep convolutional neural networks require large amounts of labeled data samples. For many real-world applications, this is a major limitation which is commonly treated by augmentation methods. In this work, we address the problem of learning deep neural networks on small datasets. Our proposed architecture called ChimeraMix learns a data augmentation by generating compositions of instances. The generative model encodes images in pairs, combines the features guided by a mask, and creates new samples. For evaluation, all methods are trained from scratch without any additional data. Several experiments on benchmark datasets, e.g. ciFAIR-10, STL-10, and ciFAIR-100, demonstrate the superior performance of ChimeraMix compared to current state-of-the-art methods for classification on small datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 281,939 |
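The mask-guided feature combination at the heart of this kind of augmentation can be illustrated generically; the block-mask mixer below is a simplified stand-in for ChimeraMix's learned generator, with all names and the block size made up for the example:

```python
import numpy as np

def masked_mix(feat_a, feat_b, block=4, rng=None):
    """Combine two feature maps under a random block mask (CutMix-style)."""
    rng = rng or np.random.default_rng()
    c, h, w = feat_a.shape
    mask = rng.integers(0, 2, size=(h // block, w // block)).astype(float)
    mask = np.kron(mask, np.ones((block, block)))   # upsample mask to (h, w)
    return mask * feat_a + (1.0 - mask) * feat_b    # broadcast over channels

a, b = np.ones((3, 8, 8)), np.zeros((3, 8, 8))
mixed = masked_mix(a, b, block=4, rng=np.random.default_rng(1))
print(mixed.shape)  # (3, 8, 8): a new composite sample from the pair
```

In the paper's pipeline the mask guides a trained generator rather than a fixed blend, but the compositional idea is the same.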
1705.10299 | Robustness to unknown error in sparse regularization | Quadratically-constrained basis pursuit has become a popular device in sparse regularization; in particular, in the context of compressed sensing. However, the majority of theoretical error estimates for this regularizer assume an a priori bound on the noise level, which is usually lacking in practice. In this paper, we develop stability and robustness estimates which remove this assumption. First, we introduce an abstract framework and show that robust instance optimality of any decoder in the noise-aware setting implies stability and robustness in the noise-blind setting. This is based on certain sup-inf constants referred to as quotients, strictly related to the quotient property of compressed sensing. We then apply this theory to prove the robustness of quadratically-constrained basis pursuit under unknown error in the cases of random Gaussian matrices and of random matrices with heavy-tailed rows, such as random sampling matrices from bounded orthonormal systems. We illustrate our results in several cases of practical importance, including subsampled Fourier measurements and recovery of sparse polynomial expansions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 74,367 |
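For reference, the quadratically-constrained basis pursuit program referred to above is, in standard notation,

```latex
\min_{x \in \mathbb{R}^N} \; \|x\|_1
\quad \text{subject to} \quad \|Ax - y\|_2 \le \eta,
```

where $\eta$ is the assumed a priori noise bound; the paper's estimates concern what happens when the true noise level is unknown and may exceed $\eta$.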
1509.07950 | Mixed-ADC Massive MIMO Detectors: Performance Analysis and Design
Optimization | Using a very low-resolution analog-to-digital converter (ADC) unit at each antenna can remarkably reduce the hardware cost and power consumption of a massive multiple-input multiple-output (MIMO) system. However, such a pure low-resolution ADC architecture also complicates parameter estimation problems such as time/frequency synchronization and channel estimation. A mixed-ADC architecture, where most of the antennas are equipped with low-precision ADCs while a few antennas have full-precision ADCs, can solve these issues and actualize the potential of the pure low-resolution ADC architecture. In this paper, we present a unified framework to develop a family of detectors over the massive MIMO uplink system with the mixed-ADC receiver architecture by exploiting probabilistic Bayesian inference. As a basic setup, an optimal detector is developed to provide a minimum mean-squared-error (MMSE) estimate of the data symbols. Considering the highly nonlinear steps involved in the quantization process, we also investigate the potential for complexity reduction of the optimal detector by postulating the common \emph{pseudo-quantization noise} (PQN) model. In particular, we provide asymptotic performance expressions, including the MSE and bit error rate, for the optimal and suboptimal MIMO detectors. The asymptotic performance expressions can be evaluated quickly and efficiently; thus, they are useful in system design optimization. We show that in the low signal-to-noise ratio (SNR) regime, the distortion caused by the PQN model can be ignored, whereas in the high-SNR regime, such distortion may cause a 1-bit detection performance loss. The performance gap resulting from the PQN model can be narrowed by a small fraction of high-precision ADCs in the mixed-ADC architecture. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 47,306 |
1606.03816 | Multistage Campaigning in Social Networks | We consider the problem of how to optimize multi-stage campaigning over social networks. The dynamic programming framework is employed to balance the high present reward and large penalty on low future outcome in the presence of extensive uncertainties. In particular, we establish theoretical foundations of optimal campaigning over social networks where the user activities are modeled as a multivariate Hawkes process, and we derive a time dependent linear relation between the intensity of exogenous events and several commonly used objective functions of campaigning. We further develop a convex dynamic programming framework for determining the optimal intervention policy that prescribes the required level of external drive at each stage for the desired campaigning result. Experiments on both synthetic data and the real-world MemeTracker dataset show that our algorithm can steer the user activities for optimal campaigning much more accurately than baselines. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 57,150 |
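For context, in a multivariate Hawkes process each user's intensity rises after events of connected users; a common exponential-kernel parameterization (notation chosen here for illustration) is

```latex
\lambda_i(t) \;=\; \mu_i(t) \;+\; \sum_{j} \alpha_{ij} \sum_{t_k^{j} < t} e^{-\beta\,(t - t_k^{j})},
```

where $\mu_i(t)$ is the exogenous (campaign-controlled) rate, $\alpha_{ij}$ the influence of user $j$ on user $i$, and $\beta$ the decay rate; the external drive $\mu_i(t)$ at each stage is the control variable the dynamic program optimizes.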
2301.08157 | SoftEnNet: Symbiotic Monocular Depth Estimation and Lumen Segmentation
for Colonoscopy Endorobots | Colorectal cancer is the third most common cause of cancer death worldwide. Optical colonoscopy is the gold standard for detecting colorectal cancer; however, about 25 percent of polyps are missed during the procedure. A vision-based autonomous endorobot can improve colonoscopy procedures significantly through systematic, complete screening of the colonic mucosa. The reliable navigation this requires depends on a three-dimensional understanding of the environment and on lumen tracking to support autonomous tasks. We propose a novel multi-task model that simultaneously predicts dense depth and lumen segmentation with an ensemble of deep networks. The depth estimation sub-network is trained in a self-supervised fashion guided by view synthesis; the lumen segmentation sub-network is supervised. The two sub-networks are interconnected with pathways that enable information exchange and thereby mutual learning. As the lumen is in the image's deepest visual space, lumen segmentation helps with the depth estimation at the farthest location. In turn, the estimated depth guides the lumen segmentation network, as the lumen location defines the farthest scene location. Unlike other environments, view synthesis often fails in the colon because of the deformable wall, textureless surface, specularities, and wide field-of-view image distortions, all challenges that our pipeline addresses. We conducted qualitative analysis on a synthetic dataset and quantitative analysis on a colon training model and real colonoscopy videos. The experiments show that our model predicts accurate scale-invariant depth maps and lumen segmentation from colonoscopy images in near real-time. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 341,120 |
2009.14821 | A Systematic Method for On-The-Fly Denormalization of Relational
Databases | Normalized relational databases are a common method for storing data, but pulling out usable denormalized data for consumption generally requires either direct access to the source data or creation of an appropriate view or table by a database administrator. End-users are thus limited in their ability to explore and use data that is stored in this manner. Presented here is a method for performing automated denormalization of relational databases at run-time, without requiring access to source data or ongoing intervention by a database administrator. Furthermore, this method does not require a restructure of the database itself and so it can be flexibly applied as a layer on top of already existing databases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 198,152 |
1904.03239 | ShapeMask: Learning to Segment Novel Objects by Refining Shape Priors | Instance segmentation aims to detect and segment individual objects in a scene. Most existing methods rely on precise mask annotations of every category. However, it is difficult and costly to segment objects in novel categories because a large number of mask annotations is required. We introduce ShapeMask, which learns the intermediate concept of object shape to address the problem of generalization in instance segmentation to novel categories. ShapeMask starts with a bounding box detection and gradually refines it by first estimating the shape of the detected object through a collection of shape priors. Next, ShapeMask refines the coarse shape into an instance level mask by learning instance embeddings. The shape priors provide a strong cue for object-like prediction, and the instance embeddings model the instance specific appearance information. ShapeMask significantly outperforms the state-of-the-art by 6.4 and 3.8 AP when learning across categories, and obtains competitive performance in the fully supervised setting. It is also robust to inaccurate detections, decreased model capacity, and small training data. Moreover, it runs efficiently with 150ms inference time and trains within 11 hours on TPUs. With a larger backbone model, ShapeMask increases the gap with state-of-the-art to 9.4 and 6.2 AP across categories. Code will be released. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 126,641 |
1704.00874 | The string of diamonds is nearly tight for rumour spreading | For a rumour spreading protocol, the spread time is defined as the first time that everyone learns the rumour. We compare the synchronous push&pull rumour spreading protocol with its asynchronous variant, and show that for any $n$-vertex graph and any starting vertex, the ratio between their expected spread times is bounded by $O \left({n}^{1/3}{\log^{2/3} n}\right)$. This improves the $O(\sqrt n)$ upper bound of Giakkoupis, Nazari, and Woelfel (in Proceedings of ACM Symposium on Principles of Distributed Computing, 2016). Our bound is tight up to a factor of $O(\log n)$, as illustrated by the string of diamonds graph. We also show that if for a pair $\alpha,\beta$ of real numbers, there exists infinitely many graphs for which the two spread times are $n^{\alpha}$ and $n^{\beta}$ in expectation, then $0\leq\alpha \leq 1$ and $\alpha \leq \beta \leq \frac13 + \frac23 \alpha$; and we show each such pair $\alpha,\beta$ is achievable. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 71,162 |
2403.09037 | The First to Know: How Token Distributions Reveal Hidden Knowledge in
Large Vision-Language Models? | Large vision-language models (LVLMs), designed to interpret and respond to human instructions, occasionally generate hallucinated or harmful content due to inappropriate instructions. This study uses linear probing to shed light on the hidden knowledge at the output layers of LVLMs. We demonstrate that the logit distributions of the first tokens contain sufficient information to determine whether to respond to the instructions, including recognizing unanswerable visual questions, defending against jailbreaking attacks, and identifying deceptive questions. Such hidden knowledge is gradually lost in logits of subsequent tokens during response generation. Then, we illustrate a simple decoding strategy at the generation of the first token, effectively improving the generated content. In experiments, we find a few interesting insights: First, the CLIP model already contains a strong signal for solving these tasks, which indicates potential bias in the existing datasets. Second, we observe performance improvement by utilizing the first logit distributions on three additional tasks, including indicating uncertainty in math solving, mitigating hallucination, and image classification. Last, with the same training data, simply finetuning LVLMs improves models' performance but is still inferior to linear probing on these tasks. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 437,598 |
1810.01534 | Band Assignment in Dual Band Systems: A Learning-based Approach | We consider the band assignment problem in dual band systems, where the base-station (BS) chooses one of the two available frequency bands (centimeter-wave and millimeter-wave bands) to communicate data to the mobile station (MS). While the millimeter-wave band offers higher data rate when it is available, there is a significant probability of outage during which the communication should be carried on the centimeter-wave band. In this work, we use a machine learning framework to provide an efficient and practical solution to the band assignment problem. In particular, the BS trains a Neural Network (NN) to predict the right band assignment decision using observed channel information. We study the performance of the NN in two environments: (i) A stochastic channel model with correlated bands, and (ii) microcellular outdoor channels obtained by simulations with a commercial ray-tracer. For the former case, for sake of comparison we also develop a threshold based band assignment that relies on the optimal mean square error estimator of the best band. In addition, we study the performance of the NN-based solution with different NN structures and different observed parameters (position, field strength, etc.). We compare the achieved performance to linear and logistic regression based solutions as well as the threshold based solution. Under practical constraints, the learning based band assignment shows competitive or superior performance in both environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 109,413 |
1707.00622 | Rank Determination for Low-Rank Data Completion | Recently, fundamental conditions on the sampling patterns have been obtained for finite completability of low-rank matrices or tensors given the corresponding ranks. In this paper, we consider the scenario where the rank is not given and we aim to approximate the unknown rank based on the location of sampled entries and some given completion. We consider a number of data models, including single-view matrix, multi-view matrix, CP tensor, tensor-train tensor and Tucker tensor. For each of these data models, we provide an upper bound on the rank when an arbitrary low-rank completion is given. We characterize these bounds both deterministically, i.e., with probability one given that the sampling pattern satisfies certain combinatorial properties, and probabilistically, i.e., with high probability given that the sampling probability is above some threshold. Moreover, for both single-view matrix and CP tensor, we are able to show that the obtained upper bound is exactly equal to the unknown rank if the lowest-rank completion is given. Furthermore, we provide numerical experiments for the case of single-view matrix, where we use nuclear norm minimization to find a low-rank completion of the sampled data and we observe that in most of the cases the proposed upper bound on the rank is equal to the true rank. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 76,383 |
2204.10451 | SCOPE: Safe Exploration for Dynamic Computer Systems Optimization | Modern computer systems need to execute under strict safety constraints (e.g., a power limit), but doing so often conflicts with their ability to deliver high performance (i.e. minimal latency). Prior work uses machine learning to automatically tune hardware resources such that the system execution meets safety constraints optimally. Such solutions monitor past system executions to learn the system's behavior under different hardware resource allocations before dynamically tuning resources to optimize the application execution. However, system behavior can change significantly between different applications and even different inputs of the same applications. Hence, the models learned using data collected a priori are often suboptimal and violate safety constraints when used with new applications and inputs. To address this limitation, we introduce the concept of an execution space, which is the cross product of hardware resources, input features, and applications. To dynamically and safely allocate hardware resources from the execution space, we present SCOPE, a resource manager that leverages a novel safe exploration framework. We evaluate SCOPE's ability to deliver improved latency while minimizing power constraint violations by dynamically configuring hardware while running a variety of Apache Spark applications. Compared to prior approaches that minimize power constraint violations, SCOPE consumes comparable power while improving latency by up to 9.5X. Compared to prior approaches that minimize latency, SCOPE achieves similar latency but reduces power constraint violation rates by up to 45.88X, achieving almost zero safety constraint violations across all applications. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 292,788 |
2402.11273 | Semi-supervised Medical Image Segmentation Method Based on Cross-pseudo
Labeling Leveraging Strong and Weak Data Augmentation Strategies | Traditional supervised learning methods have historically encountered certain constraints in medical image segmentation due to the challenging collection process, high labeling cost, low signal-to-noise ratio, and complex features characterizing biomedical images. This paper proposes a semi-supervised model, DFCPS, which innovatively incorporates the Fixmatch concept. This significantly enhances the model's performance and generalizability through data augmentation processing, employing varied strategies for unlabeled data. Concurrently, the model design gives appropriate emphasis to the generation, filtration, and refinement processes of pseudo-labels. The novel concept of cross-pseudo-supervision is introduced, integrating consistency learning with self-training. This enables the model to fully leverage pseudo-labels from multiple perspectives, thereby enhancing training diversity. The DFCPS model is compared with both baseline and advanced models using the publicly accessible Kvasir-SEG dataset. Across all four subdivisions containing different proportions of unlabeled data, our model consistently exhibits superior performance. Our source code is available at https://github.com/JustlfC03/DFCPS. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 430,314 |
2205.15565 | Sub-Image Histogram Equalization using Coot Optimization Algorithm for
Segmentation and Parameter Selection | Contrast enhancement is very important for assessing images objectively. Contrast enhancement is also significant for various algorithms, including supervised and unsupervised ones, for accurate classification of samples. Several contrast enhancement algorithms address the low-contrast issue. The mean- and variance-based sub-image histogram equalization (MVSIHE) algorithm is one of these contrast enhancement methods proposed in the literature. It has different parameters which need to be tuned in order to achieve optimum results. With this motivation, in this study we employed one of the most recent optimization algorithms, namely the coot optimization algorithm (COA), for selecting appropriate parameters for the MVSIHE algorithm. The blind/referenceless image spatial quality evaluator (BRISQUE) and natural image quality evaluator (NIQE) metrics are used for evaluating the fitness of the coot swarm population. The results show that the proposed method can be used in the field of biomedical image processing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 299,780 |
2211.01812 | Benchmarking local motion planners for navigation of mobile manipulators | There are various trajectory planners for mobile manipulators. It is often challenging to compare their performance under similar circumstances due to differences in hardware, dissimilarity of tasks and objectives, as well as uncertainties in measurements and operating environments. In this paper, we propose a simulation framework to evaluate the performance of the local trajectory planners to generate smooth, and dynamically and kinematically feasible trajectories for mobile manipulators in the same environment. We focus on local planners as they are key components that provide smooth trajectories while carrying a load, react to dynamic obstacles, and avoid collisions. We evaluate two prominent local trajectory planners, Dynamic-Window Approach (DWA) and Time Elastic Band (TEB) using the metrics that we introduce. Moreover, our software solution is applicable to any other local planners used in the Robot Operating System (ROS) framework, without additional programming effort. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 328,366 |
1804.07405 | GritNet: Student Performance Prediction with Deep Learning | Student performance prediction - where a machine forecasts the future performance of students as they interact with online coursework - is a challenging problem. Reliable early-stage predictions of a student's future performance could be critical to facilitate timely educational interventions during a course. However, very few prior studies have explored this problem from a deep learning perspective. In this paper, we recast the student performance prediction problem as a sequential event prediction problem and propose a new deep learning based algorithm, termed GritNet, which builds upon the bidirectional long short-term memory (BLSTM). Our results, from real Udacity students' graduation predictions, show that GritNet not only consistently outperforms the standard logistic-regression based method, but that its improvements are most pronounced in the first few weeks, when accurate predictions are most challenging. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 95,520
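In the spirit of this approach, a BLSTM over embedded event sequences can be sketched in PyTorch as below; the layer sizes and the max-pooling head are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GritNetLike(nn.Module):
    """BLSTM sequence classifier for graduation prediction (sketch)."""
    def __init__(self, n_events, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_events, embed_dim)
        self.blstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # P(graduation)

    def forward(self, event_ids):               # (batch, seq_len) int64
        h, _ = self.blstm(self.embed(event_ids))
        pooled, _ = h.max(dim=1)                 # global max-pool over time
        return torch.sigmoid(self.head(pooled)).squeeze(-1)
```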
2202.01587 | MiDaS: Representative Sampling from Real-world Hypergraphs | Graphs are widely used for representing pairwise interactions in complex systems. Since such real-world graphs are large and often ever-growing, sampling a small representative subgraph is indispensable for various purposes: simulation, visualization, stream processing, representation learning, crawling, to name a few. However, many complex systems consist of group interactions (e.g., collaborations of researchers and discussions on online Q&A platforms), and thus they can be represented more naturally and accurately by hypergraphs (i.e., sets of sets) than by ordinary graphs. Motivated by the prevalence of large-scale hypergraphs, we study the problem of representative sampling from real-world hypergraphs, aiming to answer (Q1) what a representative sub-hypergraph is and (Q2) how we can find a representative one rapidly without an extensive search. Regarding Q1, we propose to measure the goodness of a sub-hypergraph by comparing it with the entire hypergraph in terms of ten graph-level, hyperedge-level, and node-level statistics. Regarding Q2, we first analyze the characteristics of six intuitive approaches in 11 real-world hypergraphs. Then, based on the analysis, we propose MiDaS, which draws hyperedges with a bias towards those with high-degree nodes. Through extensive experiments, we demonstrate that MiDaS is (a) Representative: finding overall the most representative samples among 13 considered approaches, (b) Fast: several orders of magnitude faster than the strongest competitors, which perform an extensive search, and (c) Automatic: rapidly searching a proper degree of bias. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 278,527
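The degree-biased hyperedge sampling at the heart of MiDaS can be illustrated in a few lines. The weighting scheme and the `alpha` parameter below are simplified assumptions (MiDaS searches for a proper degree of bias automatically and samples without replacement), so treat this as a sketch of the idea only.

```python
import random

def midas_like_sample(hyperedges, target_count, alpha=1.0):
    """Sample hyperedges with a bias toward those containing high-degree
    nodes. hyperedges: list of non-empty sets of node ids; alpha = 0
    reduces to uniform sampling. Samples WITH replacement for brevity."""
    degree = {}
    for e in hyperedges:
        for v in e:
            degree[v] = degree.get(v, 0) + 1
    # weight each hyperedge by (max node degree in it) ** alpha
    weights = [max(degree[v] for v in e) ** alpha for e in hyperedges]
    return random.choices(hyperedges, weights=weights, k=target_count)
```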
2210.02766 | AutoQC: Automated Synthesis of Quantum Circuits Using Neural Network | While the ability to build quantum computers is improving dramatically, the development of quantum algorithms remains limited and relies on human insight and ingenuity. Although a number of quantum programming languages have been developed, it is challenging for software developers who are not familiar with quantum computing to learn and use these languages. It is, therefore, necessary to develop tools that support developing new quantum algorithms and programs automatically. This paper proposes AutoQC, an approach to automatically synthesizing quantum circuits using a neural network from input and output pairs. We consider a quantum circuit as a sequence of quantum gates and synthesize a quantum circuit probabilistically by prioritizing with a neural network at each step. The experimental results highlight the ability of AutoQC to synthesize some essential quantum circuits at a lower cost. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 321,778
2103.04132 | A Real-time Low-cost Artificial Intelligence System for Autonomous
Spraying in Palm Plantations | In precision crop protection, (target-oriented) object detection in image processing can help navigate Unmanned Aerial Vehicles (UAVs, crop protection drones) to the right place to apply the pesticide. Unnecessary application to non-target areas can thus be avoided. Deep learning algorithms dominate modern computer vision tasks but demand high computing time, memory footprint, and power consumption. Based on edge artificial intelligence, we investigate the three main paths that lead to dealing with this problem: hardware accelerators, efficient algorithms, and model compression. Finally, we integrate them and propose a solution based on a light deep neural network (DNN), called Ag-YOLO, which gives the crop-protection UAV the ability to detect targets and operate autonomously. This solution is small, low-cost, flexible, fast, and energy-efficient. The hardware weighs only 18 grams and consumes 1.5 watts, and the developed DNN model needs only 838 kilobytes of disc space. We tested the developed hardware and software in comparison to the tiny version of the state-of-the-art YOLOv3 framework, known as YOLOv3-Tiny, to detect individual palms in a plantation. An average F1 score of 0.9205 at 36.5 frames per second was reached (compared with similar accuracy at 18 frames per second and 8.66 megabytes for the YOLOv3-Tiny algorithm). This developed detection system is easily plugged into any machine already purchased, as long as it has a USB port and runs the Linux operating system. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 223,534
2502.06152 | The Value of Information in Human-AI Decision-making | Humans and AIs are often paired on decision tasks with the expectation of achieving complementary performance, where the combination of human and AI outperforms either one alone. However, how to improve the performance of a human-AI team is often unclear without knowing more about what particular information and strategies each agent employs. We provide a decision-theoretic framework for characterizing the value of information -- and consequently, opportunities for agents to better exploit available information -- in AI-assisted decision workflows. We demonstrate the use of the framework for model selection, empirical evaluation of human-AI performance, and explanation design. We propose a novel information-based instance-level explanation technique that adapts a conventional saliency-based explanation to explain information value in decision making. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 531,954
2105.12638 | Predicting Aqueous Solubility of Organic Molecules Using Deep Learning
Models with Varied Molecular Representations | Determining the aqueous solubility of molecules is a vital step in many pharmaceutical, environmental, and energy storage applications. Despite efforts made over decades, there are still challenges associated with developing a solubility prediction model with satisfactory accuracy for many of these applications. The goal of this study is to develop a general model capable of predicting the solubility of a broad range of organic molecules. Using the largest currently available solubility dataset, we implement deep learning-based models to predict solubility from molecular structure and explore several different molecular representations including molecular descriptors, simplified molecular-input line-entry system (SMILES) strings, molecular graphs, and three-dimensional (3D) atomic coordinates using four different neural network architectures - fully connected neural networks (FCNNs), recurrent neural networks (RNNs), graph neural networks (GNNs), and SchNet. We find that models using molecular descriptors achieve the best performance, with GNN models also achieving good performance. We perform extensive error analysis to understand the molecular properties that influence model performance, perform feature analysis to understand which information about molecular structure is most valuable for prediction, and perform a transfer learning and data size study to understand the impact of data availability on model performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 237,055 |
2303.09705 | Batch Updating of a Posterior Tree Distribution over a Meta-Tree | Previously, we proposed a probabilistic data generation model represented by an unobservable tree and a sequential updating method to calculate a posterior distribution over a set of trees. The set is called a meta-tree. In this paper, we propose a more efficient batch updating method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 352,153 |
2411.11376 | Lung Disease Detection with Vision Transformers: A Comparative Study of
Machine Learning Methods | Recent advancements in medical image analysis have predominantly relied on Convolutional Neural Networks (CNNs), achieving impressive performance in chest X-ray classification tasks, such as the 92% AUC reported by AutoThorax-Net and the 88% AUC achieved by ChexNet in classification tasks. However, in the medical field, even small improvements in accuracy can have significant clinical implications. This study explores the application of Vision Transformers (ViT), a state-of-the-art architecture in machine learning, to chest X-ray analysis, aiming to push the boundaries of diagnostic accuracy. I present a comparative analysis of two ViT-based approaches: one utilizing full chest X-ray images and another focusing on segmented lung regions. Experiments demonstrate that both methods surpass the performance of traditional CNN-based models, with the full-image ViT achieving up to 97.83% accuracy and the lung-segmented ViT reaching 96.58% accuracy in classification of diseases with three labels, and an AUC of 94.54% when the number of labels is increased to eight. Notably, the full-image approach showed superior performance across all metrics, including precision, recall, F1 score, and AUC-ROC. These findings suggest that Vision Transformers can effectively capture relevant features from chest X-rays without the need for explicit lung segmentation, potentially simplifying the preprocessing pipeline while maintaining high accuracy. This research contributes to the growing body of evidence supporting the efficacy of transformer-based architectures in medical image analysis and highlights their potential to enhance diagnostic precision in clinical settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 509,036
2308.02533 | Improving Generalization of Adversarial Training via Robust Critical
Fine-Tuning | Deep neural networks are susceptible to adversarial examples, posing a significant security risk in critical applications. Adversarial Training (AT) is a well-established technique to enhance adversarial robustness, but it often comes at the cost of decreased generalization ability. This paper proposes Robustness Critical Fine-Tuning (RiFT), a novel approach to enhance generalization without compromising adversarial robustness. The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module. To do so, we introduce module robust criticality (MRC), a measure that evaluates the significance of a given module to model robustness under worst-case weight perturbations. Using this measure, we identify the module with the lowest MRC value as the non-robust-critical module and fine-tune its weights to obtain fine-tuned weights. Subsequently, we linearly interpolate between the adversarially trained weights and fine-tuned weights to derive the optimal fine-tuned model weights. We demonstrate the efficacy of RiFT on ResNet18, ResNet34, and WideResNet34-10 models trained on CIFAR10, CIFAR100, and Tiny-ImageNet datasets. Our experiments show that RiFT can significantly improve both generalization and out-of-distribution robustness by around 1.5% while maintaining or even slightly enhancing adversarial robustness. Code is available at https://github.com/microsoft/robustlearn. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 383,668
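The final interpolation step described above is straightforward to sketch: given the adversarially trained weights and the weights after fine-tuning the non-robust-critical module, candidate models are produced by linear interpolation. This sketch assumes floating-point state dicts (integer buffers such as batch-norm counters would need special handling) and omits RiFT's search over the interpolation coefficient.

```python
import torch

def interpolate_weights(at_state, ft_state, alpha):
    """Blend adversarially trained (AT) and fine-tuned (FT) weights.

    alpha = 0 recovers the AT model; alpha = 1 the fine-tuned one.
    The best alpha is found by evaluating robustness/accuracy trade-offs
    over a grid (search loop not shown here).
    """
    return {k: (1 - alpha) * at_state[k] + alpha * ft_state[k]
            for k in at_state}

# usage sketch:
# model.load_state_dict(interpolate_weights(at_sd, ft_sd, alpha=0.3))
```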
2410.00976 | Learning Chaotic Dynamics with Embedded Dissipativity | Chaotic dynamics, commonly seen in weather systems and fluid turbulence, are characterized by their sensitivity to initial conditions, which makes accurate prediction challenging. Despite this sensitivity to initial perturbations, many chaotic systems exhibit dissipative behavior and ergodicity. Therefore, various approaches have recently been proposed to develop data-driven models preserving invariant statistics over long horizons. Although these methods have shown empirical success in reducing instances of unbounded trajectory generation, many of the models are still prone to generating unbounded trajectories, leading to invalid statistics evaluation. In this paper, we propose a novel neural network architecture that simultaneously learns a dissipative dynamics emulator that is guaranteed to generate bounded trajectories and an energy-like function that governs the dissipative behavior. More specifically, by leveraging control-theoretic ideas, we derive algebraic conditions based on the learned energy-like function that ensure asymptotic convergence to an invariant level set. Using these algebraic conditions, our proposed model enforces dissipativity through a ReLU projection layer, which provides formal trajectory boundedness guarantees. Furthermore, the invariant level set provides an outer estimate for the strange attractor, which is known to be very difficult to characterize due to its complex geometry. We demonstrate the capability of our model in producing bounded long-horizon trajectory forecasts and characterizing the attractor for chaotic dynamical systems including Lorenz 96 and a truncated Kuramoto-Sivashinsky equation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 493,550
1711.08180 | Video Semantic Object Segmentation by Self-Adaptation of DCNN | This paper proposes a new framework for semantic segmentation of objects in videos. We address the label inconsistency problem of deep convolutional neural networks (DCNNs) by exploiting the fact that videos have multiple frames; in a few frames the object is confidently-estimated (CE) and we use the information in them to improve labels of the other frames. Given the semantic segmentation results of each frame obtained from DCNN, we sample several CE frames to adapt the DCNN model to the input video by focusing on specific instances in the video rather than general objects in various circumstances. We propose offline and online approaches under different supervision levels. In experiments our method achieved great improvement over the original model and previous state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,151 |
2411.06456 | Dropout the High-rate Downsampling: A Novel Design Paradigm for UHD
Image Restoration | With the popularization of high-end mobile devices, Ultra-high-definition (UHD) images have become ubiquitous in our lives. The restoration of UHD images is a highly challenging problem due to the exaggerated pixel count, which often leads to memory overflow during processing. Existing methods either downsample UHD images at a high rate before processing or split them into multiple patches for separate processing. However, high-rate downsampling leads to significant information loss, while patch-based approaches inevitably introduce boundary artifacts. In this paper, we propose a novel design paradigm to solve the UHD image restoration problem, called D2Net. D2Net enables direct full-resolution inference on UHD images without the need for high-rate downsampling or dividing the images into several patches. Specifically, we ingeniously utilize the characteristics of the frequency domain to establish long-range dependencies of features. Taking into account the richer local patterns in UHD images, we also design a multi-scale convolutional group to capture local features. Additionally, during the decoding stage, we dynamically incorporate features from the encoding stage to reduce the flow of irrelevant information. Extensive experiments on three UHD image restoration tasks, including low-light image enhancement, image dehazing, and image deblurring, show that our model achieves better quantitative and qualitative results than state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 507,124 |
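One common way to "utilize the characteristics of the frequency domain to establish long-range dependencies", as the abstract above puts it, is a learnable pointwise filter in Fourier space (in the style of GFNet-like blocks): a multiply in the spectrum couples every spatial location at roughly O(HW log HW) cost. The block below is a generic sketch of that idea with an assumed fixed input resolution, not D2Net's actual module.

```python
import torch
import torch.nn as nn

class FourierMixer(nn.Module):
    """Global feature mixing via a learnable spectral filter (sketch)."""
    def __init__(self, channels, height, width):
        super().__init__()
        # complex filter stored as (..., 2) real tensor for the Parameter
        self.filter = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x):                        # (B, C, H, W) real input
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum
        spec = spec * torch.view_as_complex(self.filter)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
```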
1909.07330 | Knowledge Discovery In Nanophotonics Using Geometric Deep Learning | We present here a new approach for using the intelligence aspects of artificial intelligence for knowledge discovery rather than device optimization in electromagnetic (EM) nanostructures. This approach uses training data obtained through full-wave EM simulations of a series of nanostructures to train geometric deep learning algorithms to assess the range of feasible responses as well as the feasibility of a desired response from a class of EM nanostructures. To facilitate the knowledge discovery and reduce the computation complexity, our approach combines the dimensionality reduction technique (using an autoencoder) with convex-hull and one-class support-vector-machine (SVM) algorithms to find the range of the feasible responses in the latent (or the reduced) response space of the EM nanostructure. We show that by using a small set of training instances (compared to all possible structures), our approach can provide better than 95% accuracy in assessing the feasibility of a given response. More importantly, the one-class SVM algorithm can be trained to provide the degree of feasibility (or unfeasibility) of a response from a given nanostructure. This important information can be used to modify the initial structure to an alternative one that can enable an initially unfeasible response. To show the applicability of our approach, we apply it to two important classes of binary metasurfaces (MSs), formed by array of plasmonic nanostructures, and periodic MSs formed by an array of dielectric nanopillars. In addition to theoretical results, we show the experimental results obtained by fabricating several MSs of the second class. Our theoretical and experimental results confirm the unique features of this approach for knowledge discovery in EM nanostructures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 145,647 |
2202.10806 | Stochastic Causal Programming for Bounding Treatment Effects | Causal effect estimation is important for many tasks in the natural and social sciences. We design algorithms for the continuous partial identification problem: bounding the effects of multivariate, continuous treatments when unmeasured confounding makes identification impossible. Specifically, we cast causal effects as objective functions within a constrained optimization problem, and minimize/maximize these functions to obtain bounds. We combine flexible learning algorithms with Monte Carlo methods to implement a family of solutions under the name of stochastic causal programming. In particular, we show how the generic framework can be efficiently formulated in settings where auxiliary variables are clustered into pre-treatment and post-treatment sets, where no fine-grained causal graph can be easily specified. In these settings, we can avoid the need for fully specifying the distribution family of hidden common causes. Monte Carlo computation is also much simplified, leading to algorithms which are more computationally stable against alternatives. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 281,672 |
2011.01973 | Greedy k-Center from Noisy Distance Samples | We study a variant of the canonical k-center problem over a set of vertices in a metric space, where the underlying distances are a priori unknown. Instead, we can query an oracle which provides noisy/incomplete estimates of the distance between any pair of vertices. We consider two oracle models: Dimension Sampling, where each query to the oracle returns the distance between a pair of points in one dimension; and Noisy Distance Sampling, where the oracle returns the true distance corrupted by noise. We propose active algorithms, based on ideas such as UCB, Thompson Sampling, and Track-and-Stop developed in the closely related Multi-Armed Bandit problem, which adaptively decide which queries to send to the oracle and are able to solve the k-center problem within an approximation ratio of two with high probability. We analytically characterize instance-dependent query complexity of our algorithms and also demonstrate significant improvements over naive implementations via numerical evaluations on two real-world datasets (Tiny ImageNet and UT Zappos50K). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 204,775
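A heavily simplified sketch of the UCB flavor of this idea: maintain running means of noisy "distance to the current centers" estimates, spend each round's query budget on the point whose upper confidence bound is highest, and greedily add the point with the largest estimated distance. The oracle signature, budget scheme, and confidence radius below are all assumptions, not the paper's exact algorithms.

```python
import numpy as np

def greedy_kcenter_noisy(oracle, n, k, budget=2000, sigma=1.0):
    """Greedy k-center from noisy distance samples (UCB-style sketch).

    oracle(i, j) returns dist(i, j) + zero-mean noise for vertices i, j.
    """
    centers = [0]                               # standard greedy init
    for _ in range(k - 1):
        mean, cnt = np.zeros(n), np.zeros(n)
        for t in range(1, budget + 1):
            with np.errstate(divide="ignore"):  # cnt == 0 -> inf bonus
                bonus = sigma * np.sqrt(2.0 * np.log(t + 1) / cnt)
            ucb = np.where(cnt > 0, mean + bonus, np.inf)
            ucb[centers] = -np.inf              # never re-query centers
            i = int(np.argmax(ucb))             # most promising far point
            d = min(oracle(i, c) for c in centers)  # one noisy sample
            mean[i] = (mean[i] * cnt[i] + d) / (cnt[i] + 1.0)
            cnt[i] += 1.0
        mean[centers] = -np.inf
        centers.append(int(np.argmax(mean)))    # farthest-point step
    return centers
```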
2206.04502 | What is a Good Metric to Study Generalization of Minimax Learners? | Minimax optimization has served as the backbone of many machine learning (ML) problems. Although the convergence behavior of optimization algorithms has been extensively studied in the minimax settings, their generalization guarantees in stochastic minimax optimization problems, i.e., how the solution trained on empirical data performs on unseen testing data, have been relatively underexplored. A fundamental question remains elusive: What is a good metric to study generalization of minimax learners? In this paper, we aim to answer this question by first showing that primal risk, a universal metric to study generalization in minimization problems, which has also been adopted recently to study generalization in minimax ones, fails in simple examples. We thus propose a new metric to study generalization of minimax learners: the primal gap, defined as the difference between the primal risk and its minimum over all models, to circumvent the issues. Next, we derive generalization error bounds for the primal gap in nonconvex-concave settings. As byproducts of our analysis, we also solve two open questions: establishing generalization error bounds for primal risk and primal-dual risk, another existing metric that is only well-defined when the global saddle-point exists, in the strong sense, i.e., without strong concavity or assuming that the maximization and expectation can be interchanged, while either of these assumptions was needed in the literature. Finally, we leverage this new metric to compare the generalization behavior of two popular algorithms -- gradient descent-ascent (GDA) and gradient descent-max (GDMax) in stochastic minimax optimization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,651 |
1801.08841 | FlashRL: A Reinforcement Learning Platform for Flash Games | Reinforcement Learning (RL) is a research area that has blossomed tremendously in recent years and has shown remarkable potential in, among other things, successfully playing computer games. However, only a few game platforms exist that provide the diversity in tasks and state space needed to advance RL algorithms. The existing platforms offer RL access to Atari and a few web-based games, but no platform fully exposes access to Flash games. This is unfortunate because applying RL to Flash games has the potential to push the research of RL algorithms forward. This paper introduces the Flash Reinforcement Learning platform (FlashRL), which attempts to fill this gap by providing an environment for thousands of Flash games on a novel platform for Flash automation. It opens up easy experimentation with RL algorithms for Flash games, which has previously been challenging. The platform shows excellent performance with as little as 5% CPU utilization on consumer hardware. It shows promising results for novel reinforcement learning algorithms. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 89,005
2309.15762 | Rapid Network Adaptation: Learning to Adapt Neural Networks Using
Test-Time Feedback | We propose a method for adapting neural networks to distribution shifts at test-time. In contrast to training-time robustness mechanisms that attempt to anticipate and counter the shift, we create a closed-loop system and make use of a test-time feedback signal to adapt a network on the fly. We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network. This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders of magnitude faster than the baselines. Through a broad set of experiments using various adaptation signals and target tasks, we study the efficiency and flexibility of this method. We perform the evaluations using various datasets (Taskonomy, Replica, ScanNet, Hypersim, COCO, ImageNet), tasks (depth, optical flow, semantic segmentation, classification), and distribution shifts (Cross-datasets, 2D and 3D Common Corruptions) with promising results. We end with a discussion on general formulations for handling distribution shifts and our observations from comparing with similar approaches from other domains. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 395,104 |
2304.12422 | Multi-Source to Multi-Target Decentralized Federated Domain Adaptation | Heterogeneity across devices in federated learning (FL) typically refers to statistical (e.g., non-i.i.d. data distributions) and resource (e.g., communication bandwidth) dimensions. In this paper, we focus on another important dimension that has received less attention: varying quantities/distributions of labeled and unlabeled data across devices. In order to leverage all data, we develop a decentralized federated domain adaptation methodology which considers the transfer of ML models from devices with high quality labeled data (called sources) to devices with low quality or unlabeled data (called targets). Our methodology, Source-Target Determination and Link Formation (ST-LF), optimizes both (i) classification of devices into sources and targets and (ii) source-target link formation, in a manner that considers the trade-off between ML model accuracy and communication energy efficiency. To obtain a concrete objective function, we derive a measurable generalization error bound that accounts for estimates of source-target hypothesis deviations and divergences between data distributions. The resulting optimization problem is a mixed-integer signomial program, a class of NP-hard problems, for which we develop an algorithm based on successive convex approximations to solve it tractably. Subsequent numerical evaluations of ST-LF demonstrate that it improves classification accuracy and energy efficiency over state-of-the-art baselines. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 360,199 |
2410.17872 | A Method to Reduce the Complexity of Computing the Complete Weight
Distribution of Polar Codes | The code spectrum of polar codes is crucial to the performance of polar codes. Based on the lower-triangular affine group (LTA) of decreasing monomial codes and the one-variable descendance (ovd) relation, we define a new subgroup of LTA which can find more cosets with the same weight distribution. Using this algebraic structure, we further reduce the complexity by proving that the group action on a coset set is transitive. Our method is an enhanced version of previous research, and the complexity in most cases can be reduced several-fold. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 501,647