id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2109.12398 | Channel State Information Based Localization with Deep Learning | Localization is one of the most important problems in fields such as robotics and wireless communications. For instance, Unmanned Aerial Vehicles (UAVs) require precise position information for an adequate control strategy. For outdoor applications, this problem is handled very efficiently by integrated GPS units. Indoor applications, however, require special treatment due to the unavailability of GPS signals. Another aspect of mobile robots such as UAVs is the constant wireless communication between the mobile robot and a computational unit, used mainly to obtain telemetry or to compute control actions directly. This transmission is handled by commercial wireless communication chipsets. On the receiver side, these chipsets remove the diverse effects of the communication channel using various mathematical techniques, which mainly require the Channel State Information (CSI) of the current channel to compensate for the channel itself. After compensation, the chipset discards the CSI. However, the locations of both the transmitter and the receiver have a direct impact on CSI. Even though CSI contains such rich information about the environment, access to these data is blocked by commercial wireless chipsets, since they are manufactured to provide only the processed data bits to the user. However, with the IEEE 802.11n standardization, certain chipsets provide access to CSI, so CSI data can be processed and integrated into localization schemes. In this project, a test environment was constructed for the localization task. Two routers with suitable chipsets were assigned as transmitter and receiver and used to collect CSI data. Lastly, these data were processed with various deep learning models. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 257,271 |
2008.03609 | Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length | The electrocardiogram (ECG) is the most widely used diagnostic tool to monitor the condition of the cardiovascular system. Deep neural networks (DNNs) have been developed in many research labs for automatic interpretation of ECG signals to identify potential abnormalities in patient hearts. Studies have shown that, given a sufficiently large amount of data, the classification accuracy of DNNs could reach human-expert cardiologist level. However, despite the excellent performance in classification accuracy, it has been shown that DNNs are highly vulnerable to adversarial noises: subtle changes to the input of a DNN that lead to a wrong class-label prediction with high confidence. Thus, it is challenging and essential to improve the robustness of DNNs against adversarial noises for ECG signal classification, a life-critical application. In this work, we designed a CNN for classification of 12-lead ECG signals with variable length, and we applied three defense methods to improve the robustness of this CNN for this classification task. The ECG data in this study are very challenging because the sample size is limited and the length of each ECG recording varies over a large range. The evaluation results show that our customized CNN reached a satisfying F1 score and average accuracy, comparable to the top-6 entries in the CPSC2018 ECG classification challenge, and that the defense methods enhanced the robustness of our CNN against adversarial noises and white noises, with a minimal reduction in accuracy on clean data. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 190,963 |
2104.02297 | Shapley Explanation Networks | Shapley values have become one of the most popular feature attribution explanation methods. However, most prior work has focused on post-hoc Shapley explanations, which can be computationally demanding due to their exponential time complexity and which preclude model regularization based on Shapley explanations during training. Thus, we propose to incorporate Shapley values themselves as latent representations in deep models, thereby making Shapley explanations first-class citizens in the modeling paradigm. This intrinsic explanation approach enables layer-wise explanations, explanation regularization of the model during training, and fast explanation computation at test time. We define the Shapley transform that transforms the input into a Shapley representation given a specific function. We operationalize the Shapley transform as a neural network module and construct both shallow and deep networks, called ShapNets, by composing Shapley modules. We prove that our Shallow ShapNets compute the exact Shapley values and our Deep ShapNets maintain the missingness and accuracy properties of Shapley values. We demonstrate on synthetic and real-world datasets that our ShapNets enable layer-wise Shapley explanations, novel Shapley regularizations during training, and fast computation while maintaining reasonable performance. Code is available at https://github.com/inouye-lab/ShapleyExplanationNetworks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 228,672 |
2306.06479 | Learning a Neuron by a Shallow ReLU Network: Dynamics and Implicit Bias for Correlated Inputs | We prove that, for the fundamental regression task of learning a single neuron, training a one-hidden layer ReLU network of any width by gradient flow from a small initialisation converges to zero loss and is implicitly biased to minimise the rank of network parameters. By assuming that the training points are correlated with the teacher neuron, we complement previous work that considered orthogonal datasets. Our results are based on a detailed non-asymptotic analysis of the dynamics of each hidden neuron throughout the training. We also show and characterise a surprising distinction in this setting between interpolator networks of minimal rank and those of minimal Euclidean norm. Finally we perform a range of numerical experiments, which corroborate our theoretical findings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 372,620 |
2403.18600 | RAP: Retrieval-Augmented Planner for Adaptive Procedure Planning in Instructional Videos | Procedure planning in instructional videos entails generating a sequence of action steps based on visual observations of the initial and target states. Despite the rapid progress in this task, there remain several critical challenges to be solved: (1) Adaptive procedures: Prior works hold an unrealistic assumption that the number of action steps is known and fixed, leading to non-generalizable models in real-world scenarios where the sequence length varies. (2) Temporal relation: Understanding the step temporal relation knowledge is essential in producing reasonable and executable plans. (3) Annotation cost: Annotating instructional videos with step-level labels (i.e., timestamp) or sequence-level labels (i.e., action category) is demanding and labor-intensive, limiting its generalizability to large-scale datasets. In this work, we propose a new and practical setting, called adaptive procedure planning in instructional videos, where the procedure length is not fixed or pre-determined. To address these challenges, we introduce the Retrieval-Augmented Planner (RAP) model. Specifically, for adaptive procedures, RAP adaptively determines the conclusion of actions using an auto-regressive model architecture. For temporal relation, RAP establishes an external memory module to explicitly retrieve the most relevant state-action pairs from the training videos and revises the generated procedures. To tackle the high annotation cost, RAP utilizes a weakly-supervised learning approach to expand the training dataset to other task-relevant, unannotated videos by generating pseudo labels for action steps. Experiments on the CrossTask and COIN benchmarks show the superiority of RAP over traditional fixed-length models, establishing it as a strong baseline solution for adaptive procedure planning. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 442,007 |
1402.4834 | The Application of Imperialist Competitive Algorithm for Fuzzy Random Portfolio Selection Problem | This paper presents an implementation of the Imperialist Competitive Algorithm (ICA) for solving the fuzzy random portfolio selection problem, where the asset returns are represented by fuzzy random variables. Portfolio optimization is an important research field in modern finance. Using the necessity-based model, the fuzzy random problem is reformulated as a linear program, and ICA is designed to find the optimum solution. To show the efficiency of the proposed method, a numerical example illustrates the implementation of ICA for the fuzzy random portfolio selection problem. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 30,997 |
2404.08709 | $F_\beta$-plot -- a visual tool for evaluating imbalanced data classifiers | One of the significant problems associated with imbalanced data classification is the lack of reliable metrics. This stems primarily from the fact that for most real-life (as well as commonly used benchmark) problems, we do not have information from the user on the actual form of the loss function that should be minimized. Although it is fairly common to have metrics indicating the classification quality within each class, the end user must then analyze several such metrics, which in practice makes it difficult to interpret the usefulness of a given classifier. Hence, many aggregate metrics have been proposed or adopted for the imbalanced data classification problem, but there is still no consensus on which should be used. An additional disadvantage is their ambiguity and systematic bias toward one class. Moreover, using them to analyze experimental results and to identify classification models that perform well on the chosen aggregate metrics is burdened with the drawbacks mentioned above. Hence, the paper proposes a simple approach to analyzing the popular parametric metric $F_\beta$. We point out that it is possible to indicate for a given pool of analyzed classifiers when a given model should be preferred depending on user requirements. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 446,371 |
2209.00860 | Real-time 3D Single Object Tracking with Transformer | LiDAR-based 3D single object tracking is a challenging problem in robotics and autonomous driving. Existing approaches usually suffer from the problem that objects at long distance often have very sparse or partially-occluded point clouds, which makes the features extracted by the model ambiguous. Ambiguous features make it hard to locate the target object and finally lead to bad tracking results. To solve this problem, we utilize the powerful Transformer architecture and propose a Point-Track-Transformer (PTT) module for the point cloud-based 3D single object tracking task. Specifically, the PTT module generates fine-tuned attention features by computing attention weights, which guide the tracker to focus on the important features of the target and improve its tracking ability in complex scenarios. To evaluate our PTT module, we embed PTT into the dominant method and construct a novel 3D SOT tracker named PTT-Net. In PTT-Net, we embed PTT into the voting stage and proposal generation stage, respectively. The PTT module in the voting stage models the interactions among point patches, which learns context-dependent features. Meanwhile, the PTT module in the proposal generation stage captures the contextual information between object and background. We evaluate our PTT-Net on the KITTI and NuScenes datasets. Experimental results demonstrate the effectiveness of the PTT module and the superiority of PTT-Net, which surpasses the baseline by a noticeable margin, ~10% in the Car category. Meanwhile, our method also has a significant performance improvement in sparse scenarios. In general, the combination of transformer and tracking pipeline enables our PTT-Net to achieve state-of-the-art performance on both datasets. Additionally, PTT-Net runs in real time at 40 FPS on an NVIDIA 1080Ti GPU. Our code is open-sourced for the research community at https://github.com/shanjiayao/PTT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 315,704 |
1409.4813 | Identification of core-periphery structure in networks | Many networks can be usefully decomposed into a dense core plus an outlying, loosely-connected periphery. Here we propose an algorithm for performing such a decomposition on empirical network data using methods of statistical inference. Our method fits a generative model of core-periphery structure to observed data using a combination of an expectation--maximization algorithm for calculating the parameters of the model and a belief propagation algorithm for calculating the decomposition itself. We find the method to be efficient, scaling easily to networks with a million or more nodes, and we test it on a range of networks, including real-world examples as well as computer-generated benchmarks, for which it successfully identifies known core-periphery structure with low error rate. We also demonstrate that the method is immune from the detectability transition observed in the related community detection problem, which prevents the detection of community structure when that structure is too weak. There is no such transition for core-periphery structure, which is detectable, albeit with some statistical error, no matter how weak it is. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 36,110 |
2207.11583 | Boosting the Efficiency of Parametric Detection with Hierarchical Neural Networks | Gravitational wave astronomy is a vibrant field that leverages both classic and modern data processing techniques for the understanding of the universe. Various approaches have been proposed for improving the efficiency of the detection scheme, with hierarchical matched filtering being an important strategy. Meanwhile, deep learning methods have recently demonstrated both consistency with matched filtering methods and remarkable statistical performance. In this work, we propose Hierarchical Detection Network (HDN), a novel approach to efficient detection that combines ideas from hierarchical matching and deep learning. The network is trained using a novel loss function, which encodes simultaneously the goals of statistical accuracy and efficiency. We discuss the source of complexity reduction of the proposed model, and describe a general recipe for initialization with each layer specializing in different regions. We demonstrate the performance of HDN with experiments using open LIGO data and synthetic injections, and observe with two-layer models a $79\%$ efficiency gain compared with matched filtering at an equal error rate of $0.2\%$. Furthermore, we show how training a three-layer HDN initialized using two-layer model can further boost both accuracy and efficiency, highlighting the power of multiple simple layers in efficient detection. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 309,700 |
2012.08824 | Learning to Run with Potential-Based Reward Shaping and Demonstrations from Video Data | Learning to produce efficient movement behaviour for humanoid robots from scratch is a hard problem, as has been illustrated by the "Learning to run" competition at NIPS 2017. The goal of this competition was to train a two-legged model of a humanoid body to run in a simulated race course with maximum speed. All submissions took a tabula rasa approach to reinforcement learning (RL) and were able to produce relatively fast, but not optimal running behaviour. In this paper, we demonstrate how data from videos of human running (e.g. taken from YouTube) can be used to shape the reward of the humanoid learning agent to speed up the learning and produce a better result. Specifically, we are using the positions of key body parts at regular time intervals to define a potential function for potential-based reward shaping (PBRS). Since PBRS does not change the optimal policy, this approach allows the RL agent to overcome sub-optimalities in the human movements that are shown in the videos. We present experiments in which we combine selected techniques from the top ten approaches from the NIPS competition with further optimizations to create a high-performing agent as a baseline. We then demonstrate how video-based reward shaping improves the performance further, resulting in an RL agent that runs twice as fast as the baseline in 12 hours of training. We furthermore show that our approach can overcome sub-optimal running behaviour in videos, with the learned policy significantly outperforming that of the running agent from the video. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 211,879 |
1707.08208 | Robust Detection of Random Events with Spatially Correlated Data in Wireless Sensor Networks via Distributed Compressive Sensing | In this paper, we exploit the theory of compressive sensing to perform detection of a random source in a dense sensor network. When the sensors are densely deployed, observations at adjacent sensors are highly correlated while those corresponding to distant sensors are less correlated. Thus, the covariance matrix of the concatenated observation vector of all the sensors at any given time can be sparse where the sparse structure depends on the network topology and the correlation model. Exploiting the sparsity structure of the covariance matrix, we develop a robust nonparametric detector to detect the presence of the random event using a compressed version of the data collected at the distributed nodes. We employ the multiple access channel (MAC) model with distributed random projections for sensors to transmit observations so that a compressed version of the observations is available at the fusion center. Detection is performed by constructing a decision statistic based on the covariance information of uncompressed data which is estimated using compressed data. The proposed approach does not require any knowledge of the noise parameter to set the threshold, and is also robust when the distributed random projection matrices become sparse. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,775 |
1103.2544 | Almost-perfect secret sharing | Splitting a secret s between several participants, we generate (for each value of s) shares for all participants. The goal: authorized groups of participants should be able to reconstruct the secret, but forbidden ones get no information about it. In this paper we introduce several notions of non-perfect secret sharing, where some small information leak is permitted. We study its relation to the Kolmogorov complexity version of secret sharing (establishing some connection in both directions) and the effects of changing the secret size (showing that we can decrease the size of the secret and the information leak at the same time). | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 9,592 |
2410.03693 | Linear Independence of Generalized Neurons and Related Functions | The linear independence of neurons plays a significant role in the theoretical analysis of neural networks. Specifically, given neurons $H_1, ..., H_n: \mathbb{R}^N \times \mathbb{R}^d \to \mathbb{R}$, we are interested in the following question: when are $\{H_1(\theta_1, \cdot), ..., H_n(\theta_n, \cdot)\}$ linearly independent as the parameters $\theta_1, ..., \theta_n$ of these functions vary over $\mathbb{R}^N$? Previous works give a complete characterization of two-layer neurons without bias, for generic smooth activation functions. In this paper, we study the problem for neurons with arbitrary layers and widths, giving a simple but complete characterization for generic analytic activation functions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 494,902 |
2307.02472 | Deductive Additivity for Planning of Natural Language Proofs | Current natural language systems designed for multi-step claim validation typically operate in two phases: retrieve a set of relevant premise statements using heuristics (planning), then generate novel conclusions from those statements using a large language model (deduction). The planning step often requires expensive Transformer operations and does not scale to arbitrary numbers of premise statements. In this paper, we investigate whether an efficient planning heuristic is possible via embedding spaces compatible with deductive reasoning. Specifically, we evaluate whether embedding spaces exhibit a property we call deductive additivity: the sum of premise statement embeddings should be close to embeddings of conclusions based on those premises. We explore multiple sources of off-the-shelf dense embeddings in addition to fine-tuned embeddings from GPT3 and sparse embeddings from BM25. We study embedding models both intrinsically, evaluating whether the property of deductive additivity holds, and extrinsically, using them to assist planning in natural language proof generation. Lastly, we create a dataset, Single-Step Reasoning Contrast (SSRC), to further probe performance on various reasoning types. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being effective heuristics and lack the ability to model certain categories of reasoning. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 377,703 |
1803.06959 | On the importance of single directions for generalization | Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network's reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 92,937 |
2105.06138 | Unsupervised Hashing with Contrastive Information Bottleneck | Many unsupervised hashing methods are implicitly established on the idea of reconstructing the input data, which basically encourages the hashing codes to retain as much information of the original data as possible. However, this requirement may force the models to spend much of their effort on reconstructing useless background information, while failing to preserve the discriminative semantic information that is more important for the hashing task. To tackle this problem, inspired by the recent success of contrastive learning in learning continuous representations, we propose to adapt this framework to learn binary hashing codes. Specifically, we first propose to modify the objective function to meet the specific requirement of hashing and then introduce a probabilistic binary representation layer into the model to facilitate end-to-end training of the entire model. We further prove the strong connection between the proposed contrastive-learning-based hashing method and the mutual information, and show that the proposed model can be considered under the broader framework of the information bottleneck (IB). Under this perspective, a more general hashing model is naturally obtained. Extensive experimental results on three benchmark image datasets demonstrate that the proposed hashing method significantly outperforms existing baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 235,034 |
2404.10124 | Epistemic Uncertainty Quantification For Pre-trained Neural Network | Epistemic uncertainty quantification (UQ) identifies where models lack knowledge. Traditional UQ methods, often based on Bayesian neural networks, are not suitable for pre-trained non-Bayesian models. Our study addresses quantifying epistemic uncertainty for any pre-trained model, which does not need the original training data or model modifications and can ensure broad applicability regardless of network architectures or training techniques. Specifically, we propose a gradient-based approach to assess epistemic uncertainty, analyzing the gradients of outputs relative to model parameters, and thereby indicating necessary model adjustments to accurately represent the inputs. We first explore theoretical guarantees of gradient-based methods for epistemic UQ, questioning the view that this uncertainty is only calculable through differences between multiple models. We further improve gradient-driven UQ by using class-specific weights for integrating gradients and emphasizing distinct contributions from neural network layers. Additionally, we enhance UQ accuracy by combining gradient and perturbation methods to refine the gradients. We evaluate our approach on out-of-distribution detection, uncertainty calibration, and active learning, demonstrating its superiority over current state-of-the-art UQ methods for pre-trained models. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 446,957 |
2407.05415 | DIVESPOT: Depth Integrated Volume Estimation of Pile of Things Based on Point Cloud | Non-contact volume estimation of pile-type objects has considerable potential in industrial scenarios, including grain, coal, mining, and stone materials. However, using existing methods in these scenarios is challenged by unstable measurement poses, significant light interference, the difficulty of training data collection, and the computational burden brought by large piles. To address the above issues, we propose Depth Integrated Volume EStimation of Pile Of Things (DIVESPOT) based on point cloud technology in this study. For the challenge of unstable measurement poses, a point cloud pose correction and filtering algorithm is designed based on Random Sample Consensus (RANSAC) and Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN). To cope with light interference and to avoid relying on training data, a height-distribution-based ground feature extraction algorithm is proposed to achieve RGB-independence. To reduce the computational burden, a storage-space optimization strategy is developed, such that accurate estimation can be acquired using compressed voxels. Experimental results demonstrate that the DIVESPOT method enables non-data-driven, RGB-independent segmentation of pile point clouds, maintaining a volume calculation relative error within 2%. Even with 90% compression of the voxel mesh, the average error of the results can be under 3%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,966 |
2110.06490 | Dict-BERT: Enhancing Language Model Pre-training with Dictionary | Pre-trained language models (PLMs) aim to learn universal language representations by conducting self-supervised training tasks on large-scale corpora. Since PLMs capture word semantics in different contexts, the quality of word representations highly depends on word frequency, which usually follows a heavy-tailed distribution in the pre-training corpus. Therefore, the embeddings of rare words on the tail are usually poorly optimized. In this work, we focus on enhancing language model pre-training by leveraging definitions of the rare words in dictionaries (e.g., Wiktionary). To incorporate a rare word definition as a part of input, we fetch its definition from the dictionary and append it to the end of the input text sequence. In addition to training with the masked language modeling objective, we propose two novel self-supervised pre-training tasks on word and sentence-level alignment between the input text sequence and rare word definitions to enhance language modeling representation with the dictionary. We evaluate the proposed Dict-BERT model on the language understanding benchmark GLUE and eight specialized domain benchmark datasets. Extensive experiments demonstrate that Dict-BERT can significantly improve the understanding of rare words and boost model performance on various NLP downstream tasks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 260,642 |
2305.02211 | Influence zones for continuous beam systems | Unlike influence lines, the concept of influence zones is remarkably absent within the field of structural engineering, despite its existence in the closely related domain of geotechnics. This paper proposes the novel concept of a structural influence zone in relation to continuous beam systems and explores its size numerically with various design constraints applicable to steel framed buildings. The key challenge involves explicitly defining the critical load arrangements, and is tackled by using the novel concepts of polarity sequences and polarity zones. These lead to the identification of flexural and (discovery of) shear load arrangements, with an equation demarcating when the latter arises. After developing algorithms that help identify both types of critical load arrangements, design data sets are generated and the influence zone values are extracted. The results indicate that the influence zone under ultimate state considerations is typically less than 3, rising to a maximum size of 5 adjacent members for any given continuous beam. Additional insights from the influence zone concept, specifically in comparison to influence lines, are highlighted, and the avenues for future research, such as in relation to the newly identified shear load arrangements, are discussed. | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 361,952 |
2306.15114 | Transfer: Cross Modality Knowledge Transfer using Adversarial Networks -- A Study on Gesture Recognition | Knowledge transfer across sensing technology is a novel concept that has been recently explored in many application domains, including gesture-based human computer interaction. The main aim is to gather semantic or data driven information from a source technology to classify / recognize instances of unseen classes in the target technology. The primary challenge is the significant difference in dimensionality and distribution of feature sets between the source and the target technologies. In this paper, we propose TRANSFER, a generic framework for knowledge transfer between a source and a target technology. TRANSFER uses a language-based representation of a hand gesture, which captures a temporal combination of concepts such as handshape, location, and movement that are semantically related to the meaning of a word. By utilizing a pre-specified syntactic structure and tokenizer, TRANSFER segments a hand gesture into tokens and identifies individual components using a token recognizer. The tokenizer in this language-based recognition system abstracts the low-level technology-specific characteristics to the machine interface, enabling the design of a discriminator that learns technology-invariant features essential for recognition of gestures in both source and target technologies. We demonstrate the usage of TRANSFER for three different scenarios: a) transferring knowledge across technology by learning gesture models from video and recognizing gestures using WiFi, b) transferring knowledge from video to accelerometer, and c) transferring knowledge from accelerometer to WiFi signals. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 375,912
1711.01921 | $A^{4}NT$: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation | Text-based analysis methods allow to reveal privacy relevant author attributes such as gender, age and identity of the text's author. Such methods can compromise the privacy of an anonymous author even when the author tries to remove privacy sensitive content. In this paper, we propose an automatic method, called Adversarial Author Attribute Anonymity Neural Translation ($A^4NT$), to combat such text-based adversaries. We combine sequence-to-sequence language models used in machine translation and generative adversarial networks to obfuscate author attributes. Unlike machine translation techniques which need paired data, our method can be trained on unpaired corpora of text containing different authors. Importantly, we propose and evaluate techniques to impose constraints on our $A^4NT$ to preserve the semantics of the input text. $A^4NT$ learns to make minimal changes to the input text to successfully fool author attribute classifiers, while aiming to maintain the meaning of the input. We show through experiments on two different datasets and three settings that our proposed method is effective in fooling the author attribute classifiers and thereby improving the anonymity of authors. | false | false | false | true | false | false | false | false | true | false | false | false | true | true | false | false | false | false | 83,974
1703.05522 | Treating Smoothness and Balance during Data Exchange in Explicit Simulator Coupling or Cosimulation | Cosimulation methods allow combination of simulation tools of physical systems running in parallel to act as a single simulation environment for a big system. As data is passed across subsystem boundaries instead of solving the system as one single equation system, it is not ensured that systemwide balances are fulfilled. If the exchanged data is a flow of a conserved quantity, approximation errors can accumulate and make simulation results inaccurate. The problem of approximation errors is typically addressed with extrapolation of exchanged data. Nevertheless, balance errors still occur, since extrapolation is only an approximation. This problem can be handled with balance correction methods which compensate these errors by adding corrections for the balances to the signal in the next coupling time step. This work aims at combining extrapolation of exchanged data and balance correction in a way that the exchanged signal not only remains smooth, meaning the existence of continuous derivatives, but also has reduced derivatives, in order to avoid unphysical dynamics caused by the coupling. To this end, suitable switch and hat functions are constructed and applied to the problem. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 70,096
2404.12774 | Recent Advancements in Battery State of Power Estimation Technology: A Comprehensive Overview and Error Source Analysis | Accurate state of power (SOP) estimation is of great importance for lithium-ion batteries in safety-critical and power-intensive applications for electric vehicles. This review article delves deeply into the entire development flow of current SOP estimation technology, offering a systematic breakdown of all key aspects with their recent advancements. First, we review the design of battery safe operation area, summarizing diverse limitation factors and furnishing a profound comprehension of battery safety across a broad operational scale. Second, we illustrate the unique discharge and charge characteristics of various peak operation modes, such as constant current, constant voltage, constant current-constant voltage, and constant power, and explore their impacts on battery peak power performance. Third, we extensively survey the aspects of battery modelling and algorithm development in current SOP estimation technology, highlighting their technical contributions and specific considerations. Fourth, we present an in-depth dissection of all error sources to unveil their propagation pathways, providing insightful analysis into how each type of error impacts the SOP estimation performance. Finally, the technical challenges and complexities inherent in this field of research are addressed, suggesting potential directions for future development. Our goal is to inspire further efforts towards developing more accurate and intelligent SOP estimation technology for next-generation battery management systems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 448,028
2012.00780 | Refining Deep Generative Models via Discriminator Gradient Flow | Deep generative modeling has seen impressive advances in recent years, to the point where it is now commonplace to see simulated samples (e.g., images) that closely resemble real-world data. However, generation quality is generally inconsistent for any given model and can vary dramatically between samples. We introduce Discriminator Gradient flow (DGflow), a new technique that improves generated samples via the gradient flow of entropy-regularized f-divergences between the real and the generated data distributions. The gradient flow takes the form of a non-linear Fokker-Planck equation, which can be easily simulated by sampling from the equivalent McKean-Vlasov process. By refining inferior samples, our technique avoids wasteful sample rejection used by previous methods (DRS & MH-GAN). Compared to existing works that focus on specific GAN variants, we show our refinement approach can be applied to GANs with vector-valued critics and even other deep generative models such as VAEs and Normalizing Flows. Empirical results on multiple synthetic, image, and text datasets demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 209,235
2407.02281 | On the Linearization of Optimal Rates for Independent Zero-Error Source and Channel Problems | Zero-error coding encompasses a variety of source and channel problems where the probability of error must be exactly zero. This condition is stricter than that of the vanishing error regime, where the error probability goes to zero as the code blocklength goes to infinity. In general, zero-error coding is an open combinatorial question. We investigate two unsolved zero-error problems: the source coding problem with side information and the channel coding problem. We focus our attention on families of independent problems for which the distribution decomposes into a product of distributions, corresponding to solved zero-error problems. A crucial step is the linearization property of the optimal rate which does not always hold in the zero-error regime, unlike in the vanishing error regime. We derive a condition under which the linearization properties of the complementary graph entropy $\overline{H}$ for the AND product of graph and for the disjoint union of graphs are equivalent. Then we establish the connection with a recent result obtained by Wigderson and Zuiddam and by Schrijver, for the zero-error capacity $C_0$. As a consequence, we provide new single-letter characterizations of $\overline{H}$ and $C_0$, for example when the graph is a product of perfect graphs, which is not perfect in general, and for the class of graphs obtained by the product of a perfect graph $G$ with the pentagon graph $C_5$. By building on Haemers result for $C_0$, we also show that the linearization of $\overline{H}$ does not hold for the product of the Schl\"{a}fli graph with its complementary graph. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 469,662
1809.01478 | Weakly-Supervised Neural Text Classification | Deep neural networks are gaining increasing popularity for the classic text classification task, due to their strong expressive power and reduced need for feature engineering. Despite such attractiveness, neural text classification models suffer from the lack of training data in many real-world applications. Although many semi-supervised and weakly-supervised text classification models exist, they cannot be easily applied to deep neural models and meanwhile support only limited supervision types. In this paper, we propose a weakly-supervised method that addresses the lack of training data in neural text classification. Our method consists of two modules: (1) a pseudo-document generator that leverages seed information to generate pseudo-labeled documents for model pre-training, and (2) a self-training module that bootstraps on real unlabeled data for model refinement. Our method has the flexibility to handle different types of weak supervision and can be easily integrated into existing deep neural models for text classification. We have performed extensive experiments on three real-world datasets from different domains. The results demonstrate that our proposed method achieves inspiring performance without requiring excessive training data and outperforms baseline methods significantly. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 106,814
2309.01745 | Benchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation | Simulating turbulent flows is crucial for a wide range of applications, and machine learning-based solvers are gaining increasing relevance. However, achieving temporal stability when generalizing to longer rollout horizons remains a persistent challenge for learned PDE solvers. In this work, we analyze if fully data-driven fluid solvers that utilize an autoregressive rollout based on conditional diffusion models are a viable option to address this challenge. We investigate accuracy, posterior sampling, spectral behavior, and temporal stability, while requiring that methods generalize to flow parameters beyond the training regime. To quantitatively and qualitatively benchmark the performance of various flow prediction approaches, three challenging 2D scenarios including incompressible and transonic flows, as well as isotropic turbulence are employed. We find that even simple diffusion-based approaches can outperform multiple established flow prediction methods in terms of accuracy and temporal stability, while being on par with state-of-the-art stabilization techniques like unrolling at training time. Such traditional architectures are superior in terms of inference speed, however, the probabilistic nature of diffusion approaches allows for inferring multiple predictions that align with the statistics of the underlying physics. Overall, our benchmark contains three carefully chosen data sets that are suitable for probabilistic evaluation alongside various established flow prediction architectures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 389,786
2006.06081 | Ergodic Specifications for Flexible Swarm Control: From User Commands to Persistent Adaptation | This paper presents a formulation for swarm control and high-level task planning that is dynamically responsive to user commands and adaptable to environmental changes. We design an end-to-end pipeline from a tactile tablet interface for user commands to onboard control of robotic agents based on decentralized ergodic coverage. Our approach demonstrates reliable and dynamic control of a swarm collective through the use of ergodic specifications for planning and executing agent trajectories as well as responding to user and external inputs. We validate our approach in a virtual reality simulation environment and in real-world experiments at the DARPA OFFSET Urban Swarm Challenge FX3 field tests with a robotic swarm where user-based control of the swarm and mission-based tasks require a dynamic and flexible response to changing conditions and objectives in real-time. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 181,301
1212.6478 | The degrees of freedom of the Group Lasso for a General Design | In this paper, we are concerned with regression problems where covariates can be grouped in nonoverlapping blocks, and where only a few of them are assumed to be active. In such a situation, the group Lasso is an attractive method for variable selection since it promotes sparsity of the groups. We study the sensitivity of any group Lasso solution to the observations and provide its precise local parameterization. When the noise is Gaussian, this allows us to derive an unbiased estimator of the degrees of freedom of the group Lasso. This result holds true for any fixed design, no matter whether it is under- or overdetermined. With these results at hand, various model selection criteria, such as the Stein Unbiased Risk Estimator (SURE), are readily available which can provide an objectively guided choice of the optimal group Lasso fit. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,647
2102.05143 | Classifier Calibration: with application to threat scores in cybersecurity | This paper explores the calibration of a classifier output score in binary classification problems. A calibrator is a function that maps the arbitrary classifier score, of a testing observation, onto $[0,1]$ to provide an estimate for the posterior probability of belonging to one of the two classes. Calibration is important for two reasons; first, it provides a meaningful score, that is the posterior probability; second, it puts the scores of different classifiers on the same scale for comparable interpretation. The paper presents three main contributions: (1) Introducing multi-score calibration, when more than one classifier provides a score for a single observation. (2) Introducing the idea that the classifier scores to a calibration process are nothing but features to a classifier, hence proposing expanding the classifier scores to higher dimensions to boost the calibrator's performance. (3) Conducting a massive simulation study, in the order of 24,000 experiments, that incorporates different configurations, in addition to experimenting on two real datasets from the cybersecurity domain. The results show that there is no overall winner among the different calibrators and different configurations. However, general advice for practitioners includes the following: the Platt's calibrator~\citep{Platt1999ProbabilisticOutputsForSupport}, a version of the logistic regression that decreases bias for a small sample size, has a very stable and acceptable performance among all experiments; our suggested multi-score calibration provides better performance than single score calibration in the majority of experiments, including the two real datasets. In addition, expanding the scores can help in some experiments. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 219,334
2406.08479 | Real3D: Scaling Up Large Reconstruction Models with Real-World Images | The default strategy for training single-view Large Reconstruction Models (LRMs) follows the fully supervised route using large-scale datasets of synthetic 3D assets or multi-view captures. Although these resources simplify the training procedure, they are hard to scale up beyond the existing datasets and they are not necessarily representative of the real distribution of object shapes. To address these limitations, in this paper, we introduce Real3D, the first LRM system that can be trained using single-view real-world images. Real3D introduces a novel self-training framework that can benefit from both the existing synthetic data and diverse single-view real images. We propose two unsupervised losses that allow us to supervise LRMs at the pixel- and semantic-level, even for training examples without ground-truth 3D or novel views. To further improve performance and scale up the image data, we develop an automatic data curation approach to collect high-quality examples from in-the-wild images. Our experiments show that Real3D consistently outperforms prior work in four diverse evaluation settings that include real and synthetic data, as well as both in-domain and out-of-domain shapes. Code and model can be found here: https://hwjiang1510.github.io/Real3D/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 463,512 |
2111.09251 | A fast solver for the pseudo-two-dimensional model of lithium-ion batteries | The pseudo-two-dimensional (P2D) model is a complex mathematical model that can capture the electrochemical processes in Li-ion batteries. However, the model also brings a heavy computational burden. Many simplifications to the model have been introduced in the literature to reduce the complexity. We present a method for fast computation of the P2D model which can be used when simplifications are not accurate enough. By rearranging the calculations, we reduce the complexity of the linear algebra problem. We also employ automatic differentiation, using an open source package JAX for robustness, while also allowing easy implementation of changes to coefficient expressions. The method alleviates the computational bottleneck in P2D models without compromising accuracy. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 266,956
2306.03314 | Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents | In this paper, we present a novel framework for enhancing the capabilities of large language models (LLMs) by leveraging the power of multi-agent systems. Our framework introduces a collaborative environment where multiple intelligent agent components, each with distinctive attributes and roles, work together to handle complex tasks more efficiently and effectively. We demonstrate the practicality and versatility of our framework through case studies in artificial general intelligence (AGI), specifically focusing on the Auto-GPT and BabyAGI models. We also examine the "Gorilla" model, which integrates external APIs into the LLM. Our framework addresses limitations and challenges such as looping issues, security risks, scalability, system evaluation, and ethical considerations. By modeling various domains such as courtroom simulations and software development scenarios, we showcase the potential applications and benefits of our proposed multi-agent system. Our framework provides an avenue for advancing the capabilities and performance of LLMs through collaboration and knowledge exchange among intelligent agents. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 371,270
2304.08998 | Method for Comparison of Surrogate Safety Measures in Multi-Vehicle Scenarios | With the race towards higher levels of automation in vehicles, it is imperative to guarantee the safety of all involved traffic participants. Yet, while high-risk traffic situations between two vehicles are well understood, traffic situations involving more vehicles lack the tools to be properly analyzed. This paper proposes a method to compare Surrogate Safety Measures values in highway multi-vehicle traffic situations such as lane-changes that involve three vehicles. This method allows for a comprehensive statistical analysis and highlights how the safety distance between vehicles is shifted in favor of the traffic conflict between the leading vehicle and the lane-changing vehicle. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 358,903
2206.03420 | An Adaptive Federated Relevance Framework for Spatial Temporal Graph Learning | Spatial-temporal data contains rich information and has been widely studied in recent years due to the rapid development of relevant applications in many fields. For instance, medical institutions often use electrodes attached to different parts of a patient to analyse the electroencephalography data rich with spatial and temporal features for health assessment and disease diagnosis. Existing research has mainly used deep learning techniques such as convolutional neural network (CNN) or recurrent neural network (RNN) to extract hidden spatial-temporal features. Yet, it is challenging to incorporate both inter-dependent spatial information and dynamic temporal changes simultaneously. In reality, for a model that leverages these spatial-temporal features to fulfil complex prediction tasks, it often requires a colossal amount of training data in order to obtain satisfactory model performance. Considering the above-mentioned challenges, we propose an adaptive federated relevance framework, namely FedRel, for spatial-temporal graph learning in this paper. After transforming the raw spatial-temporal data into high quality features, the core Dynamic Inter-Intra Graph (DIIG) module in the framework is able to use these features to generate the spatial-temporal graphs capable of capturing the hidden topological and long-term temporal correlation information in these graphs. To improve the model generalization ability and performance while preserving the local data privacy, we also design a relevance-driven federated learning module in our framework to leverage diverse data distributions from different participants with attentive aggregations of their models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,278
1204.0100 | Roles of Ties in Spreading | Background: Controlling global epidemics in the real world and accelerating information propagation in the artificial world are of great significance, which have activated an upsurge in the studies on networked spreading dynamics. Lots of efforts have been made to understand the impacts of macroscopic statistics (e.g., degree distribution and average distance) and mesoscopic structures (e.g., communities and rich clubs) on spreading processes while the microscopic elements are less concerned. In particular, roles of ties are not yet clear to the academic community. Methodology/Principal Findings: Every edge is stamped with its strength that is defined solely based on the local topology. According to a weighted susceptible-infected-susceptible model, the steady-state infected density and spreading speed are respectively optimized by adjusting the relationship between edge's strength and spreading ability. Experiments on six real networks show that the infected density is increased when strong ties are favored in the spreading, while the speed is enhanced when weak ties are favored. Significance of these findings is further demonstrated by comparing with a null model. Conclusions/Significance: Experimental results indicate that strong and weak ties play distinguishable roles in spreading dynamics: the former enlarge the infected density while the latter fasten the process. The proposed method provides a quantitative way to reveal the qualitatively different roles of ties, which could find applications in analyzing many networked dynamical processes with multiple performance indices, such as synchronizability and converging time in synchronization and throughput and delivering time in transportation. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 15,207
2403.12641 | Automated Contrastive Learning Strategy Search for Time Series | In recent years, Contrastive Learning (CL) has become a predominant representation learning paradigm for time series. Most existing methods manually build specific CL Strategies (CLS) by human heuristics for certain datasets and tasks. However, manually developing CLS usually requires excessive prior knowledge about the data, and massive experiments to determine the detailed CL configurations. In this paper, we present an Automated Machine Learning (AutoML) practice at Microsoft, which automatically learns CLS for time series datasets and tasks, namely Automated Contrastive Learning (AutoCL). We first construct a principled search space of size over $3\times10^{12}$, covering data augmentation, embedding transformation, contrastive pair construction, and contrastive losses. Further, we introduce an efficient reinforcement learning algorithm, which optimizes CLS from the performance on the validation tasks, to obtain effective CLS within the space. Experimental results on various real-world datasets demonstrate that AutoCL could automatically find the suitable CLS for the given dataset and task. From the candidate CLS found by AutoCL on several public datasets/tasks, we compose a transferable Generally Good Strategy (GGS), which has a strong performance for other datasets. We also provide empirical analysis as a guide for the future design of CLS. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 439,278 |
2003.13968 | Towards Productionizing Subjective Search Systems | Existing e-commerce search engines typically support search only over objective attributes, such as price and locations, leaving the more desirable subjective attributes, such as romantic vibe and work-life balance unsearchable. We found that this is also the case for Recruit Group, which operates a wide range of online booking and search services, including jobs, travel, housing, bridal, dining, and beauty, where each service is among the biggest in Japan, if not internationally. We present our progress towards productionizing a recent subjective search prototype (OpineDB) developed by Megagon Labs for Recruit Group. Several components within OpineDB are enhanced to satisfy production demands, including adding a BERT language model pre-trained on massive hospitality domain review corpora. We also found that the challenges of productionizing the system are beyond enhancing the components. In particular, an important requirement in production-quality systems is to instrument a proper way of measuring the search quality, which is extremely tricky when the search results are subjective. This led to the creation of a high-quality benchmark dataset from scratch, involving over 600 queries by user interviews and a collection of more than 120,000 query-entity relevancy labels. Also, we found that the existing search algorithms do not meet the search quality standard required by production systems. Consequently, we enhanced the ranking model by fine-tuning several search algorithms and combining them under a learning-to-rank framework. The model achieves 5%-10% overall precision improvement and 90+% precision on more than half of the benchmark testing queries making these queries ready for AB-testing. While some enhancements can be immediately applied to other verticals, our experience reveals that benchmarking and fine-tuning ranking algorithms are specific to each domain and cannot be avoided. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | 170,376
2305.11664 | Few-shot 3D Shape Generation | Realistic and diverse 3D shape generation is helpful for a wide variety of applications such as virtual reality, gaming, and animation. Modern generative models, such as GANs and diffusion models, learn from large-scale datasets and generate new samples following similar data distributions. However, when training data is limited, deep neural generative networks overfit and tend to replicate training samples. Prior works focus on few-shot image generation to produce high-quality and diverse results using a few target images. Unfortunately, abundant 3D shape data is typically hard to obtain as well. In this work, we make the first attempt to realize few-shot 3D shape generation by adapting generative models pre-trained on large source domains to target domains using limited data. To relieve overfitting and keep considerable diversity, we propose to maintain the probability distributions of the pairwise relative distances between adapted samples at feature-level and shape-level during domain adaptation. Our approach only needs the silhouettes of few-shot target samples as training data to learn target geometry distributions and achieve generated shapes with diverse topology and textures. Moreover, we introduce several metrics to evaluate the quality and diversity of few-shot 3D shape generation. The effectiveness of our approach is demonstrated qualitatively and quantitatively under a series of few-shot 3D shape adaptation setups. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 365,643 |
2410.03962 | SpecSAR-Former: A Lightweight Transformer-based Network for Global LULC Mapping Using Integrated Sentinel-1 and Sentinel-2 | Recent approaches in remote sensing have increasingly focused on multimodal data, driven by the growing availability of diverse earth observation datasets. Integrating complementary information from different modalities has shown substantial potential in enhancing semantic understanding. However, existing global multimodal datasets often lack the inclusion of Synthetic Aperture Radar (SAR) data, which excels at capturing texture and structural details. SAR, as a complementary perspective to other modalities, facilitates the utilization of spatial information for global land use and land cover (LULC). To address this gap, we introduce the Dynamic World+ dataset, expanding the current authoritative multispectral dataset, Dynamic World, with aligned SAR data. Additionally, to facilitate the combination of multispectral and SAR data, we propose a lightweight transformer architecture termed SpecSAR-Former. It incorporates two innovative modules, Dual Modal Enhancement Module (DMEM) and Mutual Modal Aggregation Module (MMAM), designed to exploit cross-information between the two modalities in a split-fusion manner. These modules enhance the model's ability to integrate spectral and spatial information, thereby improving the overall performance of global LULC semantic segmentation. Furthermore, we adopt an imbalanced parameter allocation strategy that assigns parameters to different modalities based on their importance and information density. Extensive experiments demonstrate that our network outperforms existing transformer and CNN-based models, achieving a mean Intersection over Union (mIoU) of 59.58%, an Overall Accuracy (OA) of 79.48%, and an F1 Score of 71.68% with only 26.70M parameters. The code will be available at https://github.com/Reagan1311/LULC_segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 495,064
2103.06523 | Improving Bi-encoder Document Ranking Models with Two Rankers and
Multi-teacher Distillation | BERT-based Neural Ranking Models (NRMs) can be classified according to how the query and document are encoded through BERT's self-attention layers - bi-encoder versus cross-encoder. Bi-encoder models are highly efficient because all the documents can be pre-processed before the query time, but their performance is inferior compared to cross-encoder models. Both models utilize a ranker that receives BERT representations as the input and generates a relevance score as the output. In this work, we propose a method where multi-teacher distillation is applied to a cross-encoder NRM and a bi-encoder NRM to produce a bi-encoder NRM with two rankers. The resulting student bi-encoder achieves an improved performance by simultaneously learning from a cross-encoder teacher and a bi-encoder teacher and also by combining relevance scores from the two rankers. We call this method TRMD (Two Rankers and Multi-teacher Distillation). In the experiments, TwinBERT and ColBERT are considered as baseline bi-encoders. When monoBERT is used as the cross-encoder teacher, together with either TwinBERT or ColBERT as the bi-encoder teacher, TRMD produces a student bi-encoder that performs better than the corresponding baseline bi-encoder. For P@20, the maximum improvement was 11.4%, and the average improvement was 6.8%. As an additional experiment, we considered producing cross-encoder students with TRMD, and found that it could also improve the cross-encoders. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 224,335 |
1710.02765 | Protein identification with deep learning: from abc to xyz | Proteins are the main workhorses of biological functions in a cell, a tissue, or an organism. Identification and quantification of proteins in a given sample, e.g. a cell type under normal/disease conditions, are fundamental tasks for the understanding of human health and disease. In this paper, we present DeepNovo, a deep learning-based tool to address the problem of protein identification from tandem mass spectrometry data. The idea was first proposed in the context of de novo peptide sequencing [1] in which convolutional neural networks and recurrent neural networks were applied to predict the amino acid sequence of a peptide from its spectrum, a similar task to generating a caption from an image. We further develop DeepNovo to perform sequence database search, the main technique for peptide identification that greatly benefits from numerous existing protein databases. We combine the two modules, de novo sequencing and database search, into a single deep learning framework for peptide identification, and integrate the de Bruijn graph assembly technique to offer a complete solution to reconstruct protein sequences from tandem mass spectrometry data. This paper describes a comprehensive protocol of DeepNovo for protein identification, including training neural network models, dynamic programming search, database querying, estimation of false discovery rate, and de Bruijn graph assembly. Training and testing data, model implementations, and comprehensive tutorials in the form of IPython notebooks are available in our GitHub repository (https://github.com/nh2tran/DeepNovo). | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 82,221
2406.13299 | Empirical Evaluation of Integrated Trust Mechanism to Improve Trust in
E-commerce Services | There are mostly two approaches to tackling trust management worldwide: strong and crisp, and soft and social. We analyze the impact of an integrated trust mechanism in three different e-commerce services. The trust aspect is a dormant element between potential users and expert or internet systems under development. We support our integration by presiding over an experiment in a controlled laboratory environment. The model selected for the experiment is a composite of policy- and reputation-based trust mechanisms and is widely acknowledged in the e-commerce industry. The integration between the policy and reputation mechanisms was accomplished through a mapping process, with the weakness of one offset by the strength of the other. Furthermore, the experiment was supervised to validate the effectiveness of the implementation by segregating both integrated and traditional trust mechanisms in the learning system. | false | false | false | true | true | false | false | false | false | false | false | false | false | true | false | false | false | true | 465,792
2310.11991 | Removing Spurious Concepts from Neural Network Representations via Joint
Subspace Estimation | Out-of-distribution generalization in neural networks is often hampered by spurious correlations. A common strategy is to mitigate this by removing spurious concepts from the neural network representation of the data. Existing concept-removal methods tend to be overzealous by inadvertently eliminating features associated with the main task of the model, thereby harming model performance. We propose an iterative algorithm that separates spurious from main-task concepts by jointly identifying two low-dimensional orthogonal subspaces in the neural network representation. We evaluate the algorithm on benchmark datasets for computer vision (Waterbirds, CelebA) and natural language processing (MultiNLI), and show that it outperforms existing concept removal methods | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 400,863 |
1508.02035 | Security Games with Ambiguous Beliefs of Agents | Currently the Dempster-Shafer based algorithm and the Uniform Random Probability based algorithm are the preferred methods of resolving security games, in which defenders are able to identify attackers and only the strategy remains ambiguous. However, this model is inefficient in situations where resources are limited and both the identity of the attackers and their strategies are ambiguous. The intent of this study is to find a more effective algorithm to guide the defenders in choosing the outside agents with which to cooperate given both ambiguities. We designed an experiment where defenders were compelled to engage with outside agents in order to maximize protection of their targets. We introduced two important notions: the behavior of each agent in target protection and the tolerance threshold in the target protection process. From these, we proposed an algorithm that was applied by each defender to determine the best potential assistant(s) with which to cooperate. Our results showed that our proposed algorithm is safer than the Dempster-Shafer based algorithm. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 45,850
2303.08434 | DeDA: Deep Directed Accumulator | Chronic active multiple sclerosis lesions, also termed as rim+ lesions, can be characterized by a hyperintense rim at the edge of the lesion on quantitative susceptibility maps. These rim+ lesions exhibit a geometrically simple structure, where gradients at the lesion edge are radially oriented and a greater magnitude of gradients is observed in contrast to rim- (non rim+) lesions. However, recent studies have shown that the identification performance of such lesions remains unsatisfied due to the limited amount of data and high class imbalance. In this paper, we propose a simple yet effective image processing operation, deep directed accumulator (DeDA), that provides a new perspective for injecting domain-specific inductive biases (priors) into neural networks for rim+ lesion identification. Given a feature map and a set of sampling grids, DeDA creates and quantizes an accumulator space into finite intervals, and accumulates feature values accordingly. This DeDA operation is a generalized discrete Radon transform and can also be regarded as a symmetric operation to the grid sampling within the forward-backward neural network framework, the process of which is order-agnostic, and can be efficiently implemented with the native CUDA programming. Experimental results on a dataset with 177 rim+ and 3986 rim- lesions show that 10.1% of improvement in a partial (false positive rate<0.1) area under the receiver operating characteristic curve (pROC AUC) and 10.2% of improvement in an area under the precision recall curve (PR AUC) can be achieved respectively comparing to other state-of-the-art methods. The source code is available online at https://github.com/tinymilky/DeDA | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 351,640 |
1507.01484 | Temporal Fidelity in Dynamic Social Networks | It has recently become possible to record detailed social interactions in large social systems with high resolution. As we study these datasets, human social interactions display patterns that emerge at multiple time scales, from minutes to months. On a fundamental level, understanding of the network dynamics can be used to inform the process of measuring social networks. The details of measurement are of particular importance when considering dynamic processes where minute-to-minute details are important, because collection of physical proximity interactions with high temporal resolution is difficult and expensive. Here, we consider the dynamic network of proximity-interactions between approximately 500 individuals participating in the Copenhagen Networks Study. We show that in order to accurately model spreading processes in the network, the dynamic processes that occur on the order of minutes are essential and must be included in the analysis. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 44,875 |
2101.00583 | Multi-label Ranking: Mining Multi-label and Label Ranking Data | We survey multi-label ranking tasks, specifically multi-label classification and label ranking classification. We highlight the unique challenges, and re-categorize the methods, as they no longer fit into the traditional categories of transformation and adaptation. We survey developments in the last demi-decade, with a special focus on state-of-the-art methods in deep learning multi-label mining, extreme multi-label classification and label ranking. We conclude by offering a few future research directions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 214,133 |
1207.4711 | Efficient Feedback-Based Scheduling Policies for Chunked Network Codes
over Networks with Loss and Delay | The problem of designing efficient feedback-based scheduling policies for chunked codes (CC) over packet networks with delay and loss is considered. For networks with feedback, two scheduling policies, referred to as random push (RP) and local-rarest-first (LRF), already exist. We propose a new scheduling policy, referred to as minimum-distance-first (MDF), based on the expected number of innovative successful packet transmissions at each node of the network prior to the "next" transmission time, given the feedback information from the downstream node(s) about the received packets. Unlike the existing policies, the MDF policy incorporates loss and delay models of the link in the selection process of the chunk to be transmitted. Our simulations show that MDF significantly reduces the expected time required for all the chunks (or equivalently, all the message packets) to be decodable compared to the existing scheduling policies for line networks with feedback. The improvements are particularly profound (up to about 46% for the tested cases) for smaller chunks and larger networks which are of more practical interest. The improvement in the performance of the proposed scheduling policy comes at the cost of more computations, and a slight increase in the amount of feedback. We also propose a low-complexity version of MDF with a rather small loss in the performance, referred to as minimum-current-metric-first (MCMF). The MCMF policy is based on the expected number of innovative packet transmissions prior to the "current" transmission time, as opposed to the next transmission time, used in MDF. Our simulations (over line networks) demonstrate that MCMF is always superior to RP and LRF policies, and the superiority becomes more pronounced for smaller chunks and larger networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 17,662
1705.06942 | Voltage-Driven Domain-Wall Motion based Neuro-Synaptic Devices for
Dynamic On-line Learning | Conventional von-Neumann computing models have achieved remarkable feats for the past few decades. However, they fail to deliver the required efficiency for certain basic tasks like image and speech recognition when compared to biological systems. As such, taking cues from biological systems, novel computing paradigms are being explored for efficient hardware implementations of recognition/classification tasks. The basic building blocks of such neuromorphic systems are neurons and synapses. Towards that end, we propose a leaky-integrate-fire (LIF) neuron and a programmable non-volatile synapse using domain wall motion induced by magneto-electric effect. Due to a strong elastic pinning between the ferro-magnetic domain wall (FM-DW) and the underlying ferro-electric domain wall (FE-DW), the FM-DW gets dragged by the FE-DW on application of a voltage pulse. The fact that FE materials are insulators allows for pure voltage-driven FM-DW motion, which in turn can be used to mimic the behaviors of biological spiking neurons and synapses. The voltage driven nature of the proposed devices allows energy-efficient operation. A detailed device to system level simulation framework based on micromagnetic simulations has been developed to analyze the feasibility of the proposed neuro-synaptic devices. We also demonstrate that the energy-efficient voltage-controlled behavior of the proposed devices make them suitable for dynamic on-line and lifelong learning in spiking neural networks (SNNs). | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 73,710 |
1504.02081 | Hybrid Block Diagonalization for Massive Multiuser MIMO Systems | For a massive multiple-input multiple-output (MIMO) system, restricting the number of RF chains to far less than the number of antenna elements can significantly reduce the implementation cost compared to the full complexity RF chain configuration. In this paper, we consider the downlink communication of a massive multiuser MIMO (MU-MIMO) system and propose a low-complexity hybrid block diagonalization (Hy-BD) scheme to approach the capacity performance of the traditional BD processing method. We aim to harvest the large array gain through the phase-only RF precoding and combining and then digital BD processing is performed on the equivalent baseband channel. The proposed Hy-BD scheme is examined in both the large Rayleigh fading channels and millimeter wave (mmWave) channels. A performance analysis is further conducted for single-path channels and large number of transmit and receive antennas. Finally, simulation results demonstrate that our Hy-BD scheme, with a lower implementation and computational complexity, achieves a capacity performance that is close to (sometimes even higher than) that of the traditional high-dimensional BD processing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 41,880 |
2204.12264 | Energy Efficient Beamforming Optimization for Integrated Sensing and
Communication | This paper investigates the optimization of beamforming design in a system with integrated sensing and communication (ISAC), where the base station (BS) sends signals for simultaneous multiuser communication and radar sensing. We aim at maximizing the energy efficiency (EE) of the multiuser communication while guaranteeing the sensing requirement in terms of individual radar beampattern gains. The problem is a complicated nonconvex fractional program which is challenging to solve. By appropriately reformulating the problem and then applying the techniques of successive convex approximation (SCA) and semidefinite relaxation (SDR), we propose an iterative algorithm to address this problem. In theory, we prove that the introduced relaxation of the SDR is rigorously tight. Numerical results validate the effectiveness of the proposed algorithm. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 293,422
2405.13960 | Learning To Play Atari Games Using Dueling Q-Learning and Hebbian
Plasticity | In this work, an advanced deep reinforcement learning architecture is used to train neural network agents that play Atari games. Given only the raw game pixels, action space, and reward information, the system can train agents to play any Atari game. At first, this system uses advanced techniques like deep Q-networks and dueling Q-networks to train efficient agents, the same techniques used by DeepMind to train agents that beat human players in Atari games. As an extension, plastic neural networks are used as agents, and their feasibility is analyzed in this scenario. The plasticity implementation was based on backpropagation and the Hebbian update rule. Plastic neural networks have excellent features like lifelong learning after the initial training, which makes them highly suitable in adaptive learning environments. As a new analysis of plasticity in this context, this work might provide valuable insights and direction for future works. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 456,163
2301.13464 | Training with Mixed-Precision Floating-Point Assignments | When training deep neural networks, keeping all tensors in high precision (e.g., 32-bit or even 16-bit floats) is often wasteful. However, keeping all tensors in low precision (e.g., 8-bit floats) can lead to unacceptable accuracy loss. Hence, it is important to use a precision assignment -- a mapping from all tensors (arising in training) to precision levels (high or low) -- that keeps most of the tensors in low precision and leads to sufficiently accurate models. We provide a technique that explores this memory-accuracy tradeoff by generating precision assignments for convolutional neural networks that (i) use less memory and (ii) lead to more accurate convolutional networks at the same time, compared to the precision assignments considered by prior work in low-precision floating-point training. We evaluate our technique on image classification tasks by training convolutional networks on CIFAR-10, CIFAR-100, and ImageNet. Our method typically provides > 2x memory reduction over a baseline precision assignment while preserving training accuracy, and gives further reductions by trading off accuracy. Compared to other baselines which sometimes cause training to diverge, our method provides similar or better memory reduction while avoiding divergence. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 342,929 |
2310.01334 | Merge, Then Compress: Demystify Efficient SMoE with Hints from Its
Routing Policy | Sparsely activated Mixture-of-Experts (SMoE) has shown promise to scale up the learning capacity of neural networks, however, they have issues like (a) High Memory Usage, due to duplication of the network layers into multiple copies as experts; and (b) Redundancy in Experts, as common learning-based routing policies suffer from representational collapse. Therefore, vanilla SMoE models are memory inefficient and non-scalable, especially for resource-constrained downstream scenarios. In this paper, we ask: Can we craft a compact SMoE model by consolidating expert information? What is the best recipe to merge multiple experts into fewer but more knowledgeable experts? Our pilot investigation reveals that conventional model merging methods fail to be effective in such expert merging for SMoE. The potential reasons are: (1) redundant information overshadows critical experts; (2) appropriate neuron permutation for each expert is missing to bring all of them in alignment. To address this, we propose M-SMoE, which leverages routing statistics to guide expert merging. Specifically, it starts with neuron permutation alignment for experts; then, dominant experts and their "group members" are formed; lastly, every expert group is merged into a single expert by utilizing each expert's activation frequency as their weight for merging, thus diminishing the impact of insignificant experts. Moreover, we observed that our proposed merging promotes a low dimensionality in the merged expert's weight space, naturally paving the way for additional compression. Hence, our final method, MC-SMoE (i.e., Merge, then Compress SMoE), further decomposes the merged experts into low-rank and structural sparse alternatives. Extensive experiments across 8 benchmarks validate the effectiveness of MC-SMoE. For instance, our MC-SMoE achieves up to 80% memory and a 20% FLOPs reduction, with virtually no loss in performance. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 396,372
2308.09234 | Deep Boosting Multi-Modal Ensemble Face Recognition with Sample-Level
Weighting | Deep convolutional neural networks have achieved remarkable success in face recognition (FR), partly due to the abundant data availability. However, the current training benchmarks exhibit an imbalanced quality distribution; most images are of high quality. This poses issues for generalization on hard samples since they are underrepresented during training. In this work, we employ the multi-model boosting technique to deal with this issue. Inspired by the well-known AdaBoost, we propose a sample-level weighting approach to incorporate the importance of different samples into the FR loss. Individual models of the proposed framework are experts at distinct levels of sample hardness. Therefore, the combination of models leads to a robust feature extractor without losing the discriminability on the easy samples. Also, for incorporating the sample hardness into the training criterion, we analytically show the effect of sample mining on the important aspects of current angular margin loss functions, i.e., margin and scale. The proposed method shows superior performance in comparison with the state-of-the-art algorithms in extensive experiments on the CFP-FP, LFW, CPLFW, CALFW, AgeDB, TinyFace, IJB-B, and IJB-C evaluation datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 386,202 |
2203.13285 | Continuous-Time Audiovisual Fusion with Recurrence vs. Attention for
In-The-Wild Affect Recognition | In this paper, we present our submission to the 3rd Affective Behavior Analysis in-the-wild (ABAW) challenge. Learning complex interactions among multimodal sequences is critical to recognise dimensional affect from in-the-wild audiovisual data. Recurrence and attention are the two widely used sequence modelling mechanisms in the literature. To clearly understand the performance differences between recurrent and attention models in audiovisual affect recognition, we present a comprehensive evaluation of fusion models based on LSTM-RNNs, self-attention and cross-modal attention, trained for valence and arousal estimation. Particularly, we study the impact of some key design choices: the modelling complexity of CNN backbones that provide features to the temporal models, with and without end-to-end learning. We trained the audiovisual affect recognition models on the in-the-wild ABAW corpus by systematically tuning the hyper-parameters involved in the network architecture design and training optimisation. Our extensive evaluation of the audiovisual fusion models shows that LSTM-RNNs can outperform the attention models when coupled with low-complex CNN backbones and trained in an end-to-end fashion, implying that attention models may not necessarily be the optimal choice for continuous-time multimodal emotion recognition. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 287,574
1701.02496 | Network Topology Modulation for Energy and Data Transmission in Internet
of Magneto-Inductive Things | Internet-of-things (IoT) architectures connecting a massive number of heterogeneous devices need energy efficient, low hardware complexity, low cost, simple and secure mechanisms to realize communication among devices. One of the emerging schemes is to realize simultaneous wireless information and power transfer (SWIPT) in an energy harvesting network. Radio frequency (RF) solutions require special hardware and modulation methods for RF to direct current (DC) conversion and optimized operation to achieve SWIPT which are currently in an immature phase. On the other hand, magneto-inductive (MI) communication transceivers are intrinsically energy harvesting with potential for SWIPT in an efficient manner. In this article, novel modulation and demodulation mechanisms are presented in a combined framework with multiple-access channel (MAC) communication and wireless power transmission. The network topology of power transmitting active coils in a transceiver composed of a grid of coils is changed as a novel method to transmit information. Practical demodulation schemes are formulated and numerically simulated for two-user MAC topology of small size coils. The transceivers are suitable to attach to everyday objects to realize reliable local area network (LAN) communication performances with tens of meters communication ranges. The designed scheme is promising for future IoT applications requiring SWIPT with energy efficient, low cost, low power and low hardware complexity solutions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 66,562 |
2208.03970 | Optimized Design for IRS-Assisted Integrated Sensing and Communication
Systems in Clutter Environments | In this paper, we investigate an intelligent reflecting surface (IRS)-assisted integrated sensing and communication (ISAC) system design in a clutter environment. Assisted by an IRS equipped with a uniform linear array (ULA), a multi-antenna base station (BS) is targeted for communicating with multiple communication users (CUs) and sensing multiple targets simultaneously. We consider the IRS-assisted ISAC design in the case with Type-I or Type-II CUs, where each Type-I and Type-II CU can and cannot cancel the interference from sensing signals, respectively. In particular, we aim to maximize the minimum sensing beampattern gain among multiple targets, by jointly optimizing the BS transmit beamforming vectors and the IRS phase shifting matrix, subject to the signal-to-interference-plus-noise ratio (SINR) constraint for each Type-I/Type-II CU, the interference power constraint per clutter, the transmission power constraint at the BS, and the cross-correlation pattern constraint. Due to the coupling of the BS's transmit design variables and the IRS's phase shifting matrix, the formulated max-min IRS-assisted ISAC design problem in the case with Type-I/Type-II CUs is highly non-convex. As such, we propose an efficient algorithm based on the alternating-optimization and semi-definite relaxation (SDR) techniques. In the case with Type-I CUs, we show that the dedicated sensing signal at the BS is always beneficial to improve the sensing performance. By contrast, the dedicated sensing signal at the BS is not required in the case with Type-II CUs. Numerical results are provided to show that the proposed IRS-assisted ISAC design schemes achieve a significant gain over the existing benchmark schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 311,959 |
1604.05429 | Comparative Study of Instance Based Learning and Back Propagation for
Classification Problems | The paper presents a comparative study of the performance of Back Propagation and the Instance Based Learning algorithm for classification tasks. The study is carried out through a series of experiments with all possible combinations of parameter values for the algorithms under evaluation. The algorithms' classification accuracy is compared over a range of datasets, and measurements like Cross Validation, Kappa Statistics, Root Mean Squared Value and True Positive vs False Positive rate have been used to evaluate their performance. Along with the performance comparison, techniques of handling missing values have also been compared, including Mean or Mode replacement and Multiple Imputation. The results showed that parameter adjustment plays a vital role in improving an algorithm's accuracy, and therefore Back Propagation has shown better results as compared to Instance Based Learning. Furthermore, the problem of missing values was better handled by the Multiple Imputation method, although it is not suitable for small amounts of data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 54,804
2305.00426 | Transfer of knowledge among instruments in automatic music transcription | Automatic music transcription (AMT) is one of the most challenging tasks in the music information retrieval domain. It is the process of converting an audio recording of music into a symbolic representation containing information about the notes, chords, and rhythm. Current research in this domain focuses on developing new models based on the transformer architecture or using methods to perform semi-supervised training, which gives outstanding results, but the computational cost of training such models is enormous. This work shows how to employ easily generated synthesized audio data produced by software synthesizers to train a universal model. It is a good basis for further transfer learning to quickly adapt the transcription model to other instruments. The achieved results prove that using synthesized data for training may be a good basis for pretraining general-purpose models, where the task of transcription is not focused on one instrument. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 361,334
2210.09276 | Imagic: Text-Based Real Image Editing with Diffusion Models | Text-conditioned image editing has recently attracted considerable interest. However, most methods are currently either limited to specific editing types (e.g., object overlay, style transfer), or apply to synthetically generated images, or require multiple input images of a common object. In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. -- each within its single high-resolution natural image provided by the user. Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object). Our method, which we call "Imagic", leverages a pre-trained text-to-image diffusion model for this task. It produces a text embedding that aligns with both the input image and the target text, while fine-tuning the diffusion model to capture the image-specific appearance. We demonstrate the quality and versatility of our method on numerous inputs from various domains, showcasing a plethora of high quality complex semantic image edits, all within a single unified framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 324,478 |
1802.03691 | Tree-to-tree Neural Networks for Program Translation | Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to employ deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 90,044 |
2305.00035 | SAM on Medical Images: A Comprehensive Study on Three Prompt Modes | The Segment Anything Model (SAM) made an eye-catching debut recently and inspired many researchers to explore its potential and limitations in terms of zero-shot generalization capability. As the first promptable foundation model for segmentation tasks, it was trained on a large dataset with an unprecedented number of images and annotations. This large-scale dataset and its promptable nature endow the model with strong zero-shot generalization. Although SAM has shown competitive performance on several datasets, we still want to investigate its zero-shot generalization on medical images. As we know, the acquisition of medical image annotation usually requires a lot of effort from professional practitioners. Therefore, if there exists a foundation model that can give high-quality mask prediction simply based on a few point prompts, this model will undoubtedly become the game changer for medical image analysis. To evaluate whether SAM has the potential to become the foundation model for medical image segmentation tasks, we collected more than 12 public medical image datasets that cover various organs and modalities. We also explore what kind of prompt can lead to the best zero-shot performance with different modalities. Furthermore, we find a pattern showing that perturbing the box size significantly changes the prediction accuracy. Finally, extensive experiments show that the predicted mask quality varies a lot among different datasets, and that providing proper prompts, such as bounding boxes, to SAM significantly increases its performance. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 361,184
1406.3638 | Impact of Residual Transmit RF Impairments on Training-Based MIMO Systems | Radio-frequency (RF) impairments, which are inherent in wireless communications systems, can severely degrade the performance of traditional multiple-input multiple-output (MIMO) systems. Although compensation schemes can cancel out part of these RF impairments, a certain amount of impairments still remains. These residual impairments have a fundamental impact on MIMO system performance. However, most previous works have neglected this factor. In this paper, a training-based MIMO system with residual transmit RF impairments (RTRI) is considered. In particular, we derive a new channel estimator for the proposed model, and find that RTRI can create an irreducible estimation error floor. Moreover, we show that, in the presence of RTRI, the optimal training sequence length can be larger than the number of transmit antennas, especially in the low and high signal-to-noise ratio (SNR) regimes. An increase in the proposed approximated achievable rate is also observed by adopting the optimal training sequence length. When the training and data symbol powers are required to be equal, we demonstrate that, at high SNRs, systems with RTRI demand more training, whereas at low SNRs, such demands are nearly the same for all practical levels of RTRI. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 33,860
2012.13975 | Power Normalizations in Fine-grained Image, Few-shot Image and Graph Classification | Power Normalizations (PN) are useful non-linear operators which tackle feature imbalances in classification problems. We study PNs in the deep learning setup via a novel PN layer pooling feature maps. Our layer combines the feature vectors and their respective spatial locations in the feature maps produced by the last convolutional layer of CNN into a positive definite matrix with second-order statistics to which PN operators are applied, forming so-called Second-order Pooling (SOP). As the main goal of this paper is to study Power Normalizations, we investigate the role and meaning of MaxExp and Gamma, two popular PN functions. To this end, we provide probabilistic interpretations of such element-wise operators and discover surrogates with well-behaved derivatives for end-to-end training. Furthermore, we look at the spectral applicability of MaxExp and Gamma by studying Spectral Power Normalizations (SPN). We show that SPN on the autocorrelation/covariance matrix and the Heat Diffusion Process (HDP) on a graph Laplacian matrix are closely related, thus sharing their properties. Such a finding leads us to the culmination of our work, a fast spectral MaxExp which is a variant of HDP for covariances/autocorrelation matrices. We evaluate our ideas on fine-grained recognition, scene recognition, and material classification, as well as in few-shot learning and graph classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 213,373
2002.08159 | Learning Fair Scoring Functions: Bipartite Ranking under ROC-based Fairness Constraints | Many applications of AI involve scoring individuals using a learned function of their attributes. These predictive risk scores are then used to take decisions based on whether the score exceeds a certain threshold, which may vary depending on the context. The level of delegation granted to such systems in critical applications like credit lending and medical diagnosis will heavily depend on how questions of fairness can be answered. In this paper, we study fairness for the problem of learning scoring functions from binary labeled data, a classic learning task known as bipartite ranking. We argue that the functional nature of the ROC curve, the gold standard measure of ranking accuracy in this context, leads to several ways of formulating fairness constraints. We introduce general families of fairness definitions based on the AUC and on ROC curves, and show that our ROC-based constraints can be instantiated such that classifiers obtained by thresholding the scoring function satisfy classification fairness for a desired range of thresholds. We establish generalization bounds for scoring functions learned under such constraints, design practical learning algorithms, and show the relevance of our approach with numerical experiments on real and synthetic data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 164,676
2202.13843 | Deepfake Network Architecture Attribution | With the rapid progress of generation technology, it has become necessary to attribute the origin of fake images. Existing works on fake image attribution perform multi-class classification on several Generative Adversarial Network (GAN) models and obtain high accuracies. While encouraging, these works are restricted to model-level attribution, only capable of handling images generated by seen models with a specific seed, loss and dataset, which is limited in real-world scenarios when fake images may be generated by privately trained models. This motivates us to ask whether it is possible to attribute fake images to the source models' architectures even if they are finetuned or retrained under different configurations. In this work, we present the first study on Deepfake Network Architecture Attribution to attribute fake images on architecture-level. Based on an observation that GAN architecture is likely to leave globally consistent fingerprints while traces left by model weights vary in different regions, we provide a simple yet effective solution named DNA-Det for this problem. Extensive experiments on multiple cross-test setups and a large-scale dataset demonstrate the effectiveness of DNA-Det. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 282,763 |
1608.05605 | Using Distributed Representations to Disambiguate Biomedical and Clinical Concepts | In this paper, we report a knowledge-based method for Word Sense Disambiguation in the domains of biomedical and clinical text. We combine word representations created on large corpora with a small number of definitions from the UMLS to create concept representations, which we then compare to representations of the context of ambiguous terms. Using no relational information, we obtain comparable performance to previous approaches on the MSH-WSD dataset, which is a well-known dataset in the biomedical domain. Additionally, our method is fast and easy to set up and extend to other domains. Supplementary materials, including source code, can be found at https://github.com/clips/yarn | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 60,001
2408.12423 | Multi-Knowledge Fusion Network for Time Series Representation Learning | Forecasting the behaviour of complex dynamical systems such as interconnected sensor networks characterized by high-dimensional multivariate time series (MTS) is of paramount importance for making informed decisions and planning for the future in a broad spectrum of applications. Graph forecasting networks (GFNs) are well-suited for forecasting MTS data that exhibit spatio-temporal dependencies. However, most prior works of GFN-based methods on MTS forecasting rely on domain expertise to model the nonlinear dynamics of the system, but neglect the potential to leverage the inherent relational-structural dependencies among time series variables underlying MTS data. On the other hand, contemporary works attempt to infer the relational structure of the complex dependencies between the variables and simultaneously learn the nonlinear dynamics of the interconnected system but neglect the possibility of incorporating domain-specific prior knowledge to improve forecast accuracy. To this end, we propose a hybrid architecture that combines explicit prior knowledge with implicit knowledge of the relational structure within the MTS data. It jointly learns intra-series temporal dependencies and inter-series spatial dependencies by encoding time-conditioned structural spatio-temporal inductive biases to provide more accurate and reliable forecasts. It also models the time-varying uncertainty of the multi-horizon forecasts to support decision-making by providing estimates of prediction uncertainty. The proposed architecture has shown promising results on multiple benchmark datasets and outperforms state-of-the-art forecasting methods by a significant margin. We report and discuss the ablation studies to validate our forecasting architecture. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 482,727
2304.04559 | Event-based Camera Tracker by $\nabla$t NeRF | When a camera travels across a 3D world, only a fraction of pixel value changes; an event-based camera observes the change as sparse events. How can we utilize sparse events for efficient recovery of the camera pose? We show that we can recover the camera pose by minimizing the error between sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF). To enable the computation of the temporal gradient of the scene, we augment NeRF's camera pose as a time function. When the input pose to the NeRF coincides with the actual pose, the output of the temporal gradient of NeRF equals the observed intensity changes on the event's points. Using this principle, we propose an event-based camera pose tracking framework called TeGRA which realizes the pose update by using the sparse event's observation. To the best of our knowledge, this is the first camera pose estimation algorithm using the scene's implicit representation and the sparse intensity change from events. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 357,266 |
2011.07964 | Learning from similarity and information extraction from structured documents | The automation of document processing is gaining recent attention due to the great potential to reduce manual work through improved methods and hardware. Neural networks have been successfully applied before - even though they have been trained only on relatively small datasets with hundreds of documents so far. To successfully explore deep learning techniques and improve the information extraction results, a dataset with more than twenty-five thousand documents has been compiled, anonymized and is published as a part of this work. We will expand our previous work where we proved that convolutions, graph convolutions and self-attention can work together and exploit all the information present in a structured document. Taking the fully trainable method one step further, we will now design and examine various approaches to using siamese networks, concepts of similarity, one-shot learning and context/memory awareness. The aim is to improve micro F1 of per-word classification on the huge real-world document dataset. The results verify the hypothesis that trainable access to a similar (yet still different) page together with its already known target information improves the information extraction. Furthermore, the experiments confirm that all proposed architecture parts are required to beat the previous results. The best model improves the previous state-of-the-art results by an 8.25 gain in F1 score. Qualitative analysis is provided to verify that the new model performs better for all target classes. Additionally, multiple structural observations about the causes of the underperformance of some architectures are revealed. All the source codes, parameters and implementation details are published together with the dataset in the hope of pushing the research boundaries, since all the techniques used in this work are not problem-specific and can be generalized for other tasks and contexts. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 206,727
1910.04024 | Model predictive control design for dynamical systems learned by Long Short-Term Memory Networks | This paper analyzes the stability-related properties of Long Short-Term Memory (LSTM) networks and investigates their use as the model of the plant in the design of Model Predictive Controllers (MPC). First, sufficient conditions guaranteeing the Input-to-State stability (ISS) and Incremental Input-to-State stability (dISS) of LSTM are derived. These properties are then exploited to design an observer with guaranteed convergence of the state estimate to the true one. This observer is then embedded in an MPC scheme solving the tracking problem. The resulting closed-loop scheme is proved to be asymptotically stable. The training algorithm and control scheme are tested numerically on the simulator of a pH reactor, and the reported results confirm the effectiveness of the proposed approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 148,653
1702.08495 | Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument | In recent years prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become "superintelligent" and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent's ability to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 68,993 |
2402.14454 | CCPA: Long-term Person Re-Identification via Contrastive Clothing and Pose Augmentation | Long-term Person Re-Identification (LRe-ID) aims at matching an individual across cameras after a long period of time, presenting variations in clothing, pose, and viewpoint. In this work, we propose CCPA: Contrastive Clothing and Pose Augmentation framework for LRe-ID. Beyond appearance, CCPA captures body shape information which is cloth-invariant using a Relation Graph Attention Network. Training a robust LRe-ID model requires a wide range of clothing variations and expensive cloth labeling, which is lacking in current LRe-ID datasets. To address this, we perform clothing and pose transfer across identities to generate images of more clothing variations and of different persons wearing similar clothing. The augmented batch of images serves as input to our proposed Fine-grained Contrastive Losses, which not only supervise the Re-ID model to learn discriminative person embeddings under long-term scenarios but also ensure in-distribution data generation. Results on LRe-ID datasets demonstrate the effectiveness of our CCPA framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 431,692
2406.15112 | Micro-power spoken keyword spotting on Xylo Audio 2 | For many years, designs for "Neuromorphic" or brain-like processors have been motivated by achieving extreme energy efficiency, compared with von-Neumann and tensor processor devices. As part of their design language, Neuromorphic processors take advantage of weight, parameter, state and activity sparsity. In the extreme case, neural networks based on these principles mimic the sparse activity of biological nervous systems, in ``Spiking Neural Networks'' (SNNs). Few benchmarks are available for Neuromorphic processors that have been implemented for a range of Neuromorphic and non-Neuromorphic platforms, which can therefore demonstrate the energy benefits of Neuromorphic processor designs. Here we describe the implementation of a spoken audio keyword-spotting (KWS) benchmark "Aloha" on the Xylo Audio 2 (SYNS61210) Neuromorphic processor device. We obtained high deployed quantized task accuracy (95%), exceeding the benchmark task accuracy. We measured real continuous power of the deployed application on Xylo. We obtained best-in-class dynamic inference power ($291\mu$W) and best-in-class inference efficiency ($6.6\mu$J / Inf). Xylo sets a new minimum power for the Aloha KWS benchmark, and highlights the extreme energy efficiency achievable with Neuromorphic processor designs. Our results show that Neuromorphic designs are well-suited for real-time near- and in-sensor processing on edge devices. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 466,623
1909.13362 | Language-Agnostic Syllabification with Neural Sequence Labeling | The identification of syllables within phonetic sequences is known as syllabification. This task is thought to play an important role in natural language understanding, speech production, and the development of speech recognition systems. The concept of the syllable is cross-linguistic, though formal definitions are rarely agreed upon, even within a language. In response, data-driven syllabification methods have been developed to learn from syllabified examples. These methods often employ classical machine learning sequence labeling models. In recent years, recurrence-based neural networks have been shown to perform increasingly well for sequence labeling tasks such as named entity recognition (NER), part of speech (POS) tagging, and chunking. We present a novel approach to the syllabification problem which leverages modern neural network techniques. Our network is constructed with long short-term memory (LSTM) cells, a convolutional component, and a conditional random field (CRF) output layer. Existing syllabification approaches are rarely evaluated across multiple language families. To demonstrate cross-linguistic generalizability, we show that the network is competitive with state of the art systems in syllabifying English, Dutch, Italian, French, Manipuri, and Basque datasets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 147,399 |
2402.16670 | Pay Attention: a Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance | Over the last 70 years, we, humans, have created an economic market where attention is being captured and turned into money thanks to advertising. During the last two decades, leveraging research in psychology, sociology, neuroscience and other domains, Web platforms have brought the process of capturing attention to an unprecedented scale. With the initial commonplace goal of making targeted advertising more effective, the generalization of attention-capturing techniques and their use of cognitive biases and emotions have multiple detrimental side effects such as polarizing opinions, spreading false information and threatening public health, economies and democracies. This is clearly a case where the Web is not used for the common good and where, in fact, all its users become a vulnerable population. This paper brings together contributions from a wide range of disciplines to analyze current practices and consequences thereof. Through a set of propositions and principles that could be used to drive further work, it calls for actions against these practices competing to capture our attention on the Web, as it would be unsustainable for a civilization to allow attention to be wasted with impunity on a world-wide scale. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 432,649
2105.07174 | Stacked Deep Multi-Scale Hierarchical Network for Fast Bokeh Effect Rendering from a Single Image | The Bokeh Effect is one of the most desirable effects in photography for rendering artistic and aesthetic photos. Usually, it requires a DSLR camera with different aperture and shutter settings and certain photography skills to generate this effect. In smartphones, computational methods and additional sensors are used to overcome the physical lens and sensor limitations to achieve such an effect. Most of the existing methods utilize additional sensor data or a pretrained network for fine depth estimation of the scene, and sometimes use a pretrained portrait segmentation module to segment salient objects in the image. For these reasons, such networks have many parameters, are runtime intensive, and are unable to run on mid-range devices. In this paper, we use an end-to-end Deep Multi-Scale Hierarchical Network (DMSHN) model for direct Bokeh effect rendering of images captured from a monocular camera. To further improve the perceptual quality of such an effect, a stacked model consisting of two DMSHN modules is also proposed. Our model does not rely on any pretrained network module for Monocular Depth Estimation or Saliency Detection, thus significantly reducing the model size and runtime. Stacked DMSHN achieves state-of-the-art results on the large scale EBB! dataset with around 6x less runtime compared to the current state-of-the-art model in processing HD quality images. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 235,345
2104.00764 | SYSML: StYlometry with Structure and Multitask Learning: Implications for Darknet Forum Migrant Analysis | Darknet market forums are frequently used to exchange illegal goods and services between parties who use encryption to conceal their identities. The Tor network is used to host these markets, which guarantees additional anonymization from IP and location tracking, making it challenging to link across malicious users using multiple accounts (sybils). Additionally, users migrate to new forums when one is closed, making it difficult to link users across multiple forums. We develop a novel stylometry-based multitask learning approach for natural language and interaction modeling using graph embeddings to construct low-dimensional representations of short episodes of user activity for authorship attribution. We provide a comprehensive evaluation of our methods across four different darknet forums demonstrating its efficacy over the state-of-the-art, with a lift of up to 2.5X on Mean Retrieval Rank and 2X on Recall@10. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 228,106
1803.01160 | Real-Time Deep Learning Method for Abandoned Luggage Detection in Video | Recent terrorist attacks in major cities around the world have brought many casualties among innocent citizens. One potential threat is represented by abandoned luggage items (that could contain bombs or biological warfare) in public areas. In this paper, we describe an approach for real-time automatic detection of abandoned luggage in video captured by surveillance cameras. The approach is comprised of two stages: (i) static object detection based on background subtraction and motion estimation and (ii) abandoned luggage recognition based on a cascade of convolutional neural networks (CNN). To train our neural networks we provide two types of examples: images collected from the Internet and realistic examples generated by imposing various suitcases and bags over the scene's background. We present empirical results demonstrating that our approach yields better performance than a strong CNN baseline method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 91,824 |
1601.01675 | Ensemble Methods of Classification for Power Systems Security Assessment | One of the most promising approaches for complex technical systems analysis employs ensemble methods of classification. Ensemble methods enable building reliable decision rules for feature space classification in the presence of many possible states of the system. In this paper, novel techniques based on decision trees are used for evaluation of the reliability of the regime of electric power systems. We propose a hybrid approach based on random forests models and boosting models. Such techniques can be applied to predict the interaction of increasing renewable power, storage devices and the switching of smart loads from intelligent domestic appliances, heaters, air-conditioning units and electric vehicles with the grid for enhanced decision making. The ensemble classification methods were tested on the modified 118-bus IEEE power system, showing that the proposed technique can be employed to examine whether the power system is secured under steady-state operating conditions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 50,769
2103.10824 | Enhancing Robustness of On-line Learning Models on Highly Noisy Data | Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set. However, data collected from the wild can be unreliable due to careless annotations or malicious data transformation for incorrect anomaly detection. In this paper, we extend a two-layer on-line data selection framework: Robust Anomaly Detector (RAD) with a newly designed ensemble prediction where both layers contribute to the final anomaly detection decision. To adapt to the on-line nature of anomaly detection, we consider additional features of conflicting opinions of classifiers, repetitive cleaning, and oracle knowledge. We on-line learn from incoming data streams and continuously cleanse the data, so as to adapt to the increasing learning capacity from the larger accumulated data set. Moreover, we explore the concept of oracle learning that provides additional information of true labels for difficult data points. We specifically focus on three use cases, (i) detecting 10 classes of IoT attacks, (ii) predicting 4 classes of task failures of big data jobs, and (iii) recognising 100 celebrities faces. Our evaluation results show that RAD can robustly improve the accuracy of anomaly detection, to reach up to 98.95% for IoT device attacks (i.e., +7%), up to 85.03% for cloud task failures (i.e., +14%) under 40% label noise, and for its extension, it can reach up to 77.51% for face recognition (i.e., +39%) under 30% label noise. The proposed RAD and its extensions are general and can be applied to different anomaly detection algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 225,578 |
2203.00849 | Adversarially Robust Learning with Tolerance | We initiate the study of tolerant adversarial PAC-learning with respect to metric perturbation sets. In adversarial PAC-learning, an adversary is allowed to replace a test point $x$ with an arbitrary point in a closed ball of radius $r$ centered at $x$. In the tolerant version, the error of the learner is compared with the best achievable error with respect to a slightly larger perturbation radius $(1+\gamma)r$. This simple tweak helps us bridge the gap between theory and practice and obtain the first PAC-type guarantees for algorithmic techniques that are popular in practice. Our first result concerns the widely-used ``perturb-and-smooth'' approach for adversarial learning. For perturbation sets with doubling dimension $d$, we show that a variant of these approaches PAC-learns any hypothesis class $\mathcal{H}$ with VC-dimension $v$ in the $\gamma$-tolerant adversarial setting with $O\left(\frac{v(1+1/\gamma)^{O(d)}}{\varepsilon}\right)$ samples. This is in contrast to the traditional (non-tolerant) setting in which, as we show, the perturb-and-smooth approach can provably fail. Our second result shows that one can PAC-learn the same class using $\widetilde{O}\left(\frac{d.v\log(1+1/\gamma)}{\varepsilon^2}\right)$ samples even in the agnostic setting. This result is based on a novel compression-based algorithm, and achieves a linear dependence on the doubling dimension as well as the VC-dimension. This is in contrast to the non-tolerant setting where there is no known sample complexity upper bound that depend polynomially on the VC-dimension. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,145 |
2105.11839 | DiBS: Differentiable Bayesian Structure Learning | Bayesian structure learning allows inferring Bayesian network structure from data while reasoning about the epistemic uncertainty -- a key element towards enabling active causal discovery and designing interventions in real world systems. In this work, we propose a general, fully differentiable framework for Bayesian structure learning (DiBS) that operates in the continuous space of a latent probabilistic graph representation. Contrary to existing work, DiBS is agnostic to the form of the local conditional distributions and allows for joint posterior inference of both the graph structure and the conditional distribution parameters. This makes our formulation directly applicable to posterior inference of complex Bayesian network models, e.g., with nonlinear dependencies encoded by neural networks. Using DiBS, we devise an efficient, general purpose variational inference method for approximating distributions over structural models. In evaluations on simulated and real-world data, our method significantly outperforms related approaches to joint posterior inference. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,838 |
1405.0157 | Dimensionality of social networks using motifs and eigenvalues | We consider the dimensionality of social networks, and develop experiments aimed at predicting that dimension. We find that a social network model with nodes and links sampled from an $m$-dimensional metric space with power-law distributed influence regions best fits samples from real-world networks when $m$ scales logarithmically with the number of nodes of the network. This supports a logarithmic dimension hypothesis, and we provide evidence with two different social networks, Facebook and LinkedIn. Further, we employ two different methods for confirming the hypothesis: the first uses the distribution of motif counts, and the second exploits the eigenvalue distribution. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 32,749 |
2110.05313 | Unsupervised Source Separation via Bayesian Inference in the Latent Domain | State of the art audio source separation models rely on supervised data-driven approaches, which can be expensive in terms of labeling resources. On the other hand, approaches for training these models without any direct supervision are typically high-demanding in terms of memory and time requirements, and remain impractical to be used at inference time. We aim to tackle these limitations by proposing a simple yet effective unsupervised separation algorithm, which operates directly on a latent representation of time-domain signals. Our algorithm relies on deep Bayesian priors in the form of pre-trained autoregressive networks to model the probability distributions of each source. We leverage the low cardinality of the discrete latent space, trained with a novel loss term imposing a precise arithmetic structure on it, to perform exact Bayesian inference without relying on an approximation strategy. We validate our approach on the Slakh dataset arXiv:1909.08494, demonstrating results in line with state of the art supervised approaches while requiring fewer resources with respect to other unsupervised methods. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,239
2012.08178 | On how Cognitive Computing will plan your next Systematic Review | Systematic literature reviews (SLRs) are at the heart of evidence-based research, setting the foundation for future research and practice. However, producing good quality timely contributions is a challenging and highly cognitive endeavor, which has lately motivated the exploration of automation and support in the SLR process. In this paper we address an often overlooked phase in this process, that of planning literature reviews, and explore under the lenses of cognitive process augmentation how to overcome its most salient challenges. In doing so, we report on the insights from 24 SLR authors on planning practices, its challenges as well as feedback on support strategies inspired by recent advances in cognitive computing. We frame our findings under the cognitive augmentation framework, and report on a prototype implementation and evaluation focusing on further informing the technical feasibility. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 211,691 |
1501.05628 | Identification of a Hybrid Spring Mass Damper via Harmonic Transfer Functions as a Step Towards Data-Driven Models for Legged Locomotion | There are limitations on the extent to which manually constructed mathematical models can capture relevant aspects of legged locomotion. Even simple models for basic behaviors such as running involve non-integrable dynamics, requiring the use of possibly inaccurate approximations in the design of model-based controllers. In this study, we show how data-driven frequency domain system identification methods can be used to obtain input--output characteristics for a class of dynamical systems around their limit cycles, with hybrid structural properties similar to those observed in legged locomotion systems. Under certain assumptions, we can approximate hybrid dynamics of such systems around their limit cycle as a piecewise smooth linear time periodic system (LTP), further approximated as a time-periodic, piecewise LTI system to reduce parametric degrees of freedom in the identification process. In this paper, we use a simple one-dimensional hybrid model in which a limit-cycle is induced through the actions of a linear actuator to illustrate the details of our method. We first derive theoretical harmonic transfer functions of our example model. We then excite the model with small chirp signals to introduce perturbations around its limit-cycle and present systematic identification results to estimate the harmonic transfer functions for this model. Comparison between the data-driven HTF model and its theoretical prediction illustrates the potential effectiveness of such empirical identification methods in legged locomotion. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 39,507
2405.19673 | Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models | AI-driven design problems, such as DNA/protein sequence design, are commonly tackled from two angles: generative modeling, which efficiently captures the feasible design space (e.g., natural images or biological sequences), and model-based optimization, which utilizes reward models for extrapolation. To combine the strengths of both approaches, we adopt a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL. Although prior work has explored similar avenues, they primarily focus on scenarios where accurate reward models are accessible. In contrast, we concentrate on an offline setting where a reward model is unknown, and we must learn from static offline datasets, a common scenario in scientific domains. In offline scenarios, existing approaches tend to suffer from overoptimization, as they may be misled by the reward model in out-of-distribution regions. To address this, we introduce a conservative fine-tuning approach, BRAID, by optimizing a conservative reward model, which includes additional penalization outside of offline data distributions. Through empirical and theoretical analysis, we demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models while avoiding the generation of invalid designs through pre-trained diffusion models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 459,001
2209.07808 | Single Image Deraining via Rain-Steaks Aware Deep Convolutional Neural Network | It is challenging to remove rain-steaks from a single rainy image because the rain steaks are spatially varying in the rainy image. This problem is studied in this paper by combining conventional image processing techniques and deep learning based techniques. An improved weighted guided image filter (iWGIF) is proposed to extract high frequency information from a rainy image. The high frequency information mainly includes rain steaks and noise, and it can guide the rain steaks aware deep convolutional neural network (RSADCNN) to pay more attention to rain steaks. The efficiency and explain-ability of RSADNN are improved. Experiments show that the proposed algorithm significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. It is useful for autonomous navigation in raining conditions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 317,896
2406.10663 | Interpreting Multi-objective Evolutionary Algorithms via Sokoban Level Generation | This paper presents an interactive platform to interpret multi-objective evolutionary algorithms. Sokoban level generation is selected as a showcase for its widespread use in procedural content generation. By balancing the emptiness and spatial diversity of Sokoban levels, we illustrate the improved two-archive algorithm, Two_Arch2, a well-known multi-objective evolutionary algorithm. Our web-based platform integrates Two_Arch2 into an interface that visually and interactively demonstrates the evolutionary process in real-time. Designed to bridge theoretical optimisation strategies with practical game generation applications, the interface is also accessible to both researchers and beginners to multi-objective evolutionary algorithms or procedural content generation on a website. Through dynamic visualisations and interactive gameplay demonstrations, this web-based platform also has potential as an educational tool. | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 464,498
2210.11689 | SLING: Sino Linguistic Evaluation of Large Language Models | To understand what kinds of linguistic knowledge are encoded by pretrained Chinese language models (LMs), we introduce the benchmark of Sino LINGuistics (SLING), which consists of 38K minimal sentence pairs in Mandarin Chinese grouped into 9 high-level linguistic phenomena. Each pair demonstrates the acceptability contrast of a specific syntactic or semantic phenomenon (e.g., The keys are lost vs. The keys is lost), and an LM should assign lower perplexity to the acceptable sentence. In contrast to the CLiMP dataset (Xiang et al., 2021), which also contains Chinese minimal pairs and was created by translating the vocabulary of the English BLiMP dataset, the minimal pairs in SLING are derived primarily by applying syntactic and lexical transformations to naturally-occurring, linguist-annotated sentences from the Chinese Treebank 9.0, thus addressing severe issues in CLiMP's data generation process. We test 18 publicly available pretrained monolingual (e.g., BERT-base-zh, CPM) and multi-lingual (e.g., mT5, XLM) language models on SLING. Our experiments show that the average accuracy for LMs is far below human performance (69.7% vs. 97.1%), while BERT-base-zh achieves the highest accuracy (84.8%) of all tested LMs, even much larger ones. Additionally, we find that most LMs have a strong gender and number (singular/plural) bias, and they perform better on local phenomena than hierarchical ones. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 325,409 |
2408.14478 | Uncertainty Quantification in Alzheimer's Disease Progression Modeling | With the increasing number of patients diagnosed with Alzheimer's Disease, prognosis models have the potential to aid in early disease detection. However, current approaches raise dependability concerns as they do not account for uncertainty. In this work, we compare the performance of Monte Carlo Dropout, Variational Inference, Markov Chain Monte Carlo, and Ensemble Learning trained on 512 patients to predict 4-year cognitive score trajectories with confidence bounds. We show that MC Dropout and MCMC are able to produce well-calibrated, and accurate predictions under noisy training data. | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | false | false | 483,553 |
2411.18677 | MatchDiffusion: Training-free Generation of Match-cuts | Match-cuts are powerful cinematic tools that create seamless transitions between scenes, delivering strong visual and metaphorical connections. However, crafting match-cuts is a challenging, resource-intensive process requiring deliberate artistic planning. In MatchDiffusion, we present the first training-free method for match-cut generation using text-to-video diffusion models. MatchDiffusion leverages a key property of diffusion models: early denoising steps define the scene's broad structure, while later steps add details. Guided by this insight, MatchDiffusion employs "Joint Diffusion" to initialize generation for two prompts from shared noise, aligning structure and motion. It then applies "Disjoint Diffusion", allowing the videos to diverge and introduce unique details. This approach produces visually coherent videos suited for match-cuts. User studies and metrics demonstrate MatchDiffusion's effectiveness and potential to democratize match-cut creation. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 511,970 |
1511.06104 | Semi-supervised Learning for Convolutional Neural Networks via Online Graph Construction | The recent promising achievements of deep learning rely on the large amount of labeled data. Considering the abundance of data on the web, most of them do not have labels at all. Therefore, it is important to improve generalization performance using unlabeled data on supervised tasks with few labeled instances. In this work, we revisit graph-based semi-supervised learning algorithms and propose an online graph construction technique which suits deep convolutional neural network better. We consider an EM-like algorithm for semi-supervised learning on deep neural networks: In forward pass, the graph is constructed based on the network output, and the graph is then used for loss calculation to help update the network by back propagation in the backward pass. We demonstrate the strength of our online approach compared to the conventional ones whose graph is constructed on static but not robust enough feature representations beforehand. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 49,160
cs/0003022 | Hypothetical revision and matter-of-fact supposition | The paper studies the notion of supposition encoded in non-Archimedean conditional probability (and revealed in the acceptance of the so-called indicative conditionals). The notion of qualitative change of view that thus arises is axiomatized and compared with standard notions like AGM and UPDATE. Applications in the following fields are discussed: (1) theory of games and decisions, (2) causal models, (3) non-monotonic logic. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 537,036 |
1802.04422 | A comparative study of fairness-enhancing interventions in machine learning | Computers are increasingly used to make decisions that have significant impact in people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers and predictors have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions. Concretely, we present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures, and a large number of existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservations, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits), indicating that fairness interventions might be more brittle than previously thought. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 90,216