| id (string, 9-16 chars) | title (string, 4-278 chars) | abstract (string, 3-4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0-541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2003.09889 | Audio Impairment Recognition Using a Correlation-Based Feature Representation | Audio impairment recognition is based on finding noise in audio files and categorising the impairment type. Recently, significant performance improvement has been obtained thanks to the usage of advanced deep learning models. However, feature robustness is still an unresolved issue and it is one of the main reasons why we need powerful deep learning architectures. In the presence of a variety of musical styles, hand-crafted features are less efficient in capturing audio degradation characteristics and they are prone to failure when recognising audio impairments and could mistakenly learn musical concepts rather than impairment types. In this paper, we propose a new representation of hand-crafted features that is based on the correlation of feature pairs. We experimentally compare the proposed correlation-based feature representation with a typical raw feature representation used in machine learning and we show superior performance in terms of compact feature dimensionality and improved computational speed in the test stage whilst achieving comparable accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,174 |
2304.01233 | Multi-Modal Perceiver Language Model for Outcome Prediction in Emergency Department | Language modeling has shown impressive progress in generating compelling text with good accuracy and high semantic coherence. An interesting research direction is to augment these powerful models for specific applications using contextual information. In this work, we explore multi-modal language modeling for healthcare applications. We are interested in outcome prediction and patient triage in the hospital emergency department based on text information in chief complaints and vital signs recorded at triage. We adapt Perceiver, a modality-agnostic transformer-based model that has shown promising results in several applications. Since the vital-sign modality is represented in tabular format, we modified the Perceiver position encoding to ensure permutation invariance. We evaluated the multi-modal language model for the task of diagnosis code prediction using the MIMIC-IV ED dataset of 120K visits. In the experimental analysis, we show that multi-modality improves the prediction performance compared with models trained solely on text or vital signs. We identified disease categories for which multi-modality leads to performance improvement and show that for these categories, vital signs have added predictive power. By analyzing the cross-attention layer, we show how multi-modality contributes to model predictions. This work offers interesting insights into the development of multi-modal language models for healthcare applications. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 355,990 |
2006.03873 | Unique properties of adversarially trained linear classifiers on Gaussian data | Machine learning models are vulnerable to adversarial perturbations that, when added to an input, can cause high-confidence misclassifications. The adversarial learning research community has made remarkable progress in the understanding of the root causes of adversarial perturbations. However, most problems that one may consider important to solve for the deployment of machine learning in safety-critical tasks involve high-dimensional complex manifolds that are difficult to characterize and study. It is common to develop adversarially robust learning theory on simple problems, in the hope that insights will transfer to `real world datasets'. In this work, we discuss a setting where this approach fails. In particular, we show that, with a linear classifier, it is always possible to solve a binary classification problem on Gaussian data under arbitrary levels of adversarial corruption during training, and that this property is not observed with non-linear classifiers on the CIFAR-10 dataset. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 180,464 |
2501.02447 | MedSegDiffNCA: Diffusion Models With Neural Cellular Automata for Skin Lesion Segmentation | Denoising Diffusion Models (DDMs) are widely used for high-quality image generation and medical image segmentation but often rely on Unet-based architectures, leading to high computational overhead, especially with high-resolution images. This work proposes three NCA-based improvements for diffusion-based medical image segmentation. First, Multi-MedSegDiffNCA uses a multilevel NCA framework to refine rough noise estimates generated by lower-level NCA models. Second, CBAM-MedSegDiffNCA incorporates channel and spatial attention for improved segmentation. Third, MultiCBAM-MedSegDiffNCA combines these methods with a new RGB channel loss for semantic guidance. Evaluations on lesion segmentation show that MultiCBAM-MedSegDiffNCA matches Unet-based model performance with a Dice score of 87.84% while using 60-110 times fewer parameters, offering a more efficient solution for low-resource medical settings. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 522,488 |
2204.01847 | Bayesian Sequential Stacking Algorithm for Concurrently Designing Molecules and Synthetic Reaction Networks | In the last few years, de novo molecular design using machine learning has made great technical progress, but its practical deployment has not been as successful. This is mostly owing to the cost and technical difficulty of synthesizing such computationally designed molecules. To overcome such barriers, various methods for synthetic route design using deep neural networks have been studied intensively in recent years. However, little progress has been made in designing molecules and their synthetic routes simultaneously. Here, we formulate the problem of simultaneously designing molecules with a desired set of properties and their synthetic routes within the framework of Bayesian inference. The design variables consist of a set of reactants in a reaction network and its network topology. The design space is extremely large because it consists of all combinations of purchasable reactants, often on the order of millions or more. In addition, the designed reaction networks can adopt any topology beyond simple multistep linear reaction routes. To solve this hard combinatorial problem, we present a powerful sequential Monte Carlo algorithm that recursively designs a synthetic reaction network by sequentially building up single-step reactions. In a case study of designing drug-like molecules based on commercially available compounds, the proposed method shows overwhelming performance compared with heuristic combinatorial search methods in terms of computational efficiency, coverage, and novelty with respect to existing compounds. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 289,746 |
2408.13697 | Guided and Fused: Efficient Frozen CLIP-ViT with Feature Guidance and Multi-Stage Feature Fusion for Generalizable Deepfake Detection | The rise of generative models has sparked concerns about image authenticity online, highlighting the urgent need for an effective and general detector. Recent methods leveraging the frozen pre-trained CLIP-ViT model have made great progress in deepfake detection. However, these models often rely on visual-general features directly extracted by the frozen network, which contain excessive information irrelevant to the task, resulting in limited detection performance. To address this limitation, in this paper, we propose an efficient Guided and Fused Frozen CLIP-ViT (GFF), which integrates two simple yet effective modules. The Deepfake-Specific Feature Guidance Module (DFGM) guides the frozen pre-trained model in extracting features specifically for deepfake detection, reducing irrelevant information while preserving its generalization capabilities. The Multi-Stage Fusion Module (FuseFormer) captures low-level and high-level information by fusing features extracted from each stage of the ViT. This dual-module approach significantly improves deepfake detection by fully leveraging CLIP-ViT's inherent advantages. Extensive experiments demonstrate the effectiveness and generalization ability of GFF, which achieves state-of-the-art performance with optimal results in only 5 training epochs. Even when trained on only 4 classes of ProGAN, GFF achieves nearly 99% accuracy on unseen GANs and maintains an impressive 97% accuracy on unseen diffusion models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 483,249 |
2310.17513 | The Expressive Power of Low-Rank Adaptation | Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that leverages low-rank adaptation of weight matrices, has emerged as a prevalent technique for fine-tuning pre-trained models such as large language models and diffusion models. Despite its huge success in practice, the theoretical underpinnings of LoRA have largely remained unexplored. This paper takes the first step to bridge this gap by theoretically analyzing the expressive power of LoRA. We prove that, for fully connected neural networks, LoRA can adapt any model $f$ to accurately represent any smaller target model $\overline{f}$ if LoRA-rank $\geq(\text{width of }f) \times \frac{\text{depth of }\overline{f}}{\text{depth of }f}$. We also quantify the approximation error when LoRA-rank is lower than the threshold. For Transformer networks, we show any model can be adapted to a target model of the same size with rank-$(\frac{\text{embedding size}}{2})$ LoRA adapters. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 403,165 |
2412.08472 | Local Identifiability of Networks with Nonlinear Node Dynamics | We study the identifiability of nonlinear network systems with partial excitation and partial measurement when the network dynamics is linear on the edges and nonlinear on the nodes. We assume that the graph topology and the nonlinear functions at the node level are known, and we aim to identify the weight matrix of the graph. Our main result is to prove that fully-connected layered feed-forward networks are generically locally identifiable by exciting sources and measuring sinks in the class of analytic functions that cross the origin. This holds even when all other nodes remain unexcited and unmeasured and stands in sharp contrast to most findings on network identifiability requiring measurement and/or excitation of each node. The result applies in particular to feed-forward artificial neural networks with no offsets and generalizes previous literature by considering a broader class of functions and topologies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 516,099 |
2405.12252 | Enhanced Deterministic Approximation Algorithm for Non-monotone Submodular Maximization under Knapsack Constraint with Linear Query Complexity | In this work, we consider the Submodular Maximization under Knapsack (SMK) constraint problem over a ground set of size $n$. The problem has recently attracted a lot of attention due to its applications in various domains of combinatorial optimization, artificial intelligence, and machine learning. We improve the approximation factor of the fastest deterministic algorithm from $6+\epsilon$ to $5+\epsilon$ while keeping the best query complexity of $O(n)$, where $\epsilon >0$ is a constant parameter. Our technique is based on optimizing the performance of two components: the threshold greedy subroutine and the building of two disjoint sets as candidate solutions. Moreover, by carefully analyzing the cost of candidate solutions, we obtain a tighter approximation factor. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 455,459 |
2105.11241 | Generation of COVID-19 Chest CT Scan Images using Generative Adversarial Networks | COVID-19, caused by the novel coronavirus SARS-CoV-2, is a contagious viral disease that has been rapidly spreading across the globe. Testing and isolating people quickly and efficiently is very important to reduce its spread. According to some studies, chest CT outperforms RT-PCR lab testing, the current standard, when diagnosing COVID-19 patients. Due to this, computer vision researchers have developed various deep learning systems that can predict COVID-19 from a chest CT scan with a certain degree of accuracy. The accuracy of these systems is limited, since deep learning neural networks such as CNNs (Convolutional Neural Networks) need a significantly large quantity of training data in order to produce good quality results. Since the disease is relatively recent and more focus has been on CXR (Chest X-Ray) images, the available chest CT scan image dataset is much smaller. We propose a GAN-based method to generate synthetic chest CT images of both COVID-19-positive and COVID-19-negative patients. Using a pre-built predictive model, we concluded that around 40% of the generated images are correctly predicted as COVID-19 positive. The dataset thus generated can be used to train a CNN-based classifier which can help determine COVID-19 in a patient with greater accuracy. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 236,638 |
2407.12858 | Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey) | With the ongoing rapid adoption of Artificial Intelligence (AI)-based systems in high-stakes domains, ensuring the trustworthiness, safety, and observability of these systems has become crucial. It is essential to evaluate and monitor AI systems not only for accuracy and quality-related metrics but also for robustness, bias, security, interpretability, and other responsible AI dimensions. We focus on large language models (LLMs) and other generative AI models, which present additional challenges such as hallucinations, harmful and manipulative content, and copyright infringement. In this survey article accompanying our KDD 2024 tutorial, we highlight a wide range of harms associated with generative AI systems, and survey state-of-the-art approaches (along with open challenges) to address these harms. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 474,130 |
2202.06913 | FOLD-RM: A Scalable, Efficient, and Explainable Inductive Learning Algorithm for Multi-Category Classification of Mixed Data | FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data. It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks while maintaining efficiency and scalability. The FOLD-RM algorithm is competitive in performance with widely used, state-of-the-art algorithms such as XGBoost and multi-layer perceptrons (MLPs); however, unlike these algorithms, FOLD-RM produces an explainable model. FOLD-RM outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also provides human-friendly explanations for its predictions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 280,370 |
1906.05057 | Selecting stock pairs for pairs trading while incorporating lead-lag relationship | Pairs trading is carried out in the financial market to earn huge profits from a known equilibrium relation between pairs of stocks. In financial markets, it is seldom seen that stock pairs are correlated at a particular lead or lag. This lead-lag relationship has been empirically studied in various financial markets. Earlier research has suggested various measures for identifying the best pairs for pairs trading, but these do not consider the lead-lag effect. The present study proposes a new distance measure which incorporates the lead-lag relationship between the stocks while selecting the best pairs for pairs trading. Further, the lead-lag value between the stocks is allowed to vary continuously over time. The proposed measure's importance is showcased through experiments on two different datasets, one corresponding to Indian companies and another corresponding to American companies. When the proposed measure is combined with the SSD measure, i.e., when pairs are identified by optimising both measures, the selected pairs consistently generate the best profit compared to all other measures. Finally, possible generalisation and extension of the proposed distance measure are discussed. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 134,915 |
2102.09013 | A Visibility Roadmap Sampling Approach for a Multi-Robot Visibility-Based Pursuit-Evasion Problem | Given a two-dimensional polygonal space, the multi-robot visibility-based pursuit-evasion problem tasks several pursuer robots with the goal of establishing visibility with an arbitrarily fast evader. The best known complete algorithm for this problem takes time doubly exponential in the number of robots. However, sampling-based techniques have shown promise in generating feasible solutions in these scenarios. One of the primary drawbacks to employing existing sampling-based methods is that existing algorithms have long execution times and high failure rates for complex environments. This paper addresses that limitation by proposing a new algorithm that takes an environment as its input and returns a joint motion strategy which ensures that the evader is captured by one of the pursuers. Starting with a single pursuer, we sequentially construct Sample-Generated Pursuit-Evasion Graphs to create such a joint motion strategy. This sequential graph structure ensures that our algorithm will always terminate with a solution, regardless of the complexity of the environment. We describe an implementation of this algorithm and present quantitative results that show significant improvement in comparison to the existing algorithm. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 220,642 |
2411.10580 | Gradient-Based Stochastic Extremum-Seeking Control for Multivariable Systems with Distinct Input Delays | This paper addresses the design and analysis of a multivariable gradient-based stochastic extremum-seeking control method for multi-input systems with arbitrary input delays. The approach accommodates systems with distinct time delays across input channels and achieves local exponential stability of the closed-loop system, guaranteeing convergence to a small neighborhood around the extremum point. By incorporating phase compensation for dither signals and a novel predictor-feedback mechanism with averaging-based estimates of the unknown gradient and Hessian, the proposed method overcomes traditional challenges associated with arbitrary, distinct input delays. Unlike previous work on deterministic multiparameter extremum-seeking with distinct input delays, this stability analysis is achieved without using backstepping transformations, simplifying the predictor design and enabling a more straightforward implementation. Specifically, the direct application of Artstein's reduction approach results in delay- and system-dimension-independent convergence rates, enhancing practical applicability. A numerical example illustrates the robust performance and advantages of the proposed delay-compensated stochastic extremum-seeking method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 508,700 |
2402.10365 | Deep Spectral Meshes: Multi-Frequency Facial Mesh Processing with Graph Neural Networks | With the rising popularity of virtual worlds, the importance of data-driven parametric models of 3D meshes has grown rapidly. Numerous applications, such as computer vision, procedural generation, and mesh editing, rely heavily on these models. However, current approaches do not allow for independent editing of deformations at different frequency levels. They also do not benefit from representing deformations at different frequencies with dedicated representations, which would better expose their properties and improve the generated meshes' geometric and perceptual quality. In this work, spectral meshes are introduced as a method to decompose mesh deformations into low-frequency and high-frequency deformations. These features of low- and high-frequency deformations are used for representation learning with graph convolutional networks. A parametric model for 3D facial mesh synthesis is built upon the proposed framework, exposing user parameters that control disentangled high- and low-frequency deformations. Independent control of deformations at different frequencies and generation of plausible synthetic examples are mutually exclusive objectives. A Conditioning Factor is introduced to balance these objectives. Our model takes further advantage of spectral partitioning by representing different frequency levels with disparate, more suitable representations. Low frequencies are represented with standardised Euclidean coordinates, and high frequencies with a normalised deformation representation (DR). This paper investigates applications of our proposed approach in mesh reconstruction, mesh interpolation, and multi-frequency editing. It is demonstrated that our method improves the overall quality of generated meshes on most datasets when considering both the $L_1$ norm and perceptual Dihedral Angle Mesh Error (DAME) metrics. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 429,930 |
2210.17416 | Efficient Similarity-based Passive Filter Pruning for Compressing CNNs | Convolution neural networks (CNNs) have shown great success in various applications. However, the computational complexity and memory storage of CNNs are a bottleneck for their deployment on resource-constrained devices. Recent efforts towards reducing the computation cost and the memory overhead of CNNs involve similarity-based passive filter pruning methods. Similarity-based passive filter pruning methods compute a pairwise similarity matrix for the filters and eliminate a few similar filters to obtain a small pruned CNN. However, the computational complexity of computing the pairwise similarity matrix is high, particularly when a convolutional layer has many filters. To reduce the computational complexity of obtaining the pairwise similarity matrix, we propose an efficient method in which the complete pairwise similarity matrix is approximated from only a few of its columns using a Nyström approximation method. The proposed efficient similarity-based passive filter pruning method is 3 times faster and gives the same accuracy at the same reduction in computations compared to the similarity-based pruning method that computes a complete pairwise similarity matrix. Apart from this, the proposed efficient similarity-based pruning method performs similarly to or better than the existing norm-based pruning methods. The efficacy of the proposed pruning method is evaluated on CNNs such as the DCASE 2021 Task 1A baseline network and a VGGish network designed for acoustic scene classification. | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 327,682 |
2209.03013 | Quantitative probing: Validating causal models using quantitative domain knowledge | We present quantitative probing as a model-agnostic framework for validating causal models in the presence of quantitative domain knowledge. The method is constructed as an analogue of the train/test split in correlation-based machine learning and as an enhancement of current causal validation strategies that are consistent with the logic of scientific discovery. The effectiveness of the method is illustrated using Pearl's sprinkler example, before a thorough simulation-based investigation is conducted. Limits of the technique are identified by studying exemplary failing scenarios, which are furthermore used to propose a list of topics for future research and improvements of the presented version of quantitative probing. The code for integrating quantitative probing into causal analysis, as well as the code for the presented simulation-based studies of the effectiveness of quantitative probing is provided in two separate open-source Python packages. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 316,374 |
1904.11834 | DeepFreak: Learning Crystallography Diffraction Patterns with Automated Machine Learning | Serial crystallography is the field of science that studies the structure and properties of crystals via diffraction patterns. In this paper, we introduce a new serial crystallography dataset comprised of real and synthetic images; the synthetic images are generated through the use of a simulator that is both scalable and accurate. The resulting dataset is called DiffraNet, and it is composed of 25,457 512x512 grayscale labeled images. We explore several computer vision approaches for classification on DiffraNet such as standard feature extraction algorithms associated with Random Forests and Support Vector Machines but also an end-to-end CNN topology dubbed DeepFreak tailored to work on this new dataset. All implementations are publicly available and have been fine-tuned using off-the-shelf AutoML optimization tools for a fair comparison. Our best model achieves 98.5% accuracy on synthetic images and 94.51% accuracy on real images. We believe that the DiffraNet dataset and its classification methods will have a long-term positive impact in accelerating discoveries in many disciplines, including chemistry, geology, biology, materials science, metallurgy, and physics. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 128,961 |
2103.03629 | Self-supervised Mean Teacher for Semi-supervised Chest X-ray Classification | The training of deep learning models generally requires a large amount of annotated data for effective convergence and generalisation. However, obtaining high-quality annotations is a laborious and expensive process due to the need for expert radiologists to perform the labelling task. The study of semi-supervised learning in medical image analysis is then of crucial importance given that it is much less expensive to obtain unlabelled images than to acquire images labelled by expert radiologists. Essentially, semi-supervised methods leverage large sets of unlabelled data to enable better training convergence and generalisation than using only the small set of labelled images. In this paper, we propose Self-supervised Mean Teacher for Semi-supervised (S$^2$MTS$^2$) learning that combines self-supervised mean-teacher pre-training with semi-supervised fine-tuning. The main innovation of S$^2$MTS$^2$ is the self-supervised mean-teacher pre-training based on joint contrastive learning, which uses an infinite number of pairs of positive query and key features to improve the mean-teacher representation. The model is then fine-tuned using the exponential moving average teacher framework trained with semi-supervised learning. We validate S$^2$MTS$^2$ on the multi-label classification problems from Chest X-ray14 and CheXpert, and the multi-class classification from ISIC2018, where we show that it outperforms the previous SOTA semi-supervised learning methods by a large margin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 223,350 |
0807.3566 | Stabilizer Quantum Codes: A Unified View based on Forney-style Factor Graphs | Quantum error-correction codes (QECCs) are a vital ingredient of quantum computation and communication systems. In that context it is highly desirable to design QECCs that can be represented by graphical models which possess a structure that enables efficient and close-to-optimal iterative decoding. In this paper we focus on stabilizer QECCs, a class of QECCs whose construction is rendered non-trivial by the fact that the stabilizer label code, a code that is associated with a stabilizer QECC, has to satisfy a certain self-orthogonality condition. In order to design graphical models of stabilizer label codes that satisfy this condition, we extend a duality result for Forney-style factor graphs (FFGs) to the stabilizer label code framework. This allows us to formulate a simple FFG design rule for constructing stabilizer label codes, a design rule that unifies several earlier stabilizer label code constructions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,102 |
2106.11960 | Variance-Aware Off-Policy Evaluation with Linear Function Approximation | We study the off-policy evaluation (OPE) problem in reinforcement learning with linear function approximation, which aims to estimate the value function of a target policy based on the offline data collected by a behavior policy. We propose to incorporate the variance information of the value function to improve the sample efficiency of OPE. More specifically, for time-inhomogeneous episodic linear Markov decision processes (MDPs), we propose an algorithm, VA-OPE, which uses the estimated variance of the value function to reweight the Bellman residual in Fitted Q-Iteration. We show that our algorithm achieves a tighter error bound than the best-known result. We also provide a fine-grained characterization of the distribution shift between the behavior policy and the target policy. Extensive numerical experiments corroborate our theory. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 242,574 |
2010.04084 | EDNA-Covid: A Large-Scale Covid-19 Tweets Dataset Collected with the EDNA Streaming Toolkit | The Covid-19 pandemic has fundamentally altered many facets of our lives. With nationwide lockdowns and stay-at-home advisories, conversations about the pandemic have naturally moved to social networks, e.g. Twitter. This affords an unprecedented insight into the evolution of social discourse in the presence of a long-running destabilizing factor such as a pandemic with the high-volume, high-velocity, high-noise Covid-19 Twitter feed. However, real-time information extraction from such a data stream requires a fault-tolerant streaming infrastructure to perform the non-trivial integration of heterogeneous data sources from news organizations, social feeds, and authoritative medical organizations like the CDC. To address this, we present (i) the EDNA streaming toolkit for consuming and processing streaming data, and (ii) EDNA-Covid, a multilingual, large-scale dataset of coronavirus-related tweets collected with EDNA since January 25, 2020. EDNA-Covid includes, at the time of this publication, over 600M tweets from around the world in over 10 languages. We release both the EDNA toolkit and the EDNA-Covid dataset to the public so that they can be used to extract valuable insights on this extraordinary social event. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 199,617 |
2104.10377 | Dual Head Adversarial Training | Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications. A number of defense methods have been proposed to train robust DNNs resistant to adversarial attacks, among which adversarial training has so far demonstrated the most promising results. However, recent studies have shown that there exists an inherent tradeoff between accuracy and robustness in adversarially-trained DNNs. In this paper, we propose a novel technique Dual Head Adversarial Training (DH-AT) to further improve the robustness of existing adversarial training methods. Different from existing improved variants of adversarial training, DH-AT modifies both the architecture of the network and the training strategy to seek more robustness. Specifically, DH-AT first attaches a second network head (or branch) to one intermediate layer of the network, then uses a lightweight convolutional neural network (CNN) to aggregate the outputs of the two heads. The training strategy is also adapted to reflect the relative importance of the two heads. We empirically show, on multiple benchmark datasets, that DH-AT can bring notable robustness improvements to existing adversarial training methods. Compared with TRADES, one state-of-the-art adversarial training method, our DH-AT can improve the robustness by 3.4% against PGD40 and 2.3% against AutoAttack, and also improve the clean accuracy by 1.8%. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 231,559 |
2101.03235 | Key Phrase Extraction & Applause Prediction | With the increase in content availability over the internet, it is very difficult to get noticed. It has become the utmost priority of blog writers to get feedback on their creations so as to be confident about the impact of their articles. We are training a machine learning model to learn popular article styles, in the form of vector space representations using various word embeddings, and their popularity based on claps and tags. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 214,858
2404.02763 | Impact and Integration of Mini Photovoltaic Systems on Electric Power
Distribution Grids | This work analyzes the impact of varying concentrations of mini-photovoltaic (MPV) systems, often referred to as balcony power plants, on the stability and control of the low-voltage (LV) grid. Through local energy use and potentially reversed meter operation, we focus on how these MPV systems transform grid dynamics and elucidate consumer participation in the energy transition. We scrutinize the effects of these systems on power quality, power loss, transformer loading, and the functioning of other inverter-based voltage-regulating distributed energy resources (DER). Owing to the rise in renewable output from MPVs, the emerging bidirectional energy flow poses challenges for distribution grids abundant with DERs. Our case studies, featuring sensitivity analysis and comparison of distributed and decentralized DER control strategies, highlight that autonomous inverters are essential for providing ancillary services. With the growing use of battery energy storage (BES) systems in LV grids for these services, the need for adaptable DER control strategies becomes increasingly evident. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 443,995
2209.07699 | Graph Contrastive Learning with Cross-view Reconstruction | Among different existing graph self-supervised learning strategies, graph contrastive learning (GCL) has been one of the most prevalent approaches to this problem. Despite the remarkable performance those GCL methods have achieved, existing GCL methods that heavily depend on various manually designed augmentation techniques still struggle to alleviate the feature suppression issue without risking losing task-relevant information. Consequently, the learned representation is either brittle or unilluminating. In light of this, we introduce the Graph Contrastive Learning with Cross-View Reconstruction (GraphCV), which follows the information bottleneck principle to learn minimal yet sufficient representation from graph data. Specifically, GraphCV aims to elicit the predictive (useful for downstream instance discrimination) and other non-predictive features separately. Except for the conventional contrastive loss which guarantees the consistency and sufficiency of the representation across different augmentation views, we introduce a cross-view reconstruction mechanism to pursue the disentanglement of the two learned representations. Besides, an adversarial view perturbed from the original view is added as the third view for the contrastive loss to guarantee the intactness of the global semantics and improve the representation robustness. We empirically demonstrate that our proposed model outperforms the state-of-the-art on graph classification task over multiple benchmark datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 317,852 |
2111.06923 | ARC Nav -- A 3D Navigation Stack for Autonomous Robots | Popular navigation stacks implemented on top of open-source frameworks such as ROS(Robot Operating System) and ROS2 represent the robot workspace using a discretized 2D occupancy grid. This method, while requiring less computation, restricts the use of such navigation stacks to wheeled robots navigating on flat surfaces. In this paper, we present a navigation stack that uses a volumetric representation of the robot workspace, and hence can be extended to aerial and legged robots navigating through uneven terrain. Additionally, we present a new sampling-based motion planning algorithm which introduces a bi-directional approach to the Batch Informed Trees (BIT*) motion planning algorithm, whilst wrapping it with a strategy switching approach in order to reduce the initial time taken to find a path, in addition to the time taken to find the shortest path. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 266,216 |
2410.23097 | A proof of a conjecture on trivariate permutations | In this note we show (for a large enough dimension of the underlying field) a conjecture of [C. Beierle, C. Carlet, G. Leander, L. Perrin, {\em A further study of quadratic APN permutations in dimension nine}, Finite Fields Appl. 81 (2022), 102049] on a trivariate permutation. This function is a global representation of two new sporadic quadratic APN permutations in dimension $9$ found by [C. Beierle, G. Leander, {\em New instances of quadratic APN functions}, IEEE Trans. Inf. Theory 68(1) (2022), 670--678]. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 503,899 |
cs/0511047 | The Secret Key-Private Key Capacity Region for Three Terminals | We consider a model for secrecy generation, with three terminals, by means of public interterminal communication, and examine the problem of characterizing all the rates at which all three terminals can generate a ``secret key,'' and -- simultaneously -- two designated terminals can generate a ``private key'' which is effectively concealed from the remaining terminal; both keys are also concealed from an eavesdropper that observes the public communication. Inner and outer bounds for the ``secret key--private key capacity region'' are derived. Under a certain special condition, these bounds coincide to yield the (exact) secret key--private key capacity region. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 539,076 |
2403.01139 | ParallelPARC: A Scalable Pipeline for Generating Natural-Language
Analogies | Analogy-making is central to human cognition, allowing us to adapt to novel situations -- an ability that current AI systems still lack. Most analogy datasets today focus on simple analogies (e.g., word analogies); datasets including complex types of analogies are typically manually curated and very small. We believe that this holds back progress in computational analogy. In this work, we design a data generation pipeline, ParallelPARC (Parallel Paragraph Creator), leveraging state-of-the-art Large Language Models (LLMs) to create complex, paragraph-based analogies, as well as distractors, both simple and challenging. We demonstrate our pipeline and create ProPara-Logy, a dataset of analogies between scientific processes. We publish a gold-set, validated by humans, and a silver-set, generated automatically. We test LLMs' and humans' analogy recognition in binary and multiple-choice settings, and find that humans outperform the best models (~13% gap) after light supervision. We demonstrate that our silver-set is useful for training models. Lastly, we show that challenging distractors confuse LLMs, but not humans. We hope our pipeline will encourage research in this emerging field. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 434,277
2308.12680 | Master-slave Deep Architecture for Top-K Multi-armed Bandits with
Non-linear Bandit Feedback and Diversity Constraints | We propose a novel master-slave architecture to solve the top-$K$ combinatorial multi-armed bandits problem with non-linear bandit feedback and diversity constraints, which, to the best of our knowledge, is the first combinatorial bandits setting considering diversity constraints under bandit feedback. Specifically, to efficiently explore the combinatorial and constrained action space, we introduce six slave models with distinguished merits to generate diversified samples well balancing rewards and constraints as well as efficiency. Moreover, we propose teacher learning based optimization and the policy co-training technique to boost the performance of the multiple slave models. The master model then collects the elite samples provided by the slave models and selects the best sample estimated by a neural contextual UCB-based network to make a decision with a trade-off between exploration and exploitation. Thanks to the elaborate design of slave models, the co-training mechanism among slave models, and the novel interactions between the master and slave models, our approach significantly surpasses existing state-of-the-art algorithms in both synthetic and real datasets for recommendation tasks. The code is available at: \url{https://github.com/huanghanchi/Master-slave-Algorithm-for-Top-K-Bandits}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 387,637 |
1905.03969 | Legal Judgment Prediction via Multi-Perspective Bi-Feedback Network | The Legal Judgment Prediction (LJP) is to determine judgment results based on the fact descriptions of the cases. LJP usually consists of multiple subtasks, such as applicable law articles prediction, charges prediction, and the term of the penalty prediction. These multiple subtasks have topological dependencies, the results of which affect and verify each other. However, existing methods use dependencies of results among multiple subtasks inefficiently. Moreover, for cases with similar descriptions but different penalties, current methods cannot predict accurately because the word collocation information is ignored. In this paper, we propose a Multi-Perspective Bi-Feedback Network with the Word Collocation Attention mechanism based on the topology structure among subtasks. Specifically, we design a multi-perspective forward prediction and backward verification framework to utilize result dependencies among multiple subtasks effectively. To distinguish cases with similar descriptions but different penalties, we integrate word collocations features of fact descriptions into the network via an attention mechanism. The experimental results show our model achieves significant improvements over baselines on all prediction tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 130,337 |
1903.00900 | Competitive Bridge Bidding with Deep Neural Networks | The game of bridge consists of two stages: bidding and playing. While playing has proved to be relatively easy for computer programs, bidding is very challenging. During the bidding stage, each player, knowing only his/her own cards, needs to exchange information with his/her partner and interfere with opponents at the same time. Existing methods for solving perfect-information games cannot be directly applied to bidding. Most bridge programs are based on human-designed rules, which, however, cannot cover all situations and are usually ambiguous and even conflicting with each other. In this paper, we, for the first time, propose a competitive bidding system based on deep learning techniques, which exhibits two novelties. First, we design a compact representation to encode the private and public information available to a player for bidding. Second, based on the analysis of the impact of other players' unknown cards on one's final rewards, we design two neural networks to deal with imperfect information, the first one inferring the cards of the partner and the second one taking the outputs of the first one as part of its input to select a bid. Experimental results show that our bidding system outperforms the top rule-based program. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 123,130
2412.17751 | Group Testing with General Correlation Using Hypergraphs | Group testing, a problem with diverse applications across multiple disciplines, traditionally assumes independence across nodes' states. Recent research, however, focuses on real-world scenarios that often involve correlations among nodes, challenging the simplifying assumptions made in existing models. In this work, we consider a comprehensive model for arbitrary statistical correlation among nodes' states. To capture and leverage these correlations effectively, we model the problem by hypergraphs, inspired by [GLS22], augmented by a probability mass function on the hyper-edges. Using this model, we first design a novel greedy adaptive algorithm capable of conducting informative tests and dynamically updating the distribution. Performance analysis provides upper bounds on the number of tests required, which depend solely on the entropy of the underlying probability distribution and the average number of infections. We demonstrate that the algorithm recovers or improves upon all previously known results for group testing settings with correlation. Additionally, we provide families of graphs where the algorithm is order-wise optimal and give examples where the algorithm or its analysis is not tight. We then generalize the proposed framework of group testing with general correlation in two directions, namely noisy group testing and semi-non-adaptive group testing. In both settings, we provide novel theoretical bounds on the number of tests required. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 520,101 |
2111.14745 | A Simple Long-Tailed Recognition Baseline via Vision-Language Model | The visual world naturally exhibits a long-tailed distribution of open classes, which poses great challenges to modern visual systems. Existing approaches either perform class re-balancing strategies or directly improve network modules to address the problem. However, they still train models with a finite set of predefined labels, limiting their supervision information and restricting their transferability to novel instances. Recent advances in large-scale contrastive visual-language pretraining shed light on a new pathway for visual recognition. With open-vocabulary supervisions, pretrained contrastive vision-language models learn powerful multimodal representations that are promising to handle data deficiency and unseen concepts. By calculating the semantic similarity between visual and text inputs, visual recognition is converted to a vision-language matching problem. Inspired by this, we propose BALLAD to leverage contrastive vision-language models for long-tailed recognition. We first continue pretraining the vision-language backbone through contrastive learning on a specific long-tailed target dataset. Afterward, we freeze the backbone and further employ an additional adapter layer to enhance the representations of tail classes on balanced training samples built with re-sampling strategies. Extensive experiments have been conducted on three popular long-tailed recognition benchmarks. As a result, our simple and effective approach sets new state-of-the-art performances and outperforms competitive baselines by a large margin. Code is released at https://github.com/gaopengcuhk/BALLAD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 268,688
2411.13323 | Are Large Language Models Memorizing Bug Benchmarks? | Large Language Models (LLMs) have become integral to various software engineering tasks, including code generation, bug detection, and repair. To evaluate model performance in these domains, numerous bug benchmarks containing real-world bugs from software projects have been developed. However, a growing concern within the software engineering community is that these benchmarks may not reliably reflect true LLM performance due to the risk of data leakage. Despite this concern, limited research has been conducted to quantify the impact of potential leakage. In this paper, we systematically evaluate popular LLMs to assess their susceptibility to data leakage from widely used bug benchmarks. To identify potential leakage, we use multiple metrics, including a study of benchmark membership within commonly used training datasets, as well as analyses of negative log-likelihood and n-gram accuracy. Our findings show that certain models, in particular codegen-multi, exhibit significant evidence of memorization in widely used benchmarks like Defects4J, while newer models trained on larger datasets like LLaMa 3.1 exhibit limited signs of leakage. These results highlight the need for careful benchmark selection and the adoption of robust metrics to adequately assess models' capabilities. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 509,753
2208.01448 | AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq
Model | In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. In particular, we train a 20 billion parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) and show that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming a much larger 540B PaLM decoder model. AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for low-resource languages, across almost all language pairs supported by the model (Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu) on Flores-101 dataset. We also show in zero-shot setting, AlexaTM 20B outperforms GPT3 (175B) on SuperGLUE and SQuADv2 datasets and provides SOTA performance on multilingual tasks such as XNLI, XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case for seq2seq models as a powerful alternative to decoder-only models for Large-scale Language Model (LLM) training. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 311,167 |
2001.11920 | On the Coverage Performance of Boolean-Poisson Cluster Models for
Wireless Sensor Networks | In this paper, we consider wireless sensor networks (WSNs) with sensor nodes exhibiting clustering in their deployment. We model the coverage region of such WSNs by Boolean Poisson cluster models (BPCM), where sensor nodes' locations follow a Poisson cluster process (PCP) and each sensor has an independent sensing range around it. We consider two variants of PCP, in particular the Matérn and Thomas cluster processes, to form Boolean Matérn and Thomas cluster models. We first derive the capacity functional of these models. Using the derived expressions, we compute the sensing probability of an event and compare it with the sensing probability of a WSN modeled by a Boolean Poisson model where sensors are deployed according to a Poisson point process. We also derive the power required for each cluster to collect data from all of its sensors for the three considered WSNs. We show that a BPCM WSN has a lower power requirement in comparison to the Boolean Poisson WSN, but it suffers from lower coverage, leading to a trade-off between per-cluster power requirement and sensing performance. A cluster process with desired clustering may provide better coverage while maintaining low power requirements. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 162,200
2108.13838 | The Interaction Flow Editor: A New Human-Robot Interaction Rapid
Prototyping Interface | Human-robot interaction can be regarded as a flow between users and robots. Designing good interaction flows takes a lot of effort and needs to be field tested. Unfortunately, the interaction flow design process is often very disjointed, with users experiencing prototypes, designers forming those prototypes, and developers implementing them as independent processes. In this paper, we present the Interaction Flow Editor (IFE), a new human-robot interaction prototyping tool that enables everyday users to create and modify their own interactions, while still providing a full suite of features that is powerful enough for developers and designers to create complex interactions. We also discuss the Flow Engine, a flexible and adaptable framework for executing robot interaction flows authored through the IFE. Finally, we present our case study results that demonstrate how older adults, aged 70 and above, can design and iterate interactions in real-time on a robot using the IFE. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 252,922
2212.09447 | Improving Pre-Trained Weights Through Meta-Heuristics Fine-Tuning | Machine Learning algorithms have been extensively researched throughout the last decade, leading to unprecedented advances in a broad range of applications, such as image classification and reconstruction, object recognition, and text categorization. Nonetheless, most Machine Learning algorithms are trained via derivative-based optimizers, such as Stochastic Gradient Descent, leading to possible entrapment in local optima and inhibiting them from achieving proper performance. A bio-inspired alternative to traditional optimization techniques, denoted meta-heuristics, has received significant attention due to its simplicity and ability to avoid entrapment in local optima. In this work, we propose to use meta-heuristic techniques to fine-tune pre-trained weights, exploring additional regions of the search space and improving their effectiveness. The experimental evaluation comprises two classification tasks (image and text) and is assessed under four literature datasets. Experimental results show nature-inspired algorithms' capacity in exploring the neighborhood of pre-trained weights, achieving superior results to their counterpart pre-trained architectures. Additionally, a thorough analysis of distinct architectures, such as Multi-Layer Perceptron and Recurrent Neural Networks, attempts to visualize and provide more precise insights into the most critical weights to be fine-tuned in the learning process. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 337,113
2406.19898 | Paraphrase Types Elicit Prompt Engineering Capabilities | Much of the success of modern language models depends on finding a suitable prompt to instruct the model. Until now, it has been largely unknown how variations in the linguistic expression of prompts affect these models. This study systematically and empirically evaluates which linguistic features influence models through paraphrase types, i.e., different linguistic changes at particular positions. We measure behavioral changes for five models across 120 tasks and six families of paraphrases (i.e., morphology, syntax, lexicon, lexico-syntax, discourse, and others). We also control for other prompt engineering factors (e.g., prompt length, lexical diversity, and proximity to training data). Our results show a potential for language models to improve tasks when their prompts are adapted in specific paraphrase types (e.g., 6.7% median gain in Mixtral 8x7B; 5.5% in LLaMA 3 8B). In particular, changes in morphology and lexicon, i.e., the vocabulary used, showed promise in improving prompts. These findings contribute to developing more robust language models capable of handling variability in linguistic expression. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 468,605 |
2302.03477 | Explainable Action Prediction through Self-Supervision on Scene Graphs | This work explores scene graphs as a distilled representation of high-level information for autonomous driving, applied to future driver-action prediction. Given the scarcity and strong imbalance of data samples, we propose a self-supervision pipeline to infer representative and well-separated embeddings. Key aspects are interpretability and explainability; as such, we embed in our architecture attention mechanisms that can create spatial and temporal heatmaps on the scene graphs. We evaluate our system on the ROAD dataset against a fully-supervised approach, showing the superiority of our training regime. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 344,348 |
2301.08019 | Identification, explanation and clinical evaluation of hospital patient
subtypes | We present a pipeline in which unsupervised machine learning techniques are used to automatically identify subtypes of hospital patients admitted between 2017 and 2021 in a large UK teaching hospital. With the use of state-of-the-art explainability techniques, the identified subtypes are interpreted and assigned clinical meaning. In parallel, clinicians assessed intra-cluster similarities and inter-cluster differences of the identified patient subtypes within the context of their clinical knowledge. By confronting the outputs of both automatic and clinician-based explanations, we aim to highlight the mutual benefit of combining machine learning techniques with clinical expertise. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 341,073 |
1509.02447 | Efficient Structured Matrix Rank Minimization | We study the problem of finding structured low-rank matrices using nuclear norm regularization where the structure is encoded by a linear map. In contrast to most known approaches for linearly structured rank minimization, we do not (a) use the full SVD, nor (b) resort to augmented Lagrangian techniques, nor (c) solve linear systems per iteration. Instead, we formulate the problem differently so that it is amenable to a generalized conditional gradient method, which results in a practical improvement with low per iteration computational cost. Numerical results show that our approach significantly outperforms state-of-the-art competitors in terms of running time, while effectively recovering low rank solutions in stochastic system realization and spectral compressed sensing problems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 46,731 |
1505.06307 | Time Robustness in MTL and Expressivity in Hybrid System Falsification
(Extended Version) | Building on the work by Fainekos and Pappas and the one by Donze and Maler, we introduce AvSTL, an extension of metric interval temporal logic by averaged temporal operators. Its expressivity in capturing both space and time robustness helps solve falsification problems (i.e., searching for a critical path in hybrid system models); it does so by communicating a designer's intention more faithfully to the stochastic optimization engine employed in a falsification solver. We also introduce a sliding window-like algorithm that keeps the cost of computing truth/robustness values tractable. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 43,407
2007.02683 | Depthwise Separable Convolutions Versus Recurrent Neural Networks for
Monaural Singing Voice Separation | Recent approaches for music source separation are almost exclusively based on deep neural networks, mostly employing recurrent neural networks (RNNs). Although RNNs are in many cases superior to other types of deep neural networks for sequence processing, they are known to have specific difficulties in training and parallelization, especially for the typically long sequences encountered in music source separation. In this paper we present a use-case of replacing RNNs with depth-wise separable (DWS) convolutions, which are a lightweight and faster variant of the typical convolutions. We focus on singing voice separation, employing an RNN architecture, and we replace the RNNs with DWS convolutions (DWS-CNNs). We conduct an ablation study and examine the effect of the number of channels and layers of DWS-CNNs on the source separation performance, by utilizing the standard metrics of signal-to-artifacts, signal-to-interference, and signal-to-distortion ratio. Our results show that replacing RNNs with DWS-CNNs yields an improvement of 1.20, 0.06, and 0.37 dB, respectively, while using only 20.57% of the amount of parameters of the RNN architecture. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 185,819
2111.03576 | Investigation of Topic Modelling Methods for Understanding the Reports
of the Mining Projects in Queensland | In the mining industry, many reports are generated in the project management process. These past documents are a great resource of knowledge for future success. However, it would be a tedious and challenging task to retrieve the necessary information if the documents are unorganized and unstructured. Document clustering is a powerful approach to cope with this problem, and many methods have been introduced in past studies. Nonetheless, there is no silver bullet that can perform the best for all types of documents. Thus, exploratory studies are required to apply the clustering methods to new datasets. In this study, we will investigate multiple topic modelling (TM) methods. The objectives are finding the appropriate approach for the mining project reports using the dataset of the Geological Survey of Queensland, Department of Resources, Queensland Government, and understanding the contents to get the idea of how to organise them. Three TM methods, Latent Dirichlet Allocation (LDA), Nonnegative Matrix Factorization (NMF), and Nonnegative Tensor Factorization (NTF), are compared statistically and qualitatively. After the evaluation, we conclude that LDA performs the best for the dataset; however, the possibility remains that the other methods could be adopted with some improvements. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 265,215
2111.03412 | Dual Parameterization of Sparse Variational Gaussian Processes | Sparse variational Gaussian process (SVGP) methods are a common choice for non-conjugate Gaussian process inference because of their computational benefits. In this paper, we improve their computational efficiency by using a dual parameterization where each data example is assigned dual parameters, similarly to site parameters used in expectation propagation. Our dual parameterization speeds up inference using natural gradient descent, and provides a tighter evidence lower bound for hyperparameter learning. The approach has the same memory cost as the current SVGP methods, but it is faster and more accurate. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,158 |
2210.10090 | How to Boost Face Recognition with StyleGAN? | State-of-the-art face recognition systems require vast amounts of labeled training data. Given the priority of privacy in face recognition applications, the data is limited to celebrity web crawls, which have issues such as limited numbers of identities. On the other hand, self-supervised revolution in the industry motivates research on the adaptation of related techniques to facial recognition. One of the most popular practical tricks is to augment the dataset by the samples drawn from generative models while preserving the identity. We show that a simple approach based on fine-tuning pSp encoder for StyleGAN allows us to improve upon the state-of-the-art facial recognition and performs better compared to training on synthetic face identities. We also collect large-scale unlabeled datasets with controllable ethnic constitution -- AfricanFaceSet-5M (5 million images of different people) and AsianFaceSet-3M (3 million images of different people) -- and we show that pretraining on each of them improves recognition of the respective ethnicities (as well as others), while combining all unlabeled datasets results in the biggest performance increase. Our self-supervised strategy is the most useful with limited amounts of labeled training data, which can be beneficial for more tailored face recognition tasks and when facing privacy concerns. Evaluation is based on a standard RFW dataset and a new large-scale RB-WebFace benchmark. The code and data are made publicly available at https://github.com/seva100/stylegan-for-facerec. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 324,780 |
1112.4708 | Transformation Networks: How Innovation and the Availability of
Technology can Increase Economic Performance | A transformation network describes how one set of resources can be transformed into another via technological processes. Transformation networks in economics are useful because they can highlight areas for future innovations, both in terms of new products, new production techniques, or better efficiency. They also make it easy to detect areas where an economy might be fragile. In this paper, we use computational simulations to investigate how the density of a transformation network affects the economic performance, as measured by the gross domestic product (GDP), of an artificial economy. Our results show that on average, the GDP of our economy increases as the density of the transformation network increases. We also find that while the average performance increases, the maximum possible performance decreases and the minimum possible performance increases. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,538 |
1704.03004 | Constant Modulus Beamforming via Convex Optimization | We present novel convex-optimization-based solutions to the problem of blind beamforming of constant modulus signals, and to the related problem of linearly constrained blind beamforming of constant modulus signals. These solutions ensure global optimality and are parameter free, namely, do not contain any tuneable parameters and do not require any a-priori parameter settings. The performance of these solutions, as demonstrated by simulated data, is superior to existing methods. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 71,552 |
2406.01544 | Validity Learning on Failures: Mitigating the Distribution Shift in
Autonomous Vehicle Planning | The planning problem constitutes a fundamental aspect of the autonomous driving framework. Recent strides in representation learning have empowered vehicles to comprehend their surrounding environments, thereby facilitating the integration of learning-based planning strategies. Among these approaches, Imitation Learning stands out due to its notable training efficiency. However, traditional Imitation Learning methodologies encounter challenges associated with the co-variate shift phenomenon. We propose Validity Learning on Failures, VL(on failure), as a remedy to address this issue. The essence of our method lies in deploying a pre-trained planner across diverse scenarios. Instances where the planner deviates from its immediate objectives, such as maintaining a safe distance from obstacles or adhering to traffic rules, are flagged as failures. The states corresponding to these failures are compiled into a new dataset, termed the failure dataset. Notably, the absence of expert annotations for this data precludes the applicability of standard imitation learning approaches. To facilitate learning from the closed-loop mistakes, we introduce the VL objective which aims to discern valid trajectories within the current environmental context. Experimental evaluations conducted on both reactive CARLA simulation and non-reactive log-replay simulations reveal substantial enhancements in closed-loop metrics such as \textit{Score, Progress}, and Success Rate, underscoring the effectiveness of the proposed methodology. Further evaluations against the Bench2Drive benchmark demonstrate that VL(on failure) outperforms the state-of-the-art methods by a large margin. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 460,357 |
2312.06571 | From Text to Motion: Grounding GPT-4 in a Humanoid Robot "Alter3" | We report the development of Alter3, a humanoid robot capable of generating spontaneous motion using a Large Language Model (LLM), specifically GPT-4. This achievement was realized by integrating GPT-4 into our proprietary android, Alter3, thereby effectively grounding the LLM with Alter's bodily movement. Typically, low-level robot control is hardware-dependent and falls outside the scope of LLM corpora, presenting challenges for direct LLM-based robot control. However, in the case of humanoid robots like Alter3, direct control is feasible by mapping the linguistic expressions of human actions onto the robot's body through program code. Remarkably, this approach enables Alter3 to adopt various poses, such as a 'selfie' stance or 'pretending to be a ghost,' and generate sequences of actions over time without explicit programming for each body part. This demonstrates the robot's zero-shot learning capabilities. Additionally, verbal feedback can adjust poses, obviating the need for fine-tuning. A video of Alter3's generated motions is available at https://tnoinkwms.github.io/ALTER-LLM/ | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 414,574 |
2203.13550 | Modeling Target-Side Morphology in Neural Machine Translation: A
Comparison of Strategies | Morphologically rich languages pose difficulties to machine translation. Machine translation engines that rely on statistical learning from parallel training data, such as state-of-the-art neural systems, face challenges especially with rich morphology on the output language side. Key challenges of rich target-side morphology in data-driven machine translation include: (1) A large amount of differently inflected word surface forms entails a larger vocabulary and thus data sparsity. (2) Some inflected forms of infrequent terms typically do not appear in the training corpus, which makes closed-vocabulary systems unable to generate these unobserved variants. (3) Linguistic agreement requires the system to correctly match the grammatical categories between inflected word forms in the output sentence, both in terms of target-side morpho-syntactic well-formedness and semantic adequacy with respect to the input. In this paper, we re-investigate two target-side linguistic processing techniques: a lemma-tag strategy and a linguistically informed word segmentation strategy. Our experiments are conducted on an English-German translation task under three training corpus conditions of different magnitudes. We find that a stronger Transformer baseline leaves less room for improvement than a shallow-RNN encoder-decoder model when translating in-domain. However, we find that linguistic modeling of target-side morphology does benefit the Transformer model when the same system is applied to out-of-domain input text. We also successfully apply our approach to English-to-Czech translation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 287,673 |
1504.04240 | A Framework of Stability Analysis for Multi-agent Systems on Arbitrary
Topology Graph: Linear Systems | In this paper, from the structural perspective, we propose a new stability analysis approach for the consensus of linear multi-agent systems. Different from the general tools, the Laplacian-matrix-based method and the Lyapunov method, this approach treats the multi-agent system as the composition of many isolated agents, and focuses on their special input and output relationship. By transforming the construction of a graph into a standard procedure involving only three basic structures, the stability analysis becomes recursive and independent of the specific topology. Therefore, this approach can be used for multi-agent systems on any topology graph. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 42,119 |
2308.08469 | LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series
Forecasters | Multivariate time-series forecasting is vital in various domains, e.g., economic planning and weather prediction. Deep train-from-scratch models have exhibited effective performance yet require large amounts of data, which limits real-world applicability. Recently, researchers have leveraged the representation learning transferability of pre-trained Large Language Models (LLMs) to handle limited non-linguistic datasets effectively. However, incorporating LLMs with time-series data presents challenges of limited adaptation due to different compositions between time-series and linguistic data, and the inability to process multi-scale temporal information. To tackle these challenges, we propose LLM4TS, a framework for time-series forecasting with pre-trained LLMs. LLM4TS consists of a two-stage fine-tuning strategy: the time-series alignment stage to align LLMs with the nuances of time-series data, and the forecasting fine-tuning stage for downstream time-series forecasting tasks. Furthermore, our framework features a novel two-level aggregation method that integrates multi-scale temporal data within pre-trained LLMs, enhancing their ability to interpret time-specific information. In experiments across 7 time-series forecasting datasets, LLM4TS is superior to existing state-of-the-art trained-from-scratch models in full-shot scenarios, and also achieves the highest rank in few-shot scenarios. In addition, evaluations compared with different unsupervised representation learning approaches highlight LLM4TS's effectiveness with representation learning in forecasting tasks. Ablation studies further validate each component's contribution to LLM4TS and underscore the essential role of utilizing the LLM's pre-trained weights for optimal performance. The code is available at https://github.com/blacksnail789521/LLM4TS. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 385,916 |
1908.05806 | Cross-Domain Adaptation for Animal Pose Estimation | In this paper, we are interested in pose estimation of animals. Animals usually exhibit a wide range of variations in poses, and there is no available animal pose dataset for training and testing. To address this problem, we build an animal pose dataset to facilitate training and evaluation. Considering the heavy labor needed to label a dataset, and that it is impossible to label data for all concerned animal species, we therefore propose a novel cross-domain adaptation method to transfer the animal pose knowledge from labeled animal classes to unlabeled animal classes. We use the modest animal pose dataset to adapt learned knowledge to multiple animal species. Moreover, humans also share skeleton similarities with some animals (especially four-footed mammals). Therefore, the easily available human pose dataset, which is of a much larger scale than our labeled animal dataset, provides important prior knowledge to boost up the performance on animal pose estimation. Experiments show that our proposed method leverages these pieces of prior knowledge well and achieves convincing results on animal pose estimation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 141,817 |
2006.04643 | ColdGANs: Taming Language GANs with Cautious Sampling Strategies | Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences. At the root of these limitations is the mismatch between training and inference, i.e. the so-called exposure bias, exacerbated by considering only the reference texts as correct, while in practice several alternative formulations could be as good. Generative Adversarial Networks (GANs) can mitigate those limitations but the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to underperform MLE. Departing from previous works, we analyze the exploration step in GANs applied to text generation, and show how classical sampling results in unstable training. We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to get smoother learning dynamics. For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks, namely unconditional text generation, question generation, and abstractive summarization. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 180,758 |
2203.03899 | Noisy Low-rank Matrix Optimization: Geometry of Local Minima and
Convergence Rate | This paper is concerned with low-rank matrix optimization, which has found a wide range of applications in machine learning. This problem in the special case of matrix sensing has been studied extensively through the notion of Restricted Isometry Property (RIP), leading to a wealth of results on the geometric landscape of the problem and the convergence rate of common algorithms. However, the existing results can handle the problem in the case with a general objective function subject to noisy data only when the RIP constant is close to 0. In this paper, we develop a new mathematical framework to solve the above-mentioned problem with a far less restrictive RIP constant. We prove that as long as the RIP constant of the noiseless objective is less than $1/3$, any spurious local solution of the noisy optimization problem must be close to the ground truth solution. By working through the strict saddle property, we also show that an approximate solution can be found in polynomial time. We characterize the geometry of the spurious local minima of the problem in a local region around the ground truth in the case when the RIP constant is greater than $1/3$. Compared to the existing results in the literature, this paper offers the strongest RIP bound and provides a complete theoretical analysis on the global and local optimization landscapes of general low-rank optimization problems under random corruptions from any finite-variance family. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 284,268 |
1201.2046 | Evaluating the performance of geographical locations in scientific
networks with an aggregation - randomization - re-sampling approach (ARR) | Knowledge creation and dissemination in science and technology systems is perceived as a prerequisite for socio-economic development. The efficiency of creating new knowledge is considered to have a geographical component, i.e. some regions are more capable in scientific knowledge production than others. This article shows a method to use a network representation of scientific interaction to assess the relative efficiency of regions with diverse boundaries in channeling knowledge through a science system. In a first step, a weighted aggregate of the betweenness centrality is produced from empirical data (aggregation). The subsequent randomization of this empirical network produces the necessary Null-model for significance testing and normalization (randomization). This step is repeated to yield higher confidence about the results (re-sampling). The results are robust estimates for the relative regional efficiency to broker knowledge, which is discussed along with cross-sectional and longitudinal empirical examples. The network representation acts as a straight-forward metaphor of conceptual ideas from economic geography and neighboring disciplines. However, the procedure is not limited to centrality measures, nor is it limited to spatial aggregates. Therefore, it offers a wide range of application for scientometrics and beyond. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,753 |
2207.13313 | Social Live-Streaming Use & Well-being: Examining Participation,
Financial Commitment, Social Capital, and Psychological Well-being on
Twitch.tv | This study examines how active participation, financial commitment, and passive participation in the leading social live-streaming service, Twitch.tv, relate to individuals' psychological well-being. The three dimensions of social capital-structural, relational, and cognitive-as well as parasocial relationship are explored as mediators. Cross-sectional survey data from 396 respondents was analyzed by comparing two fully saturated structural equation models. Findings indicate actively participating in a favorite streamers' Chat is positively associated with increased well-being. Structural social capital, or having more social interaction ties, positively mediates the relationship between active participation and well-being, as well as financial commitment and well-being. Greater cognitive social capital, or shared values and goals with a favorite streamer, is related to decreased well-being. Parasocial relationship does not significantly mediate the relationship between use and well-being. Our results demonstrate the importance of tangible social ties over the perceived relationships or identification with a favorite streamer. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 310,262 |
2410.23686 | Towards Dynamic Message Passing on Graphs | Message passing plays a vital role in graph neural networks (GNNs) for effective feature learning. However, the over-reliance on input topology diminishes the efficacy of message passing and restricts the ability of GNNs. Despite efforts to mitigate this reliance, existing studies encounter message-passing bottlenecks or high computational expense, which invokes the demand for flexible message passing with low complexity. In this paper, we propose a novel dynamic message-passing mechanism for GNNs. It projects graph nodes and learnable pseudo nodes into a common space with measurable spatial relations between them. With nodes moving in the space, their evolving relations facilitate flexible pathway construction for a dynamic message-passing process. By associating pseudo nodes with input graphs through their measured relations, graph nodes can communicate with each other intermediately through pseudo nodes under linear complexity. We further develop a GNN model named $\mathtt{\mathbf{N^2}}$ based on our dynamic message-passing mechanism. $\mathtt{\mathbf{N^2}}$ employs a single recurrent layer to recursively generate the displacements of nodes and construct optimal dynamic pathways. Evaluation on eighteen benchmarks demonstrates the superior performance of $\mathtt{\mathbf{N^2}}$ over popular GNNs. $\mathtt{\mathbf{N^2}}$ successfully scales to large-scale benchmarks and requires significantly fewer parameters for graph classification with the shared recurrent layer. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 504,139 |
2402.07501 | One Train for Two Tasks: An Encrypted Traffic Classification Framework
Using Supervised Contrastive Learning | As network security receives widespread attention, encrypted traffic classification has become the current research focus. However, existing methods conduct traffic classification without sufficiently considering the common characteristics between data samples, leading to suboptimal performance. Moreover, they train the packet-level and flow-level classification tasks independently, which is redundant because the packet representations learned in the packet-level task can be exploited by the flow-level task. Therefore, in this paper, we propose an effective model named a Contrastive Learning Enhanced Temporal Fusion Encoder (CLE-TFE). In particular, we utilize supervised contrastive learning to enhance the packet-level and flow-level representations and perform graph data augmentation on the byte-level traffic graph so that the fine-grained semantic-invariant characteristics between bytes can be captured through contrastive learning. We also propose cross-level multi-task learning, which simultaneously accomplishes the packet-level and flow-level classification tasks in the same model with one training. Further experiments show that CLE-TFE achieves the best overall performance on the two tasks, while its computational overhead (i.e., floating point operations, FLOPs) is only about 1/14 of the pre-trained model (e.g., ET-BERT). We release the code at https://github.com/ViktorAxelsen/CLE-TFE | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 428,747 |
2403.13839 | depyf: Open the Opaque Box of PyTorch Compiler for Machine Learning
Researchers | PyTorch \texttt{2.x} introduces a compiler designed to accelerate deep learning programs. However, for machine learning researchers, adapting to the PyTorch compiler to full potential can be challenging. The compiler operates at the Python bytecode level, making it appear as an opaque box. To address this, we introduce \texttt{depyf}, a tool designed to demystify the inner workings of the PyTorch compiler. \texttt{depyf} decompiles bytecode generated by PyTorch back into equivalent source code, and establishes connections between in-memory code objects and their on-disk source code counterparts. This feature enables users to step through the source code line by line using debuggers, thus enhancing their understanding of the underlying processes. Notably, \texttt{depyf} is non-intrusive and user-friendly, primarily relying on two convenient context managers for its core functionality. The project is \href{https://github.com/thuml/depyf}{ openly available} and is recognized as a \href{https://pytorch.org/ecosystem/}{PyTorch ecosystem project}. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 439,810 |
2312.08338 | Global Latent Neural Rendering | A recent trend among generalizable novel view synthesis methods is to learn a rendering operator acting over single camera rays. This approach is promising because it removes the need for explicit volumetric rendering, but it effectively treats target images as collections of independent pixels. Here, we propose to learn a global rendering operator acting over all camera rays jointly. We show that the right representation to enable such rendering is a 5-dimensional plane sweep volume consisting of the projection of the input images on a set of planes facing the target camera. Based on this understanding, we introduce our Convolutional Global Latent Renderer (ConvGLR), an efficient convolutional architecture that performs the rendering operation globally in a low-resolution latent space. Experiments on various datasets under sparse and generalizable setups show that our approach consistently outperforms existing methods by significant margins. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 415,266 |
nlin/0703050 | Competition of Self-Organized Rotating Spiral Autowaves in a
Nonequilibrium Dissipative System of Three-Level Phaser | We present results of cellular automata based investigations of rotating spiral autowaves in a nonequilibrium excitable medium which models three-level paramagnetic microwave phonon laser (phaser). The computational model is described in arXiv:cond-mat/0410460v2 and arXiv:cond-mat/0602345v1 . We have observed several new scenarios of self-organization, competition and dynamical stabilization of rotating spiral autowaves under conditions of cross-relaxation between three-level active centers. In particular, phenomena of inversion of topological charge, as well as processes of regeneration and replication of rotating spiral autowaves in various excitable media were revealed and visualized for mesoscopic-scale areas of phaser-type active systems, which model real phaser devices. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 540,799 |
2111.07281 | Deep Joint Demosaicing and High Dynamic Range Imaging within a Single
Shot | Spatially varying exposure (SVE) is a promising choice for high-dynamic-range (HDR) imaging (HDRI). The SVE-based HDRI, which is called single-shot HDRI, is an efficient solution to avoid ghosting artifacts. However, it is very challenging to restore a full-resolution HDR image from a real-world image with SVE because: a) only one-third of pixels with varying exposures are captured by camera in a Bayer pattern, b) some of the captured pixels are over- and under-exposed. For the former challenge, a spatially varying convolution (SVC) is designed to process the Bayer images carried with varying exposures. For the latter one, an exposure-guidance method is proposed against the interference from over- and under-exposed pixels. Finally, a joint demosaicing and HDRI deep learning framework is formalized to include the two novel components and to realize an end-to-end single-shot HDRI. Experiments indicate that the proposed end-to-end framework avoids the problem of cumulative errors and surpasses the related state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 266,328 |
1904.03512 | Multi-user Communication in Difficult Interference | Co-channel interference (CCI) is one of the major impairments in wireless communication. CCI typically reduces the reliability of wireless communication links, but difficult CCI, interference that is neither much stronger nor much weaker than the desired signal, destroys wireless links despite the myriad of CCI mitigation methods available. It is shown in this paper that M-QAM (Quadrature Amplitude Modulation) or similar modulation schemes that modulate information in both the in-phase and quadrature phases are particularly vulnerable to difficult CCI. Despite its well-known shortcomings, it is shown in this paper that M-PAM or similar schemes that use a single dimension for modulation provide an important means for difficult CCI mitigation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 126,747 |
2303.07879 | Fair Energy Allocation in Risk-aware Energy Communities | This work introduces a decentralized mechanism for the fair and efficient allocation of limited renewable energy sources (RESs) among consumers in an energy community. In the proposed non-cooperative game, the self-interested community members independently decide whether or not to compete for access to RESs during peak hours and shift their loads accordingly. In the peak hours, a proportional allocation (PA) policy is used to allocate the limited RESs among the competitors. The existence of a Nash equilibrium (NE) or dominant strategies in this non-cooperative game is shown, and closed-form expressions for the renewable energy demand and social cost are derived. Moreover, a decentralized algorithm for choosing consumers' strategies that lie on NE states is designed. The work shows that the risk attitude of the consumers can have a significant impact on the deviation of the induced social cost from the optimal. Besides, the proposed decentralized mechanism with the PA policy is shown to attain a much lower social cost than one using the naive equal sharing policy. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 351,419 |
1609.05317 | Deep Kinematic Pose Regression | Learning articulated object pose is inherently difficult because the pose is high dimensional but has many structural constraints. Most existing works do not model such constraints and do not guarantee the geometric validity of their pose estimation, therefore requiring a post-processing step to recover the correct geometry if desired, which is cumbersome and sub-optimal. In this work, we propose to directly embed a kinematic object model into deep neural network learning for general articulated object pose estimation. The kinematic function is defined on the appropriately parameterized object motion variables. It is differentiable and can be used in gradient-descent-based optimization in network training. The prior knowledge of the object geometric model is fully exploited and the structure is guaranteed to be valid. We show convincing experimental results on a toy example and on the 3D human pose estimation problem. For the latter we achieve a state-of-the-art result on the Human3.6M dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 61,122 |
2101.10020 | Personalization Paradox in Behavior Change Apps: Lessons from a Social Comparison-Based Personalized App for Physical Activity | Social comparison-based features are widely used in social computing apps. However, most existing apps are not grounded in social comparison theories and do not consider individual differences in social comparison preferences and reactions. This paper is among the first to automatically personalize social comparison targets. In the context of an m-health app for physical activity, we use artificial intelligence (AI) techniques of multi-armed bandits. Results from our user study (n=53) indicate that there is some evidence that motivation can be increased using the AI-based personalization of social comparison. The detected effects achieved small-to-moderate effect sizes, illustrating the real-world implications of the intervention for enhancing motivation and physical activity. In addition to design implications for social comparison features in social apps, this paper identified the personalization paradox, the conflict between user modeling and adaptation, as a key design challenge of personalized applications for behavior change. Additionally, we propose research directions to mitigate this Personalization Paradox. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 216,794
2203.09789 | Constitutive model characterization and discovery using physics-informed deep learning | Classically, the mechanical response of materials is described through constitutive models, often in the form of constrained ordinary differential equations. These models have a very limited number of parameters, yet, they are extremely efficient in reproducing complex responses observed in experiments. Additionally, in their discretized form, they are computationally very efficient, often resulting in a simple algebraic relation, and therefore they have been extensively used within large-scale explicit and implicit finite element models. However, it is very challenging to formulate new constitutive models, particularly for materials with complex microstructures such as composites. A recent trend in constitutive modeling leverages complex neural network architectures to construct complex material responses where a constitutive model does not yet exist. Whilst very accurate, they suffer from two deficiencies. First, they are interpolation models and often do poorly in extrapolation. Second, due to their complex architecture and numerous parameters, they are inefficient to be used as a constitutive model within a large-scale finite element model. In this study, we propose a novel approach based on the physics-informed learning machines for the characterization and discovery of constitutive models. Unlike data-driven constitutive models, we leverage foundations of elastoplasticity theory as regularization terms in the total loss function to find parametric constitutive models that are also theoretically sound. We demonstrate that our proposed framework can efficiently identify the underlying constitutive model describing different datasets from the von Mises family. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 286,290
2211.08653 | #maskUp: Selective Attribute Encryption for Sensitive Vocalization for English language on Social Media Platforms | Social media has become a platform for people to stand up and raise their voices against social and criminal acts. Vocalization of such information has allowed the investigation and identification of criminals. However, revealing such sensitive information may jeopardize the victim's safety. We propose #maskUp, a safe method for information communication in a secure fashion to the relevant authorities, discouraging potential bullying of the victim. This would ensure security by conserving their privacy through natural language processing supplemented with selective encryption for sensitive attribute masking. To our knowledge, this is the first work that aims to protect the privacy of the victims by masking their private details as well as emboldening them to come forward to report crimes. The use of masking technology allows only binding authorities to view/un-mask this data. We construct and evaluate the proposed methodology on continual learning tasks, allowing practical implementation of the same in a real-world scenario. #maskUp successfully demonstrates this integration on sample datasets validating the presented objective. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 330,714
1508.03604 | MOLNs: A cloud platform for interactive, reproducible and scalable spatial stochastic computational experiments in systems biology using PyURDME | Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 46,017
2207.11592 | Thermal half-lives of azobenzene derivatives: virtual screening based on intersystem crossing using a machine learning potential | Molecular photoswitches are the foundation of light-activated drugs. A key photoswitch is azobenzene, which exhibits trans-cis isomerism in response to light. The thermal half-life of the cis isomer is of crucial importance, since it controls the duration of the light-induced biological effect. Here we introduce a computational tool for predicting the thermal half-lives of azobenzene derivatives. Our automated approach uses a fast and accurate machine learning potential trained on quantum chemistry data. Building on well-established earlier evidence, we argue that thermal isomerization proceeds through rotation mediated by intersystem crossing, and incorporate this mechanism into our automated workflow. We use our approach to predict the thermal half-lives of 19,000 azobenzene derivatives. We explore trends and tradeoffs between barriers and absorption wavelengths, and open-source our data and software to accelerate research in photopharmacology. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 309,702
2410.15656 | LightFusionRec: Lightweight Transformers-Based Cross-Domain Recommendation Model | This paper presents LightFusionRec, a novel lightweight cross-domain recommendation system that integrates DistilBERT for textual feature extraction and FastText for genre embedding. Important issues in recommendation systems, such as data sparsity, computational efficiency, and cold start issues, are addressed in the methodology. LightFusionRec uses a small amount of information to produce precise and contextually relevant recommendations for many media formats by fusing genre vector embedding with natural language processing algorithms. Tests conducted on extensive movie and book datasets show notable enhancements in suggestion quality when compared to conventional methods. Because of its lightweight design, the model can be used for a variety of purposes and allows for on-device inference. LightFusionRec is a noteworthy development in cross-domain recommendation systems, providing accurate and scalable recommendations to improve user experience on digital content platforms. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 500,662
2206.13974 | Joint Generator-Ranker Learning for Natural Language Generation | Generate-then-rank is a widely used mechanism for text generation, where a generator produces multiple text candidates and a ranker chooses the best one among the text candidates. However, existing methods usually train the generator and the ranker individually, neglecting the mutual feedback that could further enhance the generation quality. To tackle this limitation, we propose JGR, a novel joint training algorithm that integrates the generator and the ranker in a single framework. JGR optimizes the generator with a hybrid objective that combines data likelihood and ranker reward, and trains the ranker with a contrastive loss that compares the generator outputs. By iteratively updating the generator and the ranker, JGR can effectively harmonize their learning and enhance their quality jointly. We evaluate JGR on various text generation tasks and demonstrate that it surpasses existing methods on four public datasets across three common generation scenarios. Our code and models are publicly available at https://github.com/microsoft/ProphetNet/tree/master/JGR. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 305,141 |
2110.00806 | Swiss-Judgment-Prediction: A Multilingual Legal Judgment Prediction Benchmark | In many jurisdictions, the excessive workload of courts leads to high delays. Suitable predictive AI models can assist legal professionals in their work, and thus enhance and speed up the process. So far, Legal Judgment Prediction (LJP) datasets have been released in English, French, and Chinese. We publicly release a multilingual (German, French, and Italian), diachronic (2000-2020) corpus of 85K cases from the Federal Supreme Court of Switzerland (FSCS). We evaluate state-of-the-art BERT-based methods including two variants of BERT that overcome the BERT input (text) length limitation (up to 512 tokens). Hierarchical BERT has the best performance (approx. 68-70% Macro-F1-Score in German and French). Furthermore, we study how several factors (canton of origin, year of publication, text length, legal area) affect performance. We release both the benchmark dataset and our code to accelerate future research and ensure reproducibility. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 258,539
2101.07621 | Trading Transforms of Non-weighted Simple Games and Integer Weights of Weighted Simple Games | This study investigates simple games. A fundamental research question in this field is to determine necessary and sufficient conditions for a simple game to be a weighted majority game. Taylor and Zwicker (1992) showed that a simple game is non-weighted if and only if there exists a trading transform of finite size. They also provided an upper bound on the size of such a trading transform, if it exists. Gvozdeva and Slinko (2011) improved that upper bound; their proof employed a property of linear inequalities demonstrated by Muroga (1971). In this study, we provide a new proof of the existence of a trading transform when a given simple game is non-weighted. Our proof employs Farkas' lemma (1894), and yields an improved upper bound on the size of a trading transform. We also discuss an integer-weight representation of a weighted simple game, improving the bounds obtained by Muroga (1971). We show that our bound on the quota is tight when the number of players is less than or equal to five, based on the computational results obtained by Kurz (2012). Furthermore, we discuss the problem of finding an integer-weight representation under the assumption that we have minimal winning coalitions and maximal losing coalitions. In particular, we show the performance of a rounding method. Lastly, we address roughly weighted simple games. Gvozdeva and Slinko (2011) showed that a given simple game is not roughly weighted if and only if there exists a potent certificate of non-weightedness. We give an upper bound on the length of a potent certificate of non-weightedness. We also discuss an integer-weight representation of a roughly weighted simple game. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 216,092
2312.15614 | A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Software Engineering Tasks | Pre-trained models (PTMs) have achieved great success in various Software Engineering (SE) downstream tasks following the ``pre-train then fine-tune'' paradigm. As fully fine-tuning all parameters of PTMs can be computationally expensive, a widely used solution is parameter-efficient fine-tuning (PEFT), which freezes PTMs while introducing extra parameters. Though work has been done to test PEFT methods in the SE field, a comprehensive evaluation is still lacking. This paper aims to fill in this gap by evaluating the effectiveness of five PEFT methods on eight PTMs and four SE downstream tasks. For different tasks and PEFT methods, we seek answers to the following research questions: 1) Is it more effective to use PTMs trained specifically on source code, or is it sufficient to use PTMs trained on natural language text? 2) What is the impact of varying model sizes? 3) How does the model architecture affect the performance? Besides effectiveness, we also discuss the efficiency of PEFT methods, concerning the costs of required training time and GPU resource consumption. We hope that our findings can provide a deeper understanding of PEFT methods on various PTMs and SE downstream tasks. All the codes and data are available at \url{https://github.com/zwtnju/PEFT.git}. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 418,067
2210.14595 | Safe and Efficient Switching Mechanism Design for Uncertified Linear Controller | Sustained research efforts have been devoted to learning optimal controllers for linear stochastic dynamical systems with unknown parameters, but due to the corruption of noise, learned controllers are usually uncertified in the sense that they may destabilize the system. To address this potential instability, we propose a "plug-and-play" modification to the uncertified controller which falls back to a known stabilizing controller when the norm of the difference between the uncertified and the fall-back control input exceeds a certain threshold. We show that the switching strategy is both safe and efficient, in the sense that: 1) the linear-quadratic cost of the system is always bounded even if the original uncertified controller is destabilizing; 2) in case the uncertified controller is stabilizing, the performance loss caused by switching converges super-exponentially to $0$ for Gaussian noise, while converging polynomially for general heavy-tailed noise. Finally, we demonstrate the effectiveness of the proposed switching strategy via numerical simulation on the Tennessee Eastman Process. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 326,611
2409.00283 | RealFace -- Pedestrian Face Dataset | The Real Face Dataset is a pedestrian face detection benchmark dataset in the wild, comprising over 11,000 images and over 55,000 detected faces in various ambient conditions. The dataset aims to provide a comprehensive and diverse collection of real-world face images for the evaluation and development of face detection and recognition algorithms. The Real Face Dataset is a valuable resource for researchers and developers working on face detection and recognition algorithms. With over 11,000 images and 55,000 detected faces, the dataset offers a comprehensive and diverse collection of real-world face images. This diversity is crucial for evaluating the performance of algorithms under various ambient conditions, such as lighting, scale, pose, and occlusion. The dataset's focus on real-world scenarios makes it particularly relevant for practical applications, where faces may be captured in challenging environments. In addition to its size, the dataset's inclusion of images with a high degree of variability in scale, pose, and occlusion, as well as its focus on practical application scenarios, sets it apart as a valuable resource for benchmarking and testing face detection and recognition methods. The challenges presented by the dataset align with the difficulties faced in real-world surveillance applications, where the ability to detect faces and extract discriminative features is paramount. The Real Face Dataset provides an opportunity to assess the performance of face detection and recognition methods on a large scale. Its relevance to real-world scenarios makes it an important resource for researchers and developers aiming to create robust and effective algorithms for practical applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 484,856 |
2009.10991 | Attention Driven Fusion for Multi-Modal Emotion Recognition | Deep learning has emerged as a powerful alternative to hand-crafted methods for emotion recognition on combined acoustic and text modalities. Baseline systems model emotion information in text and acoustic modes independently using Deep Convolutional Neural Networks (DCNN) and Recurrent Neural Networks (RNN), followed by applying attention, fusion, and classification. In this paper, we present a deep learning-based approach to exploit and fuse text and acoustic data for emotion classification. We utilize a SincNet layer, based on parameterized sinc functions with band-pass filters, to extract acoustic features from raw audio followed by a DCNN. This approach learns filter banks tuned for emotion recognition and provides more effective features compared to directly applying convolutions over the raw speech signal. For text processing, we use two branches (a DCNN and a Bi-directional RNN followed by a DCNN) in parallel where cross attention is introduced to infer the N-gram level correlations on hidden representations received from the Bi-RNN. Following existing state-of-the-art, we evaluate the performance of the proposed system on the IEMOCAP dataset. Experimental results indicate that the proposed system outperforms existing methods, achieving 3.5% improvement in weighted accuracy. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 197,042
2305.01148 | PU-EdgeFormer: Edge Transformer for Dense Prediction in Point Cloud Upsampling | Despite the recent development of deep learning-based point cloud upsampling, most MLP-based point cloud upsampling methods have limitations in that it is difficult to train the local and global structure of the point cloud at the same time. To solve this problem, we present a combined graph convolution and transformer for point cloud upsampling, denoted by PU-EdgeFormer. The proposed method constructs EdgeFormer unit that consists of graph convolution and multi-head self-attention modules. We employ graph convolution using EdgeConv, which learns the local geometry and global structure of point cloud better than existing point-to-feature method. Through in-depth experiments, we confirmed that the proposed method has better point cloud upsampling performance than the existing state-of-the-art method in both subjective and objective aspects. The code is available at https://github.com/dohoon2045/PU-EdgeFormer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 361,573
2304.07314 | Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation | Self-supervised pre-training strategies have recently shown impressive results for training general-purpose feature extraction backbones in computer vision. In combination with the Vision Transformer architecture, the DINO self-distillation technique has interesting emerging properties, such as unsupervised clustering in the latent space and semantic correspondences of the produced features without using explicit human-annotated labels. The STEGO method for unsupervised semantic segmentation contrastively distills feature correspondences of a DINO-pre-trained Vision Transformer and recently set a new state of the art. However, the detailed workings of STEGO have yet to be disentangled, preventing its usage in safety-critical applications. This paper provides a deeper understanding of the STEGO architecture and training strategy by conducting studies that uncover the working mechanisms behind STEGO, reproduce and extend its experimental validation, and investigate the ability of STEGO to transfer to different datasets. Results demonstrate that the STEGO architecture can be interpreted as a semantics-preserving dimensionality reduction technique. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 358,308
2311.03189 | Safe Control for Soft-Rigid Robots with Self-Contact using Control Barrier Functions | Incorporating both flexible and rigid components in robot designs offers a unique solution to the limitations of traditional rigid robotics by enabling both compliance and strength. This paper explores the challenges and solutions for controlling soft-rigid hybrid robots, particularly addressing the issue of self-contact. Conventional control methods prioritize precise state tracking, inadvertently increasing the system's overall stiffness, which is not always desirable in interactions with the environment or within the robot itself. To address this, we investigate the application of Control Barrier Functions (CBFs) and High Order CBFs to manage self-contact scenarios in serially connected soft-rigid hybrid robots. Through an analysis based on Piecewise Constant Curvature (PCC) kinematics, we establish CBFs within a classical control framework for self-contact dynamics. Our methodology is rigorously evaluated in both simulation environments and physical hardware systems. The findings demonstrate that our proposed control strategy effectively regulates self-contact in soft-rigid hybrid robotic systems, marking a significant advancement in the field of robotics. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 405,749
1108.6132 | Distributed MAC Protocol Supporting Physical-Layer Network Coding | Physical-layer network coding (PNC) is a promising approach for wireless networks. It allows nodes to transmit simultaneously. Due to the difficulties of scheduling simultaneous transmissions, existing works on PNC are based on simplified medium access control (MAC) protocols, which are not applicable to general multi-hop wireless networks, to the best of our knowledge. In this paper, we propose a distributed MAC protocol that supports PNC in multi-hop wireless networks. The proposed MAC protocol is based on the carrier sense multiple access (CSMA) strategy and can be regarded as an extension to the IEEE 802.11 MAC protocol. In the proposed protocol, each node collects information on the queue status of its neighboring nodes. When a node finds that there is an opportunity for some of its neighbors to perform PNC, it notifies its corresponding neighboring nodes and initiates the process of packet exchange using PNC, with the node itself as a relay. During the packet exchange process, the relay also works as a coordinator which coordinates the transmission of source nodes. Meanwhile, the proposed protocol is compatible with conventional network coding and conventional transmission schemes. Simulation results show that the proposed protocol is advantageous in various scenarios of wireless applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 11,885 |
1812.04427 | Zero-Shot Learning with Sparse Attribute Propagation | Zero-shot learning (ZSL) aims to recognize a set of unseen classes without any training images. The standard approach to ZSL requires a set of training images annotated with seen class labels and a semantic descriptor for seen/unseen classes (attribute vector is the most widely used). Class label/attribute annotation is expensive; it thus severely limits the scalability of ZSL. In this paper, we define a new ZSL setting where only a few annotated images are collected from each seen class. This is clearly more challenging yet more realistic than the conventional ZSL setting. To overcome the resultant image-level attribute sparsity, we propose a novel inductive ZSL model termed sparse attribute propagation (SAP) by propagating attribute annotations to more unannotated images using sparse coding. This is followed by learning bidirectional projections between features and attributes for ZSL. An efficient solver is provided, together with rigorous theoretic algorithm analysis. With our SAP, we show that a ZSL training dataset can now be augmented by the abundant web images returned by image search engine, to further improve the model performance. Moreover, the general applicability of SAP is demonstrated on solving the social image annotation (SIA) problem. Extensive experiments show that our model achieves superior performance on both ZSL and SIA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 116,217 |
2310.13039 | Human Pose-based Estimation, Tracking and Action Recognition with Deep Learning: A Survey | Human pose analysis has garnered significant attention within both the research community and practical applications, owing to its expanding array of uses, including gaming, video surveillance, sports performance analysis, and human-computer interactions, among others. The advent of deep learning has significantly improved the accuracy of pose capture, making pose-based applications increasingly practical. This paper presents a comprehensive survey of pose-based applications utilizing deep learning, encompassing pose estimation, pose tracking, and action recognition. Pose estimation involves the determination of human joint positions from images or image sequences. Pose tracking is an emerging research direction aimed at generating consistent human pose trajectories over time. Action recognition, on the other hand, targets the identification of action types using pose estimation or tracking data. These three tasks are intricately interconnected, with the latter often reliant on the former. In this survey, we comprehensively review related works, spanning from single-person pose estimation to multi-person pose estimation, from 2D pose estimation to 3D pose estimation, from single image to video, from mining temporal context gradually to pose tracking, and lastly from tracking to pose-based action recognition. As a survey centered on the application of deep learning to pose analysis, we explicitly discuss both the strengths and limitations of existing techniques. Notably, we emphasize methodologies for integrating these three tasks into a unified framework within video sequences. Additionally, we explore the challenges involved and outline potential directions for future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 401,275
2403.03427 | Single Transit Detection In Kepler With Machine Learning And Onboard Spacecraft Diagnostics | Exoplanet discovery at long orbital periods requires reliably detecting individual transits without additional information about the system. Techniques like phase-folding of light curves and periodogram analysis of radial velocity data are more sensitive to planets with shorter orbital periods, leaving a dearth of planet discoveries at long periods. We present a novel technique using an ensemble of Convolutional Neural Networks incorporating the onboard spacecraft diagnostics of \emph{Kepler} to classify transits within a light curve. We create a pipeline to recover the location of individual transits, and the period of the orbiting planet, which maintains $>80\%$ transit recovery sensitivity out to an 800-day orbital period. Our neural network pipeline has the potential to discover additional planets in the \emph{Kepler} dataset, and crucially, within the $\eta$-Earth regime. We report our first candidate from this pipeline, KOI 1271.02. KOI 1271.01 is known to exhibit strong Transit Timing Variations (TTVs), and so we jointly model the TTVs and transits of both transiting planets to constrain the orbital configuration and planetary parameters and conclude with a series of potential parameters for KOI 1271.02, as there is not enough data currently to uniquely constrain the system. We conclude that KOI 1271.02 has a radius of 5.32 $\pm$ 0.20 $R_{\oplus}$ and a mass of $28.94^{+0.23}_{-0.47}$ $M_{\oplus}$. Future constraints on the nature of KOI 1271.02 require measuring additional TTVs of KOI 1271.01 or observing a second transit of KOI 1271.02. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 435,188
2111.02593 | Energy-Efficient Online Data Sensing and Processing in Wireless Powered Edge Computing Systems | This paper focuses on developing energy-efficient online data processing strategy of wireless powered MEC systems under stochastic fading channels. In particular, we consider a hybrid access point (HAP) transmitting RF energy to and processing the sensing data offloaded from multiple WDs. Under an average power constraint of the HAP, we aim to maximize the long-term average data sensing rate of the WDs while maintaining task data queue stability. We formulate the problem as a multi-stage stochastic optimization to control the energy transfer and task data processing in sequential time slots. Without the knowledge of future channel fading, it is very challenging to determine the sequential control actions that are tightly coupled by the battery and data buffer dynamics. To solve the problem, we propose an online algorithm named LEESE that applies the perturbed Lyapunov optimization technique to decompose the multi-stage stochastic problem into per-slot deterministic optimization problems. We show that each per-slot problem can be equivalently transformed into a convex optimization problem. To facilitate online implementation in large-scale MEC systems, instead of solving the per-slot problem with off-the-shelf convex algorithms, we propose a block coordinate descent (BCD)-based method that produces close-to-optimal solution in less than 0.04\% of the computation delay. Simulation results demonstrate that the proposed LEESE algorithm can provide 21.9\% higher data sensing rate than the representative benchmark methods considered, while incurring sub-millisecond computation delay suitable for real-time control under fading channel. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 264,914
2402.05558 | Flashback: Understanding and Mitigating Forgetting in Federated Learning | In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients. This study explores the nuances of this issue, emphasizing the critical role of forgetting in FL's inefficient learning within heterogeneous data contexts. Knowledge loss occurs in both client-local updates and server-side aggregation steps; addressing one without the other fails to mitigate forgetting. We introduce a metric to measure forgetting granularly, ensuring distinct recognition amid new knowledge acquisition. Leveraging these insights, we propose Flashback, an FL algorithm with a dynamic distillation approach that is used to regularize the local models, and effectively aggregate their knowledge. Across different benchmarks, Flashback outperforms other methods, mitigates forgetting, and achieves faster round-to-target-accuracy, by converging in 6 to 16 rounds. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 427,907 |
2409.16535 | Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts
in Diffusion Models | Diffusion models have recently surpassed GANs in image synthesis and editing, offering superior image quality and diversity. However, achieving precise control over attributes in generated images remains a challenge. Concept Sliders introduced a method for fine-grained image control and editing by learning concepts (attributes/objects). However, this approach adds parameters and increases inference time due to the loading and unloading of Low-Rank Adapters (LoRAs) used for learning concepts. These adapters are model-specific and require retraining for different architectures, such as Stable Diffusion (SD) v1.5 and SD-XL. In this paper, we propose a straightforward textual inversion method to learn concepts through text embeddings, which are generalizable across models that share the same text encoder, including different versions of the SD model. We refer to our method as Prompt Sliders. Besides learning new concepts, we also show that Prompt Sliders can be used to erase undesirable concepts such as artistic styles or mature content. Our method is 30% faster than using LoRAs because it eliminates the need to load and unload adapters and introduces no additional parameters aside from the target concept text embedding. Each concept embedding only requires 3KB of storage compared to the 8922KB or more required for each LoRA adapter, making our approach more computationally efficient. Project Page: https://deepaksridhar.github.io/promptsliders.github.io/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 491,381 |
2501.09635 | Unified Face Matching and Physical-Digital Spoofing Attack Detection | Face recognition technology has dramatically transformed the landscape of security, surveillance, and authentication systems, offering a user-friendly and non-invasive biometric solution. However, despite its significant advantages, face recognition systems face increasing threats from physical and digital spoofing attacks. Current research typically treats face recognition and attack detection as distinct classification challenges. This approach necessitates the implementation of separate models for each task, leading to considerable computational complexity, particularly on devices with limited resources. Such inefficiencies can stifle scalability and hinder performance. In response to these challenges, this paper introduces an innovative unified model designed for face recognition and detection of physical and digital attacks. By leveraging the advanced Swin Transformer backbone and incorporating HiLo attention in a convolutional neural network framework, we address unified face recognition and spoof attack detection more effectively. Moreover, we introduce augmentation techniques that replicate the traits of physical and digital spoofing cues, significantly enhancing our model robustness. Through comprehensive experimental evaluation across various datasets, we showcase the effectiveness of our model in unified face recognition and spoof detection. Additionally, we confirm its resilience against unseen physical and digital spoofing attacks, underscoring its potential for real-world applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 525,211 |
1611.05146 | A Semi-Markov Switching Linear Gaussian Model for Censored Physiological
Data | Critically ill patients in regular wards are vulnerable to unanticipated clinical deterioration which requires timely transfer to the intensive care unit (ICU). To allow for risk scoring and patient monitoring in such a setting, we develop a novel Semi-Markov Switching Linear Gaussian Model (SSLGM) for the inpatients' physiology. The model captures the patients' latent clinical states and their corresponding observable lab tests and vital signs. We present an efficient unsupervised learning algorithm that capitalizes on the informatively censored data in the electronic health records (EHR) to learn the parameters of the SSLGM; the learned model is then used to assess the new inpatients' risk for clinical deterioration in an online fashion, allowing for timely ICU admission. Experiments conducted on a heterogeneous cohort of 6,094 patients admitted to a large academic medical center show that the proposed model significantly outperforms the currently deployed risk scores such as Rothman index, MEWS, SOFA and APACHE. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,962
2310.16919 | Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs | We propose a novel multi-bit box-free watermarking method for the protection of Intellectual Property Rights (IPR) of GANs with improved robustness against white-box attacks like fine-tuning, pruning, quantization, and surrogate model attacks. The watermark is embedded by adding an extra watermarking loss term during GAN training, ensuring that the images generated by the GAN contain an invisible watermark that can be retrieved by a pre-trained watermark decoder. In order to improve the robustness against white-box model-level attacks, we make sure that the model converges to a wide flat minimum of the watermarking loss term, in such a way that any modification of the model parameters does not erase the watermark. To do so, we add random noise vectors to the parameters of the generator and require that the watermarking loss term is as invariant as possible with respect to the presence of noise. This procedure forces the generator to converge to a wide flat minimum of the watermarking loss. The proposed method is architecture- and dataset-agnostic, thus being applicable to many different generation tasks and models, as well as to CNN-based image processing architectures. We present the results of extensive experiments showing that the presence of the watermark has a negligible impact on the quality of the generated images, and proving the superior robustness of the watermark against model modification and surrogate model attacks. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 402,919
1206.6648 | Asynchronous Decentralized Event-triggered Control | In this paper we propose an approach to the implementation of controllers with decentralized strategies triggering controller updates. We consider set-ups with a central node in charge of the computation of the control commands, and a set of not co-located sensors providing measurements to the controller node. The solution we propose does not require measurements from the sensors to be synchronized in time. The sensors in our proposal provide measurements in an aperiodic way triggered by local conditions. Furthermore, in the proposed implementation (most of) the communication between nodes requires only the exchange of one bit of information (per controller update), which could aid in reducing transmission delays and as a secondary effect result in fewer transmissions being triggered. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 17,032 |
2005.11164 | Decentralized Deep Reinforcement Learning for a Distributed and Adaptive
Locomotion Controller of a Hexapod Robot | Locomotion is a prime example for adaptive behavior in animals and biological control principles have inspired control architectures for legged robots. While machine learning has been successfully applied to many tasks in recent years, Deep Reinforcement Learning approaches still appear to struggle when applied to real world robots in continuous control tasks and in particular do not appear as robust solutions that can handle uncertainties well. Therefore, there is a new interest in incorporating biological principles into such learning architectures. While inducing a hierarchical organization as found in motor control has shown already some success, we here propose a decentralized organization as found in insect motor control for coordination of different legs. A decentralized and distributed architecture is introduced on a simulated hexapod robot and the details of the controller are learned through Deep Reinforcement Learning. We first show that such a concurrent local structure is able to learn better walking behavior. Secondly, that the simpler organization is learned faster compared to holistic approaches. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 178,403 |
2105.07826 | TopicsRanksDC: Distance-based Topic Ranking applied on Two-Class Data | In this paper, we introduce a novel approach named TopicsRanksDC for topics ranking based on the distance between two clusters that are generated by each topic. We assume that our data consists of text documents that are associated with two-classes. Our approach ranks each topic contained in these text documents by its significance for separating the two-classes. Firstly, the algorithm detects topics using Latent Dirichlet Allocation (LDA). The words defining each topic are represented as two clusters, where each one is associated with one of the classes. We compute four distance metrics, Single Linkage, Complete Linkage, Average Linkage and the distance between the centroids. We compare the results of LDA topics and random topics. The results show that the rank for LDA topics is much higher than random topics. The results of TopicsRanksDC tool are promising for future work to enable search engines to suggest related topics. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 235,568