id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.07784 | Hidden Tree Markov Networks: Deep and Wide Learning for Structured Data | The paper introduces the Hidden Tree Markov Network (HTN), a neuro-probabilistic hybrid fusing the representation power of generative models for trees with the incremental and discriminative learning capabilities of neural networks. We put forward a modular architecture in which multiple generative models of limited complexity are trained to learn structural feature detectors whose outputs are then combined and integrated by neural layers at a later stage. In this respect, the model is both deep, thanks to the unfolding of the generative models on the input structures, and wide, given the potentially large number of generative modules that can be trained in parallel. Experimental results show that the proposed approach can outperform state-of-the-art syntactic kernels as well as generative kernels built on the same probabilistic model as the HTN. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 85,068 |
2207.01839 | What Do Graph Convolutional Neural Networks Learn? | Graph neural networks (GNNs) have gained traction over the past few years for their superior performance in numerous machine learning tasks. Graph Convolutional Neural Networks (GCN) are a common variant of GNNs that are known to have high performance in semi-supervised node classification (SSNC), and work well under the assumption of homophily. Recent literature has highlighted that GCNs can achieve strong performance on heterophilous graphs under certain "special conditions". These arguments motivate us to understand why, and how, GCNs learn to perform SSNC. We find a positive correlation between similarity of latent node embeddings of nodes within a class and the performance of a GCN. Our investigation on underlying graph structures of a dataset finds that a GCN's SSNC performance is significantly influenced by the consistency and uniqueness in neighborhood structure of nodes within a class. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 306,311 |
2209.06522 | ScaTE: A Scalable Framework for Self-Supervised Traversability Estimation in Unstructured Environments | For the safe and successful navigation of autonomous vehicles in unstructured environments, the traversability of terrain should vary based on the driving capabilities of the vehicles. Actual driving experience can be utilized in a self-supervised fashion to learn vehicle-specific traversability. However, existing methods for learning self-supervised traversability are not highly scalable for learning the traversability of various vehicles. In this work, we introduce a scalable framework for learning self-supervised traversability, which can learn the traversability directly from vehicle-terrain interaction without any human supervision. We train a neural network that predicts the proprioceptive experience that a vehicle would undergo from 3D point clouds. Using a novel PU learning method, the network simultaneously identifies non-traversable regions where estimations can be overconfident. With driving data of various vehicles gathered from simulation and the real world, we show that our framework is capable of learning the self-supervised traversability of various vehicles. By integrating our framework with a model predictive controller, we demonstrate that estimated traversability results in effective navigation that enables distinct maneuvers based on the driving characteristics of the vehicles. In addition, experimental results validate the ability of our method to identify and avoid non-traversable regions. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 317,429 |
2210.02535 | Attention-based Ingredient Phrase Parser | As virtual personal assistants have now penetrated the consumer market, with products such as Siri and Alexa, the research community has produced several works on task-oriented dialogue tasks such as hotel booking, restaurant booking, and movie recommendation. Assisting users to cook is one such task that intelligent assistants are expected to solve, where ingredients and their corresponding attributes, such as name, unit, and quantity, should be provided to users precisely and promptly. However, existing ingredient information scraped from cooking websites is in unstructured form with huge variation in lexical structure, for example, '1 garlic clove, crushed', and '1 (8 ounce) package cream cheese, softened', making it difficult to extract information exactly. To provide an engaging and successful conversational service to users for cooking tasks, we propose a new ingredient parsing model that can parse an ingredient phrase of recipes into a structured form with its corresponding attributes, with an F1-score over 0.93. Experimental results show that our model achieves state-of-the-art performance on the AllRecipes and Food.com datasets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 321,677 |
2306.14293 | Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation | Semi-supervised learning has demonstrated great potential in medical image segmentation by utilizing knowledge from unlabeled data. However, most existing approaches do not explicitly capture high-level semantic relations between distant regions, which limits their performance. In this paper, we focus on representation learning for semi-supervised learning, by developing a novel Multi-Scale Cross Supervised Contrastive Learning (MCSC) framework, to segment structures in medical images. We jointly train CNN and Transformer models, regularising their features to be semantically consistent across different scales. Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations that reflect intra- and inter-slice relationships across the whole dataset. To tackle class imbalance, we take into account the prevalence of each class to guide contrastive learning and ensure that features adequately capture infrequent classes. Extensive experiments on two multi-structure medical segmentation datasets demonstrate the effectiveness of MCSC. It not only outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice, but also greatly reduces the performance gap with fully supervised methods. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 375,628 |
2411.15876 | An Extensive Study on D2C: Overfitting Remediation in Deep Learning Using a Decentralized Approach | Overfitting remains a significant challenge in deep learning, often arising from data outliers, noise, and limited training data. To address this, we propose Divide2Conquer (D2C), a novel technique to mitigate overfitting. D2C partitions the training data into multiple subsets and trains identical models independently on each subset. To balance model generalization and subset-specific learning, the model parameters are periodically aggregated and averaged during training. This process enables the learning of robust patterns while minimizing the influence of outliers and noise. Empirical evaluations on benchmark datasets across diverse deep-learning tasks demonstrate that D2C significantly enhances generalization performance, particularly with larger datasets. Our analysis includes evaluations of decision boundaries, loss curves, and other performance metrics, highlighting D2C's effectiveness both as a standalone technique and in combination with other overfitting reduction methods. We further provide a rigorous mathematical justification for D2C's underlying principles and examine its applicability across multiple domains. Finally, we explore the trade-offs associated with D2C and propose strategies to address them, offering a holistic view of its strengths and limitations. This study establishes D2C as a versatile and effective approach to combating overfitting in deep learning. Our codes are publicly available at: https://github.com/Saiful185/Divide2Conquer. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 510,808 |
2309.13308 | Calibrating LLM-Based Evaluator | Recent advancements in large language models (LLMs) on language modeling and emergent capabilities make them a promising reference-free evaluator of natural language generation quality, and a competent alternative to human evaluation. However, hindered by their closed-source nature or the high computational demand to host and tune them, there is a lack of practice in further calibrating an off-the-shelf LLM-based evaluator towards better human alignment. In this work, we propose AutoCalibrate, a multi-stage, gradient-free approach to automatically calibrate and align an LLM-based evaluator toward human preference. Instead of explicitly modeling human preferences, we first implicitly encompass them within a set of human labels. Then, an initial set of scoring criteria is drafted by the language model itself, leveraging in-context learning on different few-shot examples. To further calibrate this set of criteria, we select the best performers and re-draft them with self-refinement. Our experiments on multiple text quality evaluation datasets illustrate a significant improvement in correlation with expert evaluation through calibration. Our comprehensive qualitative analysis conveys insightful intuitions and observations on the essence of effective scoring criteria. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 394,153 |
2408.10615 | Enhancing Robustness in Large Language Models: Prompting for Mitigating the Impact of Irrelevant Information | In recent years, large language models (LLMs) have garnered significant attention due to their superior performance in complex reasoning tasks. However, recent studies show that their reasoning capabilities may diminish markedly when problem descriptions contain irrelevant information, even with the use of advanced prompting techniques. To further investigate this issue, a dataset of primary school mathematics problems containing irrelevant information, named GSMIR, was constructed. Testing prominent LLMs and prompting techniques on this dataset revealed that while LLMs can identify irrelevant information, they do not effectively mitigate the interference it causes once identified. A novel automatic construction method, ATF, which enhances the ability of LLMs to identify and self-mitigate the influence of irrelevant information, is proposed to address this shortcoming. This method operates in two steps: first, analysis of irrelevant information, followed by its filtering. The ATF method, as demonstrated by experimental results, significantly improves the reasoning performance of LLMs and prompting techniques on the GSMIR dataset, even in the presence of irrelevant information. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 481,940 |
2501.08924 | Learning Joint Denoising, Demosaicing, and Compression from the Raw Natural Image Noise Dataset | This paper introduces the Raw Natural Image Noise Dataset (RawNIND), a diverse collection of paired raw images designed to support the development of denoising models that generalize across sensors, image development workflows, and styles. Two denoising methods are proposed: one operates directly on raw Bayer data, leveraging computational efficiency, while the other processes linear RGB images for improved generalization to different sensors, with both preserving flexibility for subsequent development. Both methods outperform traditional approaches which rely on developed images. Additionally, the integration of denoising and compression at the raw data level significantly enhances rate-distortion performance and computational efficiency. These findings suggest a paradigm shift toward raw data workflows for efficient and flexible image processing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 524,945 |
2403.14292 | HySim: An Efficient Hybrid Similarity Measure for Patch Matching in Image Inpainting | Inpainting, for filling missing image regions, is a crucial task in various applications, such as medical imaging and remote sensing. The efficiency of trending data-driven approaches to image inpainting often requires extensive data preprocessing. In this sense, there is still a need for model-driven approaches in applications constrained by data availability and quality, especially those related to time series forecasting using image inpainting techniques. This paper proposes an improved model-driven approach relying on patch-based techniques. Our approach deviates from the standard Sum of Squared Differences (SSD) similarity measure by introducing a Hybrid Similarity (HySim), which combines the strengths of the Chebyshev and Minkowski distances. This hybridization enhances patch selection, leading to high-quality inpainting results with reduced mismatch errors. Experimental results demonstrated the effectiveness of our approach against other model-driven techniques, such as diffusion or patch-based approaches, showcasing its ability to achieve visually pleasing restorations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 440,002 |
2401.15600 | A Mechatronic System for the Visualisation and Analysis of Orchestral Conducting | This paper quantitatively analysed orchestral conducting patterns, and detected variations as a result of extraneous body movement during conducting, in the first experiment of its kind. A novel live conducting system featuring data capture, processing, and analysis was developed. Reliable data of an expert conductor's movements was collected, processed, and used to calculate average trajectories for different conducting techniques with various extraneous body movements; variations between extraneous body movement techniques and controlled technique were definitively determined in a novel quantitative analysis. A portable and affordable mechatronic system was created to capture and process live baton tip data, and was found to be accurate through calibration against a reliable reference. Experimental conducting field data was captured through the mechatronic system, and analysed against previously calculated average trajectories; the extraneous movement used during the field data capture was successfully identified by the system. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 424,511 |
2107.13735 | Learning the temporal evolution of multivariate densities via normalizing flows | In this work, we propose a method to learn multivariate probability distributions using sample path data from stochastic differential equations. Specifically, we consider temporally evolving probability distributions (e.g., those produced by integrating local or nonlocal Fokker-Planck equations). We analyze this evolution through machine learning assisted construction of a time-dependent mapping that takes a reference distribution (say, a Gaussian) to each and every instance of our evolving distribution. If the reference distribution is the initial condition of a Fokker-Planck equation, what we learn is the time-T map of the corresponding solution. Specifically, the learned map is a multivariate normalizing flow that deforms the support of the reference density to the support of each and every density snapshot in time. We demonstrate that this approach can approximate probability density function evolutions in time from observed sampled data for systems driven by both Brownian and L\'evy noise. We present examples with two- and three-dimensional, uni- and multimodal distributions to validate the method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 248,286 |
1909.09309 | CNN-based RGB-D Salient Object Detection: Learn, Select and Fuse | The goal of this work is to present a systematic solution for RGB-D salient object detection, which addresses the following three aspects with a unified framework: modal-specific representation learning, complementary cue selection and cross-modal complement fusion. To learn discriminative modal-specific features, we propose a hierarchical cross-modal distillation scheme, in which the well-learned source modality provides supervisory signals to facilitate the learning process for the new modality. To better extract the complementary cues, we formulate a residual function to incorporate complements from the paired modality adaptively. Furthermore, a top-down fusion structure is constructed for sufficient cross-modal interactions and cross-level transmissions. The experimental results demonstrate the effectiveness of the proposed cross-modal distillation scheme in zero-shot saliency detection and pre-training on a new modality, as well as the advantages in selecting and fusing cross-modal/cross-level complements. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 146,225 |
1906.08107 | Constrained Bilinear Factorization Multi-view Subspace Clustering | Multi-view clustering is an important and fundamental problem. Many multi-view subspace clustering methods have been proposed, and most of them assume that all views share the same coefficient matrix. However, the underlying information of multi-view data is not fully exploited under this assumption, since the coefficient matrices of different views should have the same clustering properties rather than be uniform among multiple views. To this end, this paper proposes a novel Constrained Bilinear Factorization Multi-view Subspace Clustering (CBF-MSC) method. Specifically, bilinear factorization with an orthonormality constraint and a low-rank constraint is imposed on all coefficient matrices to make them have the same trace norm instead of being equivalent, so as to explore the consensus information of multi-view data more fully. Finally, an Augmented Lagrangian Multiplier (ALM) based algorithm is designed to optimize the objective function. Comprehensive experiments on nine benchmark datasets validate the effectiveness and competitiveness of the proposed approach compared with several state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 135,783 |
2204.14173 | Evolutionary Approach to Security Games with Signaling | Green Security Games have become a popular way to model scenarios involving the protection of natural resources, such as wildlife. Sensors (e.g. drones equipped with cameras) have also begun to play a role in these scenarios by providing real-time information. Incorporating both human and sensor defender resources strategically is the subject of recent work on Security Games with Signaling (SGS). However, current methods to solve SGS do not scale well in terms of time or memory. We therefore propose a novel approach to SGS, which, for the first time in this domain, employs an Evolutionary Computation paradigm: EASGS. EASGS effectively searches the huge SGS solution space via suitable solution encoding in a chromosome and a specially-designed set of operators. The operators include three types of mutations, each focusing on a particular aspect of the SGS solution, optimized crossover and a local coverage improvement scheme (a memetic aspect of EASGS). We also introduce a new set of benchmark games, based on dense or locally-dense graphs that reflect real-world SGS settings. In the majority of 342 test game instances, EASGS outperforms state-of-the-art methods, including a reinforcement learning method, in terms of time scalability, nearly constant memory utilization, and quality of the returned defender's strategies (expected payoffs). | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 294,077 |
1802.01279 | Zero-Shot Kernel Learning | In this paper, we address an open problem of zero-shot learning. Its principle is based on learning a mapping that associates feature vectors extracted from, e.g., images with attribute vectors that describe objects and/or scenes of interest. In turn, this allows classifying unseen object classes and/or scenes by matching feature vectors via the mapping to a newly defined attribute vector describing a new class. Due to the importance of such a learning task, there exist many methods that learn semantic, probabilistic, linear or piece-wise linear mappings. In contrast, we apply well-established kernel methods to learn a non-linear mapping between the feature and attribute spaces. We propose an easy learning objective inspired by the Linear Discriminant Analysis, Kernel-Target Alignment and Kernel Polarization methods that promotes incoherence. We evaluate the performance of our algorithm on the Polynomial as well as shift-invariant Gaussian and Cauchy kernels. Despite the simplicity of our approach, we obtain state-of-the-art results on several zero-shot learning datasets and benchmarks, including the recent AWA2 dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 89,580 |
2211.04076 | Linear Self-Attention Approximation via Trainable Feedforward Kernel | In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches -- models attaining sub-quadratic attention complexity can utilize a notion of sparsity or a low-rank approximation of inputs to reduce the number of attended keys; other ways to reduce complexity include locality-sensitive hashing, key pooling, additional memory to store information in compacted form, or hybridization with other architectures, such as CNNs. Often built on a strong mathematical basis, kernelized approaches allow for the approximation of attention with linear complexity while retaining high accuracy. Therefore, in the present paper, we aim to expand the idea of trainable kernel methods to approximate the self-attention mechanism of the Transformer architecture. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 329,129 |
2306.09421 | FLAIR: A Metric for Liquidity Provider Competitiveness in Automated Market Makers | This paper aims to enhance the understanding of liquidity provider (LP) returns in automated market makers (AMMs). LPs face market risk as well as adverse selection due to risky asset holdings in the pool that they provide liquidity to and the informational asymmetry between informed traders (arbitrageurs) and AMMs. Loss-versus-rebalancing (LVR) quantifies the adverse selection cost (Milionis et al., 2022a), and is a popular metric to evaluate the flow toxicity to an AMM. However, individual LP returns are critically affected by another factor orthogonal to the above: the competitiveness among LPs. This work introduces a novel metric for LP competitiveness, called FLAIR (short for fee liquidity-adjusted instantaneous returns), that aims to supplement LVR in assessments of LP performance to capture the dynamic behavior of LPs in a pool. Our metric reflects the characteristics of fee return-on-capital, and differentiates active liquidity provisioning strategies in AMMs. To illustrate how both flow toxicity, accounting for the sophistication of the counterparty of LPs, as well as LP competitiveness, accounting for the sophistication of the competition among LPs, affect individual LP returns, we propose a quadrant interpretation where all of these characteristics may be readily visualized. We examine LP competitiveness in an ex-post fashion, and show example cases in all of which our metric confirms the expected nuances and intuition of competitiveness among LPs. FLAIR has particular merit in empirical analyses, and is able to better inform practical assessments of AMM pools. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 373,823 |
2410.17473 | DROP: Distributional and Regular Optimism and Pessimism for Reinforcement Learning | In reinforcement learning (RL), the temporal difference (TD) error is known to be related to the firing rate of dopamine neurons. It has been observed that dopamine neurons do not behave uniformly; rather, each responds to the TD error in an optimistic or pessimistic manner, interpreted as a kind of distributional RL. To explain such biological data, a heuristic model has also been designed with learning rates that are asymmetric for positive and negative TD errors. However, this heuristic model is not theoretically grounded, and it is unknown whether it can work as an RL algorithm. This paper therefore introduces a novel theoretically grounded model with optimism and pessimism, which is derived from control as inference. In combination with ensemble learning, a distributional value function as a critic is estimated from regularly introduced optimism and pessimism. Based on its central value, a policy in an actor is improved. The proposed algorithm, called DROP (distributional and regular optimism and pessimism), is compared on dynamic tasks. Although the heuristic model showed poor learning performance, DROP showed excellent performance in all tasks with high generality. These results suggest that DROP is a new model that can elicit the potential contributions of optimism and pessimism. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 501,474 |
2501.06283 | Dafny as Verification-Aware Intermediate Language for Code Generation | Using large language models (LLMs) to generate source code from natural language prompts is a popular and promising idea with a wide range of applications. One of its limitations is that the generated code can be faulty at times, often in a subtle way, despite being presented to the user as correct. In this paper, we explore ways in which formal methods can assist with increasing the quality of code generated by an LLM. Instead of emitting code in a target language directly, we propose that the user guides the LLM to first generate an opaque intermediate representation, in the verification-aware language Dafny, that can be automatically validated for correctness against agreed on specifications. The correct Dafny program is then compiled to the target language and returned to the user. All user-system interactions throughout the procedure occur via natural language; Dafny code is never exposed. We describe our current prototype and report on its performance on the HumanEval Python code generation benchmarks. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 523,931 |
1504.05900 | The Degraded Gaussian Diamond-Wiretap Channel | In this paper, we present nontrivial upper and lower bounds on the secrecy capacity of the degraded Gaussian diamond-wiretap channel and identify several ranges of channel parameters where these bounds coincide with useful intuitions. Furthermore, we investigate the effect of the presence of an eavesdropper on the capacity. We consider the following two scenarios regarding the availability of randomness: 1) a common randomness is available at the source and the two relays and 2) a randomness is available only at the source and there is no available randomness at the relays. We obtain the upper bound by taking into account the correlation between the two relay signals and the availability of randomness at each encoder. For the lower bound, we propose two types of coding schemes: 1) a decode-and-forward scheme where the relays cooperatively transmit the message and the fictitious message and 2) a partial DF scheme incorporated with multicoding in which each relay sends an independent partial message and the whole or partial fictitious message using dependent codewords. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 42,336 |
2209.10926 | Equivariant Transduction through Invariant Alignment | The ability to generalize compositionally is key to understanding the potentially infinite number of sentences that can be constructed in a human language from only a finite number of words. Investigating whether NLP models possess this ability has been a topic of interest: SCAN (Lake and Baroni, 2018) is one task specifically proposed to test for this property. Previous work has achieved impressive empirical results using a group-equivariant neural network that naturally encodes a useful inductive bias for SCAN (Gordon et al., 2020). Inspired by this, we introduce a novel group-equivariant architecture that incorporates a group-invariant hard alignment mechanism. We find that our network's structure allows it to develop stronger equivariance properties than existing group-equivariant approaches. We additionally find that it outperforms previous group-equivariant networks empirically on the SCAN task. Our results suggest that integrating group-equivariance into a variety of neural architectures is a potentially fruitful avenue of research, and demonstrate the value of careful analysis of the theoretical properties of such architectures. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 319,020 |
2404.13166 | FoMo: A Proposal for a Multi-Season Dataset for Robot Navigation in Forêt Montmorency | In this paper, we propose the FoMo (Forêt Montmorency) dataset: a comprehensive, multi-season data collection. Located in the Montmorency Forest, Quebec, Canada, our dataset will capture a rich variety of sensory data over six distinct trajectories totaling 6 kilometers, repeated through different seasons to accumulate 42 kilometers of recorded data. The boreal forest environment increases the diversity of datasets for mobile robot navigation. This proposed dataset will feature a broad array of sensor modalities, including lidar, radar, and a navigation-grade Inertial Measurement Unit (IMU), against the backdrop of challenging boreal forest conditions. Notably, the FoMo dataset will be distinguished by its inclusion of seasonal variations, such as changes in tree canopy and snow depth up to 2 meters, presenting new challenges for robot navigation algorithms. Alongside, we will offer a centimeter-level accurate ground truth, obtained through Post Processed Kinematic (PPK) Global Navigation Satellite System (GNSS) correction, facilitating precise evaluation of odometry and localization algorithms. This work aims to spur advancements in autonomous navigation, enabling the development of robust algorithms capable of handling the dynamic, unstructured environments characteristic of boreal forests. With a public odometry and localization leaderboard and a dedicated software suite, we invite the robotics community to engage with the FoMo dataset by exploring new frontiers in robot navigation under extreme environmental variations. We seek feedback from the community based on this proposal to make the dataset as useful as possible. For further details and supplementary materials, please visit https://norlab-ulaval.github.io/FoMo-website/. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 448,190 |
2303.11771 | Self-Sufficient Framework for Continuous Sign Language Recognition | The goal of this work is to develop a self-sufficient framework for Continuous Sign Language Recognition (CSLR) that addresses key issues of sign language recognition. These include the need for complex multi-scale features such as hands, face, and mouth for understanding, and the absence of frame-level annotations. To this end, we propose (1) Divide and Focus Convolution (DFConv) which extracts both manual and non-manual features without the need for additional networks or annotations, and (2) Dense Pseudo-Label Refinement (DPLR) which propagates non-spiky frame-level pseudo-labels by combining the ground truth gloss sequence labels with the predicted sequence. We demonstrate that our model achieves state-of-the-art performance among RGB-based methods on large-scale CSLR benchmarks, PHOENIX-2014 and PHOENIX-2014-T, while showing comparable results with better efficiency when compared to other approaches that use multi-modality or extra annotations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 353,003
2104.07661 | E2Style: Improve the Efficiency and Effectiveness of StyleGAN Inversion | This paper studies the problem of StyleGAN inversion, which plays an essential role in enabling the pretrained StyleGAN to be used for real image editing tasks. The goal of StyleGAN inversion is to find the exact latent code of the given image in the latent space of StyleGAN. This problem has a high demand for quality and efficiency. Existing optimization-based methods can produce high-quality results, but the optimization often takes a long time. On the contrary, forward-based methods are usually faster but the quality of their results is inferior. In this paper, we present a new feed-forward network "E2Style" for StyleGAN inversion, with significant improvement in terms of efficiency and effectiveness. In our inversion network, we introduce: 1) a shallower backbone with multiple efficient heads across scales; 2) multi-layer identity loss and multi-layer face parsing loss to the loss function; and 3) multi-stage refinement. Combining these designs together forms an effective and efficient method that exploits all benefits of optimization-based and forward-based methods. Quantitative and qualitative results show that our E2Style performs better than existing forward-based methods and comparably to state-of-the-art optimization-based methods while maintaining the high efficiency of forward-based methods. Moreover, a number of real image editing applications demonstrate the efficacy of our E2Style. Our code is available at \url{https://github.com/wty-ustc/e2style} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 230,503
2111.09212 | Single-pass Object-adaptive Data Undersampling and Reconstruction for
MRI | There is much recent interest in techniques to accelerate the data acquisition process in MRI by acquiring limited measurements. Often sophisticated reconstruction algorithms are deployed to maintain high image quality in such settings. In this work, we propose a data-driven sampler using a convolutional neural network, MNet, to provide object-specific sampling patterns adaptive to each scanned object. The network observes very limited low-frequency k-space data for each object and rapidly predicts the desired undersampling pattern in one go that achieves high image reconstruction quality. We propose an accompanying alternating-type training framework with a mask-backward procedure that efficiently generates training labels for the sampler network and jointly trains an image reconstruction network. Experimental results on the fastMRI knee dataset demonstrate the ability of the proposed learned undersampling network to generate object-specific masks at fourfold and eightfold acceleration that achieve superior image reconstruction performance compared to several existing schemes. The source code for the proposed joint sampling and reconstruction learning framework is available at https://github.com/zhishenhuang/mri. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 266,951
2404.09071 | Statistical Analysis of Block Coordinate Descent Algorithms for Linear
Continuous-time System Identification | Block coordinate descent is an optimization technique that is used for estimating multi-input single-output (MISO) continuous-time models, as well as single-input single-output (SISO) models in additive form. Despite its widespread use in various optimization contexts, the statistical properties of block coordinate descent in continuous-time system identification have not been covered in the literature. The aim of this paper is to formally analyze the bias properties of the block coordinate descent approach for the identification of MISO and additive SISO systems. We characterize the asymptotic bias at each iteration, and provide sufficient conditions for the consistency of the estimator for each identification setting. The theoretical results are supported by simulation examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 446,526
1811.11304 | Universal Adversarial Training | Standard adversarial attacks change the predicted class label of a selected image by adding specially tailored small perturbations to its pixels. In contrast, a universal perturbation is an update that can be added to any image in a broad class of images, while still changing the predicted class label. We study the efficient generation of universal adversarial perturbations, and also efficient methods for hardening networks to these attacks. We propose a simple optimization-based universal attack that reduces the top-1 accuracy of various network architectures on ImageNet to less than 20%, while learning the universal perturbation 13X faster than the standard method. To defend against these perturbations, we propose universal adversarial training, which models the problem of robust classifier generation as a two-player min-max game, and produces robust models with only 2X the cost of natural training. We also propose a simultaneous stochastic gradient method that is almost free of extra computation, which allows us to do universal adversarial training on ImageNet. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 114,737 |
2302.14420 | Estimation-of-Distribution Algorithms for Multi-Valued Decision
Variables | The majority of research on estimation-of-distribution algorithms (EDAs) concentrates on pseudo-Boolean optimization and permutation problems, leaving the domain of EDAs for problems in which the decision variables can take more than two values, but which are not permutation problems, mostly unexplored. To render this domain more accessible, we propose a natural way to extend the known univariate EDAs to this setting. Different from a naive reduction to the binary case, our approach avoids additional constraints. Since understanding genetic drift is crucial for an optimal parameter choice, we extend the known quantitative analysis of genetic drift to EDAs for multi-valued variables. Roughly speaking, when the variables take $r$ different values, the time for genetic drift to become significant is $r$ times shorter than in the binary case. Consequently, the update strength of the probabilistic model has to be chosen $r$ times lower now. To investigate how desired model updates take place in this framework, we undertake a mathematical runtime analysis on the $r$-valued LeadingOnes problem. We prove that with the right parameters, the multi-valued UMDA solves this problem efficiently in $O(r\ln(r)^2 n^2 \ln(n))$ function evaluations. This bound is nearly tight as our lower bound $\Omega(r\ln(r) n^2 \ln(n))$ shows. Overall, our work shows that our good understanding of binary EDAs naturally extends to the multi-valued setting, and it gives advice on how to set the main parameters of multi-valued EDAs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 348,291
1909.00893 | A Predictive Deep Learning Approach to Output Regulation: The Case of
Collaborative Pursuit Evasion | In this paper, we consider the problem of controlling an underactuated system in unknown, and potentially adversarial environments. The emphasis will be on autonomous aerial vehicles, modelled by Dubins dynamics. The proposed control law is based on a variable integrator via online prediction for target tracking. To showcase the efficacy of our method, we analyze a pursuit evasion game between multiple autonomous agents. To obviate the need for perfect knowledge of the evader's future strategy, we use a deep neural network that is trained to approximate the behavior of the evader based on measurements gathered online during the pursuit. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 143,744 |
2006.15756 | Age of Information in Ultra-Dense IoT Systems: Performance and
Mean-Field Game Analysis | In this paper, a dense Internet of Things (IoT) monitoring system is considered in which a large number of IoT devices contend for channel access so as to transmit timely status updates to the corresponding receivers using a carrier sense multiple access (CSMA) scheme. Under two packet management schemes with and without preemption in service, the closed-form expressions of the average age of information (AoI) and the average peak AoI of each device are characterized. It is shown that the scheme with preemption in service always leads to a smaller average AoI and a smaller average peak AoI, compared to the scheme without preemption in service. Then, a distributed noncooperative medium access control game is formulated in which each device optimizes its waiting rate so as to minimize its average AoI or average peak AoI under an average energy cost constraint on channel sensing and packet transmitting. To overcome the challenges of solving this game for an ultra-dense IoT, a mean-field game (MFG) approach is proposed to study the asymptotic performance of each device for the system in the large population regime. The accuracy of the MFG is analyzed, and the existence, uniqueness, and convergence of the mean-field equilibrium (MFE) are investigated. Simulation results show that the proposed MFG is accurate even for a small number of devices; and the proposed CSMA-type scheme under the MFG analysis outperforms three baseline schemes with fixed and dynamic waiting rates. Moreover, it is observed that the average AoI and the average peak AoI under the MFE do not necessarily decrease with the arrival rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 184,615
1805.01692 | A Convex Approximation of the Relaxed Binaural Beamforming Optimization
Problem | The recently proposed relaxed binaural beamforming (RBB) optimization problem provides a flexible trade-off between noise suppression and binaural-cue preservation of the sound sources in the acoustic scene. It minimizes the output noise power, under the constraints which guarantee that the target remains unchanged after processing and the binaural-cue distortions of the acoustic sources will be less than a user-defined threshold. However, the RBB problem is a computationally demanding non-convex optimization problem. The only existing suboptimal method which approximately solves the RBB is a successive convex optimization (SCO) method which, typically, requires to solve multiple convex optimization problems per frequency bin, in order to converge. Convergence is achieved when all constraints of the RBB optimization problem are satisfied. In this paper, we propose a semi-definite convex relaxation (SDCR) of the RBB optimization problem. The proposed suboptimal SDCR method solves a single convex optimization problem per frequency bin, resulting in a much lower computational complexity than the SCO method. Unlike the SCO method, the SDCR method does not guarantee user-controlled upper-bounded binaural-cue distortions. To tackle this problem we also propose a suboptimal hybrid method which combines the SDCR and SCO methods. Instrumental measures combined with a listening test show that the SDCR and hybrid methods achieve significantly lower computational complexity than the SCO method, and in most cases better trade-off between predicted intelligibility and binaural-cue preservation than the SCO method. | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 96,695 |
1804.07275 | Deep Triplet Ranking Networks for One-Shot Recognition | Despite the breakthroughs achieved by deep learning models in conventional supervised learning scenarios, their dependence on sufficient labeled training data in each class prevents effective applications of these deep models in situations where labeled training instances for a subset of novel classes are very sparse -- in the extreme case only one instance is available for each class. To tackle this natural and important challenge, one-shot learning, which aims to exploit a set of well labeled base classes to build classifiers for the new target classes that have only one observed instance per class, has recently received increasing attention from the research community. In this paper we propose a novel end-to-end deep triplet ranking network to perform one-shot learning. The proposed approach learns class universal image embeddings on the well labeled base classes under a triplet ranking loss, such that the instances from new classes can be categorized based on their similarity with the one-shot instances in the learned embedding space. Moreover, our approach can naturally incorporate the available one-shot instances from the new classes into the embedding learning process to improve the triplet ranking model. We conduct experiments on two popular datasets for one-shot learning. The results show the proposed approach achieves better performance than the state-of-the-art comparison methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 95,496
2005.04399 | Estimating g-Leakage via Machine Learning | This paper considers the problem of estimating the information leakage of a system in the black-box scenario. It is assumed that the system's internals are unknown to the learner, or anyway too complicated to analyze, and the only available information are pairs of input-output data samples, possibly obtained by submitting queries to the system or provided by a third party. Previous research has mainly focused on counting the frequencies to estimate the input-output conditional probabilities (referred to as the frequentist approach); however, this method is not accurate when the domain of possible outputs is large. To overcome this difficulty, the estimation of the Bayes error of the ideal classifier was recently investigated using Machine Learning (ML) models and it has been shown to be more accurate thanks to the ability of those models to learn the input-output correspondence. However, the Bayes vulnerability is only suitable to describe one-try attacks. A more general and flexible measure of leakage is the g-vulnerability, which encompasses several different types of adversaries, with different goals and capabilities. In this paper, we propose a novel approach to perform black-box estimation of the g-vulnerability using ML. A feature of our approach is that it does not require estimating the conditional probabilities, and that it is suitable for a large class of ML algorithms. First, we formally show the learnability for all data distributions. Then, we evaluate the performance via various experiments using k-Nearest Neighbors and Neural Networks. Our results outperform the frequentist approach when the observables domain is large. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 176,454
2310.03274 | Fragment-based Pretraining and Finetuning on Molecular Graphs | Property prediction on molecular graphs is an important application of Graph Neural Networks. Recently, unlabeled molecular data has become abundant, which facilitates the rapid development of self-supervised learning for GNNs in the chemical domain. In this work, we propose pretraining GNNs at the fragment level, a promising middle ground to overcome the limitations of node-level and graph-level pretraining. Borrowing techniques from recent work on principal subgraph mining, we obtain a compact vocabulary of prevalent fragments from a large pretraining dataset. From the extracted vocabulary, we introduce several fragment-based contrastive and predictive pretraining tasks. The contrastive learning task jointly pretrains two different GNNs: one on molecular graphs and the other on fragment graphs, which represents higher-order connectivity within molecules. By enforcing consistency between the fragment embedding and the aggregated embedding of the corresponding atoms from the molecular graphs, we ensure that the embeddings capture structural information at multiple resolutions. The structural information of fragment graphs is further exploited to extract auxiliary labels for graph-level predictive pretraining. We employ both the pretrained molecular-based and fragment-based GNNs for downstream prediction, thus utilizing the fragment information during finetuning. Our graph fragment-based pretraining (GraphFP) advances the performances on 5 out of 8 common molecular benchmarks and improves the performances on long-range biological benchmarks by at least 11.5%. Code is available at: https://github.com/lvkd84/GraphFP. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 397,210 |
2107.03056 | Position Constrained, Adaptive Control of Robotic Manipulators without
Velocity Measurements | This work presents the design and the corresponding stability analysis of a model based, joint position tracking error constrained, adaptive output feedback controller for robot manipulators. Specifically, provided that the initial joint position tracking error starts within a predefined region, the proposed controller algorithm ensures that the joint tracking error remains inside this region and asymptotically approaches zero, despite the lack of joint velocity measurements and uncertainties associated with the system dynamics. The need for joint velocity measurements is removed via the use of a surrogate filter formulation in conjunction with the use of desired model compensation. The stability and the convergence of the closed loop system are proved via a barrier Lyapunov function based argument. A simulation performed on a two-link robotic manipulator is provided in order to illustrate the feasibility and effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 245,038
2302.01703 | DAMS-LIO: A Degeneration-Aware and Modular Sensor-Fusion LiDAR-inertial
Odometry | With robots being deployed in increasingly complex environments like underground mines and planetary surfaces, multi-sensor fusion methods have gained more and more attention as a promising solution to state estimation in such scenes. The fusion scheme is a central component of these methods. In this paper, a light-weight iEKF-based LiDAR-inertial odometry system is presented, which utilizes a degeneration-aware and modular sensor-fusion pipeline that takes both LiDAR points and relative pose from another odometry as the measurement in the update process only when degeneration is detected. Both the Cramer-Rao Lower Bound (CRLB) theory and simulation test are used to demonstrate the higher accuracy of our method compared to methods using a single observation. Furthermore, the proposed system is evaluated in perceptually challenging datasets against various state-of-the-art sensor-fusion methods. The results show that the proposed system achieves real-time and high estimation accuracy performance despite the challenging environment and poor observations. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 343,710
2205.01076 | Classification of Buildings' Potential for Seismic Damage by Means of
Artificial Intelligence Techniques | Developing a rapid, but also reliable and efficient, method for classifying the seismic damage potential of buildings constructed in countries with regions of high seismicity is always at the forefront of modern scientific research. Such a technique would be essential for estimating the pre-seismic vulnerability of the buildings, so that the authorities will be able to develop earthquake safety plans for seismic rehabilitation of the highly earthquake-susceptible structures. In the last decades, several researchers have proposed such procedures, some of which were adopted by seismic code guidelines. These procedures usually utilize methods based either on simple calculations or on the application of statistics theory. Recently, the increase of the computers' power has led to the development of modern statistical methods based on the adoption of Machine Learning algorithms. These methods have been shown to be useful for predicting seismic performance and classifying structural damage level by means of extracting patterns from data collected via various sources. A large training dataset is used for the implementation of the classification algorithms. To this end, 90 3D R/C buildings with three different masonry infills' distributions are analysed utilizing Nonlinear Time History Analysis method for 65 real seismic records. The level of the seismic damage is expressed in terms of the Maximum Interstory Drift Ratio. A large number of Machine Learning algorithms is utilized in order to estimate the buildings' damage response. The most significant conclusion is that Machine Learning methods that are mathematically well-established and whose operations are clearly interpretable step by step can be used to solve some of the most sophisticated real-world problems under consideration with high accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 294,472
2006.02548 | Graphical Normalizing Flows | Normalizing flows model complex probability distributions by combining a base distribution with a series of bijective neural networks. State-of-the-art architectures rely on coupling and autoregressive transformations to lift up invertible functions from scalars to vectors. In this work, we revisit these transformations as probabilistic graphical models, showing they reduce to Bayesian networks with a pre-defined topology and a learnable density at each node. From this new perspective, we propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure. This model provides a promising way to inject domain knowledge into normalizing flows while preserving both the interpretability of Bayesian networks and the representation capacity of normalizing flows. We show that graphical conditioners discover relevant graph structure when we cannot hypothesize it. In addition, we analyze the effect of $\ell_1$-penalization on the recovered structure and on the quality of the resulting density estimation. Finally, we show that graphical conditioners lead to competitive white box density estimators. Our implementation is available at https://github.com/AWehenkel/DAG-NF. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,064 |
1503.07568 | Router-level community structure of the Internet Autonomous Systems | The Internet is composed of routing devices connected between them and organized into independent administrative entities: the Autonomous Systems. The existence of different types of Autonomous Systems (like large connectivity providers, Internet Service Providers or universities) together with geographical and economical constraints, turns the Internet into a complex modular and hierarchical network. This organization is reflected in many properties of the Internet topology, like its high degree of clustering and its robustness. In this work, we study the modular structure of the Internet router-level graph in order to assess to what extent the Autonomous Systems satisfy some of the known notions of community structure. We show that the modular structure of the Internet is much richer than what can be captured by the current community detection methods, which are severely affected by resolution limits and by the heterogeneity of the Autonomous Systems. Here we overcome this issue by using a multiresolution detection algorithm combined with a small sample of nodes. We also discuss recent work on community structure in the light of our results. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 41,487 |
1805.00599 | Placement Delivery Array Design via Attention-Based Deep Neural Network | A decentralized coded caching scheme has been proposed by Maddah-Ali and Niesen, and has been shown to alleviate the load of networks. Recently, placement delivery array (PDA) was proposed to characterize the coded caching scheme. In this paper, a neural architecture is first proposed to learn the construction of PDAs. Our model solves the problem of variable size PDAs using mechanism of neural attention and reinforcement learning. It differs from the previous attempts in that, instead of using combined optimization algorithms to get PDAs, it uses a sequence-to-sequence model to learn to construct PDAs. Numerical results are given to demonstrate that the proposed method can effectively implement coded caching. We also show that the complexity of our method to construct PDAs is low. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 96,466
2406.12020 | When Box Meets Graph Neural Network in Tag-aware Recommendation | Last year has witnessed the re-flourishment of tag-aware recommender systems supported by the LLM-enriched tags. Unfortunately, though large efforts have been made, current solutions may fail to describe the diversity and uncertainty inherent in user preferences with only tag-driven profiles. Recently, with the development of geometry-based techniques, e.g., box embedding, diversity of user preferences now could be fully modeled as the range within a box in high dimension space. However, a defect still exists, as these approaches are incapable of capturing high-order neighbor signals, i.e., semantic-rich multi-hop relations within the user-tag-item tripartite graph, which severely limits the effectiveness of user modeling. To deal with this challenge, in this paper, we propose a novel algorithm, called BoxGNN, to perform the message aggregation via combination of logical operations, thereby incorporating high-order signals. Specifically, we first embed users, items, and tags as hyper-boxes rather than simple points in the representation space, and define two logical operations to facilitate the subsequent process. Next, we perform the message aggregation mechanism via the combination of logical operations, to obtain the corresponding high-order box representations. Finally, we adopt a volume-based learning objective with Gumbel smoothing techniques to refine the representation of boxes. Extensive experiments on two publicly available datasets and one LLM-enhanced e-commerce dataset have validated the superiority of BoxGNN compared with various state-of-the-art baselines. The code is released online. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 465,181
2003.13474 | Extending a Tag-based Collaborative Recommender with Co-occurring
Information Interests | Collaborative Filtering is largely applied to personalize item recommendation but its performance is affected by the sparsity of rating data. In order to address this issue, recent systems have been developed to improve recommendation by extracting latent factors from the rating matrices, or by exploiting trust relations established among users in social networks. In this work, we are interested in evaluating whether other sources of preference information than ratings and social ties can be used to improve recommendation performance. Specifically, we aim at testing whether the integration of frequently co-occurring interests in information search logs can improve recommendation performance in User-to-User Collaborative Filtering (U2UCF). For this purpose, we propose the Extended Category-based Collaborative Filtering (ECCF) recommender, which enriches category-based user profiles derived from the analysis of rating behavior with data categories that are frequently searched together by people in search sessions. We test our model using a big rating dataset and a log of a largely used search engine to extract the co-occurrence of interests. The experiments show that ECCF outperforms U2UCF and category-based collaborative recommendation in accuracy, MRR, diversity of recommendations and user coverage. Moreover, it outperforms the SVD++ Matrix Factorization algorithm in accuracy and diversity of recommendation lists. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 170,216 |
1804.08986 | Feedback Control Goes Wireless: Guaranteed Stability over Low-power
Multi-hop Networks | Closing feedback loops fast and over long distances is key to emerging applications; for example, robot motion control and swarm coordination require update intervals of tens of milliseconds. Low-power wireless technology is preferred for its low cost, small form factor, and flexibility, especially if the devices support multi-hop communication. So far, however, feedback control over wireless multi-hop networks has only been shown for update intervals on the order of seconds. This paper presents a wireless embedded system that tames imperfections impairing control performance (e.g., jitter and message loss), and a control design that exploits the essential properties of this system to provably guarantee closed-loop stability for physical processes with linear time-invariant dynamics. Using experiments on a cyber-physical testbed with 20 wireless nodes and multiple cart-pole systems, we are the first to demonstrate and evaluate feedback control and coordination over wireless multi-hop networks for update intervals of 20 to 50 milliseconds. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | true | 95,881 |
1907.00480 | Predicting video saliency using crowdsourced mouse-tracking data | This paper presents a new way of getting high-quality saliency maps for video, using a cheaper alternative to eye-tracking data. We designed a mouse-contingent video viewing system which simulates the viewers' peripheral vision based on the position of the mouse cursor. The system enables the use of mouse-tracking data recorded from an ordinary computer mouse as an alternative to real gaze fixations recorded by a more expensive eye-tracker. We developed a crowdsourcing system that enables the collection of such mouse-tracking data at large scale. Using the collected mouse-tracking data we showed that it can serve as an approximation of eye-tracking data. Moreover, trying to increase the efficiency of collected mouse-tracking data we proposed a novel deep neural network algorithm that improves the quality of mouse-tracking saliency maps. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 137,056 |
2408.12128 | Diffusion-Based Visual Art Creation: A Survey and New Perspectives | The integration of generative AI in visual art has revolutionized not only how visual content is created but also how AI interacts with and reflects the underlying domain knowledge. This survey explores the emerging realm of diffusion-based visual art creation, examining its development from both artistic and technical perspectives. We structure the survey into three phases, data feature and framework identification, detailed analyses using a structured coding process, and open-ended prospective outlooks. Our findings reveal how artistic requirements are transformed into technical challenges and highlight the design and application of diffusion-based methods within visual art creation. We also provide insights into future directions from technical and synergistic perspectives, suggesting that the confluence of generative AI and art has shifted the creative paradigm and opened up new possibilities. By summarizing the development and trends of this emerging interdisciplinary area, we aim to shed light on the mechanisms through which AI systems emulate and possibly, enhance human capacities in artistic perception and creativity. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,601 |
2502.01527 | Enhancing Bayesian Network Structural Learning with Monte Carlo Tree
Search | This article presents MCTS-BN, an adaptation of the Monte Carlo Tree Search (MCTS) algorithm for the structural learning of Bayesian Networks (BNs). Initially designed for game tree exploration, MCTS has been repurposed to address the challenge of learning BN structures by exploring the search space of potential ancestral orders in Bayesian Networks. Then, it employs Hill Climbing (HC) to derive a Bayesian Network structure from each order. In large BNs, where the search space for variable orders becomes vast, using completely random orders during the rollout phase is often unreliable and impractical. We adopt a semi-randomized approach to address this challenge by incorporating variable orders obtained from other heuristic search algorithms such as Greedy Equivalent Search (GES), PC, or HC itself. This hybrid strategy mitigates the computational burden and enhances the reliability of the rollout process. Experimental evaluations demonstrate the effectiveness of MCTS-BN in improving BNs generated by traditional structural learning algorithms, exhibiting robust performance even when base algorithm orders are suboptimal and surpassing the gold standard when provided with favorable orders. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 529,895 |
2101.01338 | Federated Learning for 6G: Applications, Challenges, and Opportunities | Traditional machine learning is centralized in the cloud (data centers). Recently, the security concern and the availability of abundant data and computation resources in wireless networks are pushing the deployment of learning algorithms towards the network edge. This has led to the emergence of a fast growing area, called federated learning (FL), which integrates two originally decoupled areas: wireless communication and machine learning. In this paper, we provide a comprehensive study on the applications of FL for sixth generation (6G) wireless networks. First, we discuss the key requirements in applying FL for wireless communications. Then, we focus on the motivating application of FL for wireless communications. We identify the main problems, challenges, and provide a comprehensive treatment of implementing FL techniques for wireless communications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 214,342 |
2105.11475 | Semi-supervised learning of images with strong rotational disorder:
assembling nanoparticle libraries | The proliferation of optical, electron, and scanning probe microscopies gives rise to large volumes of imaging data of objects as diversified as cells, bacteria, pollen, to nanoparticles and atoms and molecules. In most cases, the experimental data streams contain images having arbitrary rotations and translations within the image. At the same time, for many cases, small amounts of labeled data are available in the form of prior published results, image collections, and catalogs, or even theoretical models. Here we develop an approach that allows generalizing from a small subset of labeled data with a weak orientational disorder to a large unlabeled dataset with a much stronger orientational (and positional) disorder, i.e., it performs a classification of image data given a small number of examples even in the presence of a distribution shift between the labeled and unlabeled parts. This approach is based on the semi-supervised rotationally invariant variational autoencoder (ss-rVAE) model consisting of the encoder-decoder "block" that learns a rotationally (and translationally) invariant continuous latent representation of data and a classifier that encodes data into a finite number of discrete classes. The classifier part of the trained ss-rVAE inherits the rotational (and translational) invariances and can be deployed independently of the other parts of the model. The performance of the ss-rVAE is illustrated using the synthetic data sets with known factors of variation. We further demonstrate its application for experimental data sets of nanoparticles, creating nanoparticle libraries and disentangling the representations defining the physical factors of variation in the data. The code reproducing the results is available at https://github.com/ziatdinovmax/Semi-Supervised-VAE-nanoparticles. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,709
2305.00927 | Cross-Institutional Transfer Learning for Educational Models:
Implications for Model Performance, Fairness, and Equity | Modern machine learning increasingly supports paradigms that are multi-institutional (using data from multiple institutions during training) or cross-institutional (using models from multiple institutions for inference), but the empirical effects of these paradigms are not well understood. This study investigates cross-institutional learning via an empirical case study in higher education. We propose a framework and metrics for assessing the utility and fairness of student dropout prediction models that are transferred across institutions. We examine the feasibility of cross-institutional transfer under real-world data- and model-sharing constraints, quantifying model biases for intersectional student identities, characterizing potential disparate impact due to these biases, and investigating the impact of various cross-institutional ensembling approaches on fairness and overall model performance. We perform this analysis on data representing over 200,000 enrolled students annually from four universities without sharing training data between institutions. We find that a simple zero-shot cross-institutional transfer procedure can achieve similar performance to locally-trained models for all institutions in our study, without sacrificing model fairness. We also find that stacked ensembling provides no additional benefits to overall performance or fairness compared to either a local model or the zero-shot transfer procedure we tested. We find no evidence of a fairness-accuracy tradeoff across dozens of models and transfer schemes evaluated. Our auditing procedure also highlights the importance of intersectional fairness analysis, revealing performance disparities at the intersection of sensitive identity groups that are concealed under one-dimensional analysis. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 361,488
2502.06394 | SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data
Annotators | Existing approaches to multilingual text detoxification are hampered by the scarcity of parallel multilingual datasets. In this work, we introduce a pipeline for the generation of multilingual parallel detoxification data. We also introduce SynthDetoxM, a manually collected and synthetically generated multilingual parallel text detoxification dataset comprising 16,000 high-quality detoxification sentence pairs across German, French, Spanish and Russian. The data was sourced from different toxicity evaluation datasets and then rewritten with nine modern open-source LLMs in few-shot setting. Our experiments demonstrate that models trained on the produced synthetic datasets have superior performance to those trained on the human-annotated MultiParaDetox dataset even in data limited setting. Models trained on SynthDetoxM outperform all evaluated LLMs in few-shot setting. We release our dataset and code to help further research in multilingual text detoxification. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 532,065 |
2003.05222 | Estimation of lateral track irregularity through Kalman filtering
techniques | The aim of this work is to develop a model-based methodology for monitoring lateral track irregularities based on the use of inertial sensors mounted on an in-service train. To this end, a gyroscope is used to measure the wheelset yaw angular velocity and two accelerometers are used to measure lateral acceleration of the wheelset and the bogie frame. Using a highly simplified linear bogie model that is able to capture the most relevant dynamic behaviour allows for the set-up of a very efficient Kalman-based monitoring strategy. The behaviour of the designed filter is assessed through the use of a detailed multibody model of an in-service vehicle running on a straight track with realistic irregularities. The model output is used to generate virtual measurements that are subsequently used to run the filter and validate the proposed estimator. In addition, the equivalent parameters of the simplified model are identified based on these simulations. In order to prove the robustness of the proposed technique, a systematic parametric analysis has been performed. The results obtained with the proposed method are promising, showing high accuracy and robustness for monitoring lateral alignment on straight tracks, with a very low computational cost. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 167,808 |
2408.00867 | An Extreme Value Theory Approach for Understanding Queue Length Dynamics
in Adaptive Corridors | This paper introduces a novel approach employing extreme value theory to analyze queue lengths within a corridor controlled by adaptive controllers. We consider the maximum queue lengths of a signalized corridor consisting of nine intersections every two minutes, roughly equivalent to the cycle length. Our research shows that maximum queue lengths at all the intersections follow the extreme value distributions. To the best knowledge of the authors, this is the first attempt to characterize queue length time series using extreme value analysis. These findings are significant as they offer a mechanism to assess the extremity of queue lengths, thereby aiding in evaluating the effectiveness of the adaptive signal controllers and corridor management. Given that extreme queue lengths often precipitate spillover effects, this insight can be instrumental in preempting such scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 478,008 |
2203.12595 | PhysioMTL: Personalizing Physiological Patterns using Optimal Transport
Multi-Task Regression | Heart rate variability (HRV) is a practical and noninvasive measure of autonomic nervous system activity, which plays an essential role in cardiovascular health. However, using HRV to assess physiology status is challenging. Even in clinical settings, HRV is sensitive to acute stressors such as physical activity, mental stress, hydration, alcohol, and sleep. Wearable devices provide convenient HRV measurements, but the irregularity of measurements and uncaptured stressors can bias conventional analytical methods. To better interpret HRV measurements for downstream healthcare applications, we learn a personalized diurnal rhythm as an accurate physiological indicator for each individual. We develop Physiological Multitask-Learning (PhysioMTL) by harnessing Optimal Transport theory within a Multitask-learning (MTL) framework. The proposed method learns an individual-specific predictive model from heterogeneous observations, and enables estimation of an optimal transport map that yields a push forward operation onto the demographic features for each task. Our model outperforms competing MTL methodologies on unobserved predictive tasks for synthetic and two real-world datasets. Specifically, our method provides remarkable prediction results on unseen held-out subjects given only $20\%$ of the subjects in real-world observational studies. Furthermore, our model enables a counterfactual engine that generates the effect of acute stressors and chronic conditions on HRV rhythms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 287,322 |
2310.05881 | Controllable Chest X-Ray Report Generation from Longitudinal
Representations | Radiology reports are detailed text descriptions of the content of medical scans. Each report describes the presence/absence and location of relevant clinical findings, commonly including comparison with prior exams of the same patient to describe how they evolved. Radiology reporting is a time-consuming process, and scan results are often subject to delays. One strategy to speed up reporting is to integrate automated reporting systems, however clinical deployment requires high accuracy and interpretability. Previous approaches to automated radiology reporting generally do not provide the prior study as input, precluding comparison which is required for clinical accuracy in some types of scans, and offer only unreliable methods of interpretability. Therefore, leveraging an existing visual input format of anatomical tokens, we introduce two novel aspects: (1) longitudinal representation learning -- we input the prior scan as an additional input, proposing a method to align, concatenate and fuse the current and prior visual information into a joint longitudinal representation which can be provided to the multimodal report generation model; (2) sentence-anatomy dropout -- a training strategy for controllability in which the report generator model is trained to predict only sentences from the original report which correspond to the subset of anatomical regions given as input. We show through in-depth experiments on the MIMIC-CXR dataset how the proposed approach achieves state-of-the-art results while enabling anatomy-wise controllable report generation. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 398,335 |
2205.03738 | Synthetic Point Cloud Generation for Class Segmentation Applications | Maintenance of industrial facilities is a growing hazard due to the cumbersome process needed to identify infrastructure degradation. Digital Twins have the potential to improve maintenance by monitoring the continuous digital representation of infrastructure. However, the time needed to map the existing geometry makes their use prohibitive. We previously developed class segmentation algorithms to automate digital twinning, however a vast amount of annotated point clouds is needed. Currently, synthetic data generation for automated segmentation is non-existent. We used Helios++ to automatically segment point clouds from 3D models. Our research has the potential to pave the ground for efficient industrial class segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 295,392 |
2402.16871 | Bike3S: A Tool for Bike Sharing Systems Simulation | Vehicle sharing systems are becoming increasingly popular. The effectiveness of such systems depends, among other factors, on different strategic and operational management decisions and policies, like the dimension of the fleet or the distribution of vehicles. It is of foremost importance to be able to anticipate and evaluate the potential effects of such strategies before they can be successfully deployed. In this paper we present Bike3S, a simulator for a station-based bike sharing system. The simulator performs semi-realistic simulations of the operation of a bike sharing system and allows for evaluating and testing different management decisions and strategies. In particular, the simulator has been designed to test different station capacities, station distributions, and balancing strategies. The simulator carries out microscopic agent-based simulations, where users of different types can be defined that act according to their individual goals and objectives which influences the overall dynamics of the whole system. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 432,739 |
1709.09768 | Colonel Blotto Game for Secure State Estimation in Interdependent
Critical Infrastructure | Securing the physical components of a city's interdependent critical infrastructure (ICI) such as power, natural gas, and water systems is a challenging task due to their interdependence and a large number of involved sensors. In this paper, using a novel integrated state-space model that captures the interdependence, a two-stage cyber attack on an ICI is studied in which the attacker first compromises the ICI's sensors by decoding their messages, and, subsequently, it alters the compromised sensors' data to cause state estimation errors. To thwart such attacks, the administrator of each critical infrastructure (CI) must assign protection levels to the sensors based on their importance in the state estimation process. To capture the interdependence between the attacker and the ICI administrator's actions and analyze their interactions, a Colonel Blotto game framework is proposed. The mixed-strategy Nash equilibrium of this game is derived analytically. At this equilibrium, it is shown that the administrator can strategically randomize between the protection levels of the sensors to deceive the attacker. Simulation results coupled with theoretical analysis show that, using the proposed game, the administrator can reduce the state estimation error by at least $ 50\% $ compared to a non-strategic approach that assigns protection levels proportional to sensor values. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | true | 81,679 |
1804.01694 | Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for
Optimizing Protein Functions | Generative Adversarial Networks (GANs) represent an attractive and novel approach to generate realistic data, such as genes, proteins, or drugs, in synthetic biology. Here, we apply GANs to generate synthetic DNA sequences encoding for proteins of variable length. We propose a novel feedback-loop architecture, called Feedback GAN (FBGAN), to optimize the synthetic gene sequences for desired properties using an external function analyzer. The proposed architecture also has the advantage that the analyzer need not be differentiable. We apply the feedback-loop mechanism to two examples: 1) generating synthetic genes coding for antimicrobial peptides, and 2) optimizing synthetic genes for the secondary structure of their resulting peptides. A suite of metrics demonstrate that the GAN generated proteins have desirable biophysical properties. The FBGAN architecture can also be used to optimize GAN-generated datapoints for useful properties in domains beyond genomics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 94,273 |
2312.03459 | F3-Pruning: A Training-Free and Generalized Pruning Strategy towards
Faster and Finer Text-to-Video Synthesis | Recently Text-to-Video (T2V) synthesis has undergone a breakthrough by training transformers or diffusion models on large-scale datasets. Nevertheless, inferring such large models incurs huge costs. Previous inference acceleration works either require costly retraining or are model-specific. To address this issue, instead of retraining we explore the inference process of two mainstream T2V models using transformers and diffusion models. The exploration reveals the redundancy in temporal attention modules of both models, which are commonly utilized to establish temporal relations among frames. Consequently, we propose a training-free and generalized pruning strategy called F3-Pruning to prune redundant temporal attention weights. Specifically, when aggregate temporal attention values are ranked below a certain ratio, corresponding weights will be pruned. Extensive experiments on three datasets using a classic transformer-based model CogVideo and a typical diffusion-based model Tune-A-Video verify the effectiveness of F3-Pruning in inference acceleration, quality assurance and broad applicability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 413,265
2206.08514 | A Unified Evaluation of Textual Backdoor Learning: Frameworks and
Benchmarks | Textual backdoor attacks are a kind of practical threat to NLP systems. By injecting a backdoor in the training phase, the adversary could control model predictions via predefined triggers. As various attack and defense models have been proposed, it is of great significance to perform rigorous evaluations. However, we highlight two issues in previous backdoor learning evaluations: (1) The differences between real-world scenarios (e.g. releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns, thus requires specific evaluation protocols; (2) The evaluation metrics only consider whether the attacks could flip the models' predictions on poisoned samples and retain performances on benign samples, but ignore that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios in which attackers release datasets, pre-trained models, and fine-tuned models respectively, then discuss their unique evaluation methodologies. On metrics, to completely evaluate poisoned samples, we use grammar error increase and perplexity difference for stealthiness, along with text similarity for validity. After formalizing the frameworks, we develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks could serve as the cornerstones for future model development and evaluations. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 303,173 |
2401.15296 | Recognizing Identities From Human Skeletons: A Survey on 3D Skeleton
Based Person Re-Identification | Person re-identification via 3D skeletons is an important emerging research area that attracts increasing attention within the pattern recognition community. With distinctive advantages across various application scenarios, numerous 3D skeleton based person re-identification (SRID) methods with diverse skeleton modeling and learning paradigms have been proposed in recent years. In this survey, we provide a comprehensive review and analysis of recent SRID advances. First of all, we define the SRID task and provide an overview of its origin and major advancements. Secondly, we formulate a systematic taxonomy that organizes existing methods into three categories based on different skeleton modeling ($i.e.,$ hand-crafted, sequence-based, graph-based). Then, we elaborate on the representative models along these three categories with an analysis of their merits and limitations. Meanwhile, we provide an in-depth review of mainstream supervised, self-supervised, and unsupervised SRID learning paradigms and corresponding skeleton semantics learning tasks. A thorough evaluation of state-of-the-art SRID methods is further conducted over various types of benchmarks and protocols to compare their effectiveness and efficiency. Finally, we discuss the challenges of existing studies along with promising directions for future research, highlighting research impacts and potential applications of SRID. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 424,395 |
2110.12645 | SgSum: Transforming Multi-document Summarization into Sub-graph
Selection | Most of existing extractive multi-document summarization (MDS) methods score each sentence individually and extract salient sentences one by one to compose a summary, which have two main drawbacks: (1) neglecting both the intra and cross-document relations between sentences; (2) neglecting the coherence and conciseness of the whole summary. In this paper, we propose a novel MDS framework (SgSum) to formulate the MDS task as a sub-graph selection problem, in which source documents are regarded as a relation graph of sentences (e.g., similarity graph or discourse graph) and the candidate summaries are its sub-graphs. Instead of selecting salient sentences, SgSum selects a salient sub-graph from the relation graph as the summary. Comparing with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) directly outputs an integrated summary in the form of sub-graph which is more informative and coherent. Extensive experiments on MultiNews and DUC datasets show that our proposed method brings substantial improvements over several strong baselines. Human evaluation results also demonstrate that our model can produce significantly more coherent and informative summaries compared with traditional MDS methods. Moreover, the proposed architecture has strong transfer ability from single to multi-document input, which can reduce the resource bottleneck in MDS tasks. Our code and results are available at: \url{https://github.com/PaddlePaddle/Research/tree/master/NLP/EMNLP2021-SgSum}. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 262,921
1605.05797 | Analysis of Network Clustering Algorithms and Cluster Quality Metrics at
Scale | Notions of community quality underlie network clustering. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms -- Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on information recovery metrics. Our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it absolutely superior. Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 56,046
1805.10180 | Pyramid Attention Network for Semantic Segmentation | A Pyramid Attention Network(PAN) is proposed to exploit the impact of global contextual information in semantic segmentation. Different from most existing works, we combine attention mechanism and spatial pyramid to extract precise dense features for pixel labeling instead of complicated dilated convolution and artificially designed decoder networks. Specifically, we introduce a Feature Pyramid Attention module to perform spatial pyramid attention structure on high-level output and combining global pooling to learn a better feature representation, and a Global Attention Upsample module on each decoder layer to provide global context as a guidance of low-level features to select category localization details. The proposed approach achieves state-of-the-art performance on PASCAL VOC 2012 and Cityscapes benchmarks with a new record of mIoU accuracy 84.0% on PASCAL VOC 2012, while training without COCO dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 98,601 |
2009.07889 | Image Separation with Side Information: A Connected Auto-Encoders Based
Approach | X-radiography (X-ray imaging) is a widely used imaging technique in art investigation. It can provide information about the condition of a painting as well as insights into an artist's techniques and working methods, often revealing hidden information invisible to the naked eye. In this paper, we deal with the problem of separating mixed X-ray images originating from the radiography of double-sided paintings. Using the visible color images (RGB images) from each side of the painting, we propose a new Neural Network architecture, based upon 'connected' auto-encoders, designed to separate the mixed X-ray image into two simulated X-ray images corresponding to each side. In this proposed architecture, the convolutional auto encoders extract features from the RGB images. These features are then used to (1) reproduce both of the original RGB images, (2) reconstruct the hypothetical separated X-ray images, and (3) regenerate the mixed X-ray image. The algorithm operates in a totally self-supervised fashion without requiring a sample set that contains both the mixed X-ray images and the separated ones. The methodology was tested on images from the double-sided wing panels of the \textsl{Ghent Altarpiece}, painted in 1432 by the brothers Hubert and Jan van Eyck. These tests show that the proposed approach outperforms other state-of-the-art X-ray image separation methods for art investigation applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 196,068 |
2402.06663 | Explainable Adversarial Learning Framework on Physical Layer Secret Keys
Combating Malicious Reconfigurable Intelligent Surface | Reconfigurable intelligent surfaces (RIS) can both help and hinder the physical layer secret key generation (PL-SKG) of communications systems. Whilst a legitimate RIS can yield beneficial impacts, including increased channel randomness to enhance PL-SKG, a malicious RIS can poison legitimate channels and crack almost all existing PL-SKGs. In this work, we propose an adversarial learning framework that addresses Man-in-the-middle RIS (MITM-RIS) eavesdropping which can exist between legitimate parties, namely Alice and Bob. First, the theoretical mutual information gap between legitimate pairs and MITM-RIS is deduced. From this, Alice and Bob leverage adversarial learning to learn a common feature space that assures no mutual information overlap with MITM-RIS. Next, to explain the trained legitimate common feature generator, we aid signal processing interpretation of black-box neural networks using a symbolic explainable AI (xAI) representation. These symbolic terms of dominant neurons aid the engineering of feature designs and the validation of the learned common feature space. Simulation results show that our proposed adversarial learning- and symbolic-based PL-SKGs can achieve high key agreement rates between legitimate users, and is further resistant to an MITM-RIS Eve with the full knowledge of legitimate feature generation (NNs or formulas). This therefore paves the way to secure wireless communications with untrusted reflective devices in future 6G. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 428,385 |
2003.07637 | Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior | Deep neural networks are known to be susceptible to adversarial noise, which are tiny and imperceptible perturbations. Most of previous work on adversarial attack mainly focus on image models, while the vulnerability of video models is less explored. In this paper, we aim to attack video models by utilizing intrinsic movement pattern and regional relative motion among video frames. We propose an effective motion-excited sampler to obtain motion-aware noise prior, which we term as sparked prior. Our sparked prior underlines frame correlations and utilizes video dynamics via relative motion. By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer number of queries. Extensive experimental results on four benchmark datasets validate the efficacy of our proposed method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 168,495 |
2308.05036 | Collaborative Wideband Spectrum Sensing and Scheduling for Networked
UAVs in UTM Systems | In this paper, we propose a data-driven framework for collaborative wideband spectrum sensing and scheduling for networked unmanned aerial vehicles (UAVs), which act as the secondary users to opportunistically utilize detected spectrum holes. To this end, we propose a multi-class classification problem for wideband spectrum sensing to detect vacant spectrum spots based on collected I/Q samples. To enhance the accuracy of the spectrum sensing module, the outputs from the multi-class classification by each individual UAV are fused at a server in the unmanned aircraft system traffic management (UTM) ecosystem. In the spectrum scheduling phase, we leverage reinforcement learning (RL) solutions to dynamically allocate the detected spectrum holes to the secondary users (i.e., UAVs). To evaluate the proposed methods, we establish a comprehensive simulation framework that generates a near-realistic synthetic dataset using MATLAB LTE toolbox by incorporating base-station (BS) locations in a chosen area of interest, performing ray-tracing, and emulating the primary users channel usage in terms of I/Q samples. This evaluation methodology provides a flexible framework to generate large spectrum datasets that could be used for developing ML/AI-based spectrum management solutions for aerial devices. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | true | 384,659 |
1701.06731 | Weak Adaptive Submodularity and Group-Based Active Diagnosis with
Applications to State Estimation with Persistent Sensor Faults | In this paper, we consider adaptive decision-making problems for stochastic state estimation with partial observations. First, we introduce the concept of weak adaptive submodularity, a generalization of adaptive submodularity, which has found great success in solving challenging adaptive state estimation problems. Then, for the problem of active diagnosis, i.e., discrete state estimation via active sensing, we show that an adaptive greedy policy has a near-optimal performance guarantee when the reward function possesses this property. We further show that the reward function for group-based active diagnosis, which arises in applications such as medical diagnosis and state estimation with persistent sensor faults, is also weakly adaptive submodular. Finally, in experiments of state estimation for an aircraft electrical system with persistent sensor faults, we observe that an adaptive greedy policy performs equally well as an exhaustive search. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 67,188 |
1906.07651 | Scheduled Sampling for Transformers | Scheduled sampling is a technique for avoiding one of the known problems in sequence-to-sequence generation: exposure bias. It consists of feeding the model a mix of the teacher forced embeddings and the model predictions from the previous step in training time. The technique has been used for improving the model performance with recurrent neural networks (RNN). In the Transformer model, unlike the RNN, the generation of a new word attends to the full sentence generated so far, not only to the last word, and it is not straightforward to apply the scheduled sampling technique. We propose some structural changes to allow scheduled sampling to be applied to Transformer architecture, via a two-pass decoding strategy. Experiments on two language pairs achieve performance close to a teacher-forcing baseline and show that this technique is promising for further exploration. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 135,651 |
2307.04604 | EchoVest: Real-Time Sound Classification and Depth Perception Expressed
through Transcutaneous Electrical Nerve Stimulation | Over 1.5 billion people worldwide live with hearing impairment. Despite various technologies that have been created for individuals with such disabilities, most of these technologies are either extremely expensive or inaccessible for everyday use in low-medium income countries. In order to combat this issue, we have developed a new assistive device, EchoVest, for blind/deaf people to intuitively become more aware of their environment. EchoVest transmits vibrations to the user's body by utilizing transcutaneous electric nerve stimulation (TENS) based on the source of the sounds. EchoVest also provides various features, including sound localization, sound classification, noise reduction, and depth perception. We aimed to outperform CNN-based machine-learning models, the most commonly used machine learning model for classification tasks, in accuracy and computational costs. To do so, we developed and employed a novel audio pipeline that adapts the Audio Spectrogram Transformer (AST) model, an attention-based model, for our sound classification purposes, and Fast Fourier Transforms for noise reduction. The application of Otsu's Method helped us find the optimal thresholds for background noise sound filtering and gave us much greater accuracy. In order to calculate direction and depth accurately, we applied Complex Time Difference of Arrival algorithms and SOTA localization. Our last improvement was to use blind source separation to make our algorithms applicable to multiple microphone inputs. The final algorithm achieved state-of-the-art results on numerous checkpoints, including a 95.7\% accuracy on the ESC-50 dataset for environmental sound classification. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 378,461 |
2410.19495 | Beyond One Solution: The Case for a Comprehensive Exploration of
Solution Space in Community Detection | This article explores the importance of examining the solution space in community detection, highlighting its role in achieving reliable results when dealing with real-world problems. A Bayesian framework is used to estimate the stability of the solution space and classify it into categories Single, Dominant, Multiple, Sparse or Empty. By applying this approach to real-world networks, the study highlights the importance of considering multiple solutions rather than relying on a single partition. This ensures more reliable results and efficient use of computational resources in community detection analysis. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 502,341 |
1508.06092 | An analysis of numerical issues in neural training by pseudoinversion | Some novel strategies have recently been proposed for single hidden layer neural network training that set randomly the weights from input to hidden layer, while weights from hidden to output layer are analytically determined by pseudoinversion. These techniques are gaining popularity in spite of their known numerical issues when singular and/or almost singular matrices are involved. In this paper we discuss a critical use of Singular Value Analysis for identification of these drawbacks and we propose an original use of regularisation to determine the output weights, based on the concept of critical hidden layer size. This approach also allows us to limit the training computational effort. Besides, we introduce a novel technique which relates an effective determination of input weights to the hidden layer dimension. This approach is tested for both regression and classification tasks, resulting in a significant performance improvement with respect to alternative methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 46,288 |
1401.6728 | A Generalized Typicality for Abstract Alphabets | A new notion of typicality for arbitrary probability measures on standard Borel spaces is proposed, which encompasses the classical notions of weak and strong typicality as special cases. Useful lemmas about strong typical sets, including conditional typicality lemma, joint typicality lemma, and packing and covering lemmas, which are fundamental tools for deriving many inner bounds of various multi-terminal coding problems, are obtained in terms of the proposed notion. This enables us to directly generalize lots of results on finite alphabet problems to general problems involving abstract alphabets, without any complicated additional arguments. For instance, quantization procedure is no longer necessary to achieve such generalizations. Another fundamental lemma, Markov lemma, is also obtained but its scope of application is quite limited compared to others. Yet, an alternative theory of typical sets for Gaussian measures, free from this limitation, is also developed. Some remarks on a possibility to generalize the proposed notion for sources with memory are also given. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,395 |
1606.03203 | Causal Bandits: Learning Good Interventions via Causal Inference | We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-arm bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 57,071 |
2411.16043 | Downlink MIMO Channel Estimation from Bits: Recoverability and Algorithm | In frequency division duplex (FDD) massive MIMO systems, a major challenge lies in acquiring the downlink channel state information (CSI) at the base station (BS) from limited feedback sent by the user equipment (UE). To tackle this fundamental task, our contribution is twofold: First, a simple feedback framework is proposed, where a compression and Gaussian dithering-based quantization strategy is adopted at the UE side, and then a maximum likelihood estimator (MLE) is formulated at the BS side. Recoverability of the MIMO channel under the widely used double directional model is established. Specifically, analyses are presented for two compression schemes -- showing one being more overhead-economical and the other computationally lighter at the UE side. Second, to realize the MLE, an alternating direction method of multipliers (ADMM) algorithm is proposed. The algorithm is carefully designed to integrate a sophisticated harmonic retrieval (HR) solver as a subroutine, which turns out to be the key to effectively tackling this hard MLE problem. Extensive numerical experiments are conducted to validate the efficacy of our approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 510,864 |
2502.05179 | FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution
Video Generation | DiT diffusion models have achieved great success in text-to-video generation, leveraging their scalability in model capacity and data scale. High content and motion fidelity aligned with text prompts, however, often require large model parameters and a substantial number of function evaluations (NFEs). Realistic and visually appealing details are typically reflected in high resolution outputs, further amplifying computational demands especially for single-stage DiT models. To address these challenges, we propose a novel two-stage framework, FlashVideo, which strategically allocates model capacity and NFEs across stages to balance generation fidelity and quality. In the first stage, prompt fidelity is prioritized through a low resolution generation process utilizing large parameters and sufficient NFEs to enhance computational efficiency. The second stage establishes flow matching between low and high resolutions, effectively generating fine details with minimal NFEs. Quantitative and visual results demonstrate that FlashVideo achieves state-of-the-art high resolution video generation with superior computational efficiency. Additionally, the two-stage design enables users to preview the initial output before committing to full resolution generation, thereby significantly reducing computational costs and wait times as well as enhancing commercial viability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 531,480 |
2206.12779 | Evolutionary Preference Learning via Graph Nested GRU ODE for
Session-based Recommendation | Session-based recommendation (SBR) aims to predict the user next action based on the ongoing sessions. Recently, there has been an increasing interest in modeling the user preference evolution to capture the fine-grained user interests. While latent user preferences behind the sessions drift continuously over time, most existing approaches still model the temporal session data in discrete state spaces, which are incapable of capturing the fine-grained preference evolution and result in sub-optimal solutions. To this end, we propose Graph Nested GRU ordinary differential equation (ODE), namely GNG-ODE, a novel continuum model that extends the idea of neural ODEs to continuous-time temporal session graphs. The proposed model preserves the continuous nature of dynamic user preferences, encoding both temporal and structural patterns of item transitions into continuous-time dynamic embeddings. As the existing ODE solvers do not consider graph structure change and thus cannot be directly applied to the dynamic graph, we propose a time alignment technique, called t-Alignment, to align the updating time steps of the temporal session graphs within a batch. Empirical results on three benchmark datasets show that GNG-ODE significantly outperforms other baselines. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 304,730 |
1908.01323 | ARGAN: Attentive Recurrent Generative Adversarial Network for Shadow
Detection and Removal | In this paper we propose an attentive recurrent generative adversarial network (ARGAN) to detect and remove shadows in an image. The generator consists of multiple progressive steps. At each step a shadow attention detector is firstly exploited to generate an attention map which specifies shadow regions in the input image. Given the attention map, a negative residual by a shadow remover encoder will recover a shadow-lighter or even a shadow-free image. A discriminator is designed to classify whether the output image in the last progressive step is real or fake. Moreover, ARGAN is suitable to be trained with a semi-supervised strategy to make full use of sufficient unsupervised data. The experiments on four public datasets have demonstrated that our ARGAN is robust to detect both simple and complex shadows and to produce more realistic shadow removal results. It outperforms the state-of-the-art methods, especially in detail of recovering shadow areas. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 140,735 |
2203.02688 | Zoom In and Out: A Mixed-scale Triplet Network for Camouflaged Object
Detection | The recently proposed camouflaged object detection (COD) attempts to segment objects that are visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios. Apart from high intrinsic similarity between the camouflaged objects and their background, the objects are usually diverse in scale, fuzzy in appearance, and even severely occluded. To deal with these problems, we propose a mixed-scale triplet network, \textbf{ZoomNet}, which mimics the behavior of humans when observing vague images, i.e., zooming in and out. Specifically, our ZoomNet employs the zoom strategy to learn the discriminative mixed-scale semantics by the designed scale integration unit and hierarchical mixed-scale unit, which fully explores imperceptible clues between the candidate objects and background surroundings. Moreover, considering the uncertainty and ambiguity derived from indistinguishable textures, we construct a simple yet effective regularization constraint, uncertainty-aware loss, to promote the model to accurately produce predictions with higher confidence in candidate regions. Without bells and whistles, our proposed highly task-friendly model consistently surpasses the existing 23 state-of-the-art methods on four public datasets. Besides, the superior performance over the recent cutting-edge models on the SOD task also verifies the effectiveness and generality of our model. The code will be available at \url{https://github.com/lartpang/ZoomNet}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 283,828 |
2312.07077 | On the Potential of an Independent Avatar to Augment Metaverse Social
Networks | We present a computational modelling approach which targets capturing the specifics on how to virtually augment a Metaverse user's available social time capacity via using an independent and autonomous version of her digital representation in the Metaverse. We motivate why this is a fundamental building block to model large-scale social networks in the Metaverse, and emerging properties herein. We envision a Metaverse-focused extension of the traditional avatar concept: An avatar can be as well programmed to operate independently when its user is not controlling it directly, thus turning it into an agent-based digital human representation. This way, we highlight how such an independent avatar could help its user to better navigate their social relationships and optimize their socializing time in the Metaverse by (partly) offloading some interactions to the avatar. We model the setting and identify the characteristic variables by using selected concepts from social sciences: ego networks, social presence, and social cues. Then, we formulate the problem of maximizing the user's non-avatar-mediated spare time as a linear optimization. Finally, we analyze the feasible region of the problem and we present some initial insights on the spare time that can be achieved for different parameter values of the avatar-mediated interactions. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 414,791 |
1912.03624 | Bayesian Structure Adaptation for Continual Learning | Continual Learning is a learning paradigm where learning systems are trained with sequential or streaming tasks. Two notable directions among the recent advances in continual learning with neural networks are ($i$) variational Bayes based regularization by learning priors from previous tasks, and, ($ii$) learning the structure of deep networks to adapt to new tasks. So far, these two approaches have been orthogonal. We present a novel Bayesian approach to continual learning based on learning the structure of deep neural networks, addressing the shortcomings of both these approaches. The proposed model learns the deep structure for each task by learning which weights to be used, and supports inter-task transfer through the overlapping of different sparse subsets of weights learned by different tasks. Experimental results on supervised and unsupervised benchmarks shows that our model performs comparably or better than recent advances in continual learning setting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 156,644 |
1704.06254 | Multi-view Supervision for Single-view Reconstruction via Differentiable
Ray Consistency | We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 72,145 |
1201.2036 | Hierarchical multiresolution method to overcome the resolution limit in
complex networks | The analysis of the modular structure of networks is a major challenge in complex networks theory. The validity of the modular structure obtained is essential to confront the problem of the topology-functionality relationship. Recently, several authors have worked on the limit of resolution that different community detection algorithms have, making impossible the detection of natural modules when very different topological scales coexist in the network. Existing multiresolution methods are not the panacea for solving the problem in extreme situations, and also fail. Here, we present a new hierarchical multiresolution scheme that works even when the network decomposition is very close to the resolution limit. The idea is to split the multiresolution method for optimal subgraphs of the network, focusing the analysis on each part independently. We also propose a new algorithm to speed up the computational cost of screening the mesoscale looking for the resolution parameter that best splits every subgraph. The hierarchical algorithm is able to solve a difficult benchmark proposed in [Lancichinetti & Fortunato, 2011], encouraging the further analysis of hierarchical methods based on the modularity quality function. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,752 |
2307.09184 | You've Got Two Teachers: Co-evolutionary Image and Report Distillation
for Semi-supervised Anatomical Abnormality Detection in Chest X-ray | Chest X-ray (CXR) anatomical abnormality detection aims at localizing and characterising cardiopulmonary radiological findings in the radiographs, which can expedite clinical workflow and reduce observational oversights. Most existing methods attempted this task in either fully supervised settings which demanded costly mass per-abnormality annotations, or weakly supervised settings which still lagged badly behind fully supervised methods in performance. In this work, we propose a co-evolutionary image and report distillation (CEIRD) framework, which approaches semi-supervised abnormality detection in CXR by grounding the visual detection results with text-classified abnormalities from paired radiology reports, and vice versa. Concretely, based on the classical teacher-student pseudo label distillation (TSD) paradigm, we additionally introduce an auxiliary report classification model, whose prediction is used for report-guided pseudo detection label refinement (RPDLR) in the primary vision detection task. Inversely, we also use the prediction of the vision detection model for abnormality-guided pseudo classification label refinement (APCLR) in the auxiliary report classification task, and propose a co-evolution strategy where the vision and report models mutually promote each other with RPDLR and APCLR performed alternatively. To this end, we effectively incorporate the weak supervision by reports into the semi-supervised TSD pipeline. Besides the cross-modal pseudo label refinement, we further propose an intra-image-modal self-adaptive non-maximum suppression, where the pseudo detection labels generated by the teacher vision model are dynamically rectified by high-confidence predictions by the student. Experimental results on the public MIMIC-CXR benchmark demonstrate CEIRD's superior performance to several up-to-date weakly and semi-supervised methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 380,086 |
2406.19149 | "A network of mutualities of being": socio-material archaeological
networks and biological ties at \c{C}atalh\"oy\"uk | Recent advances in archaeogenomics have granted access to previously unavailable biological information with the potential to further our understanding of past social dynamics at a range of scales. However, to properly integrate these data within archaeological narratives, new methodological and theoretical tools are required. Effort must be put into finding new methods for weaving together different datasets where material culture and archaeogenomic data are both constitutive elements. This is true on a small scale, when we study relationships at the individual level, and at a larger scale when we deal with social and population dynamics. Specifically, in the study of kinship systems it is essential to contextualize and make sense of biological relatedness through social relations, which, in archaeology, is achieved by using material culture as a proxy. In this paper we propose a Network Science framework to integrate archaeogenomic data and material culture at an intrasite scale to study biological relatedness and social organization at the Neolithic site of \c{C}atalh\"oy\"uk. Methodologically, we propose the use of network variance to investigate the concentration of biological relatedness and material culture within networks of houses. This approach allowed us to observe how material culture similarity between buildings gives valuable information on potential biological relationships between individuals and how biogenetic ties concentrate at specific localities on site. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 468,317 |
2110.04925 | Quadratic Multiform Separation: A New Classification Model in Machine
Learning | In this paper we present a new classification model in machine learning. Our result is threefold: 1) The model produces comparable predictive accuracy to that of most common classification models. 2) It runs significantly faster than most common classification models. 3) It has the ability to identify a portion of unseen samples for which class labels can be found with much higher predictive accuracy. Currently there are several patents pending on the proposed model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,089 |
1407.1425 | On the relationship between Gaussian stochastic blockmodels and label
propagation algorithms | The problem of community detection receives great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit weight of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 34,428 |
2005.03767 | Deep Learning Interfacial Momentum Closures in Coarse-Mesh CFD Two-Phase
Flow Simulation Using Validation Data | Multiphase flow phenomena have been widely observed in the industrial applications, yet it remains a challenging unsolved problem. Three-dimensional computational fluid dynamics (CFD) approaches resolve the flow fields on finer spatial and temporal scales, which can complement dedicated experimental study. However, closures must be introduced to reflect the underlying physics in multiphase flow. Among them, the interfacial forces, including drag, lift, turbulent-dispersion and wall-lubrication forces, play an important role in bubble distribution and migration in liquid-vapor two-phase flows. Development of those closures traditionally relies on the experimental data and analytical derivation with simplified assumptions that usually cannot deliver a universal solution across a wide range of flow conditions. In this paper, a data-driven approach, named as feature-similarity measurement (FSM), is developed and applied to improve the simulation capability of two-phase flow with coarse-mesh CFD approach. Interfacial momentum transfer in adiabatic bubbly flow serves as the focus of the present study. Both a mature and a simplified set of interfacial closures are taken as the low-fidelity data. Validation data (including relevant experimental data and validated fine-mesh CFD simulations results) are adopted as high-fidelity data. Qualitative and quantitative analysis are performed in this paper. These reveal that FSM can substantially improve the prediction of the coarse-mesh CFD model, regardless of the choice of interfacial closures, and it provides scalability and consistency across discontinuous flow regimes. It demonstrates that data-driven methods can aid the multiphase flow modeling by exploring the connections between local physical features and simulation errors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 176,245 |
2111.01201 | Unintended Selection: Persistent Qualification Rate Disparities and
Interventions | Realistically -- and equitably -- modeling the dynamics of group-level disparities in machine learning remains an open problem. In particular, we desire models that do not suppose inherent differences between artificial groups of people -- but rather endogenize disparities by appeal to unequal initial conditions of insular subpopulations. In this paper, agents each have a real-valued feature $X$ (e.g., credit score) informed by a "true" binary label $Y$ representing qualification (e.g., for a loan). Each agent alternately (1) receives a binary classification label $\hat{Y}$ (e.g., loan approval) from a Bayes-optimal machine learning classifier observing $X$ and (2) may update their qualification $Y$ by imitating successful strategies (e.g., seek a raise) within an isolated group $G$ of agents to which they belong. We consider the disparity of qualification rates $\Pr(Y=1)$ between different groups and how this disparity changes subject to a sequence of Bayes-optimal classifiers repeatedly retrained on the global population. We model the evolving qualification rates of each subpopulation (group) using the replicator equation, which derives from a class of imitation processes. We show that differences in qualification rates between subpopulations can persist indefinitely for a set of non-trivial equilibrium states due to uniform classifier deployments, even when groups are identical in all aspects except initial qualification densities. We next simulate the effects of commonly proposed fairness interventions on this dynamical system along with a new feedback control mechanism capable of permanently eliminating group-level qualification rate disparities. We conclude by discussing the limitations of our model and findings and by outlining potential future work. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | true | 264,479 |
1801.03857 | Discovering the hidden community structure of public transportation
networks | Advances in public transit modeling and smart card technologies can reveal detailed contact patterns of passengers. A natural way to represent such contact patterns is in the form of networks. In this paper we utilize known contact patterns from a public transit assignment model in a major metropolitan city, and propose the development of two novel network structures, each of which elucidates certain aspects of passenger travel behavior. We first propose the development of a transfer network, which can reveal passenger groups that travel together on a given day. Second, we propose the development of a community network, which is derived from the transfer network, and captures the similarity of travel patterns among passengers. We then explore the application of each of these network structures to identify the most frequently used travel paths, i.e., routes and transfers, in the public transit system, and to model epidemic spreading risk among passengers of a public transit network, respectively. In the latter, our conclusions reinforce previous observations that routes crossing or connecting to the city center in the morning and afternoon peak hours are the most "dangerous" during an outbreak. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 88,165 |
2411.02973 | [Vision Paper] PRObot: Enhancing Patient-Reported Outcome Measures for
Diabetic Retinopathy using Chatbots and Generative AI | We present an outline of the first large language model (LLM) based chatbot application in the context of patient-reported outcome measures (PROMs) for diabetic retinopathy. By utilizing the capabilities of current LLMs, we enable patients to provide feedback about their quality of life and treatment progress via an interactive application. The proposed framework offers significant advantages over the current approach, which encompasses only qualitative collection of survey data or a static survey with limited answer options. Using the PRObot LLM-PROM application, patients will be asked tailored questions about their individual challenges, and can give more detailed feedback on the progress of their treatment. Based on this input, we will use machine learning to infer conventional PROM scores, which can be used by clinicians to evaluate the treatment status. The goal of the application is to improve adherence to the healthcare system and treatments, and thus ultimately reduce cases of subsequent vision impairment. The approach needs to be further validated using a survey and a clinical study. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 505,741 |
2209.07790 | A Large-scale Multiple-objective Method for Black-box Attack against
Object Detection | Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information. Most existing attack methods aim to minimize the true positive rate, which often shows poor attack performance, as another sub-optimal bounding box may be detected around the attacked bounding box to be the new true positive one. To settle this challenge, we propose to minimize the true positive rate and maximize the false positive rate, which can encourage more false positive objects to block the generation of new true positive bounding boxes. It is modeled as a multi-objective optimization (MOP) problem, which a genetic algorithm can solve by searching for the Pareto-optimal set. However, our task has more than two million decision variables, leading to low search efficiency. Thus, we extend the standard Genetic Algorithm with Random Subset selection and Divide-and-Conquer, called GARSDC, which significantly improves the efficiency. Moreover, to alleviate the sensitivity to population quality in genetic algorithms, we generate a gradient-prior initial population, utilizing the transferability between different detectors with similar backbones. Compared with state-of-the-art attack methods, GARSDC decreases mAP by an average of 12.0 and the number of queries by about 1000 times in extensive experiments. Our codes can be found at https://github.com/LiangSiyuan21/GARSDC. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 317,887 |
2402.14000 | Real-time 3D-aware Portrait Editing from a Single Image | This work presents 3DPE, a practical method that can efficiently edit a face image following given prompts, like reference images or text descriptions, in a 3D-aware manner. To this end, a lightweight module is distilled from a 3D portrait generator and a text-to-image model, which provide prior knowledge of face geometry and superior editing capability, respectively. Such a design brings two compelling advantages over existing approaches. First, our method achieves real-time editing with a feedforward network (i.e., ~0.04s per image), over 100x faster than the second competitor. Second, thanks to the powerful priors, our module could focus on the learning of editing-related variations, such that it manages to handle various types of editing simultaneously in the training phase and further supports fast adaptation to user-specified customized types of editing during inference (e.g., with ~5min fine-tuning per style). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 431,490 |
2203.01593 | 3D Human Motion Prediction: A Survey | 3D human motion prediction, predicting future poses from a given sequence, is a problem of great significance and challenge in computer vision and machine intelligence, which can help machines understand human behaviors. Due to the increasing development and understanding of Deep Neural Networks (DNNs) and the availability of large-scale human motion datasets, human motion prediction has advanced remarkably, with a surge of interest among academia and industry. In this context, a comprehensive survey on 3D human motion prediction is conducted for the purpose of retrospecting and analyzing relevant works from existing released literature. In addition, a pertinent taxonomy is constructed to categorize these existing approaches for 3D human motion prediction. In this survey, relevant methods are categorized into three categories: human pose representation, network structure design, and prediction target. We systematically review all relevant journal and conference papers in the field of human motion prediction since 2015, which are presented in detail based on the proposed categorizations in this survey. Furthermore, the public benchmark datasets, evaluation criteria, and performance comparisons are respectively presented in this paper. The limitations of the state-of-the-art methods are discussed as well, in the hope of paving the way for future explorations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 283,440 |
2203.01005 | Sequential Offloading for Distributed DNN Computation in Multiuser MEC
Systems | This paper studies a sequential task offloading problem for a multiuser mobile edge computing (MEC) system. We consider a dynamic optimization approach, which embraces wireless channel fluctuations and random deep neural network (DNN) task arrivals over an infinite horizon. Specifically, we introduce a local CPU workload queue (WD-QSI) and an MEC server workload queue (MEC-QSI) to model the dynamic workload of DNN tasks at each WD and the MEC server, respectively. The transmit power and the partitioning of the local DNN task at each WD are dynamically determined based on the instantaneous channel conditions (to capture the transmission opportunities) and the instantaneous WD-QSI and MEC-QSI (to capture the dynamic urgency of the tasks) to minimize the average latency of the DNN tasks. The joint optimization can be formulated as an ergodic Markov decision process (MDP), in which the optimality condition is characterized by a centralized Bellman equation. However, the brute force solution of the MDP is not viable due to the curse of dimensionality as well as the requirement for knowledge of the global state information. To overcome these issues, we first decompose the MDP into multiple lower dimensional sub-MDPs, each of which can be associated with a WD or the MEC server. Next, we further develop a parametric online Q-learning algorithm, so that each sub-MDP is solved locally at its associated WD or the MEC server. The proposed solution is completely decentralized in the sense that the transmit power for sequential offloading and the DNN task partitioning can be determined based on the local channel state information (CSI) and the local WD-QSI at the WD only. Additionally, no prior knowledge of the distribution of the DNN task arrivals or the channel statistics will be needed for the MEC server. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 283,220 |
2005.02525 | Neural-Symbolic Relational Reasoning on Graph Models: Effective Link
Inference and Computation from Knowledge Bases | The recent developments and growing interest in neural-symbolic models have shown that hybrid approaches can offer richer models for Artificial Intelligence. The integration of effective relational learning and reasoning methods is one of the key challenges in this direction, as neural learning and symbolic reasoning offer complementary characteristics that can benefit the development of AI systems. Relational labelling or link prediction on knowledge graphs has become one of the main problems in deep learning-based natural language processing research. Moreover, other fields which make use of neural-symbolic techniques may also benefit from such research endeavours. There have been several efforts towards the identification of missing facts from existing ones in knowledge graphs. Two lines of research try to predict knowledge relations between two entities by considering either all known facts connecting them or several paths of facts connecting them. We propose a neural-symbolic graph neural network which applies learning over all the paths by feeding the model with the embedding of the minimal subset of the knowledge graph containing such paths. By learning to produce representations for entities and facts corresponding to word embeddings, we show how the model can be trained end-to-end to decode these representations and infer relations between entities in a multitask approach. Our contribution is two-fold: a neural-symbolic methodology that leverages relational inference in large graphs, and a demonstration that such a neural-symbolic model is more effective than path-based approaches. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 175,894 |
2403.05963 | Robust Emotion Recognition in Context Debiasing | Context-aware emotion recognition (CAER) has recently boosted the practical applications of affective computing techniques in unconstrained environments. Mainstream CAER methods invariably extract ensemble representations from diverse contexts and subject-centred characteristics to perceive the target person's emotional state. Despite advancements, the biggest challenge remains due to context bias interference. The harmful bias forces the models to rely on spurious correlations between background contexts and emotion labels in likelihood estimation, causing severe performance bottlenecks and confounding valuable context priors. In this paper, we propose a counterfactual emotion inference (CLEF) framework to address the above issue. Specifically, we first formulate a generalized causal graph to decouple the causal relationships among the variables in CAER. Following the causal graph, CLEF introduces a non-invasive context branch to capture the adverse direct effect caused by the context bias. During the inference, we eliminate the direct context effect from the total causal effect by comparing factual and counterfactual outcomes, resulting in bias mitigation and robust prediction. As a model-agnostic framework, CLEF can be readily integrated into existing methods, bringing consistent performance gains. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 436,234 |
1710.06931 | OhioState at IJCNLP-2017 Task 4: Exploring Neural Architectures for
Multilingual Customer Feedback Analysis | This paper describes our systems for IJCNLP 2017 Shared Task on Customer Feedback Analysis. We experimented with simple neural architectures that gave competitive performance on certain tasks. This includes shallow CNN and Bi-Directional LSTM architectures with Facebook's Fasttext as a baseline model. Our best performing model was in the Top 5 systems using the Exact-Accuracy and Micro-Average-F1 metrics for the Spanish (85.28% for both) and French (70% and 73.17% respectively) task, and outperformed all the other models on comment (87.28%) and meaningless (51.85%) tags using Micro Average F1 by Tags metric for the French task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 82,851 |