| id (string, 9-16 chars) | title (string, 4-278 chars) | abstract (string, 3-4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0-541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2207.05278 | Photonic Reconfigurable Accelerators for Efficient Inference of CNNs
with Mixed-Sized Tensors | Photonic Microring Resonator (MRR) based hardware accelerators have been shown to provide disruptive speedup and energy-efficiency improvements for processing deep Convolutional Neural Networks (CNNs). However, previous MRR-based CNN accelerators fail to provide efficient adaptability for CNNs with mixed-sized tensors. One example of such CNNs is depthwise separable CNNs. Performing inferences of CNNs with mixed-sized tensors on such inflexible accelerators often leads to low hardware utilization, which diminishes the achievable performance and energy efficiency from the accelerators. In this paper, we present a novel way of introducing reconfigurability in the MRR-based CNN accelerators, to enable dynamic maximization of the size compatibility between the accelerator hardware components and the CNN tensors that are processed using the hardware components. We classify the state-of-the-art MRR-based CNN accelerators from prior works into two categories, based on the layout and relative placements of the utilized hardware components in the accelerators. We then use our method to introduce reconfigurability in accelerators from these two classes, to consequently improve their parallelism, the flexibility of efficiently mapping tensors of different sizes, speed, and overall energy efficiency. We evaluate our reconfigurable accelerators against three prior works for the area proportionate outlook (equal hardware area for all accelerators). Our evaluation for the inference of four modern CNNs indicates that our designed reconfigurable CNN accelerators provide improvements of up to 1.8x in Frames-Per-Second (FPS) and up to 1.5x in FPS/W, compared to an MRR-based accelerator from prior work. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 307,475 |
2210.02928 | MuRAG: Multimodal Retrieval-Augmented Generator for Open Question
Answering over Images and Text | While language models store a massive amount of world knowledge implicitly in their parameters, even very large models often fail to encode information about rare entities and events, while incurring huge computational costs. Recently, retrieval-augmented models, such as REALM, RAG, and RETRO, have incorporated world knowledge into language generation by leveraging an external non-parametric index and have demonstrated impressive performance with constrained model sizes. However, these methods are restricted to retrieving only textual knowledge, neglecting the ubiquitous amount of knowledge in other modalities like images -- much of which contains information not covered by any text. To address this limitation, we propose the first Multimodal Retrieval-Augmented Transformer (MuRAG), which accesses an external non-parametric multimodal memory to augment language generation. MuRAG is pre-trained with a mixture of large-scale image-text and text-only corpora using a joint contrastive and generative loss. We perform experiments on two different datasets that require retrieving and reasoning over both images and text to answer a given query: WebQA and MultimodalQA. Our results show that MuRAG achieves state-of-the-art accuracy, outperforming existing models by 10-20\% absolute on both datasets and under both distractor and full-wiki settings. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 321,827 |
2309.11275 | Open-endedness induced through a predator-prey scenario using modular
robots | This work investigates how a predator-prey scenario can induce the emergence of Open-Ended Evolution (OEE). We utilize modular robots of fixed morphologies whose controllers are subject to evolution. In both species, robots can send and receive signals and perceive the relative positions of other robots in the environment. Specifically, we introduce a feature we call a tagging system: it modifies how individuals can perceive each other and is expected to increase behavioral complexity. Our results show the emergence of adaptive strategies, demonstrating the viability of inducing OEE through predator-prey dynamics using modular robots. Such emergence, nevertheless, seemed to depend on conditioning reproduction on an explicit behavioral criterion. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 393,345 |
1807.01011 | A First Analysis of Kernels for Kriging-based Optimization in
Hierarchical Search Spaces | Many real-world optimization problems require significant resources for objective function evaluations. This is a challenge to evolutionary algorithms, as it limits the number of available evaluations. One solution is to use surrogate models, which replace the expensive objective. A particular issue in this context is hierarchical variables. Hierarchical variables only influence the objective function if other variables satisfy some condition. We study how this kind of hierarchical structure can be integrated into the model-based optimization framework. We discuss an existing kernel and propose alternatives. An artificial test function is used to investigate how different kernels and assumptions affect model quality and search performance. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 101,967 |
1912.01291 | Radial-Layer Jamming Mechanism for String Configuration | Grippers can be attached to objects in a rigid mode, and they are therefore used in various applications, for example the granular jamming gripper. This paper introduces a cutting-edge radial layer jamming mechanism with tunable stiffness, which is critical for the development of grippers. The layer jamming mechanism generates friction between the layers of multiple cylindrical walls by pulling a wire. This paper describes the principles of three types of proposed tendon-driven jamming mechanisms, in addition to their prototypes in string configuration and the experiments conducted on the holding torques of their joints. Owing to the string configuration, the mechanism can conform to surfaces and three-dimensional (3D) shapes, and it can be implemented in various applications. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 156,049 |
2304.14492 | Ultra-Fast Zernike Moments using FFT and GPU | Zernike moments can be used to generate invariant features that are applied in various machine vision applications. They, however, suffer from slow implementation and numerical stability problems. We propose a novel method for computing Zernike moments using the Fast Fourier Transform (FFT) and GPU computing. The method can be used to generate accurate moments up to high orders, and can compute Zernike moments of 4K resolution images in real-time. The numerical accuracy of Zernike moments computed with the proposed FFT approach has been analyzed using the orthogonality property, and the results show that the approach outperforms other methods in numerical stability. The proposed method is simple and fast and can make use of the huge GPU-FFT libraries that are available in several programming frameworks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 360,981 |
2010.10673 | Pushing the Limits of AMR Parsing with Self-Learning | Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation or question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance, including generation of synthetic text and AMR annotations as well as refinement of actions oracle. We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results on AMR 1.0 and AMR 2.0. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 201,958 |
2110.07577 | UniPELT: A Unified Framework for Parameter-Efficient Language Model
Tuning | Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with far fewer trainable parameters and perform especially well when training data is limited. However, different PELT methods may perform rather differently on the same task, making it nontrivial to select the most appropriate method for a specific task, especially considering the fast-growing number of new PELT methods and tasks. In light of model diversity and the difficulty of model selection, we propose a unified framework, UniPELT, which incorporates different PELT methods as submodules and learns to activate the ones that best suit the current data or task setup via a gating mechanism. On the GLUE benchmark, UniPELT consistently achieves 1~4% gains compared to the best individual PELT method that it incorporates and even outperforms fine-tuning under different setups. Moreover, UniPELT generally surpasses the upper bound that takes the best performance of all its submodules used individually on each task, indicating that a mixture of multiple PELT methods may be inherently more effective than single methods. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 261,041 |
2008.07734 | Trust and Medical AI: The challenges we face and the expertise needed to
overcome them | Artificial intelligence (AI) is increasingly of tremendous interest in the medical field. However, failures of medical AI could have serious consequences for both clinical outcomes and the patient experience. These consequences could erode public trust in AI, which could in turn undermine trust in our healthcare institutions. This article makes two contributions. First, it describes the major conceptual, technical, and humanistic challenges in medical AI. Second, it proposes a solution that hinges on the education and accreditation of new expert groups who specialize in the development, verification, and operation of medical AI technologies. These groups will be required to maintain trust in our healthcare institutions. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 192,206 |
2210.15855 | Optimal Task Offloading Policy in Edge Computing Systems with Firm
Deadlines | The recent drastic increase in mobile data traffic has pushed the mobile edge computing systems to the limit of their capacity. A promising solution to this problem is the task migration provided by unmanned aerial vehicles (UAV). Key factors to be taken into account in the design of UAV offloading schemes must include the number of tasks waiting in the system as well as their corresponding deadlines. An appropriate system cost which is used as an objective function to be minimized comprises two parts. First, an offloading cost which can be interpreted as the cost of using computational resources at the UAV. Second, a penalty cost due to potential task expiration. In order to minimize the expected (time average) cost over a time horizon, we formulate a Dynamic Programming (DP) equation and analyze it to describe properties of a candidate optimal offloading policy. The DP equation suffers from the well-known "Curse of Dimensionality" that makes computations intractable, especially when the state space is infinite. In order to reduce the computational burden, we identify three important properties of the optimal policy. Based on these properties, we show that it suffices to evaluate the DP equation on a finite subset of the state space only. We then show that the optimal task offloading decision associated with a state can be inferred from the decision taken at its "adjacent" states, further reducing the computational load. Finally, we provide numerical results to evaluate the influence of different parameters on the system performance as well as verify the theoretical results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 327,111 |
2408.03149 | Leveraging Entity Information for Cross-Modality Correlation Learning:
The Entity-Guided Multimodal Summarization | The rapid increase in multimedia data has spurred advancements in Multimodal Summarization with Multimodal Output (MSMO), which aims to produce a multimodal summary that integrates both text and relevant images. The inherent heterogeneity of content within multimodal inputs and outputs presents a significant challenge to the execution of MSMO. Traditional approaches typically adopt a holistic perspective on coarse image-text data or individual visual objects, overlooking the essential connections between objects and the entities they represent. To integrate the fine-grained entity knowledge, we propose an Entity-Guided Multimodal Summarization model (EGMS). Our model, building on BART, utilizes dual multimodal encoders with shared weights to process text-image and entity-image information concurrently. A gating mechanism then combines visual data for enhanced textual summary generation, while image selection is refined through knowledge distillation from a pre-trained vision-language model. Extensive experiments on the public MSMO dataset validate the superiority of the EGMS method and prove the necessity of incorporating entity information into the MSMO problem. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 478,911 |
2111.09151 | Barrier Forming: Separating Polygonal Sets with Minimum Number of Lines | In this work, we carry out structural and algorithmic studies of a problem of barrier forming: selecting the minimum number of straight line segments (barriers) that separate several sets of mutually disjoint objects in the plane. The problem models the optimal placement of line sensors (e.g., infrared laser beams) for isolating many types of regions in a pair-wise manner for practical purposes (e.g., guarding against intrusions). The problem is NP-hard even if we want to find the minimum number of lines to separate two sets of points in the plane. Under the umbrella problem of barrier forming with the minimum number of line segments, three settings are examined: barrier forming for point sets, point sets with polygonal obstacles, and polygonal sets with polygonal obstacles. We describe methods for computing the optimal solution for the first two settings with the assistance of mathematical programming, and provide a 2-OPT solution for the third. We demonstrate the effectiveness of our methods through extensive simulations. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 266,933 |
2205.01316 | HL-Net: Heterophily Learning Network for Scene Graph Generation | Scene graph generation (SGG) aims to detect objects and predict their pairwise relationships within an image. Current SGG methods typically utilize graph neural networks (GNNs) to acquire context information between objects/relationships. Despite their effectiveness, current SGG methods assume only scene graph homophily while ignoring heterophily. Accordingly, in this paper, we propose a novel Heterophily Learning Network (HL-Net) to comprehensively explore the homophily and heterophily between objects/relationships in scene graphs. More specifically, HL-Net comprises the following: 1) an adaptive reweighting transformer module, which adaptively integrates the information from different layers to exploit both the heterophily and homophily in objects; 2) a relationship feature propagation module that efficiently explores the connections between relationships by considering heterophily in order to refine the relationship representation; 3) a heterophily-aware message-passing scheme to further distinguish the heterophily and homophily between objects/relationships, thereby facilitating improved message passing in graphs. We conducted extensive experiments on two public datasets: Visual Genome (VG) and Open Images (OI). The experimental results demonstrate the superiority of our proposed HL-Net over existing state-of-the-art approaches. In more detail, HL-Net outperforms the second-best competitors by 2.1$\%$ on the VG dataset for scene graph classification and 1.2$\%$ on the OI dataset for the final score. Code is available at https://github.com/siml3/HL-Net. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 294,542 |
2109.11480 | Improving Tuberculosis (TB) Prediction using Synthetically Generated
Computed Tomography (CT) Images | The evaluation of infectious disease processes on radiologic images is an important and challenging task in medical image analysis. Pulmonary infections can often be best imaged and evaluated through computed tomography (CT) scans, which are often not available in low-resource environments and difficult to obtain for critically ill patients. On the other hand, X-ray, a different type of imaging procedure, is inexpensive, often available at the bedside and more widely available, but offers a simpler, two dimensional image. We show that by relying on a model that learns to generate CT images from X-rays synthetically, we can improve the automatic disease classification accuracy and provide clinicians with a different look at the pulmonary disease process. Specifically, we investigate Tuberculosis (TB), a deadly bacterial infectious disease that predominantly affects the lungs, but also other organ systems. We show that relying on synthetically generated CT improves TB identification by 7.50% and distinguishes TB properties up to 12.16% better than the X-ray baseline. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 256,960 |
2303.14035 | Statistical Age-of-Information Bounds for Parallel Systems: When Do
Independent Channels Make a Difference? | This paper contributes tail bounds of the age-of-information of a general class of parallel systems and explores their potential. Parallel systems arise in relevant cases, such as in multi-band mobile networks, multi-technology wireless access, or multi-path protocols, just to name a few. Typically, control over each communication channel is limited and random service outages and congestion cause buffering that impairs the age-of-information. The parallel use of independent channels promises a remedy, since outages on one channel may be compensated for by another. Surprisingly, for the well-known case of M$\mid$M$\mid$1 queues we find the opposite: pooling capacity in one channel performs better than a parallel system with the same total capacity. A generalization is not possible since there are no solutions for other types of parallel queues at hand. In this work, we prove a dual representation of age-of-information in min-plus algebra that connects to queueing models known from the theory of effective bandwidth/capacity and the stochastic network calculus. Exploiting these methods, we derive tail bounds of the age-of-information of parallel G$\mid$G$\mid$1 queues. In addition to parallel classical queues, we investigate Markov channels where, depending on the memory of the channel, we show the true advantage of parallel systems. We continue to investigate this new finding and provide insight into when capacity should be pooled in one channel or when independent parallel channels perform better. We complement our analysis with simulation results and evaluate different update policies, scheduling policies, and the use of heterogeneous channels that is most relevant for latest multi-band networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 353,923 |
2411.06237 | Leveraging Retrieval-Augmented Generation for Persian University
Knowledge Retrieval | This paper introduces an innovative approach using Retrieval-Augmented Generation (RAG) pipelines with Large Language Models (LLMs) to enhance information retrieval and query response systems for university-related question answering. By systematically extracting data from the university's official webpage and employing advanced prompt engineering techniques, we generate accurate, contextually relevant responses to user queries. We developed a comprehensive university benchmark, UniversityQuestionBench (UQB), to rigorously evaluate our system's performance, based on common key metrics in the field of RAG pipelines, assessing accuracy and reliability through various metrics and real-world scenarios. Our experimental results demonstrate significant improvements in the precision and relevance of generated responses, enhancing user experience and reducing the time required to obtain relevant answers. In summary, this paper presents a novel application of RAG pipelines and LLMs, supported by a meticulously prepared university benchmark, offering valuable insights into advanced AI techniques for academic data retrieval and setting the stage for future research in this domain. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 507,026 |
1811.00693 | Meta-path Augmented Response Generation | We propose a chatbot, namely Mocha to make good use of relevant entities when generating responses. Augmented with meta-path information, Mocha is able to mention proper entities following the conversation flow. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 112,168 |
2005.10741 | HQC-RMRS, an instantiation of the HQC encryption framework with a more
efficient auxiliary error-correcting code | The HQC encryption framework is a general code-based encryption scheme for which decryption returns a noisy version of the plaintext. Any instantiation of the scheme will therefore use an error-correcting procedure relying on a fixed auxiliary code. Unlike the McEliece encryption framework whose security is directly related to how well one can hide the structure of an error-correcting code, the security reduction of the HQC encryption framework is independent of the nature of the auxiliary decoding procedure which is publicly available. What is expected from it is that the decoding algorithm is both efficient and has a decoding failure rate which can be easily modelized and analyzed. The original error-correction procedure proposed for the HQC framework was to use tensor products of BCH codes and repetition codes. In this paper we consider another code family for removing the error vector deriving from the general framework: the concatenation of Reed-Muller and Reed-Solomon codes. We denote this instantiation of the HQC framework by HQC-RMRS. These codes yield better decoding results than the BCH and repetition codes: overall we gain roughly 17\% in the size of the key and the ciphertext, while keeping a simple modelization of the decoding error rate. The paper also presents a simplified and more precise analysis of the distribution of the error vector output by the HQC protocol. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 178,270 |
2312.11376 | CLIM: Contrastive Language-Image Mosaic for Region Representation | Detecting objects accurately from a large or open vocabulary necessitates the vision-language alignment on region representations. However, learning such a region-text alignment by obtaining high-quality box annotations with text labels or descriptions is expensive and infeasible. In contrast, collecting image-text pairs is simpler but lacks precise object location information to associate regions with texts. In this paper, we propose a novel approach called Contrastive Language-Image Mosaic (CLIM), which leverages large-scale image-text pairs effectively for aligning region and text representations. CLIM combines multiple images into a mosaicked image and treats each image as a `pseudo region'. The feature of each pseudo region is extracted and trained to be similar to the corresponding text embedding while dissimilar from others by a contrastive loss, enabling the model to learn the region-text alignment without costly box annotations. As a generally applicable approach, CLIM consistently improves different open-vocabulary object detection methods that use caption supervision. Furthermore, CLIM can effectively enhance the region representation of vision-language models, thus providing stronger backbones for open-vocabulary object detectors. Our experimental results demonstrate that CLIM improves different baseline open-vocabulary object detectors by a large margin on both OV-COCO and OV-LVIS benchmarks. The code is available at https://github.com/wusize/CLIM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 416,540 |
1909.06737 | Understanding and Improving Virtual Adversarial Training | In semi-supervised learning, the virtual adversarial training (VAT) approach is one of the most attractive methods due to its intuitive simplicity and strong performance. VAT finds a classifier which is robust to data perturbation toward the adversarial direction. In this study, we provide a fundamental explanation of why VAT works well in the semi-supervised case and propose new techniques which are simple but powerful for improving the VAT method. In particular, we employ the idea of the Bad GAN approach, which utilizes bad samples distributed on the complement of the support of the input data, without any additional deep generative architectures. We generate high-quality bad samples using the adversarial training employed in VAT and also give theoretical explanations of why this adversarial training is good at generating bad samples. An advantage of our proposed method is that it achieves performance competitive with other recent studies at a much lower computational cost. We demonstrate the advantages of our method through various experiments with well-known benchmark image datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 145,468 |
2001.09048 | Cooperative versus decentralized strategies in three-pursuer
single-evader games | The value of cooperation in pursuit-evasion games is investigated. The considered setting is that of three pursuers chasing one evader in a planar environment. The optimal evader trajectory for a well-known decentralized pursuer strategy is characterized. This result is instrumental to derive upper and lower bounds to the game length, in the case in which the pursuers cooperate in the chasing strategy. It is shown that the cooperation cannot reduce the capture time by more than one half with respect to the decentralized case, and that such bound is tight. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 161,466 |
1111.5548 | Computation of generalized inverses using Php/MySql environment | The main aim of this paper is to develop a client/server-based model for computing the weighted Moore-Penrose inverse using the partitioning method as well as for storage of generated results. The web application is developed in the PHP/MySQL environment. The source code is open and free for testing by using a web browser. Influence of different matrix representations and storage systems on the computational time is investigated. The CPU time for searching the previously stored pseudo-inverses is compared with the CPU time spent for new computation of the same inverses. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 13,149 |
2210.07361 | On the ergodicity assumption in Performance-Based Engineering | In the Performance-Based Engineering (PBE) framework, uncertainties in system parameters, or modelling uncertainties, have been shown to have significant effects on capacity fragilities and annual collapse rates of buildings. Yet, since modelling uncertainties are non-ergodic variables, their consideration in failure rate calculations violates the Poisson assumption of independent crossings. This problem has been addressed in the literature, and the errors were found to be negligible for small annual collapse failure rates. However, the errors could be significant for serviceability limit states, and when failure rates are integrated in time, to provide lifetime failure probabilities. Herein, we present a novel formulation to fully avoid the error in the integration of non-ergodic variables. The proposed product-of-lognormals formulation is fully compatible with popular fragility modelling approaches in the PBE context. Moreover, we address collapse limit states of realistic reinforced concrete buildings, and find errors of the order of 5 to 8% for 50-year lifetimes, up to 14% for 100 years. Computation of accurate lifetime failure probabilities in a PBE context is clearly important, as it allows comparison with lifetime target reliability values for other structural analysis formulations. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 323,670 |
2402.09141 | Advancing NLP Models with Strategic Text Augmentation: A Comprehensive
Study of Augmentation Methods and Curriculum Strategies | This study conducts a thorough evaluation of text augmentation techniques across a variety of datasets and natural language processing (NLP) tasks to address the lack of reliable, generalized evidence for these methods. It examines the effectiveness of these techniques in augmenting training sets to improve performance in tasks such as topic classification, sentiment analysis, and offensive language detection. The research emphasizes not only the augmentation methods, but also the strategic order in which real and augmented instances are introduced during training. A major contribution is the development and evaluation of Modified Cyclical Curriculum Learning (MCCL) for augmented datasets, which represents a novel approach in the field. Results show that specific augmentation methods, especially when integrated with MCCL, significantly outperform traditional training approaches in NLP model performance. These results underscore the need for careful selection of augmentation techniques and sequencing strategies to optimize the balance between speed and quality improvement in various NLP tasks. The study concludes that the use of augmentation methods, especially in conjunction with MCCL, leads to improved results in various classification tasks, providing a foundation for future advances in text augmentation strategies in NLP. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 429,387 |
2110.10149 | Continuous Control with Action Quantization from Demonstrations | In this paper, we propose a novel Reinforcement Learning (RL) framework for problems with continuous action spaces: Action Quantization from Demonstrations (AQuaDem). The proposed approach consists in learning a discretization of continuous action spaces from human demonstrations. This discretization returns a set of plausible actions (in light of the demonstrations) for each input state, thus capturing the priors of the demonstrator and their multimodal behavior. By discretizing the action space, any discrete action deep RL technique can be readily applied to the continuous control problem. Experiments show that the proposed approach outperforms state-of-the-art methods such as SAC in the RL setup, and GAIL in the Imitation Learning setup. We provide a website with interactive videos: https://google-research.github.io/aquadem/ and make the code available: https://github.com/google-research/google-research/tree/master/aquadem. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 262,051 |
2012.05163 | A Deep Learning Approach to Anomaly Sequence Detection for
High-Resolution Monitoring of Power Systems | A deep learning approach is proposed to detect data and system anomalies using high-resolution continuous point-on-wave (CPOW) or phasor measurements. Both the anomaly and anomaly-free measurement models are assumed to have unknown temporal dependencies and probability distributions. Historical training samples are assumed for the anomaly-free model, while no training samples are available for the anomaly measurements. By transforming the anomaly-free observations into uniform independent and identically distributed sequences via a generative adversarial network, the proposed approach deploys a uniformity test for anomaly detection at the sensor level. A distributed detection scheme is also proposed that combines sensor-level detections at the control center to form more reliable detections. Numerical results demonstrate significant improvement over the state-of-the-art solutions for various bad-data cases using real and synthetic CPOW and PMU data sets. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 210,694
2203.13733 | Blocks Assemble! Learning to Assemble with Large-Scale Structured
Reinforcement Learning | Assembly of multi-part physical structures is both a valuable end product for autonomous robotics, as well as a valuable diagnostic task for open-ended training of embodied intelligent agents. We introduce a naturalistic physics-based environment with a set of connectable magnet blocks inspired by children's toy kits. The objective is to assemble blocks into a succession of target blueprints. Despite the simplicity of this objective, the compositional nature of building diverse blueprints from a set of blocks leads to an explosion of complexity in structures that agents encounter. Furthermore, assembly stresses agents' multi-step planning, physical reasoning, and bimanual coordination. We find that the combination of large-scale reinforcement learning and graph-based policies -- surprisingly without any additional complexity -- is an effective recipe for training agents that not only generalize to complex unseen blueprints in a zero-shot manner, but even operate in a reset-free setting without being trained to do so. Through extensive experiments, we highlight the importance of large-scale training, structured representations, contributions of multi-task vs. single-task learning, as well as the effects of curriculums, and discuss qualitative behaviors of trained agents. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 287,744 |
2012.13617 | A New Perspective to Node Influence Evaluation in Complex Network Using
Subgraph Tr-Centrality | There is great significance in evaluating a node's influence ranking in complex networks. Over the years, many researchers have presented different measures for quantifying node interconnectedness within networks. This paper introduces a centrality measure called Tr-centrality, which uses the node triangle structure and the node neighborhood information to define the strength of a node, computed as the ratio of the summation of Gruebler's Equation over the node's one-hop triangle neighborhood to the number of all the edges in the subgraph. Furthermore, we interpret it socially as the local trust of a node. To verify the validity of Tr-centrality [1], we apply it to four real-world networks with different densities and shapes, and Tr-centrality has proven to yield better results. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 213,273
2002.10710 | End-to-end Emotion-Cause Pair Extraction via Learning to Link | Emotion-cause pair extraction (ECPE), as an emergent natural language processing task, aims at jointly investigating emotions and their underlying causes in documents. It extends the previous emotion cause extraction (ECE) task, yet without requiring a set of pre-given emotion clauses as in ECE. Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes. Such a pipeline method, while intuitive, suffers from two critical issues, including error propagation across stages that may hinder the effectiveness, and high computational cost that would limit the practical application of the method. To tackle these issues, we propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner. Specifically, our model regards pair extraction as a link prediction task, and learns to link from emotion clauses to cause clauses, i.e., the links are directional. Emotion extraction and cause extraction are incorporated into the model as auxiliary tasks, which further boost the pair extraction. Experiments are conducted on an ECPE benchmarking dataset. The results show that our proposed model outperforms a range of state-of-the-art approaches. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 165,493
2412.02264 | Technical Report on Reinforcement Learning Control on the Lucas-N\"ulle
Inverted Pendulum | The discipline of automatic control is making increased use of concepts that originate from the domain of machine learning. Herein, reinforcement learning (RL) takes an elevated role, as it is inherently designed for sequential decision making, and can be applied to optimal control problems without the need for a plant system model. To advance education of control engineers and operators in this field, this contribution targets an RL framework that can be applied to educational hardware provided by the Lucas-N\"ulle company. Specifically, the goal of inverted pendulum control is pursued by means of RL, including both swing-up and stabilization within a single holistic design approach. Herein, the actual learning is enabled by separating corresponding computations from the real-time control computer and outsourcing them to different hardware. This distributed architecture, however, necessitates communication of the involved components, which is realized via CAN bus. The experimental proof of concept is presented with an applied safeguarding algorithm that prevents the plant from being operated harmfully during the trial-and-error training phase. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 513,463
2105.04790 | Learning to Warm Up Cold Item Embeddings for Cold-start Recommendation
with Meta Scaling and Shifting Networks | Recently, embedding techniques have achieved impressive success in recommender systems. However, the embedding techniques are data demanding and suffer from the cold-start problem. Especially, for the cold-start item which only has limited interactions, it is hard to train a reasonable item ID embedding, called cold ID embedding, which is a major challenge for the embedding techniques. The cold item ID embedding has two main problems: (1) A gap exists between the cold ID embedding and the deep model. (2) Cold ID embedding would be seriously affected by noisy interaction. However, most existing methods do not consider both issues in the cold-start problem simultaneously. To address these problems, we adopt two key ideas: (1) Speed up the model fitting for the cold item ID embedding (fast adaptation). (2) Alleviate the influence of noise. Along this line, we propose Meta Scaling and Shifting Networks to generate scaling and shifting functions for each item, respectively. The scaling function can directly transform cold item ID embeddings into warm feature space which can fit the model better, and the shifting function is able to produce stable embeddings from the noisy embeddings. With the two meta networks, we propose Meta Warm Up Framework (MWUF) which learns to warm up cold ID embeddings. Moreover, MWUF is a general framework that can be applied upon various existing deep recommendation models. The proposed model is evaluated on three popular benchmarks, including both recommendation and advertising datasets. The evaluation results demonstrate its superior performance and compatibility. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 234,618
1807.01253 | Who did What at Where and When: Simultaneous Multi-Person Tracking and
Activity Recognition | We present a bootstrapping framework to simultaneously improve multi-person tracking and activity recognition at individual, interaction and social group activity levels. The inference consists of identifying trajectories of all pedestrian actors, individual activities, pairwise interactions, and collective activities, given the observed pedestrian detections. Our method uses a graphical model to represent and solve the joint tracking and recognition problems via multiple stages: (1) activity-aware tracking, (2) joint interaction recognition and occlusion recovery, and (3) collective activity recognition. We solve the where and when problem with visual tracking, as well as the who and what problem with recognition. High-order correlations among the visible and occluded individuals, pairwise interactions, groups, and activities are then solved using a hypergraph formulation within the Bayesian framework. Experiments on several benchmarks show the advantages of our approach over state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 102,020
2306.01015 | How to Estimate Model Transferability of Pre-Trained Speech Models? | In this work, we introduce a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs) for fine-tuning target tasks. We leverage two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for the PSM candidates using the extracted representations. Our framework efficiently computes transferability scores without actual fine-tuning of candidate models or layers by making a temporal independence assumption. We evaluate some popular supervised speech models (e.g., Conformer RNN-Transducer) and self-supervised speech models (e.g., HuBERT) in cross-layer and cross-model settings using public data. Experimental results show a high Spearman's rank correlation and low $p$-value between our estimation framework and fine-tuning ground truth. Our proposed transferability framework requires less computational time and resources, making it a resource-saving and time-efficient approach for tuning speech foundation models. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | 370,266
2402.01327 | Supervised Algorithmic Fairness in Distribution Shifts: A Survey | Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains. In real-world applications, machine learning models are often trained on a specific dataset but deployed in environments where the data distribution may shift over time due to various factors. This shift can lead to unfair predictions, disproportionately affecting certain groups characterized by sensitive attributes, such as race and gender. In this survey, we provide a summary of various types of distribution shifts and comprehensively investigate existing methods based on these shifts, highlighting six commonly used approaches in the literature. Additionally, this survey lists publicly available datasets and evaluation metrics for empirical studies. We further explore the interconnection with related research fields, discuss the significant challenges, and identify potential directions for future studies. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 425,966 |
2310.17185 | Adaptive importance sampling for Deep Ritz | We introduce an adaptive sampling method for the Deep Ritz method aimed at solving partial differential equations (PDEs). Two deep neural networks are used. One network is employed to approximate the solution of PDEs, while the other one is a deep generative model used to generate new collocation points to refine the training set. The adaptive sampling procedure consists of two main steps. The first step is solving the PDEs using the Deep Ritz method by minimizing an associated variational loss discretized by the collocation points in the training set. The second step involves generating a new training set, which is then used in subsequent computations to further improve the accuracy of the current approximate solution. We treat the integrand in the variational loss as an unnormalized probability density function (PDF) and approximate it using a deep generative model called bounded KRnet. The new samples and their associated PDF values are obtained from the bounded KRnet. With these new samples and their associated PDF values, the variational loss can be approximated more accurately by importance sampling. Compared to the original Deep Ritz method, the proposed adaptive method improves accuracy, especially for problems characterized by low regularity and high dimensionality. We demonstrate the effectiveness of our new method through a series of numerical experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 403,036 |
2205.05867 | Over-the-Air Federated Learning with Joint Adaptive Computation and
Power Control | This paper considers over-the-air federated learning (OTA-FL). OTA-FL exploits the superposition property of the wireless medium, and performs model aggregation over the air for free. Thus, it can greatly reduce the communication cost incurred in communicating model updates from the edge devices. In order to fully utilize this advantage while providing comparable learning performance to conventional federated learning that presumes model aggregation via noiseless channels, we consider the joint design of transmission scaling and the number of local iterations at each round, given the power constraint at each edge device. We first characterize the training error due to such channel noise in OTA-FL by establishing a fundamental lower bound for general functions with Lipschitz-continuous gradients. Then, by introducing an adaptive transceiver power scaling scheme, we propose an over-the-air federated learning algorithm with joint adaptive computation and power control (ACPC-OTA-FL). We provide the convergence analysis for ACPC-OTA-FL in training with non-convex objective functions and heterogeneous data. We show that the convergence rate of ACPC-OTA-FL matches that of FL with noise-free communications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,066 |
2008.02749 | The VISIONE Video Search System: Exploiting Off-the-Shelf Text Search
Engines for Large-Scale Video Retrieval | In this paper, we describe in detail VISIONE, a video search system that allows users to search for videos using textual keywords, occurrence of objects and their spatial relationships, occurrence of colors and their spatial relationships, and image similarity. These modalities can be combined together to express complex queries and satisfy user needs. The peculiarity of our approach is that we encode all the information extracted from the keyframes, such as visual deep features, tags, color and object locations, using a convenient textual encoding indexed in a single text retrieval engine. This offers great flexibility when results corresponding to various parts of the query (visual, text and locations) have to be merged. In addition, we report an extensive analysis of the system retrieval performance, using the query logs generated during the Video Browser Showdown (VBS) 2019 competition. This allowed us to fine-tune the system by choosing the optimal parameters and strategies among the ones that we tested. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 190,701
2112.07837 | Central-Smoothing Hypergraph Neural Networks for Predicting Drug-Drug
Interactions | Predicting drug-drug interactions (DDI) is the problem of predicting side effects (unwanted outcomes) of a pair of drugs using drug information and known side effects of many pairs. This problem can be formulated as predicting labels (i.e. side effects) for each pair of nodes in a DDI graph, of which nodes are drugs and edges are interacting drugs with known labels. State-of-the-art methods for this problem are graph neural networks (GNNs), which leverage neighborhood information in the graph to learn node representations. For DDI, however, there are many labels with complicated relationships due to the nature of side effects. Usual GNNs often fix labels as one-hot vectors that do not reflect label relationships and potentially do not obtain the highest performance in the difficult cases of infrequent labels. In this paper, we formulate DDI as a hypergraph where each hyperedge is a triple: two nodes for drugs and one node for a label. We then present CentSmoothie, a hypergraph neural network that learns representations of nodes and labels altogether with a novel central-smoothing formulation. We empirically demonstrate the performance advantages of CentSmoothie in simulations as well as real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 271,601 |
2209.07000 | VIPHY: Probing "Visible" Physical Commonsense Knowledge | In recent years, vision-language models (VLMs) have shown remarkable performance on visual reasoning tasks (e.g. attributes, location). While such tasks measure the requisite knowledge to ground and reason over a given visual instance, they do not, however, measure the ability of VLMs to retain and generalize such knowledge. In this work, we evaluate their ability to acquire "visible" physical knowledge -- the information that is easily accessible from images of static scenes, particularly across the dimensions of object color, size and space. We build an automatic pipeline to derive a comprehensive knowledge resource for calibrating and probing these models. Our results indicate a severe gap between model and human performance across all three tasks. Furthermore, our caption pretrained baseline (CapBERT) significantly outperforms VLMs on both size and spatial tasks -- highlighting that despite sufficient access to ground language with visual modality, they struggle to retain such knowledge. The dataset and code are available at https://github.com/Axe--/ViPhy . | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 317,582 |
1603.07442 | Pixel-Level Domain Transfer | We present an image-conditional image generation model. The model transfers an input domain to a target domain in semantic level, and generates the target image in pixel level. To generate realistic target images, we employ the real/fake-discriminator as in Generative Adversarial Nets, but also introduce a novel domain-discriminator to make the generated image relevant to the input image. We verify our model through a challenging task of generating a piece of clothing from an input image of a dressed person. We present a high quality clothing dataset containing the two domains, and succeed in demonstrating decent results. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 53,629 |
1612.04666 | Efficient Sampling for Better OSN Data Provisioning | Data concerning the users and usage of Online Social Networks (OSNs) has become available externally, from public resources (e.g., user profiles), participation in OSNs (e.g., establishing relationships and recording transactions such as user updates) and APIs of the OSN provider (such as the Twitter API). APIs let OSN providers monetize the release of data while helping control measurement load, e.g. by providing samples with different cost-granularity tradeoffs. To date, this approach has been more suited to releasing transactional data, with graphical data still being obtained by resource intensive methods such as graph crawling. In this paper, we propose a method for OSNs to provide samples of the user graph of tunable size, in non-intersecting increments, with sample selection that can be weighted to enhance accuracy when estimating different features of the graph. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 65,555
1909.13164 | Deep K-SVD Denoising | This work considers noise removal from images, focusing on the well known K-SVD denoising algorithm. This sparsity-based method was proposed in 2006, and for a short while it was considered as state-of-the-art. However, over the years it has been surpassed by other methods, including the recent deep-learning-based newcomers. The question we address in this paper is whether K-SVD was brought to its peak in its original conception, or whether it can be made competitive again. The approach we take in answering this question is to redesign the algorithm to operate in a supervised manner. More specifically, we propose an end-to-end deep architecture with the exact K-SVD computational path, and train it for optimized denoising. Our work shows how to overcome difficulties arising in turning the K-SVD scheme into a differentiable, and thus learnable, machine. With a small number of parameters to learn and while preserving the original K-SVD essence, the proposed architecture is shown to outperform the classical K-SVD algorithm substantially, getting closer to recent state-of-the-art learning-based denoising methods. Adopting a broader context, this work touches on themes around the design of deep-learning solutions for image processing tasks, while paving a bridge between classic methods and novel deep-learning-based ones. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 147,349
2106.12429 | TCEP: Transitions in Operator Placement to Adapt to Dynamic Network
Environments | Distributed Complex Event Processing (DCEP) is a commonly used paradigm to detect and act on situational changes of many applications, including the Internet of Things (IoT). DCEP achieves this using a simple specification of analytical tasks on data streams called operators and their distributed execution on a set of infrastructure. The adaptivity of DCEP to the dynamics of IoT applications is essential and very challenging in the face of changing demands concerning Quality of Service. In our previous work, we addressed this issue by enabling transitions, which allow for the adaptive use of multiple operator placement mechanisms. In this article, we extend the transition methodology by optimizing the costs of transition and analyzing the behaviour using multiple operator placement mechanisms. Furthermore, we provide an extensive evaluation on the costs of transition imposed by operator migrations and learning, as it can inflict overhead on the performance if operated uncoordinatedly. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 242,723 |
2408.02709 | Enhancing Medical Learning and Reasoning Systems: A Boxology-Based
Comparative Analysis of Design Patterns | This study analyzes hybrid AI systems' design patterns and their effectiveness in clinical decision-making using the boxology framework. It categorizes and compares various architectures combining machine learning and rule-based reasoning to provide insights into their structural foundations and healthcare applications. Addressing two main questions, how to categorize these systems against established design patterns and how to extract insights through comparative analysis, the study uses design patterns from software engineering to understand and optimize healthcare AI systems. Boxology helps identify commonalities and create reusable solutions, enhancing these systems' scalability, reliability, and performance. Five primary architectures are examined: REML, MLRB, RBML, RMLT, and PERML. Each has unique strengths and weaknesses, highlighting the need for tailored approaches in clinical tasks. REML excels in high-accuracy prediction for datasets with limited data; MLRB in handling large datasets and complex data integration; RBML in explainability and trustworthiness; RMLT in managing high-dimensional data; and PERML, though limited in analysis, shows promise in urgent care scenarios. The study introduces four new patterns, creates five abstract categorization patterns, and refines those five further to specific systems. These contributions enhance Boxology's taxonomical organization and offer novel approaches to integrating expert knowledge with machine learning. Boxology's structured, modular approach offers significant advantages in developing and analyzing hybrid AI systems, revealing commonalities, and promoting reusable solutions. In conclusion, this study underscores hybrid AI systems' crucial role in advancing healthcare and Boxology's potential to drive further innovation in AI integration, ultimately improving clinical decision support and patient outcomes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 478,739
2008.11200 | GRAB: A Dataset of Whole-Body Human Grasping of Objects | Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This gives detailed 3D meshes over time, from which we compute contact between the body and object. This is a unique dataset, that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application; we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 193,201 |
2205.09738 | AIGenC: An AI generalisation model via creativity | Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC) that lays down the necessary components to enable artificial agents to learn, use and generate transferable representations. Unlike machine representation learning, which relies exclusively on raw sensory data, biological representations incorporate relational and associative information that embeds rich and structured concept spaces. The AIGenC model poses a hierarchical graph architecture with various levels and types of representations procured by different components. The first component, Concept Processing, extracts objects and affordances from sensory input and encodes them into a concept space. The resulting representations are stored in a dual memory system and enriched with goal-directed and temporal information acquired through reinforcement learning, creating a higher-level of abstraction. Two additional components work in parallel to detect and recover relevant concepts and create new ones, respectively, in a process akin to cognitive Reflective Reasoning and Blending. The Reflective Reasoning unit detects and recovers from memory concepts relevant to the task by means of a matching process that calculates a similarity value between the current state and memory graph structures. Once the matching interaction ends, rewards and temporal information are added to the graph, building further abstractions. If the reflective reasoning processing fails to offer a suitable solution, a blending operation comes into place, creating new concepts by combining past information. We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents, thus advancing toward Artificial General Intelligence. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 297,399 |
2007.15429 | Searching for Pneumothorax in Half a Million Chest X-Ray Images | Pneumothorax, a collapsed or dropped lung, is a fatal condition typically detected on a chest X-ray by an experienced radiologist. Due to shortage of such experts, automated detection systems based on deep neural networks have been developed. Nevertheless, applying such systems in practice remains a challenge. These systems mostly compute a single probability as output, which may not be enough for diagnosis. On the contrary, content-based medical image retrieval (CBIR) systems, such as image search, can assist clinicians for diagnostic purposes by enabling them to compare the case they are examining with previous (already diagnosed) cases. However, there is a lack of study on such an approach. In this study, we explored the use of image search to classify pneumothorax among chest X-ray images. All chest X-ray images were first tagged with deep pretrained features, which were obtained from existing deep learning models. Given a query chest X-ray image, the majority voting of the top K retrieved images was then used as a classifier, in which similar cases in the archive of past cases are provided besides the probability output. In our experiments, 551,383 chest X-ray images were obtained from three large recently released public datasets. Using 10-fold cross-validation, it is shown that image search on deep pretrained features achieved promising results compared to those obtained by traditional classifiers trained on the same features. To the best of our knowledge, it is the first study to demonstrate that deep pretrained features can be used for CBIR of pneumothorax in half a million chest X-ray images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 189,668
1702.05181 | RIPML: A Restricted Isometry Property based Approach to Multilabel
Learning | The multilabel learning problem with a large number of labels, features, and data-points has generated tremendous interest recently. A recurring theme of these problems is that only a few labels are active in any given datapoint as compared to the total number of labels. However, only a small number of existing works take direct advantage of this inherent extreme sparsity in the label space. By virtue of the Restricted Isometry Property (RIP), satisfied by many random ensembles, we propose a novel procedure for multilabel learning known as RIPML. During the training phase, in RIPML, labels are projected onto a random low-dimensional subspace followed by solving a least-square problem in this subspace. Inference is done by a k-nearest neighbor (kNN) based approach. We demonstrate the effectiveness of RIPML by conducting extensive simulations and comparing results with state-of-the-art linear dimensionality reduction based approaches. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 68,363 |
2312.15851 | Hypergraph Enhanced Knowledge Tree Prompt Learning for Next-Basket
Recommendation | Next-basket recommendation (NBR) aims to infer the items in the next basket given the corresponding basket sequence. Existing NBR methods are mainly based on either message passing in a plain graph or transition modelling in a basket sequence. However, these methods only consider point-to-point binary item relations while item dependencies in real world scenarios are often in higher order. Additionally, the importance of the same item to different users varies due to variation of user preferences, and the relations between items usually involve various aspects. As pretrained language models (PLMs) excel in multiple tasks in natural language processing (NLP) and computer vision (CV), many researchers have made great efforts in utilizing PLMs to boost recommendation. However, existing PLM-based recommendation methods degrade when encountering Out-Of-Vocabulary (OOV) items. OOV items are those whose IDs are out of PLM's vocabulary and thus unintelligible to PLM. To settle the above challenges, we propose a novel method HEKP4NBR, which transforms the knowledge graph (KG) into prompts, namely Knowledge Tree Prompt (KTP), to help PLM encode the OOV item IDs in the user's basket sequence. A hypergraph convolutional module is designed to build a hypergraph based on item similarities measured by an MoE model from multiple aspects and then employ convolution on the hypergraph to model correlations among multiple items. Extensive experiments are conducted on HEKP4NBR on two datasets based on real company data and validate its effectiveness against multiple state-of-the-art methods. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 418,158 |
2012.11802 | Structure-preserving, energy stable numerical schemes for a liquid thin
film coarsening model | In this paper, two finite difference numerical schemes are proposed and analyzed for the droplet liquid film model, with a singular Lennard-Jones energy potential involved. Both first and second order accurate temporal algorithms are considered. In the first order scheme, the convex potential and the surface diffusion terms are treated implicitly, while the concave potential term is updated explicitly. Furthermore, we provide a theoretical justification that this numerical algorithm has a unique solution, such that the positivity is always preserved for the phase variable at a point-wise level, so that a singularity is avoided in the scheme. In fact, the singular nature of the Lennard-Jones potential term around the value of 0 prevents the numerical solution from reaching such a singular value, so that the positivity structure is always preserved. Moreover, an unconditional energy stability of the numerical scheme is derived, without any restriction on the time step size. In the second order numerical scheme, the BDF temporal stencil is applied, and an alternate convex-concave decomposition is derived, so that the concave part corresponds to a quadratic energy. In turn, the combined Lennard-Jones potential term is treated implicitly, the concave part is approximated by a second order Adams-Bashforth explicit extrapolation, and an artificial Douglas-Dupont regularization term is added to ensure the energy stability. The unique solvability and the positivity-preserving property for the second order scheme could be similarly established. In addition, optimal rate convergence analysis is provided for both the first and second order accurate schemes. A few numerical simulation results are also presented, which demonstrate the robustness of the numerical schemes. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 212,736 |
1605.09211 | Going Deeper for Multilingual Visual Sentiment Detection | This technical report details several improvements to the visual concept detector banks built on images from the Multilingual Visual Sentiment Ontology (MVSO). The detector banks are trained to detect a total of 9,918 sentiment-biased visual concepts from six major languages: English, Spanish, Italian, French, German and Chinese. In the original MVSO release, adjective-noun pair (ANP) detectors were trained for the six languages using an AlexNet-styled architecture by fine-tuning from DeepSentiBank. Here, through a more extensive set of experiments, parameter tuning, and training runs, we detail and release higher accuracy models for detecting ANPs across six languages from the same image pool and setting as in the original release using a more modern architecture, GoogLeNet, providing comparable or better performance with reduced network parameter cost. In addition, since the image pool in MVSO can be corrupted by user noise from social interactions, we partitioned out a sub-corpus of MVSO images based on tag-restricted queries for higher fidelity labels. We show that as a result of these higher fidelity labels, higher performing AlexNet-styled ANP detectors can be trained using the tag-restricted image subset as compared to the models in full corpus. We release all these newly trained models for public research use along with the list of tag-restricted images from the MVSO dataset. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | true | 56,539 |
2008.08878 | Reinforcement Learning based dynamic weighing of Ensemble Models for
Time Series Forecasting | Ensemble models are powerful model building tools that are developed with a focus to improve the accuracy of model predictions. They find applications in time series forecasting in varied scenarios including but not limited to process industries, health care, and economics, where a single model might not provide optimal performance. It is known that if models selected for data modelling are distinct (linear/non-linear, static/dynamic) and independent (minimally correlated models), the accuracy of the predictions is improved. Various approaches suggested in the literature to weigh the ensemble models use a static set of weights. Due to this limitation, approaches using a static set of weights for weighing ensemble models cannot capture the dynamic changes or local features of the data effectively. To address this issue, a Reinforcement Learning (RL) approach to dynamically assign and update weights of each of the models at different time instants depending on the nature of data and the individual model predictions is proposed in this work. The RL method, implemented online, essentially learns to update the weights and reduce the errors as time progresses. Simulation studies on time series data showed that the dynamic weighted approach using RL learns the weights better than existing approaches. The accuracy of the proposed method is compared with an existing approach of online Neural Network tuning quantitatively through normalized mean square error (NMSE) values. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 192,527 |
2304.11490 | Boosting Theory-of-Mind Performance in Large Language Models via
Prompting | Large language models (LLMs) excel in many tasks in 2023, but they still face challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension. We evaluated prompts featuring two-shot chain of thought reasoning and step-by-step thinking instructions. We found that LLMs trained with Reinforcement Learning from Human Feedback (RLHF) (all models excluding Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell short of the 87% human accuracy on the test set. However, when supplied with prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate prompting enhances LLM ToM reasoning, and they underscore the context-dependent nature of LLM cognitive capacities. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 359,838 |
1910.07799 | Exploring Semi-Automatic Map Labeling | Label placement in maps is a very challenging task that is critical for the overall map quality. Most previous work focused on designing and implementing fully automatic solutions, but the resulting visual and aesthetic quality has not reached the same level of sophistication that skilled human cartographers achieve. We investigate a different strategy that combines the strengths of humans and algorithms. In our proposed method, first an initial labeling is computed that has many well-placed labels but is not claiming to be perfect. Instead it serves as a starting point for an expert user who can then interactively and locally modify the labeling where necessary. In an iterative human-in-the-loop process alternating between user modifications and local algorithmic updates and refinements the labeling can be tuned to the user's needs. We demonstrate our approach by performing different possible modification steps in a sample workflow with a prototypical interactive labeling editor. Further, we report computational performance results from a simulation experiment in QGIS, which investigates the differences between exact and heuristic algorithms for semi-automatic map labeling. To that end, we compare several alternatives for recomputing the labeling after local modifications and updates, as a major ingredient for an interactive labeling editor. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 149,712 |
2407.07586 | Simplifying Source-Free Domain Adaptation for Object Detection:
Effective Self-Training Strategies and Performance Insights | This paper focuses on source-free domain adaptation for object detection in computer vision. This task is challenging and of great practical interest, due to the cost of obtaining annotated data sets for every new domain. Recent research has proposed various solutions for Source-Free Object Detection (SFOD), most being variations of teacher-student architectures with diverse feature alignment, regularization and pseudo-label selection strategies. Our work investigates simpler approaches and their performance compared to more complex SFOD methods in several adaptation scenarios. We highlight the importance of batch normalization layers in the detector backbone, and show that adapting only the batch statistics is a strong baseline for SFOD. We propose a simple extension of a Mean Teacher with strong-weak augmentation in the source-free setting, Source-Free Unbiased Teacher (SF-UT), and show that it actually outperforms most of the previous SFOD methods. Additionally, we showcase that an even simpler strategy consisting in training on a fixed set of pseudo-labels can achieve similar performance to the more complex teacher-student mutual learning, while being computationally efficient and mitigating the major issue of teacher-student collapse. We conduct experiments on several adaptation tasks using benchmark driving datasets including (Foggy)Cityscapes, Sim10k and KITTI, and achieve a notable improvement of 4.7\% AP50 on Cityscapes$\rightarrow$Foggy-Cityscapes compared with the latest state-of-the-art in SFOD. Source code is available at https://github.com/EPFL-IMOS/simple-SFOD. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 471,820 |
1805.01361 | Machine learning regression on hyperspectral data to estimate multiple
water parameters | In this paper, we present a regression framework involving several machine learning models to estimate water parameters based on hyperspectral data. Measurements from a multi-sensor field campaign, conducted on the River Elbe, Germany, represent the benchmark dataset. It contains hyperspectral data and the five water parameters chlorophyll a, green algae, diatoms, CDOM and turbidity. We apply a PCA for the high-dimensional data as a possible preprocessing step. Then, we evaluate the performance of the regression framework with and without this preprocessing step. The regression results of the framework clearly reveal the potential of estimating water parameters based on hyperspectral data with machine learning. The proposed framework provides the basis for further investigations, such as adapting the framework to estimate water parameters of different inland waters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 96,643 |
2305.13733 | Enhancing Large Language Models Against Inductive Instructions with
Dual-critique Prompting | Numerous works are proposed to align large language models (LLMs) with human intents to better fulfill instructions, ensuring they are trustful and helpful. Nevertheless, some human instructions are often malicious or misleading and following them will lead to untruthful and unsafe responses. Previous work rarely focused on understanding how LLMs manage instructions based on counterfactual premises, referred to here as \textit{inductive instructions}, which may stem from users' false beliefs or malicious intents. In this paper, we aim to reveal the behaviors of LLMs towards \textit{inductive instructions} and enhance their truthfulness and helpfulness accordingly. Specifically, we first introduce a benchmark of \underline{\textbf{Indu}}ctive {In\underline{\textbf{st}}ruct}ions (\textsc{\textbf{INDust}}), where the false knowledge is incorporated into instructions in multiple different styles. After extensive human and automatic evaluations, we uncovered a universal vulnerability among LLMs in processing inductive instructions. Additionally, we identified that different inductive styles affect the models' ability to identify the same underlying errors, and the complexity of the underlying assumptions also influences the model's performance. Motivated by these results, we propose \textsc{Dual-critique} prompting to improve LLM robustness against inductive instructions. Our experiments demonstrate that \textsc{Dual-critique} prompting significantly bolsters the robustness of a diverse array of LLMs, even when confronted with varying degrees of inductive instruction complexity and differing inductive styles. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 366,657 |
2309.01947 | TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression
For On-device ASR Models | Automatic Speech Recognition (ASR) models need to be optimized for specific hardware before they can be deployed on devices. This can be done by tuning the model's hyperparameters or exploring variations in its architecture. Re-training and re-validating models after making these changes can be a resource-intensive task. This paper presents TODM (Train Once Deploy Many), a new approach to efficiently train many sizes of hardware-friendly on-device ASR models with comparable GPU-hours to that of a single training job. TODM leverages insights from prior work on Supernet, where Recurrent Neural Network Transducer (RNN-T) models share weights within a Supernet. It reduces layer sizes and widths of the Supernet to obtain subnetworks, making them smaller models suitable for all hardware types. We introduce a novel combination of three techniques to improve the outcomes of the TODM Supernet: adaptive dropouts, an in-place Alpha-divergence knowledge distillation, and the use of the ScaledAdam optimizer. We validate our approach by comparing Supernet-trained versus individually tuned Multi-Head State Space Model (MH-SSM) RNN-T using LibriSpeech. Results demonstrate that our TODM Supernet either matches or surpasses the performance of manually tuned models by up to a relative 3% in word error rate (WER), while efficiently keeping the cost of training many models at a small constant. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 389,860 |
2204.01643 | On Convergence Lemma and Convergence Stability for Piecewise Analytic
Functions | In this work, a convergence lemma for function $f$ being finite compositions of analytic mappings and the maximum operator is proved. The lemma shows that the set of $\delta$-stationary points near an isolated local minimum point $x^*$ is shrinking to $x^*$ as $\delta\to 0$. It is a natural extension of the version for strongly convex $C^1$ functions. However, the correctness of the lemma is subtle. Analytic mappings are necessary for the lemma in the sense that replacing it with differentiable or $C^\infty$ mappings makes the lemma false. The proof is based on stratification theorems of semi-analytic sets by {\L}ojasiewicz. An extension of this proof presents a geometric characterization of the set of stationary points of $f$. Finally, a notion of stability on stationary points, called convergence stability, is proposed. It asks, under small numerical errors, whether a reasonable convergent optimization method started near a stationary point should eventually converge to the same stationary point. The concept of convergence stability becomes nontrivial qualitatively only when the objective function is both nonsmooth and nonconvex. Via the convergence lemma, an intuitive equivalent condition for convergence stability of $f$ is proved. These results together provide a new geometric perspective to study the problem of "where-to-converge" in nonsmooth nonconvex optimization. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 289,669 |
2403.01438 | Privacy-Preserving Collaborative Split Learning Framework for Smart Grid
Load Forecasting | Accurate load forecasting is crucial for energy management, infrastructure planning, and demand-supply balancing. Smart meter data availability has led to the demand for sensor-based load forecasting. Conventional ML allows training a single global model using data from multiple smart meters requiring data transfer to a central server, raising concerns for network requirements, privacy, and security. We propose a split learning-based framework for load forecasting to alleviate this issue. We split a deep neural network model into two parts, one for each Grid Station (GS) responsible for an entire neighbourhood's smart meters and the other for the Service Provider (SP). Instead of sharing their data, client smart meters use their respective GSs' model split for forward pass and only share their activations with the GS. Under this framework, each GS is responsible for training a personalized model split for their respective neighbourhoods, whereas the SP can train a single global or personalized model for each GS. Experiments show that the proposed models match or exceed a centrally trained model's performance and generalize well. Privacy is analyzed by assessing information leakage between data and shared activations of the GS model split. Additionally, differential privacy enhances local data privacy while examining its impact on performance. A transformer model is used as our base learner. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 434,417 |
2403.09996 | MEDPNet: Achieving High-Precision Adaptive Registration for Complex Die
Castings | Due to their complex spatial structure and diverse geometric features, achieving high-precision and robust point cloud registration for complex Die Castings has been a significant challenge in the die-casting industry. Existing point cloud registration methods primarily optimize network models using well-established high-quality datasets, often neglecting practical application in real scenarios. To address this gap, this paper proposes a high-precision adaptive registration method called Multiscale Efficient Deep Closest Point (MEDPNet) and introduces a die-casting point cloud dataset, DieCastCloud, specifically designed to tackle the challenges of point cloud registration in the die-casting industry. The MEDPNet method performs coarse die-casting point cloud data registration using the Efficient-DCP method, followed by precision registration using the Multiscale feature fusion dual-channel registration (MDR) method. We enhance the modeling capability and computational efficiency of the model by replacing the attention mechanism of the Transformer in DCP with Efficient Attention and implementing a collaborative scale mechanism through the combination of serial and parallel blocks. Additionally, we propose the MDR method, which utilizes multilayer perceptrons (MLP), Normal Distributions Transform (NDT), and Iterative Closest Point (ICP) to achieve learnable adaptive fusion, enabling high-precision, scalable, and noise-resistant global point cloud registration. Our proposed method demonstrates excellent performance compared to state-of-the-art geometric and learning-based registration methods when applied to complex die-casting point cloud data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 437,991 |
2402.15213 | Statistical Agnostic Regression: a machine learning method to validate
regression models | Regression analysis is a central topic in statistical modeling, aimed at estimating the relationships between a dependent variable, commonly referred to as the response variable, and one or more independent variables, i.e., explanatory variables. Linear regression is by far the most popular method for performing this task in various fields of research, such as data integration and predictive modeling when combining information from multiple sources. Classical methods for solving linear regression problems, such as Ordinary Least Squares (OLS), Ridge, or Lasso regressions, often form the foundation for more advanced machine learning (ML) techniques, which have been successfully applied, though without a formal definition of statistical significance. At most, permutation tests or analyses based on empirical measures (e.g., residuals or accuracy) have been conducted, leveraging the greater sensitivity of ML estimations for detection. In this paper, we introduce Statistical Agnostic Regression (SAR) for evaluating the statistical significance of ML-based linear regression models. This is achieved by analyzing concentration inequalities of the actual risk (expected loss) and considering the worst-case scenario. To this end, we define a threshold that ensures there is sufficient evidence, with a probability of at least $1-\eta$, to conclude the existence of a linear relationship in the population between the explanatory (feature) and the response (label) variables. Simulations demonstrate the ability of the proposed agnostic (non-parametric) test to provide an analysis of variance similar to the classical multivariate $F$-test for the slope parameter, without relying on the underlying assumptions of classical methods. Moreover, the residuals computed from this method represent a trade-off between those obtained from ML approaches and the classical OLS. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 432,043 |
2403.05765 | Physics-informed Neural Motion Planning on Constraint Manifolds | Constrained Motion Planning (CMP) aims to find a collision-free path between the given start and goal configurations on the kinematic constraint manifolds. These problems appear in various scenarios ranging from object manipulation to legged-robot locomotion. However, the zero-volume nature of manifolds makes the CMP problem challenging, and the state-of-the-art methods still take several seconds to find a path and require a computationally expensive path dataset for imitation learning. Recently, physics-informed motion planning methods have emerged that directly solve the Eikonal equation through neural networks for motion planning and do not require expert demonstrations for learning. Inspired by these approaches, we propose the first physics-informed CMP framework that solves the Eikonal equation on the constraint manifolds and trains a neural function for CMP without expert data. Our results show that the proposed approach efficiently solves various CMP problems in both simulation and the real world, including object manipulation under orientation constraints and door opening with a high-dimensional 6-DOF robot manipulator. In these complex settings, our method exhibits high success rates and finds paths in sub-seconds, which is many times faster than the state-of-the-art CMP methods. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 436,142 |
2106.03500 | Density estimation on smooth manifolds with normalizing flows | We present a framework for learning probability distributions on topologically non-trivial manifolds, utilizing normalizing flows. Current methods focus on manifolds that are homeomorphic to Euclidean space, enforce strong structural priors on the learned models or use operations that do not easily scale to high dimensions. In contrast, our method learns distributions on a data manifold by "gluing" together multiple local models, thus defining an open cover of the data manifold. We demonstrate the efficiency of our approach on synthetic data of known manifolds, as well as higher dimensional manifolds of unknown topology, where our method exhibits better sample efficiency and competitive or superior performance against baselines in a number of tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 239,343 |
1811.10210 | City-Scale Road Audit System using Deep Learning | Road networks in cities are massive and are a critical component of mobility. Fast response to defects, which can occur not only due to regular wear and tear but also because of extreme events like storms, is essential. Hence there is a need for an automated system that is quick, scalable and cost-effective for gathering information about defects. We propose a system for city-scale road audit, using some of the most recent developments in deep learning and semantic segmentation. For building and benchmarking the system, we curated a dataset which has annotations required for road defects. However, many of the labels required for road audit have high ambiguity, which we overcome by proposing a label hierarchy. We also propose a multi-step deep learning model that segments the road, subdivides the road further into defects, tags the frame for each defect and finally localizes the defects on a map gathered using GPS. We analyze and evaluate the models on image tagging as well as segmentation at different levels of the label hierarchy. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 114,441 |
2010.14618 | A computationally and cognitively plausible model of supervised and
unsupervised learning | Both empirical and mathematical demonstrations of the importance of chance-corrected measures are discussed, and a new model of learning is proposed based on empirical psychological results on association learning. Two forms of this model are developed, the Informatron as a chance-corrected Perceptron, and AdaBook as a chance-corrected AdaBoost procedure. Computational results presented show chance correction facilitates learning. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 203,508 |
2106.07137 | Why Can You Lay Off Heads? Investigating How BERT Heads Transfer | The huge size of the widely used BERT family models has led to recent efforts on model distillation. The main goal of distillation is to create a task-agnostic pre-trained model that can be fine-tuned on downstream tasks without fine-tuning its full-sized version. Despite the progress of distillation, to what degree and for what reason a task-agnostic model can be created from distillation has not been well studied. The mechanisms behind transfer learning of those BERT models are not well investigated either. Therefore, this work focuses on analyzing the acceptable deduction during distillation to guide the future distillation procedure. Specifically, we first inspect the prunability of the Transformer heads in RoBERTa and ALBERT using their head importance estimation proposed by Michel et al. (2019), and then check the coherence of the important heads between the pre-trained task and downstream tasks. Hence, the acceptable deduction of performance on the pre-trained task when distilling a model can be derived from the results, and we further compare the behavior of the pruned model before and after fine-tuning. Our studies provide guidance for future directions about BERT family model distillation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 240,787 |
1902.09302 | Configuration Models of Random Hypergraphs | Many empirical networks are intrinsically polyadic, with interactions occurring within groups of agents of arbitrary size. There are, however, few flexible null models that can support statistical inference for such polyadic networks. We define a class of null random hypergraphs that hold constant both the node degree and edge dimension sequences, generalizing the classical dyadic configuration model. We provide a Markov Chain Monte Carlo scheme for sampling from these models, and discuss connections and distinctions between our proposed models and previous approaches. We then illustrate these models through a triplet of applications. We start with two classical network topics -- triadic clustering and degree-assortativity. In each, we emphasize the importance of randomizing over hypergraph space rather than projected graph space, showing that this choice can dramatically alter statistical inference and study findings. We then define and study the edge intersection profile of a hypergraph as a measure of higher-order correlation between edges, and derive asymptotic approximations under the stub-labeled null. Our experiments emphasize the ability of explicit, statistically-grounded polyadic modeling to significantly enhance the toolbox of network data science. We close with suggestions for multiple avenues of future work. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 122,388 |
2002.00082 | Regret Minimization in Partially Observable Linear Quadratic Control | We study the problem of regret minimization in partially observable linear quadratic control systems when the model dynamics are unknown a priori. We propose ExpCommit, an explore-then-commit algorithm that learns the model Markov parameters and then follows the principle of optimism in the face of uncertainty to design a controller. We propose a novel way to decompose the regret and provide an end-to-end sublinear regret upper bound for partially observable linear quadratic control. Finally, we provide stability guarantees and establish a regret upper bound of $\tilde{\mathcal{O}}(T^{2/3})$ for ExpCommit, where $T$ is the time horizon of the problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 162,246 |
1508.04022 | Joint Source-Channel Coding for Broadcast Channel with Cooperating
Receivers | It is known that, as opposed to point-to-point channel, separate source and channel coding is not optimal in general for sending correlated sources over multiuser channels. In some works joint source-channel coding has been investigated for some certain multiuser channels; i.g., multiple access channel (MAC) and broadcast channel (BC). In this paper, we obtain a sufficient condition for transmitting arbitrarily correlated sources over a discrete memoryless BC with cooperating receivers, where the receivers are allowed to exchange messages via a pair of noisy cooperative links. It is seen that our results is a general form of previous ones and includes them as its special cases. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 46,073 |
1907.07783 | Patient-specific Conditional Joint Models of Shape, Image Features and
Clinical Indicators | We propose and demonstrate a joint model of anatomical shapes, image features and clinical indicators for statistical shape modeling and medical image analysis. The key idea is to employ a copula model to separate the joint dependency structure from the marginal distributions of variables of interest. This separation provides flexibility on the assumptions made during the modeling process. The proposed method can handle binary, discrete, ordinal and continuous variables. We demonstrate a simple and efficient way to include binary, discrete and ordinal variables into the modeling. We build Bayesian conditional models based on observed partial clinical indicators, features or shape based on Gaussian processes capturing the dependency structure. We apply the proposed method on a stroke dataset to jointly model the shape of the lateral ventricles, the spatial distribution of the white matter hyperintensity associated with periventricular white matter disease, and clinical indicators. The proposed method yields interpretable joint models for data exploration and patient-specific statistical shape models for medical image analysis. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 138,966 |
2402.18951 | Percept, Chat, and then Adapt: Multimodal Knowledge Transfer of
Foundation Models for Open-World Video Recognition | Open-world video recognition is challenging since traditional networks are not generalized well on complex environment variations. Alternatively, foundation models with rich knowledge have recently shown their generalization power. However, how to apply such knowledge has not been fully explored for open-world video recognition. To this end, we propose a generic knowledge transfer pipeline, which progressively exploits and integrates external multimodal knowledge from foundation models to boost open-world video recognition. We name it PCA, based on three stages of Percept, Chat, and Adapt. First, we perform Percept process to reduce the video domain gap and obtain external visual knowledge. Second, we generate rich linguistic semantics as external textual knowledge in Chat stage. Finally, we blend external multimodal knowledge in Adapt stage, by inserting multimodal knowledge adaptation modules into networks. We conduct extensive experiments on three challenging open-world video benchmarks, i.e., TinyVIRAT, ARID, and QV-Pipe. Our approach achieves state-of-the-art performance on all three datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 433,632 |
1805.04628 | Constrained-CNN losses for weakly supervised segmentation | Weakly-supervised learning based on, e.g., partially labelled images or image-tags, is currently attracting significant attention in CNN segmentation as it can mitigate the need for full and laborious pixel/voxel annotations. Enforcing high-order (global) inequality constraints on the network output (for instance, to constrain the size of the target region) can leverage unlabeled data, guiding the training process with domain-specific knowledge. Inequality constraints are very flexible because they do not assume exact prior knowledge. However, constrained Lagrangian dual optimization has been largely avoided in deep networks, mainly for computational tractability reasons. To the best of our knowledge, the method of [Pathak et al., 2015] is the only prior work that addresses deep CNNs with linear constraints in weakly supervised segmentation. It uses the constraints to synthesize fully-labeled training masks (proposals) from weak labels, mimicking full supervision and facilitating dual optimization. We propose to introduce a differentiable penalty, which enforces inequality constraints directly in the loss function, avoiding expensive Lagrangian dual iterates and proposal generation. From constrained-optimization perspective, our simple penalty-based approach is not optimal as there is no guarantee that the constraints are satisfied. However, surprisingly, it yields substantially better results than the Lagrangian-based constrained CNNs in [Pathak et al., 2015], while reducing the computational demand for training. By annotating only a small fraction of the pixels, the proposed approach can reach a level of segmentation performance that is comparable to full supervision on three separate tasks. While our experiments focused on basic linear constraints such as the target-region size and image tags, our framework can be easily extended to other non-linear constraints. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 97,275 |
1902.00703 | Evaluating MAP-Elites on Constrained Optimization Problems | Constrained optimization problems are often characterized by multiple constraints that, in the practice, must be satisfied with different tolerance levels. While some constraints are hard and as such must be satisfied with zero-tolerance, others may be soft, such that non-zero violations are acceptable. Here, we evaluate the applicability of MAP-Elites to "illuminate" constrained search spaces by mapping them into feature spaces where each feature corresponds to a different constraint. On the one hand, MAP-Elites implicitly preserves diversity, thus allowing a good exploration of the search space. On the other hand, it provides an effective visualization that facilitates a better understanding of how constraint violations correlate with the objective function. We demonstrate the feasibility of this approach on a large set of benchmark problems, in various dimensionalities, and with different algorithmic configurations. As expected, numerical results show that a basic version of MAP-Elites cannot compete on all problems (especially those with equality constraints) with state-of-the-art algorithms that use gradient information or advanced constraint handling techniques. Nevertheless, it has a higher potential at finding constraint violations vs. objectives trade-offs and providing new problem information. As such, it could be used in the future as an effective building-block for designing new constrained optimization algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 120,484 |
1202.6350 | Prime tight frames | We introduce a class of finite tight frames called prime tight frames and prove some of their elementary properties. In particular, we show that any finite tight frame can be written as a union of prime tight frames. We then characterize all prime harmonic tight frames and use this characterization to suggest effective analysis and synthesis computation strategies for such frames. Finally, we describe all prime frames constructed from the spectral tetris method, and, as a byproduct, we obtain a characterization of when the spectral tetris construction works for redundancies below two. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 14,629 |
2303.10753 | Fr\'echet Statistics Based Change Point Detection in Dynamic Social
Networks | This paper proposes a method to detect change points in dynamic social networks using Fr\'echet statistics. We address two main questions: (1) what metric can quantify the distances between graph Laplacians in a dynamic network and enable efficient computation, and (2) how can the Fr\'echet statistics be extended to detect multiple change points while maintaining the significance level of the hypothesis test? Our solution defines a metric space for graph Laplacians using the Log-Euclidean metric, enabling a closed-form formula for Fr\'echet mean and variance. We present a framework for change point detection using Fr\'echet statistics and extend it to multiple change points with binary segmentation. The proposed algorithm uses incremental computation for Fr\'echet mean and variance to improve efficiency and is validated on simulated and two real-world datasets, namely the UCI message dataset and the Enron email dataset. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 352,570 |
2108.00669 | Towards Making Deep Learning-based Vulnerability Detectors Robust | Automatically detecting software vulnerabilities in source code is an important problem that has attracted much attention. In particular, deep learning-based vulnerability detectors, or DL-based detectors, are attractive because they do not need human experts to define features or patterns of vulnerabilities. However, such detectors' robustness is unclear. In this paper, we initiate the study in this aspect by demonstrating that DL-based detectors are not robust against simple code transformations, dubbed attacks in this paper, as these transformations may be leveraged for malicious purposes. As a first step towards making DL-based detectors robust against such attacks, we propose an innovative framework, dubbed ZigZag, which is centered at (i) decoupling feature learning and classifier learning and (ii) using a ZigZag-style strategy to iteratively refine them until they converge to robust features and robust classifiers. Experimental results show that the ZigZag framework can substantially improve the robustness of DL-based detectors. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 248,792 |
2310.18481 | MOSEL: Inference Serving Using Dynamic Modality Selection | Rapid advancements over the years have helped machine learning models reach previously hard-to-achieve goals, sometimes even exceeding human capabilities. However, to attain the desired accuracy, the model sizes and in turn their computational requirements have increased drastically. Thus, serving predictions from these models to meet any target latency and cost requirements of applications remains a key challenge, despite recent work in building inference-serving systems as well as algorithmic approaches that dynamically adapt models based on inputs. In this paper, we introduce a form of dynamism, modality selection, where we adaptively choose modalities from inference inputs while maintaining the model quality. We introduce MOSEL, an automated inference serving system for multi-modal ML models that carefully picks input modalities per request based on user-defined performance and accuracy requirements. MOSEL exploits modality configurations extensively, improving system throughput by 3.6$\times$ with an accuracy guarantee and shortening job completion times by 11$\times$. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 403,564 |
2006.02535 | A Survey on Deep Learning Techniques for Stereo-based Depth Estimation | Estimating depth from RGB images is a long-standing ill-posed problem, which has been explored for decades by the computer vision, graphics, and machine learning communities. Among the existing techniques, stereo matching remains one of the most widely used in the literature due to its strong connection to the human binocular system. Traditionally, stereo-based depth estimation has been addressed through matching hand-crafted features across multiple images. Despite the extensive amount of research, these traditional techniques still suffer in the presence of highly textured areas, large uniform regions, and occlusions. Motivated by their growing success in solving various 2D and 3D vision problems, deep learning for stereo-based depth estimation has attracted growing interest from the community, with more than 150 papers published in this area between 2014 and 2019. This new generation of methods has demonstrated a significant leap in performance, enabling applications such as autonomous driving and augmented reality. In this article, we provide a comprehensive survey of this new and continuously growing field of research, summarize the most commonly used pipelines, and discuss their benefits and limitations. In retrospect of what has been achieved so far, we also conjecture what the future may hold for deep learning-based stereo for depth estimation research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 180,060 |
2408.16655 | Optimal Trace Distance and Fidelity Estimations for Pure Quantum States | Measuring the distinguishability between quantum states is a basic problem in quantum information theory. In this paper, we develop optimal quantum algorithms that estimate both the trace distance and the (square root) fidelity between pure states to within additive error $\varepsilon$ using $\Theta(1/\varepsilon)$ queries to their state-preparation circuits, quadratically improving the long-standing folklore $O(1/\varepsilon^2)$. At the heart of our construction, is an algorithmic tool for quantum square root amplitude estimation, which generalizes the well-known quantum amplitude estimation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 484,408 |
2411.10345 | Comparative Analysis of Machine Learning Approaches for Bone Age
Assessment: A Comprehensive Study on Three Distinct Models | Radiologists and doctors make use of X-ray images of the non-dominant hands of children and infants to assess the possibility of genetic conditions and growth abnormalities. This is done by assessing the difference between the actual extent of growth found using the X-rays and the chronological age of the subject. The assessment was done conventionally using The Greulich Pyle (GP) or Tanner Whitehouse (TW) approach. These approaches require a high level of expertise and may often lead to observer bias. Hence, to automate the process of assessing the X-rays, and to increase its accuracy and efficiency, several machine learning models have been developed. These machine-learning models have several differences in their accuracy and efficiencies, leading to an unclear choice for the suitable model depending on their needs and available resources. Methods: In this study, we have analyzed the 3 most widely used models for the automation of bone age prediction, which are the Xception model, VGG model and CNN model. These models were trained on the preprocessed dataset and the accuracy was measured using the MAE in terms of months for each model. Using this, the comparison between the models was done. Results: The 3 models, Xception, VGG, and CNN models have been tested for accuracy and other relevant factors. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 508,604 |
2406.17517 | Preserving Node Distinctness in Graph Autoencoders via Similarity
Distillation | Graph autoencoders (GAEs), as a kind of generative self-supervised learning approach, have shown great potential in recent years. GAEs typically rely on distance-based criteria, such as mean-square-error (MSE), to reconstruct the input graph. However, relying solely on a single reconstruction criterion may lead to a loss of distinctiveness in the reconstructed graph, causing nodes to collapse into similar representations and resulting in sub-optimal performance. To address this issue, we have developed a simple yet effective strategy to preserve the necessary distinctness in the reconstructed graph. Inspired by the knowledge distillation technique, we found that the dual encoder-decoder architecture of GAEs can be viewed as a teacher-student relationship. Therefore, we propose transferring the knowledge of distinctness from the raw graph to the reconstructed graph, achieved through a simple KL constraint. Specifically, we compute pairwise node similarity scores in the raw graph and reconstructed graph. During the training process, the KL constraint is optimized alongside the reconstruction criterion. We conducted extensive experiments across three types of graph tasks, demonstrating the effectiveness and generality of our strategy. This indicates that the proposed approach can be employed as a plug-and-play method to avoid vague reconstructions and enhance overall performance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 467,601 |
2310.18237 | Generative AI Model for Artistic Style Transfer Using Convolutional
Neural Networks | Artistic style transfer, a captivating application of generative artificial intelligence, involves fusing the content of one image with the artistic style of another to create unique visual compositions. This paper presents a comprehensive overview of a novel technique for style transfer using Convolutional Neural Networks (CNNs). By leveraging deep image representations learned by CNNs, we demonstrate how to separate and manipulate image content and style, enabling the synthesis of high-quality images that combine content and style in a harmonious manner. We describe the methodology, including content and style representations, loss computation, and optimization, and showcase experimental results highlighting the effectiveness and versatility of the approach across different styles and content | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 403,439 |
2102.03952 | Competition Dynamics in the Meme Ecosystem | The creation and sharing of memes is a common modality of online social interactions. The goal of the present work is to better understand the collective dynamics of memes in this accelerating and competitive environment. By taking an ecological perspective and tracking the meme-text from 352 popular memes over the entirety of Reddit, we are able to show that the frequency of memes has scaled almost exactly with the total amount of content created over the past decade. This means that as more data is posted, an equal proportion of memes are posted. One consequence of limited human attention in the face of a growing number of memes is that the diversity of these memes has decreased at the community level, albeit slightly, in the same period. Another consequence is that the average lifespan of a meme has decreased dramatically, which is further evidence of an increase in competition and a decreasing collective attention span. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 218,930 |
2305.19707 | Building Extractive Question Answering System to Support Human-AI Health
Coaching Model for Sleep Domain | Non-communicable diseases (NCDs) are a leading cause of global deaths, necessitating a focus on primary prevention and lifestyle behavior change. Health coaching, coupled with Question Answering (QA) systems, has the potential to transform preventive healthcare. This paper presents a human-Artificial Intelligence (AI) health coaching model incorporating a domain-specific extractive QA system. A sleep-focused dataset, SleepQA, was manually assembled and used to fine-tune domain-specific BERT models. The QA system was evaluated using automatic and human methods. A data-centric framework enhanced the system's performance by improving passage retrieval and question reformulation. Although the system did not outperform the baseline in automatic evaluation, it excelled in the human evaluation of real-world questions. Integration into a Human-AI health coaching model was tested in a pilot Randomized Controlled Trial (RCT). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 369,648 |
2409.09316 | Discrete-time Indirect Adaptive Control for Systems with State-Dependent
Disturbances via Directional Forgetting: Concurrent Learning Approach | An adaptive controller design for cases with disturbances is critical in practical applications for preventing unexpected control performance degradation and instability. Recently, adaptive control systems with relaxed persistent excitation (PE) conditions have been proposed to solve this problem; however, most discussions have focused on continuous-time systems. In this study, we propose a novel adaptive control method for discrete-time systems with disturbances that combines directional forgetting and concurrent learning. The proposed method does not require the PE condition, information on disturbances, unknown parameters, or matching conditions, and it guarantees exponential uniform ultimate unbounded (UUB). It was also theoretically demonstrated that the upper bound of the UUB can be designed based on the forgetting factor, which is a design parameter. Numerical simulation results illustrate the effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 488,267 |
2411.08651 | Estimating unknown parameters in differential equations with a
reinforcement learning based PSO method | Differential equations offer a foundational yet powerful framework for modeling interactions within complex dynamic systems and are widely applied across numerous scientific fields. One common challenge in this area is estimating the unknown parameters of these dynamic relationships. However, traditional numerical optimization methods rely on the selection of initial parameter values, making them prone to local optima. Meanwhile, deep learning and Bayesian methods require training models on specific differential equations, resulting in poor versatility. This paper reformulates the parameter estimation problem of differential equations as an optimization problem by introducing the concept of particles from the particle swarm optimization algorithm. Building on reinforcement learning-based particle swarm optimization (RLLPSO), this paper proposes a novel method, DERLPSO, for estimating unknown parameters of differential equations. We compared its performance on three typical ordinary differential equations with the state-of-the-art methods, including the RLLPSO algorithm, traditional numerical methods, deep learning approaches, and Bayesian methods. The experimental results demonstrate that our DERLPSO consistently outperforms other methods in terms of performance, achieving an average Mean Square Error of 1.13e-05, which reduces the error by approximately 4 orders of magnitude compared to other methods. Apart from ordinary differential equations, our DERLPSO also show great promise for estimating unknown parameters of partial differential equations. The DERLPSO method proposed in this paper has high accuracy, is independent of initial parameter values, and possesses strong versatility and stability. This work provides new insights into unknown parameter estimation for differential equations. 
| false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 507,966 |
2202.04820 | L0Learn: A Scalable Package for Sparse Learning using L0 Regularization | We present L0Learn: an open-source package for sparse linear regression and classification using $\ell_0$ regularization. L0Learn implements scalable, approximate algorithms, based on coordinate descent and local combinatorial optimization. The package is built using C++ and has user-friendly R and Python interfaces. L0Learn can address problems with millions of features, achieving competitive run times and statistical performance with state-of-the-art sparse learning packages. L0Learn is available on both CRAN and GitHub (https://cran.r-project.org/package=L0Learn and https://github.com/hazimehh/L0Learn). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 279,677 |
2311.03967 | CeCNN: Copula-enhanced convolutional neural networks in joint prediction
of refraction error and axial length based on ultra-widefield fundus images | The ultra-widefield (UWF) fundus image is an attractive 3D biomarker in AI-aided myopia screening because it provides much richer myopia-related information. Though axial length (AL) has been acknowledged to be highly related to the two key targets of myopia screening, Spherical Equivalence (SE) measurement and high myopia diagnosis, its prediction based on the UWF fundus image is rarely considered. To save the high expense and time costs of measuring SE and AL, we propose the Copula-enhanced Convolutional Neural Network (CeCNN), a one-stop UWF-based ophthalmic AI framework to jointly predict SE, AL, and myopia status. The CeCNN formulates a multiresponse regression that relates multiple dependent discrete-continuous responses and the image covariate, where the nonlinearity of the association is modeled by a backbone CNN. To thoroughly describe the dependence structure among the responses, we model and incorporate the conditional dependence among responses in a CNN through a new copula-likelihood loss. We provide statistical interpretations of the conditional dependence among responses, and reveal that such dependence is beyond the dependence explained by the image covariate. We heuristically justify that the proposed loss can enhance the estimation efficiency of the CNN weights. We apply the CeCNN to the UWF dataset collected by us and demonstrate that the CeCNN sharply enhances the predictive capability of various backbone CNNs. Our study evidences the ophthalmology view that besides SE, AL is also an important measure to myopia. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 406,051 |
2210.16537 | Phonemic Representation and Transcription for Speech to Text
Applications for Under-resourced Indigenous African Languages: The Case of
Kiswahili | Building automatic speech recognition (ASR) systems is a challenging task, especially for under-resourced languages that need to construct corpora nearly from scratch and lack sufficient training data. It has emerged that several African indigenous languages, including Kiswahili, are technologically under-resourced. ASR systems are crucial, particularly for the hearing-impaired persons who can benefit from having transcripts in their native languages. However, the absence of transcribed speech datasets has complicated efforts to develop ASR models for these indigenous languages. This paper explores the transcription process and the development of a Kiswahili speech corpus, which includes both read-out texts and spontaneous speech data from native Kiswahili speakers. The study also discusses the vowels and consonants in Kiswahili and provides an updated Kiswahili phoneme dictionary for the ASR model that was created using the CMU Sphinx speech recognition toolbox, an open-source speech recognition toolkit. The ASR model was trained using an extended phonetic set that yielded a WER and SER of 18.87% and 49.5%, respectively, an improved performance than previous similar research for under-resourced languages. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 327,371 |
2410.21187 | A cross-platform analysis of polarization and echo chambers in climate
change discussions | With the intensification of climate change discussion, social media has become prominent in disseminating reliable and unreliable content. In this study, we present a cross-platform analysis on Youtube and Twitter, and examine the polarization and echo chambers in social media discussions in four datasets related to climate change: COP27, IPCC, Climate Refugees, and Do\~{n}ana. We have identified communities of users spreading misinformation on Twitter, although they remain relatively isolated from the rest of the network. The analysis by interaction type reveals that climate change sceptics use mentions to draw the attention of other communities. The YouTube posts referenced on Twitter reveal a strong correlation in the community organisation of social media, suggesting a platform alignment. Moreover, we report the presence of echo chambers in YouTube post-sharing related to mainstream and sceptical content. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 503,122 |
2002.12096 | Action Quality Assessment using Siamese Network-Based Deep Metric
Learning | Automated vision-based score estimation models can be used as an alternate opinion to avoid judgment bias. In the past works the score estimation models were learned by regressing the video representations to the ground truth score provided by the judges. However such regression-based solutions lack interpretability in terms of giving reasons for the awarded score. One solution to make the scores more explicable is to compare the given action video with a reference video. This would capture the temporal variations w.r.t. the reference video and map those variations to the final score. In this work, we propose a new action scoring system as a two-phase system: (1) A Deep Metric Learning Module that learns similarity between any two action videos based on their ground truth scores given by the judges; (2) A Score Estimation Module that uses the first module to find the resemblance of a video to a reference video in order to give the assessment score. The proposed scoring model has been tested for Olympics Diving and Gymnastic vaults and the model outperforms the existing state-of-the-art scoring models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 165,926 |
2404.05675 | Normalizing Flows on the Product Space of SO(3) Manifolds for Probabilistic Human Pose Modeling | Normalizing flows have proven their efficacy for density estimation in Euclidean space, but their application to rotational representations, crucial in various domains such as robotics or human pose modeling, remains underexplored. Probabilistic models of the human pose can benefit from approaches that rigorously consider the rotational nature of human joints. For this purpose, we introduce HuProSO3, a normalizing flow model that operates on a high-dimensional product space of SO(3) manifolds, modeling the joint distribution for human joints with three degrees of freedom. HuProSO3's advantage over state-of-the-art approaches is demonstrated through its superior modeling accuracy in three different applications and its capability to evaluate the exact likelihood. This work not only addresses the technical challenge of learning densities on SO(3) manifolds, but it also has broader implications for domains where the probabilistic regression of correlated 3D rotations is of importance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 445,170
2404.16292 | One Noise to Rule Them All: Learning a Unified Model of Spatially-Varying Noise Patterns | Procedural noise is a fundamental component of computer graphics pipelines, offering a flexible way to generate textures that exhibit "natural" random variation. Many different types of noise exist, each produced by a separate algorithm. In this paper, we present a single generative model which can learn to generate multiple types of noise as well as blend between them. In addition, it is capable of producing spatially-varying noise blends despite not having access to such data for training. These features are enabled by training a denoising diffusion model using a novel combination of data augmentation and network conditioning techniques. Like procedural noise generators, the model's behavior is controllable via interpretable parameters and a source of randomness. We use our model to produce a variety of visually compelling noise textures. We also present an application of our model to improving inverse procedural material design; using our model in place of fixed-type noise nodes in a procedural material graph results in higher-fidelity material reconstructions without needing to know the type of noise in advance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 449,426
2308.04585 | Kernel Single Proxy Control for Deterministic Confounding | We consider the problem of causal effect estimation with an unobserved confounder, where we observe a proxy variable that is associated with the confounder. Although Proxy causal learning (PCL) uses two proxy variables to recover the true causal effect, we show that a single proxy variable is sufficient for causal estimation if the outcome is generated deterministically, generalizing Control Outcome Calibration Approach (COCA). We propose two kernel-based methods for this setting: the first based on the two-stage regression approach, and the second based on a maximum moment restriction approach. We prove that both approaches can consistently estimate the causal effect, and we empirically demonstrate that we can successfully recover the causal effect on challenging synthetic benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 384,462 |
1011.1972 | Assisted Entanglement Distillation | Motivated by the problem of designing quantum repeaters, we study entanglement distillation between two parties, Alice and Bob, starting from a mixed state and with the help of "repeater" stations. To treat the case of a single repeater, we extend the notion of entanglement of assistance to arbitrary mixed tripartite states and exhibit a protocol, based on a random coding strategy, for extracting pure entanglement. The rates achievable by this protocol formally resemble those achievable if the repeater station could merge its state to one of Alice and Bob even when such merging is impossible. This rate is provably better than the hashing bound for sufficiently pure tripartite states. We also compare our assisted distillation protocol to a hierarchical strategy consisting of entanglement distillation followed by entanglement swapping. We demonstrate by the use of a simple example that our random measurement strategy outperforms hierarchical distillation strategies when the individual helper stations' states fail to individually factorize into portions associated specifically with Alice and Bob. Finally, we use these results to find achievable rates for the more general scenario, where many spatially separated repeaters help two recipients distill entanglement. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 8,177 |
2311.04047 | Extracting human interpretable structure-property relationships in chemistry using XAI and large language models | Explainable Artificial Intelligence (XAI) is an emerging field in AI that aims to address the opaque nature of machine learning models. Furthermore, it has been shown that XAI can be used to extract input-output relationships, making them a useful tool in chemistry to understand structure-property relationships. However, one of the main limitations of XAI methods is that they are developed for technically oriented users. We propose the XpertAI framework that integrates XAI methods with large language models (LLMs) accessing scientific literature to generate accessible natural language explanations of raw chemical data automatically. We conducted 5 case studies to evaluate the performance of XpertAI. Our results show that XpertAI combines the strengths of LLMs and XAI tools in generating specific, scientific, and interpretable explanations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 406,080
2305.04541 | High Quality Large-Scale 3-D Urban Mapping with Multi-Master TomoSAR | Multi-baseline interferometric synthetic aperture radar (InSAR) techniques are effective approaches for retrieving the 3-D information of urban areas. In order to obtain a plausible reconstruction, it is necessary to use large-stack interferograms. Hence, these methods are commonly not appropriate for large-scale 3-D urban mapping using TanDEM-X data, where only a few acquisitions are available on average for each city. This work proposes a new SAR tomographic processing framework to work with those extremely small stacks, which integrates non-local filtering into SAR tomography inversion. The applicability of the algorithm is demonstrated using a TanDEM-X multi-baseline stack with 5 bistatic interferograms over the whole city of Munich, Germany. A systematic comparison of our result with airborne LiDAR data shows that the relative height accuracy for two thirds of the buildings is within two meters, which outperforms the TanDEM-X raw DEM. The promising performance of the proposed algorithm is a first step towards high-quality large-scale 3-D urban mapping. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 362,812
2409.16904 | Discriminative Anchor Learning for Efficient Multi-view Clustering | Multi-view clustering aims to study the complementary information across views and discover the underlying structure. To reduce the relatively high computational cost of existing approaches, anchor-based works have been presented recently. Even with acceptable clustering performance, these methods tend to map the original representation from multiple views into a fixed shared graph based on the original dataset. However, most studies ignore the discriminative property of the learned anchors, which ruins the representation capability of the built model. Moreover, the complementary information among anchors across views is not ensured by simply learning the shared anchor graph without considering the quality of view-specific anchors. In this paper, we propose discriminative anchor learning for multi-view clustering (DALMC) to handle the above issues. We learn discriminative view-specific feature representations according to the original dataset and build anchors from different views based on these representations, which increases the quality of the shared anchor graph. The discriminative feature learning and consensus anchor graph construction are integrated into a unified framework, where each improves the other through iterative refinement. The optimal anchors from multiple views and the consensus anchor graph are learned with orthogonal constraints. We give an iterative algorithm to deal with the formulated problem. Extensive experiments on different datasets show the effectiveness and efficiency of our method compared with other methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 491,567
2402.07691 | Evaluation of a Smart Mobile Robotic System for Industrial Plant Inspection and Supervision | Automated and autonomous industrial inspection is a longstanding research field, driven by the necessity to enhance safety and efficiency within industrial settings. In addressing this need, we introduce an autonomously navigating robotic system designed for comprehensive plant inspection. This innovative system comprises a robotic platform equipped with a diverse array of sensors integrated to facilitate the detection of various process and infrastructure parameters. These sensors encompass optical (LiDAR, Stereo, UV/IR/RGB cameras), olfactory (electronic nose), and acoustic (microphone array) capabilities, enabling the identification of factors such as methane leaks, flow rates, and infrastructural anomalies. The proposed system underwent individual evaluation at a wastewater treatment site within a chemical plant, providing a practical and challenging environment for testing. The evaluation process encompassed key aspects such as object detection, 3D localization, and path planning. Furthermore, specific evaluations were conducted for optical methane leak detection and localization, as well as acoustic assessments focusing on pump equipment and gas leak localization. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 428,813