id (string, 9-16 chars) | title (string, 4-278 chars) | abstract (string, 3-4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0-541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2101.06459 | Robustness to Augmentations as a Generalization metric | Generalization is the ability of a model to predict on unseen domains and is a fundamental property in machine learning. Several generalization bounds, both theoretical and empirical, have been proposed, but they do not provide tight bounds. In this work, we propose a simple yet effective method to predict the generalization performance of a model, based on the concept that models that are robust to augmentations are more generalizable than those that are not. We experiment with several augmentations and compositions of augmentations to check the generalization capacity of a model. We also provide a detailed motivation for the proposed method. The proposed generalization metric is calculated from the change in the output of the model after augmenting the input. The proposed method was the first runner-up solution for the NeurIPS competition on Predicting Generalization in Deep Learning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 215,724 |
2407.06438 | SOLO: A Single Transformer for Scalable Vision-Language Modeling | We present SOLO, a single transformer for Scalable visiOn-Language mOdeling. Current large vision-language models (LVLMs) such as LLaVA mostly employ heterogeneous architectures that connect pre-trained visual encoders with large language models (LLMs) to facilitate visual recognition and complex reasoning. Although these models achieve remarkable performance with relatively lightweight training, we identify four primary scalability limitations: (1) The visual capacity is constrained by pre-trained visual encoders, which are typically an order of magnitude smaller than LLMs. (2) The heterogeneous architecture complicates the use of established hardware and software infrastructure. (3) The study of scaling laws on such an architecture must consider three separate components -- visual encoder, connector, and LLM -- which complicates the analysis. (4) The use of existing visual encoders typically requires following a pre-defined specification for image input pre-processing, for example, reshaping inputs to fixed-resolution square images, which presents difficulties in processing and training on high-resolution images or those with unusual aspect ratios. A unified single-transformer architecture, like SOLO, effectively addresses these scalability concerns in LVLMs; however, its limited adoption in the modern context likely stems from the absence of reliable training recipes that balance both modalities and ensure stable training for billion-scale models. In this paper, we introduce the first open-source training recipe for developing SOLO, an open-source 7B LVLM, using moderate academic resources. The training recipe involves initializing from LLMs, sequential pre-training on ImageNet and web-scale data, and instruction fine-tuning on our curated high-quality datasets. In extensive evaluation, SOLO demonstrates performance comparable to LLaVA-v1.5-7B, particularly excelling in visual mathematical reasoning. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 471,391 |
2103.10571 | Generic Perceptual Loss for Modeling Structured Output Dependencies | The perceptual loss has been widely used as an effective loss term in image synthesis tasks, including image super-resolution and style transfer. It was believed that its success lies in the high-level perceptual feature representations extracted from CNNs pretrained on a large set of images. Here we reveal that what matters is the network structure, not the trained weights. Without any learning, the structure of a deep network is sufficient to capture the dependencies between multiple levels of variable statistics using multiple layers of CNNs. This insight removes the requirements of pre-training and of a particular network structure (commonly, VGG) that were previously assumed for the perceptual loss, thus enabling a significantly wider range of applications. To this end, we demonstrate that a randomly-weighted deep CNN can be used to model the structured dependencies of outputs. On a few dense per-pixel prediction tasks such as semantic segmentation, depth estimation, and instance segmentation, we show improved results using the extended randomized perceptual loss, compared to baselines using a pixel-wise loss alone. We hope that this simple, extended perceptual loss may serve as a generic structured-output loss applicable to most structured output learning tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,496 |
2110.07111 | A Novel Traffic Simulation Framework for Testing Autonomous Vehicles Using SUMO and CARLA | Traffic simulation is an efficient and cost-effective way to test Autonomous Vehicles (AVs) in a complex and dynamic environment. Numerous studies have been conducted for AV evaluation using traffic simulation over the past decades. However, the current simulation environments fall behind on two fronts -- the background vehicles (BVs) fail to simulate naturalistic driving behavior and the existing environments do not test the entire pipeline in a modular fashion. This study aims to propose a simulation framework that creates a complex and naturalistic traffic environment. Specifically, we combine a modified version of the Simulation of Urban MObility (SUMO) simulator with the Cars Learning to Act (CARLA) simulator to generate a simulation environment that could emulate the complexities of the external environment while providing realistic sensor outputs to the AV pipeline. In past research, we created an open-source Python package called SUMO-Gym which generates a realistic road network and naturalistic traffic through SUMO and combines that with OpenAI Gym to provide ease of use for the end user. We propose to extend our developed software by adding CARLA, which in turn will enrich the perception of the ego vehicle by providing realistic sensor outputs of the AV's surrounding environment. Using the proposed framework, an AV's perception, planning, and control can be tested in a complex and realistic driving environment. The performance of the proposed framework in output generation and AV evaluation is demonstrated using several case studies. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 260,864 |
2410.20679 | MCI-GRU: Stock Prediction Model Based on Multi-Head Cross-Attention and Improved GRU | As financial markets grow increasingly complex in the big data era, accurate stock prediction has become more critical. Traditional time series models, such as GRUs, have been widely used but often struggle to capture the intricate nonlinear dynamics of markets, particularly in the flexible selection and effective utilization of key historical information. Recently, methods like Graph Neural Networks and Reinforcement Learning have shown promise in stock prediction but require high data quality and quantity, and they tend to exhibit instability when dealing with data sparsity and noise. Moreover, the training and inference processes for these models are typically complex and computationally expensive, limiting their broad deployment in practical applications. Existing approaches also generally struggle to capture unobservable latent market states effectively, such as market sentiment and expectations, microstructural factors, and participant behavior patterns, leading to an inadequate understanding of market dynamics and subsequently impacting prediction accuracy. To address these challenges, this paper proposes a stock prediction model, MCI-GRU, based on a multi-head cross-attention mechanism and an improved GRU. First, we enhance the GRU model by replacing the reset gate with an attention mechanism, thereby increasing the model's flexibility in selecting and utilizing historical information. Second, we design a multi-head cross-attention mechanism for learning unobservable latent market state representations, which are further enriched through interactions with both temporal features and cross-sectional features. Finally, extensive experiments on four main stock markets show that the proposed method outperforms SOTA techniques across multiple metrics. Additionally, its successful application in real-world fund management operations confirms its effectiveness and practicality. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 502,913 |
1405.4927 | Reverse Engineering Socialbot Infiltration Strategies in Twitter | Data extracted from social networks like Twitter are increasingly being used to build applications and services that mine and summarize public reactions to events, such as traffic monitoring platforms, identification of epidemic outbreaks, and public perception about people and brands. However, such services are vulnerable to attacks from socialbots -- automated accounts that mimic real users -- seeking to tamper with statistics by posting messages generated automatically and interacting with legitimate users. Potentially, if created at large scale, socialbots could be used to bias or even invalidate many existing services, by infiltrating the social networks and acquiring the trust of other users over time. This study aims at understanding the infiltration strategies of socialbots in the Twitter microblogging platform. To this end, we create 120 socialbot accounts with different characteristics and strategies (e.g., gender specified in the profile, how active they are, the method used to generate their tweets, and the group of users they interact with), and investigate the extent to which these bots are able to infiltrate the Twitter social network. Our results show that even socialbots employing simple automated mechanisms are able to successfully infiltrate the network. Additionally, using a $2^k$ factorial design, we quantify the infiltration effectiveness of different bot strategies. Our analysis unveils findings that are key for the design of detection and countermeasure approaches. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 33,220 |
2104.12278 | Causal Learning for Socially Responsible AI | There have been increasing concerns about Artificial Intelligence (AI) due to its unfathomable potential power. To make AI address ethical challenges and shun undesirable outcomes, researchers have proposed developing socially responsible AI (SRAI). One of these approaches is causal learning (CL). We survey state-of-the-art methods of CL for SRAI. We begin by examining the seven CL tools that enhance the social responsibility of AI, then review how existing works have succeeded in using these tools to tackle issues in developing SRAI, such as fairness. The goal of this survey is to bring to the forefront the potential and promise of CL for SRAI. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 232,161 |
1406.6281 | Experimental Investigation of Control Updating Period Monitoring In Industrial PLC-based Fast MPC: Application to The Constrained Control of a Cryogenic Refrigerator | In this paper, a complete industrial validation of a recently published scheme for on-line adaptation of the control updating period in Model Predictive Control is proposed. The industrial process that serves in the validation is a cryogenic refrigerator used to cool the superconductors involved in particle accelerators or experimental nuclear reactors. Two recently predicted features are validated: the first states that it is sometimes better to use a less efficient (per-iteration) optimizer if the lack of efficiency is over-compensated by an increase in the control updating frequency. The second is that, for a given solver, it is worth monitoring the control updating period based on the on-line measured behavior of the cost function. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 34,106 |
2107.13576 | Social Processes: Self-Supervised Meta-Learning over Conversational Groups for Forecasting Nonverbal Social Cues | Free-standing social conversations constitute a yet underexplored setting for human behavior forecasting. While the task of predicting pedestrian trajectories has received much recent attention, an intrinsic difference between these settings is how groups form and disband. Evidence from social psychology suggests that group members in a conversation explicitly self-organize to sustain the interaction by adapting to one another's behaviors. Crucially, the same individual is unlikely to adapt similarly across different groups; contextual factors such as perceived relationships, attraction, rapport, etc., influence the entire spectrum of participants' behaviors. A question arises: how can we jointly forecast the mutually dependent futures of conversation partners by modeling the dynamics unique to every group? In this paper, we propose the Social Process (SP) models, taking a novel meta-learning and stochastic perspective of group dynamics. Training group-specific forecasting models hinders generalization to unseen groups and is challenging given limited conversation data. In contrast, our SP models treat interaction sequences from a single group as a meta-dataset: we condition forecasts for a sequence from a given group on other observed-future sequence pairs from the same group. In this way, an SP model learns to adapt its forecasts to the unique dynamics of the interacting partners, generalizing to unseen groups in a data-efficient manner. Additionally, we first rethink the task formulation itself, motivating task requirements from social science literature that prior formulations have overlooked. For our formulation of Social Cue Forecasting, we evaluate the empirical performance of our SP models against both non-meta-learning and meta-learning approaches with similar assumptions. The SP models yield improved performance on synthetic and real-world behavior datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 248,234 |
2303.11028 | MAQA: A Quantum Framework for Supervised Learning | Quantum Machine Learning has the potential to improve traditional machine learning methods and overcome some of the main limitations imposed by the classical computing paradigm. However, the practical advantages of using quantum resources to solve pattern recognition tasks are still to be demonstrated. This work proposes a universal, efficient framework that can reproduce the output of a plethora of classical supervised machine learning algorithms exploiting quantum computation's advantages. The proposed framework is named Multiple Aggregator Quantum Algorithm (MAQA) due to its capability to combine multiple and diverse functions to solve typical supervised learning problems. In its general formulation, MAQA can potentially be adopted as the quantum counterpart of all those models falling into the scheme of aggregation of multiple functions, such as ensemble algorithms and neural networks. From a computational point of view, the proposed framework allows generating an exponentially large number of different transformations of the input at the cost of increasing the depth of the corresponding quantum circuit linearly. Thus, MAQA produces a model with substantial descriptive power to broaden the horizon of possible applications of quantum machine learning with a computational advantage over classical methods. As a second meaningful addition, we discuss the adoption of the proposed framework as a hybrid quantum-classical and as a fault-tolerant quantum algorithm. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 352,677 |
1711.11487 | FRAPpuccino: Fault-detection through Runtime Analysis of Provenance | We present FRAPpuccino (or FRAP), a provenance-based fault detection mechanism for Platform as a Service (PaaS) users, who run many instances of an application on a large cluster of machines. FRAP models, records, and analyzes the behavior of an application and its impact on the system as a directed acyclic provenance graph. It assumes that most instances behave normally and uses their behavior to construct a model of legitimate behavior. Given a model of legitimate behavior, FRAP uses a dynamic sliding window algorithm to compare a new instance's execution to that of the model. Any instance that does not conform to the model is identified as an anomaly. We present the FRAP prototype and experimental results showing that it can accurately detect application anomalies. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 85,789 |
2003.03490 | Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds | We revisit the problem of online learning with sleeping experts/bandits: in each time step, only a subset of the actions are available for the algorithm to choose from (and learn about). The work of Kleinberg et al. (2010) showed that there exist no-regret algorithms which perform no worse than the best ranking of actions asymptotically. Unfortunately, achieving this regret bound appears computationally hard: Kanade and Steinke (2014) showed that achieving this no-regret performance is at least as hard as PAC-learning DNFs, a notoriously difficult problem. In the present work, we relax the original problem and study computationally efficient no-approximate-regret algorithms: such algorithms may exceed the optimal cost by a multiplicative constant in addition to the additive regret. We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping expert/bandit problems. For several canonical special cases of the problem, we give algorithms with significantly better approximation ratios; these algorithms also illustrate different techniques for achieving no-approximate-regret guarantees. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,240 |
2001.05161 | Pose-Assisted Multi-Camera Collaboration for Active Object Tracking | Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots, intelligent surveillance. However, there are a number of challenges when deploying active tracking in complex scenarios, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking. In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks targets based on observed images, and the pose-based controller moves the camera in accordance with the poses of the other cameras. At each step, the switcher decides which action to take from the two controllers according to the visibility of the target. The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. The code and demo videos are available on our website https://sites.google.com/view/pose-assistedcollaboration. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | 160,458 |
1403.3297 | Channel Capacity Analysis of MIMO System in Correlated Nakagami-m Fading Environment | We consider Vertical Bell Laboratories Layered Space-Time (V-BLAST) systems in correlated multiple-input multiple-output (MIMO) Nakagami-m fading channels with equal power allocated to each transmit antenna, and we assume that the channel state information (CSI) is available only at the receiver. For practical applications, studying the V-BLAST MIMO system in a correlated environment is necessary. In this paper, we present a detailed study of the channel capacity under correlated and uncorrelated channel conditions and validate the results with appropriate mathematical relations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 31,555 |
1406.6425 | Compressive Imaging and Characterization of Sparse Light Deflection Maps | Light rays incident on a transparent object of uniform refractive index undergo deflections, which uniquely characterize the surface geometry of the object. Associated with each point on the surface is a deflection map (or spectrum) which describes the pattern of deflections in various directions. This article presents a novel method to efficiently acquire and reconstruct sparse deflection spectra induced by smooth object surfaces. To this end, we leverage the framework of Compressed Sensing (CS) in a particular implementation of a schlieren deflectometer, i.e., an optical system providing linear measurements of deflection spectra with programmable spatial light modulation patterns. We design those modulation patterns on the principle of spread spectrum CS for reducing the number of observations. The ability of our device to simultaneously observe the deflection spectra on a dense discretization of the object surface is related to a Multiple Measurement Vector (MMV) model. This scheme allows us to estimate both the noise power and the instrumental point spread function. We formulate the spectrum reconstruction task as the solving of a linear inverse problem regularized by an analysis sparsity prior using a translation invariant wavelet frame. Our results demonstrate the capability and advantages of using a CS based approach for deflectometric imaging both on simulated data and experimental deflectometric data. Finally, the paper presents an extension of our method showing how we can extract the main deflection direction in each point of the object surface from a few compressive measurements, without needing any costly reconstruction procedures. This compressive characterization is then confirmed with experimental results on simple plano-convex and multifocal intra-ocular lenses studying the evolution of the main deflection as a function of the object point location. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 34,117 |
2305.14673 | ORRN: An ODE-based Recursive Registration Network for Deformable Respiratory Motion Estimation with Lung 4DCT Images | Deformable Image Registration (DIR) plays a significant role in quantifying deformation in medical data. Recent Deep Learning methods have shown promising accuracy and speedup for registering a pair of medical images. However, in 4D (3D + time) medical data, organ motion, such as respiratory motion and heart beating, cannot be effectively modeled by pair-wise methods, which are optimized for image pairs and do not consider the motion patterns present in 4D data. This paper presents ORRN, an Ordinary Differential Equation (ODE)-based recursive image registration network. Our network learns to estimate time-varying voxel velocities for an ODE that models deformation in 4D image data. It adopts a recursive registration strategy to progressively estimate a deformation field through ODE integration of voxel velocities. We evaluate the proposed method on two publicly available lung 4DCT datasets, DIRLab and CREATIS, for two tasks: 1) registering all images to the extreme inhale image for 3D+t deformation tracking and 2) registering extreme exhale to inhale phase images. Our method outperforms other learning-based methods in both tasks, producing the smallest Target Registration Errors of 1.24mm and 1.26mm, respectively. Additionally, it produces less than 0.001% unrealistic image folding, and the computation speed is less than 1 second for each CT volume. ORRN demonstrates promising registration accuracy, deformation plausibility, and computation efficiency on group-wise and pair-wise registration tasks. It has significant implications in enabling fast and accurate respiratory motion estimation for treatment planning in radiation therapy or robot motion planning in thoracic needle insertion. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 367,171 |
1904.07659 | Semantically Aligned Bias Reducing Zero Shot Learning | Zero shot learning (ZSL) aims to recognize unseen classes by exploiting semantic relationships between seen and unseen classes. Two major problems faced by ZSL algorithms are the hubness problem and the bias towards the seen classes. Existing ZSL methods focus on only one of these problems in the conventional and generalized ZSL setting. In this work, we propose a novel approach, Semantically Aligned Bias Reducing (SABR) ZSL, which focuses on solving both the problems. It overcomes the hubness problem by learning a latent space that preserves the semantic relationship between the labels while encoding the discriminating information about the classes. Further, we also propose ways to reduce the bias of the seen classes through a simple cross-validation process in the inductive setting and a novel weak transfer constraint in the transductive setting. Extensive experiments on three benchmark datasets suggest that the proposed model significantly outperforms existing state-of-the-art algorithms by ~1.5-9% in the conventional ZSL setting and by ~2-14% in the generalized ZSL for both the inductive and transductive settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 127,856 |
2408.11947 | Assessing skin thermal injury risk in exposure tests of heating until flight | We assess the skin thermal injury risk in the situation where a test subject is exposed to an electromagnetic beam until the occurrence of flight action. The physical process is modeled as follows. The absorbed electromagnetic power increases the skin temperature. Wherever it is above a temperature threshold, thermal nociceptors are activated and transduce an electrical signal. When the activated skin volume reaches a threshold, the flight signal is initiated. After the delay of human reaction time, the flight action is materialized when the subject moves away or the beam power is turned off. The injury risk is quantified by the thermal damage parameter calculated in the Arrhenius equation. It depends on the beam power density absorbed into the skin, which is not measurable. In addition, the volume threshold for flight initiation is unknown. To circumvent these difficulties, we normalize the formulation and write the thermal damage parameter in terms of the occurrence time of the flight action, which is reliably observed in exposure tests. This thermal injury formulation provides a viable framework for investigating the effects of model parameters. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 482,516 |
2206.07936 | Universality of regularized regression estimators in high dimensions | The Convex Gaussian Min-Max Theorem (CGMT) has emerged as a prominent theoretical tool for analyzing the precise stochastic behavior of various statistical estimators in the so-called high-dimensional proportional regime, where the sample size and the signal dimension are of the same order. However, a well-recognized limitation of the existing CGMT machinery rests in its stringent requirement on the exact Gaussianity of the design matrix, therefore rendering the obtained precise high-dimensional asymptotics largely a specific Gaussian theory in various important statistical models. This paper provides a structural universality framework for a broad class of regularized regression estimators that is particularly compatible with the CGMT machinery. In particular, we show that with a good enough $\ell_\infty$ bound for the regression estimator $\hat{\mu}_A$, any `structural property' that can be detected via the CGMT for $\hat{\mu}_G$ (under a standard Gaussian design $G$) also holds for $\hat{\mu}_A$ under a general design $A$ with independent entries. As a proof of concept, we demonstrate our new universality framework in three key examples of regularized regression estimators: the Ridge, Lasso and regularized robust regression estimators, where new universality properties of risk asymptotics and/or distributions of regression estimators and other related quantities are proved. As a major statistical implication of the Lasso universality results, we validate inference procedures using the degrees-of-freedom adjusted debiased Lasso under general design and error distributions. We also provide a counterexample, showing that universality properties for regularized regression estimators do not extend to general isotropic designs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 302,942 |
2410.18374 | Integrating Canonical Neural Units and Multi-Scale Training for Handwritten Text Recognition | The segmentation-free research efforts for addressing handwritten text recognition can be divided into three categories: connectionist temporal classification (CTC), hidden Markov model, and encoder-decoder methods. In this paper, inspired by these three modeling methods, we propose a new recognition network that uses a novel three-dimensional (3D) attention module and global-local context information. Based on the feature maps of the last convolutional layer, a series of 3D blocks with different resolutions are split. Then, these 3D blocks are fed into the 3D attention module to generate sequential visual features. Finally, by integrating the visual features and the corresponding global-local context features, a well-designed representation can be obtained. The main canonical neural units, including attention mechanisms, fully-connected layers, recurrent units, and convolutional layers, are efficiently organized into a network and can be jointly trained with the CTC loss and the cross-entropy loss. Experiments on the latest Chinese handwritten text datasets (SCUT-HCCDoc and SCUT-EPT) and one English handwritten text dataset (IAM) show that the proposed method sets a new milestone. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 501,856 |
1804.10080 | On deep speaker embeddings for text-independent speaker recognition | We investigate deep neural network performance in the text-independent speaker recognition task. We demonstrate that using angular softmax activation at the last classification layer of a classification neural network instead of a simple softmax activation allows training a more generalized discriminative speaker embedding extractor. Cosine similarity is an effective metric for speaker verification in this embedding space. We also address the problem of choosing an architecture for the extractor. We found that deep networks with residual frame-level connections outperform wide but relatively shallow architectures. This paper also proposes several improvements for previous DNN-based extractor systems to increase the speaker recognition accuracy. We show that the discriminatively trained similarity metric learning approach outperforms the standard LDA-PLDA method as an embedding backend. The results obtained on Speakers in the Wild and NIST SRE 2016 evaluation sets demonstrate the robustness of the proposed systems when dealing with close to real-life conditions. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 96,091
1608.03664 | To Prolong the Lifetime of Wireless Sensor Networks: The Min-max Fair
Scheduling in a Multi-access Contra-Polymatroid | From an information-theoretic point of view, we investigate the min-max power scheduling problem in multi-access transmission. We prove that the min-max optimal vector in a contra-polymatroid is the base with the minimal distance to the equal allocation vector. Because we can realize any base of the contra-polymatroid by time sharing among the vertices, the problem of searching for the min-max optimal vector is converted to a convex optimization problem solving for the time sharing coefficients. Different from the traditional algorithms exploiting bottleneck link or waterfilling, the proposed method simplifies the computation of the min-max optimal vector and directly outputs the system parameters to realize the min-max fair scheduling. By adopting the proposed method and applying the acquired min-max optimal scheduling to multi-access transmission, the network lifetime of the wireless sensor network is prolonged. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 59,705
1907.06946 | A Subjective Interestingness measure for Business Intelligence
explorations | This paper addresses the problem of defining a subjective interestingness measure for BI exploration. Such a measure involves prior modeling of the belief of the user. The complexity of this problem lies in the impossibility of asking the user about the degree of belief in each element composing their knowledge prior to the writing of a query. To this aim, we propose to automatically infer this user belief based on the user's past interactions over a data cube, the cube schema and other users' past activities. We express the belief under the form of a probability distribution over all the query parts potentially accessible to the user, and use a random walk to learn this distribution. This belief is then used to define a first Subjective Interestingness measure over multidimensional queries. Experiments conducted on simulated and real explorations show how this new subjective interestingness measure relates to prototypical and real user behaviors, and that query parts offer a reasonable proxy to infer user belief. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 138,746
1802.07280 | Simulating the Ridesharing Economy: The Individual Agent
Metro-Washington Area Ridesharing Model | The ridesharing economy is experiencing rapid growth and innovation. Companies such as Uber and Lyft are continuing to grow at a considerable pace while providing their platform as an organizing medium for ridesharing services, increasing consumer utility as well as employing thousands in part-time positions. However, many challenges remain in the modeling of ridesharing services, many of which are not currently under wide consideration. In this paper, an agent-based model is developed to simulate a ridesharing service in the Washington D.C. metropolitan region. The model is used to examine levels of utility gained for both riders (customers) and drivers (service providers) of a generic ridesharing service. A description of the Individual Agent Metro-Washington Area Ridesharing Model (IAMWARM) is provided, as well as a description of a typical simulation run. We investigate the financial gains of drivers for a 24-hour period under two scenarios and two spatial movement behaviors. The two spatial behaviors were random movement and Voronoi movement, which we describe. Both movement behaviors were tested under a stationary run conditions scenario and a variable run conditions scenario. We find that Voronoi movement increased drivers' utility gained but that emergence of this system property was only viable under variable scenario conditions. This result provides two important insights: The first is that driver movement decisions prior to passenger pickup can impact financial gain for the service and drivers, and consequently, rate of successful pickup for riders. The second is that this phenomenon is only evident under experimentation conditions where variability in passenger and driver arrival rates is administered. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 90,862
2310.09443 | G10: Enabling An Efficient Unified GPU Memory and Storage Architecture
with Smart Tensor Migrations | To break the GPU memory wall for scaling deep learning workloads, a variety of architecture and system techniques have been proposed recently. Their typical approaches include memory extension with flash memory and direct storage access. However, these techniques still suffer from suboptimal performance and introduce complexity to the GPU memory management, making them hard to meet the scalability requirement of deep learning workloads today. In this paper, we present a unified GPU memory and storage architecture named G10 driven by the fact that the tensor behaviors of deep learning workloads are highly predictable. G10 integrates the host memory, GPU memory, and flash memory into a unified memory space, to scale the GPU memory capacity while enabling transparent data migrations. Based on this unified GPU memory and storage architecture, G10 utilizes compiler techniques to characterize the tensor behaviors in deep learning workloads. Therefore, it can schedule data migrations in advance by considering the available bandwidth of flash memory and host memory. The cooperative mechanism between deep learning compilers and the unified memory architecture enables G10 to hide data transfer overheads in a transparent manner. We implement G10 based on an open-source GPU simulator. Our experiments demonstrate that G10 outperforms state-of-the-art GPU memory solutions by up to 1.75$\times$, without code modifications to deep learning workloads. With the smart data migration mechanism, G10 can reach 90.3\% of the performance of the ideal case assuming unlimited GPU memory. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 399,771 |
1905.13367 | PAC-Bayes Un-Expected Bernstein Inequality | We present a new PAC-Bayesian generalization bound. Standard bounds contain a $\sqrt{L_n \cdot \mathrm{KL}/n}$ complexity term which dominates unless $L_n$, the empirical error of the learning algorithm's randomized predictions, vanishes. We manage to replace $L_n$ by a term which vanishes in many more situations, essentially whenever the employed learning algorithm is sufficiently stable on the dataset at hand. Our new bound consistently beats state-of-the-art bounds both on a toy example and on UCI datasets (with large enough $n$). Theoretically, unlike existing bounds, our new bound can be expected to converge to $0$ faster whenever a Bernstein/Tsybakov condition holds, thus connecting PAC-Bayesian generalization and excess risk bounds---for the latter it has long been known that faster convergence can be obtained under Bernstein conditions. Our main technical tool is a new concentration inequality which is like Bernstein's but with $X^2$ taken outside its expectation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,097
2001.06590 | A Foreground-background Parallel Compression with Residual Encoding for
Surveillance Video | Data storage has been one of the bottlenecks in surveillance systems. The conventional video compression algorithms such as H.264 and H.265 do not fully utilize the low information density characteristic of the surveillance video. In this paper, we propose a video compression method that extracts and compresses the foreground and background of the video separately. The compression ratio is greatly improved by sharing background information among multiple adjacent frames through an adaptive background updating and interpolation module. Besides, we present two different schemes to compress the foreground and compare their performance in the ablation study to show the importance of temporal information for video compression. At the decoding end, a coarse-to-fine two-stage module is applied to achieve the composition of the foreground and background and the enhancements of frame quality. Furthermore, an adaptive sampling method for surveillance cameras is proposed, and we have shown its effects through software simulation. The experimental results show that our proposed method requires 69.5% less bpp (bits per pixel) than the conventional algorithm H.265 to achieve the same PSNR (36 dB) on the HEVC dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 160,832
2406.08469 | PAL: Pluralistic Alignment Framework for Learning from Heterogeneous
Preferences | Large foundation models pretrained on raw web-scale data are not readily deployable without an additional step of extensive alignment to human preferences. Such alignment is typically done by collecting large amounts of pairwise comparisons from humans ("Do you prefer output A or B?") and learning a reward model or a policy with the Bradley-Terry-Luce (BTL) model as a proxy for a human's underlying implicit preferences. These methods generally suffer from assuming a universal preference shared by all humans, which lacks the flexibility of adapting to the plurality of opinions and preferences. In this work, we propose PAL, a framework to model human preference complementary to existing pretraining strategies, which incorporates plurality from the ground up. We propose using the ideal point model as a lens to view alignment using preference comparisons. Together with our novel reformulation and using mixture modeling, our framework captures the plurality of population preferences while simultaneously learning a common preference latent space across different preferences, which can few-shot generalize to new, unseen users. Our approach enables us to use the penultimate-layer representation of large foundation models and simple MLP layers to learn reward functions that are on par with the existing large state-of-the-art reward models, thereby enhancing efficiency of reward modeling significantly. We show that PAL achieves competitive reward model accuracy compared to strong baselines on 1) language models with the Summary dataset; 2) image generative models with the Pick-a-Pic dataset; 3) a new semi-synthetic heterogeneous dataset generated using Anthropic Personas. Finally, our experiments also highlight the shortcoming of current preference datasets that are created using rigid rubrics which wash away heterogeneity, and call for more nuanced data collection approaches.
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 463,503 |
1608.06148 | Multiple objects tracking in surveillance video using color and Hu
moments | Multiple object tracking finds applications in many high-level vision analysis tasks, such as object behaviour interpretation and gait recognition. In this paper, a feature-based method to track the multiple moving objects in a surveillance video sequence is proposed. Object tracking is done by extracting the color and Hu moments features from the motion-segmented object blob and establishing the association of objects in the successive frames of the video sequence based on the Chi-Square dissimilarity measure and a nearest neighbor classifier. The benchmark IEEE PETS and IEEE Change Detection datasets have been used to show the robustness of the proposed method. The proposed method is assessed quantitatively using the precision and recall accuracy metrics. Further, comparative evaluation with related works has been carried out to exhibit the efficacy of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 60,079
2408.14678 | Bridging the Gap: Unpacking the Hidden Challenges in Knowledge
Distillation for Online Ranking Systems | Knowledge Distillation (KD) is a powerful approach for compressing a large model into a smaller, more efficient model, particularly beneficial for latency-sensitive applications like recommender systems. However, current KD research predominantly focuses on Computer Vision (CV) and NLP tasks, overlooking unique data characteristics and challenges inherent to recommender systems. This paper addresses these overlooked challenges, specifically: (1) mitigating data distribution shifts between teacher and student models, (2) efficiently identifying optimal teacher configurations within time and budgetary constraints, and (3) enabling computationally efficient and rapid sharing of teacher labels to support multiple students. We present a robust KD system developed and rigorously evaluated on multiple large-scale personalized video recommendation systems within Google. Our live experiment results demonstrate significant improvements in student model performance while ensuring consistent and reliable generation of high-quality teacher labels from a continuous stream of data. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 483,625
1801.07964 | Evaluation of Interactive Machine Learning Systems | The evaluation of interactive machine learning systems remains a difficult task. These systems learn from and adapt to the human, but at the same time, the human receives feedback and adapts to the system. Getting a clear understanding of these subtle mechanisms of co-operation and co-adaptation is challenging. In this chapter, we report on our experience in designing and evaluating various interactive machine learning applications from different domains. We argue for coupling two types of validation: algorithm-centered analysis, to study the computational behaviour of the system; and human-centered evaluation, to observe the utility and effectiveness of the application for end-users. We use a visual analytics application for guided search, built using an interactive evolutionary approach, as an exemplar of our work. Our observation is that human-centered design and evaluation complement algorithmic analysis, and can play an important role in addressing the "black-box" effect of machine learning. Finally, we discuss research opportunities that require human-computer interaction methodologies, in order to support both the visible and hidden roles that humans play in interactive machine learning. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 88,884 |
2112.10703 | Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual
Fly-Throughs | We use neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drones. In contrast to single object scenes (on which NeRFs are traditionally evaluated), our scale poses multiple challenges including (1) the need to model thousands of images with varying lighting conditions, each of which capture only a small subset of the scene, (2) prohibitively large model capacities that make it infeasible to train on a single GPU, and (3) significant challenges for fast rendering that would enable interactive fly-throughs. To address these challenges, we begin by analyzing visibility statistics for large-scale scenes, motivating a sparse network structure where parameters are specialized to different regions of the scene. We introduce a simple geometric clustering algorithm for data parallelism that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel. We evaluate our approach on existing datasets (Quad 6k and UrbanScene3D) as well as against our own drone footage, improving training speed by 3x and PSNR by 12%. We also evaluate recent NeRF fast renderers on top of Mega-NeRF and introduce a novel method that exploits temporal coherence. Our technique achieves a 40x speedup over conventional NeRF rendering while remaining within 0.8 dB in PSNR quality, exceeding the fidelity of existing fast renderers. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 272,506
2501.14997 | Causal Discovery via Bayesian Optimization | Existing score-based methods for directed acyclic graph (DAG) learning from observational data struggle to recover the causal graph accurately and sample-efficiently. To overcome this, in this study, we propose DrBO (DAG recovery via Bayesian Optimization)-a novel DAG learning framework leveraging Bayesian optimization (BO) to find high-scoring DAGs. We show that, by sophisticatedly choosing the promising DAGs to explore, we can find higher-scoring ones much more efficiently. To address the scalability issues of conventional BO in DAG learning, we replace Gaussian Processes commonly employed in BO with dropout neural networks, trained in a continual manner, which allows for (i) flexibly modeling the DAG scores without overfitting, (ii) incorporation of uncertainty into the estimated scores, and (iii) scaling with the number of evaluations. As a result, DrBO is computationally efficient and can find the accurate DAG in fewer trials and less time than existing state-of-the-art methods. This is demonstrated through an extensive set of empirical evaluations on many challenging settings with both synthetic and real data. Our implementation is available at https://github.com/baosws/DrBO. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 527,351 |
1911.03552 | Electromagnetically induced transparency at a chiral exceptional point | Electromagnetically induced transparency, as a quantum interference effect to eliminate optical absorption in an opaque medium, has found extensive applications in slow light generation, optical storage, frequency conversion, optical quantum memory as well as enhanced nonlinear interactions at the few-photon level in all kinds of systems. Recently, there have been great interests in exceptional points, a spectral singularity that could be reached by tuning various parameters in open systems, to render unusual features to the physical systems, such as optical states with chirality. Here we theoretically and experimentally study transparency and absorption modulated by chiral optical states at exceptional points in an indirectly-coupled resonator system. By tuning one resonator to an exceptional point, transparency or absorption occurs depending on the chirality of the eigenstate. Our results demonstrate a new strategy to manipulate the light flow and the spectra of a photonic resonator system by exploiting a discrete optical state associated with specific chirality at an exceptional point as a unique control bit, which opens up a new horizon of controlling slow light using optical states. Compatible with the idea of state control in quantum gate operation, this strategy hence bridges optical computing and storage. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 152,674 |
2412.21102 | Exploring and Controlling Diversity in LLM-Agent Conversation | Diversity is a critical aspect of multi-agent communication. In this paper, we focus on controlling and exploring diversity in the context of open-domain multi-agent conversations, particularly for world simulation applications. We propose Adaptive Prompt Pruning (APP), a novel method that dynamically adjusts the content of the utterance generation prompt to control diversity using a single parameter, lambda. Through extensive experiments, we show that APP effectively controls the output diversity across models and datasets, with pruning more information leading to more diverse output. We comprehensively analyze the relationship between prompt content and conversational diversity. Our findings reveal that information from all components of the prompt generally constrains the diversity of the output, with the Memory block exerting the most significant influence. APP is compatible with established techniques like temperature sampling and top-p sampling, providing a versatile tool for diversity management. To address the trade-offs of increased diversity, such as inconsistencies with omitted information, we incorporate a post-generation correction step, which effectively balances diversity enhancement with output consistency. Additionally, we examine how prompt structure, including component order and length, impacts diversity. This study addresses key questions surrounding diversity in multi-agent world simulation, offering insights into its control, influencing factors, and associated trade-offs. Our contributions lay the foundation for systematically engineering diversity in LLM-based multi-agent collaborations, advancing their effectiveness in real-world applications. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 521,458 |
2411.09706 | AI-Driven Feedback Loops in Digital Technologies: Psychological Impacts
on User Behaviour and Well-Being | The rapid spread of digital technologies has produced data-driven feedback loops, wearable devices, social media networks, and mobile applications that shape user behavior, motivation, and mental well-being. While these systems encourage self-improvement and the development of healthier habits through real-time feedback, they also create psychological risks such as technostress, addiction, and loss of autonomy. The present study aims to investigate the positive and negative psychological consequences of feedback mechanisms on users' behaviour and well-being. Employing a descriptive survey method, the study collected data from 200 purposely selected users to assess changes in behaviour, motivation, and mental well-being related to health, social, and lifestyle applications. Results indicate that while feedback mechanisms facilitate goal attainment and social interconnection through streaks and badges, among other components, they also increase anxiety, mental fatigue, and loss of productivity due to feedback-seeking behaviour. Furthermore, test subjects reported that their actions are unconsciously shaped by app feedback, often at the expense of personal autonomy, while real-time feedback minimally influences professional or social interactions. The study shows that data-driven feedback loops deliver not only motivational benefits but also psychological challenges. To mitigate these risks, users should establish boundaries regarding their use of technology to prevent burnout and addiction, while developers need to refine feedback mechanisms to reduce cognitive load and foster more inclusive participation. Future research should focus on designing feedback mechanisms that promote well-being without compromising individual freedom or increasing social comparison.
| true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 508,338 |
2406.04348 | Gaining Insights into Group-Level Course Difficulty via Differential
Course Functioning | Curriculum Analytics (CA) studies curriculum structure and student data to ensure the quality of educational programs. One desirable property of courses within curricula is that they are not unexpectedly more difficult for students of different backgrounds. While prior work points to likely variations in course difficulty across student groups, robust methodologies for capturing such variations are scarce, and existing approaches do not adequately decouple course-specific difficulty from students' general performance levels. The present study introduces Differential Course Functioning (DCF) as an Item Response Theory (IRT)-based CA methodology. DCF controls for student performance levels and examines whether significant differences exist in how distinct student groups succeed in a given course. Leveraging data from over 20,000 students at a large public university, we demonstrate DCF's ability to detect inequities in undergraduate course difficulty across student groups described by grade achievement. We compare major pairs with high co-enrollment and transfer students to their non-transfer peers. For the former, our findings suggest a link between DCF effect sizes and the alignment of course content to student home department motivating interventions targeted towards improving course preparedness. For the latter, results suggest minor variations in course-specific difficulty between transfer and non-transfer students. While this is desirable, it also suggests that interventions targeted toward mitigating grade achievement gaps in transfer students should encompass comprehensive support beyond enhancing preparedness for individual courses. By providing more nuanced and equitable assessments of academic performance and difficulties experienced by diverse student populations, DCF could support policymakers, course articulation officers, and student advisors. 
| false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 461,650 |
1809.00188 | MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source
Transformer for Automatic Post-Editing | This paper describes the Microsoft and University of Edinburgh submission to the Automatic Post-editing shared task at WMT2018. Based on training data and systems from the WMT2017 shared task, we re-implement our own models from the last shared task and introduce improvements based on extensive parameter sharing. Next, we experiment with our implementation of dual-source transformer models and data selection for the IT domain. Our submission decisively wins the SMT post-editing sub-task, establishing a new state of the art, and is a very close second (or equal, 16.46 vs 16.50 TER) in the NMT sub-task. Based on the rather weak results in the NMT sub-task, we hypothesize that neural-on-neural APE might not actually be useful. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 106,513
1609.06118 | Adaptive Decontamination of the Training Set: A Unified Formulation for
Discriminative Visual Tracking | Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be down-weighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets. Code and supplementary material are available at http://www.cvl.isy.liu.se/research/objrec/visualtracking/decontrack/index.html . | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 61,244 |
2409.15119 | Log-normal Mutations and their Use in Detecting Surreptitious Fake
Images | In many cases, adversarial attacks are based on specialized algorithms specifically dedicated to attacking automatic image classifiers. These algorithms perform well, thanks to an excellent ad hoc distribution of initial attacks. However, these attacks are easily detected due to their specific initial distribution. We therefore consider other black-box attacks, inspired from generic black-box optimization tools, and in particular the log-normal algorithm. We apply the log-normal method to the attack of fake detectors, and get successful attacks: importantly, these attacks are not detected by detectors specialized on classical adversarial attacks. Then, combining these attacks and deep detection, we create improved fake detectors. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 490,763 |
2405.11685 | ColorFoil: Investigating Color Blindness in Large Vision and Language
Models | With the utilization of Transformer architecture, large Vision and Language (V&L) models have shown promising performance in even zero-shot settings. Several studies, however, indicate a lack of robustness of the models when dealing with complex linguistics and visual attributes. In this work, we introduce a novel V&L benchmark - ColorFoil, by creating color-related foils to assess the models' perception ability to detect colors like red, white, green, etc. We evaluate seven state-of-the-art V&L models, including CLIP, ViLT, GroupViT, and BridgeTower, in a zero-shot setting and present intriguing findings from the V&L models. The experimental evaluation indicates that ViLT and BridgeTower demonstrate much better color perception capabilities compared to CLIP and its variants and GroupViT. Moreover, CLIP-based models and GroupViT struggle to distinguish colors that are visually distinct to humans with normal color perception ability. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 455,240
2012.08250 | Rigid chain in parallel kinematic positioning system | The article presents an analysis of the trends in the development of kinematic structures of modern machine-building technological equipment. The prospects of using machines with parallel kinematics in processing, measuring and handling equipment, their advantages and disadvantages are demonstrated. It is shown that it is inexpedient to use ball screw drives in machines with parallel kinematics, performing tasks of low accuracy, but with displacements of more than 3000 mm. The rigid chain system of the Serapid firm and the possibility of its use in machines with parallel kinematics are considered. A schematic solution of a three-coordinate manipulation robot based on parallel kinematics with drive mechanisms on rigid chains is proposed. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 211,718 |
1901.01255 | Generic Primitive Detection in Point Clouds Using Novel Minimal Quadric Fits | We present a novel and effective method for detecting 3D primitives in cluttered, unorganized point clouds, without auxiliary segmentation or type specification. We consider the quadric surfaces for encapsulating the basic building blocks of our environments - planes, spheres, ellipsoids, cones or cylinders, in a unified fashion. Moreover, quadrics allow us to model higher degree of freedom shapes, such as hyperboloids or paraboloids that could be used in non-rigid settings. We begin by contributing two novel quadric fits targeting 3D point sets that are endowed with tangent space information. Based upon the idea of aligning the quadric gradients with the surface normals, our first formulation is exact and requires as low as four oriented points. The second fit approximates the first, and reduces the computational effort. We theoretically analyze these fits with rigor, and give algebraic and geometric arguments. Next, by re-parameterizing the solution, we devise a new local Hough voting scheme on the null-space coefficients that is combined with RANSAC, reducing the complexity from $O(N^4)$ to $O(N^3)$ (three points). To the best of our knowledge, this is the first method capable of performing a generic cross-type multi-object primitive detection in difficult scenes without segmentation. Our extensive qualitative and quantitative results show that our method is efficient and flexible, as well as being accurate. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | true | 117,942
2212.12522 | An Exact Mapping From ReLU Networks to Spiking Neural Networks | Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 338,053 |
1901.00001 | Impact of Ground Truth Annotation Quality on Performance of Semantic Image Segmentation of Traffic Conditions | Preparation of high-quality datasets for urban scene understanding is a labor-intensive task, especially for datasets designed for autonomous driving applications. The application of coarse ground truth (GT) annotations of these datasets without detriment to the accuracy of semantic image segmentation (by the mean intersection over union - mIoU) could simplify and speed up the dataset preparation and model fine-tuning before its practical application. Here the results of a comparative analysis of semantic segmentation accuracy obtained by the PSPNet deep learning architecture are presented for fine and coarse annotated images from the Cityscapes dataset. Two scenarios were investigated: scenario 1 - the fine GT images for training and prediction, and scenario 2 - the fine GT images for training and the coarse GT images for prediction. The obtained results demonstrated that for the most important classes the mean accuracy values of semantic image segmentation for coarse GT annotations are higher than for the fine GT ones, while the standard deviation values are vice versa. This means that for some applications unimportant classes can be excluded and the model can be tuned further for some classes and specific regions on the coarse GT dataset even without loss of accuracy. Moreover, this opens perspectives for using deep neural networks for the preparation of such coarse GT datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 117,665
2205.13371 | A Rotated Hyperbolic Wrapped Normal Distribution for Hierarchical Representation Learning | We present a rotated hyperbolic wrapped normal distribution (RoWN), a simple yet effective alteration of a hyperbolic wrapped normal distribution (HWN). The HWN expands the domain of probabilistic modeling from Euclidean to hyperbolic space, where a tree can be embedded with arbitrary low distortion in theory. In this work, we analyze the geometric properties of the diagonal HWN, a standard choice of distribution in probabilistic modeling. The analysis shows that the distribution is inappropriate to represent the data points at the same hierarchy level through their angular distance with the same norm in the Poincar\'e disk model. We then empirically verify the presence of limitations of HWN, and show how RoWN, the proposed distribution, can alleviate the limitations on various hierarchical datasets, including noisy synthetic binary tree, WordNet, and Atari 2600 Breakout. The code is available at https://github.com/ml-postech/RoWN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 298,915
2003.11066 | A Survey of Methods for Low-Power Deep Learning and Computer Vision | Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically in regards to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 169,500 |
2406.16367 | On the Role of Long-tail Knowledge in Retrieval Augmented Large Language Models | Retrieval augmented generation (RAG) exhibits outstanding performance in promoting the knowledge capabilities of large language models (LLMs) with retrieved documents related to user queries. However, RAG only focuses on improving the response quality of LLMs via enhancing queries indiscriminately with retrieved information, paying little attention to what type of knowledge LLMs really need to answer original queries more accurately. In this paper, we suggest that long-tail knowledge is crucial for RAG as LLMs have already remembered common world knowledge during large-scale pre-training. Based on our observation, we propose a simple but effective long-tail knowledge detection method for LLMs. Specifically, the novel Generative Expected Calibration Error (GECE) metric is derived to measure the ``long-tailness'' of knowledge based on both statistics and semantics. Hence, we retrieve relevant documents and infuse them into the model for patching knowledge loopholes only when the input query relates to long-tail knowledge. Experiments show that, compared to existing RAG pipelines, our method achieves over 4x speedup in average inference time and consistent performance improvement in downstream tasks. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 467,107
2310.05867 | Domain-wise Invariant Learning for Panoptic Scene Graph Generation | Panoptic Scene Graph Generation (PSG) involves the detection of objects and the prediction of their corresponding relationships (predicates). However, the presence of biased predicate annotations poses a significant challenge for PSG models, as it hinders their ability to establish a clear decision boundary among different predicates. This issue substantially impedes the practical utility and real-world applicability of PSG models. To address the intrinsic bias above, we propose a novel framework to infer potentially biased annotations by measuring the predicate prediction risks within each subject-object pair (domain), and adaptively transfer the biased annotations to consistent ones by learning invariant predicate representation embeddings. Experiments show that our method significantly improves the performance of benchmark models, achieving a new state-of-the-art performance, and shows great generalization and effectiveness on PSG dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 398,326 |
2009.05684 | AttnGrounder: Talking to Cars with Attention | We propose Attention Grounder (AttnGrounder), a single-stage end-to-end trainable model for the task of visual grounding. Visual grounding aims to localize a specific object in an image based on a given natural language text query. Unlike previous methods that use the same text representation for every image region, we use a visual-text attention module that relates each word in the given query with every region in the corresponding image for constructing a region dependent text representation. Furthermore, for improving the localization ability of our model, we use our visual-text attention module to generate an attention mask around the referred object. The attention mask is trained as an auxiliary task using a rectangular mask generated with the provided ground-truth coordinates. We evaluate AttnGrounder on the Talk2Car dataset and show an improvement of 3.26% over the existing methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 195,388 |
2102.01824 | Dermo-DOCTOR: A framework for concurrent skin lesion detection and recognition using a deep convolutional neural network with end-to-end dual encoders | Automated skin lesion analysis for simultaneous detection and recognition is still challenging due to inter-class homogeneity and intra-class heterogeneity, leading to low generic capability of a single Convolutional Neural Network (CNN) with limited datasets. This article proposes an end-to-end deep CNN-based framework for simultaneous detection and recognition of the skin lesions, named Dermo-DOCTOR, consisting of two encoders. The feature maps from two encoders are fused channel-wise, called Fused Feature Map (FFM). The FFM is utilized for decoding in the detection sub-network, concatenating each stage of two encoders' outputs with corresponding decoder layers to retrieve the lost spatial information due to pooling in the encoders. For the recognition sub-network, the outputs of three fully connected layers, utilizing feature maps of two encoders and FFM, are aggregated to obtain a final lesion class. We train and evaluate the proposed Dermo-DOCTOR utilizing two publicly available benchmark datasets, namely ISIC-2016 and ISIC-2017. The achieved segmentation results exhibit mean intersection over unions of 85.0 % and 80.0 % respectively for the ISIC-2016 and ISIC-2017 test datasets. The proposed Dermo-DOCTOR also demonstrates praiseworthy success in lesion recognition, providing areas under the receiver operating characteristic curves of 0.98 and 0.91 respectively for those two datasets. The experimental results show that the proposed Dermo-DOCTOR outperforms the alternative methods mentioned in the literature, designed for skin lesion detection and recognition. As Dermo-DOCTOR provides better results on two different test datasets, even with limited training data, it can be an auspicious computer-aided assistive tool for dermatologists. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 218,229
0905.2919 | Succinct Representation of Codes with Applications to Testing | Motivated by questions in property testing, we search for linear error-correcting codes that have the "single local orbit" property: i.e., they are specified by a single local constraint and its translations under the symmetry group of the code. We show that the dual of every "sparse" binary code whose coordinates are indexed by elements of F_{2^n} for prime n, and whose symmetry group includes the group of non-singular affine transformations of F_{2^n} has the single local orbit property. (A code is said to be "sparse" if it contains polynomially many codewords in its block length.) In particular this class includes the dual-BCH codes for whose duals (i.e., for BCH codes) simple bases were not known. Our result gives the first short (O(n)-bit, as opposed to the natural exp(n)-bit) description of a low-weight basis for BCH codes. The interest in the "single local orbit" property comes from the recent result of Kaufman and Sudan (STOC 2008) that shows that the duals of codes that have the single local orbit property under the affine symmetry group are locally testable. When combined with our main result, this shows that all sparse affine-invariant codes over the coordinates F_{2^n} for prime n are locally testable. If, in addition to n being prime, if 2^n-1 is also prime (i.e., 2^n-1 is a Mersenne prime), then we get that every sparse cyclic code also has the single local orbit. In particular this implies that BCH codes of Mersenne prime length are generated by a single low-weight codeword and its cyclic shifts. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,716 |
2107.00525 | SearchGCN: Powering Embedding Retrieval by Graph Convolution Networks for E-Commerce Search | Graph convolution networks (GCNs), which have recently become the state-of-the-art method for graph node classification, recommendation and other applications, have not yet been successfully applied to industrial-scale search engines. In this proposal, we introduce our approach, namely SearchGCN, for embedding-based candidate retrieval in one of the largest e-commerce search engines in the world. Empirical studies demonstrate that SearchGCN learns better embedding representations than existing methods, especially for long tail queries and items. Thus, SearchGCN has been deployed into JD.com's search production since July 2020. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 244,190
2404.19689 | Continuum limit of $p$-biharmonic equations on graphs | This paper studies the $p$-biharmonic equation on graphs, which arises in point cloud processing and can be interpreted as a natural extension of the graph $p$-Laplacian from the perspective of hypergraph. The asymptotic behavior of the solution is investigated when the random geometric graph is considered and the number of data points goes to infinity. We show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions. The result relies on the uniform $L^p$ estimates for solutions and gradients of nonlocal and graph Poisson equations. The $L^\infty$ estimates of solutions are also obtained as a byproduct. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 450,736 |
1503.03184 | Fundamental Limits of Blind Deconvolution Part II: Sparsity-Ambiguity Trade-offs | Blind deconvolution is a ubiquitous non-linear inverse problem in applications like wireless communications and image processing. This problem is generally ill-posed since signal identifiability is a key concern, and there have been efforts to use sparse models for regularizing blind deconvolution to promote signal identifiability. Part I of this two-part paper establishes a measure theoretically tight characterization of the ambiguity space for blind deconvolution and unidentifiability of this inverse problem under unconstrained inputs. Part II of this paper analyzes the identifiability of the canonical-sparse blind deconvolution problem and establishes surprisingly strong negative results on the sparsity-ambiguity trade-off scaling laws. Specifically, the ill-posedness of canonical-sparse blind deconvolution is quantified by exemplifying the dimension of the unidentifiable signal sets. An important conclusion of the paper is that canonical sparsity occurring naturally in applications is insufficient and that coding is necessary for signal identifiability in blind deconvolution. The methods developed herein are applied to a second-hop channel estimation problem to show that a family of subspace coded signals (including repetition coding and geometrically decaying signals) are unidentifiable under blind deconvolution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 41,022
2203.00479 | Uncertainty Estimation for Computed Tomography with a Linearised Deep Image Prior | Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. This paper develops a method, termed as the linearised deep image prior (DIP), to estimate the uncertainty associated with reconstructions produced by the DIP with total variation regularisation (TV). Specifically, we endow the DIP with conjugate Gaussian-linear model type error-bars computed from a local linearisation of the neural network around its optimised parameters. To preserve conjugacy, we approximate the TV regulariser with a Gaussian surrogate. This approach provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. We demonstrate the method on synthetic data and real-measured high-resolution 2D $\mu$CT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP. Our code is available at https://github.com/educating-dip/bayes_dip. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,013
2501.00463 | SAT-LDM: Provably Generalizable Image Watermarking for Latent Diffusion Models with Self-Augmented Training | The rapid proliferation of AI-generated images necessitates effective watermarking techniques to protect intellectual property and detect fraudulent content. While existing training-based watermarking methods show promise, they often struggle with generalizing across diverse prompts and tend to introduce visible artifacts. To this end, we propose a novel, provably generalizable image watermarking approach for Latent Diffusion Models, termed Self-Augmented Training (SAT-LDM). Our method aligns the training and testing phases through a free generation distribution, thereby enhancing the watermarking module's generalization capabilities. We theoretically consolidate SAT-LDM by proving that the free generation distribution contributes to its tight generalization bound, without the need for additional data collection. Extensive experiments show that SAT-LDM not only achieves robust watermarking but also significantly improves the quality of watermarked images across a wide range of prompts. Moreover, our experimental analyses confirm the strong generalization abilities of SAT-LDM. We hope that our method provides a practical and efficient solution for securing high-fidelity AI-generated content. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 521,674
2210.03352 | The Ethical Risks of Analyzing Crisis Events on Social Media with Machine Learning | Social media platforms provide a continuous stream of real-time news regarding crisis events on a global scale. Several machine learning methods utilize the crowd-sourced data for the automated detection of crises and the characterization of their precursors and aftermaths. Early detection and localization of crisis-related events can help save lives and economies. Yet, the applied automation methods introduce ethical risks worthy of investigation - especially given their high-stakes societal context. This work identifies and critically examines ethical risk factors of social media analyses of crisis events focusing on machine learning methods. We aim to sensitize researchers and practitioners to the ethical pitfalls and promote fairer and more reliable designs. | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 322,003
2011.00800 | Real-to-Sim Registration of Deformable Soft Tissue with Position-Based Dynamics for Surgical Robot Autonomy | Autonomy in robotic surgery is very challenging in unstructured environments, especially when interacting with deformable soft tissues. The main difficulty is to generate model-based control methods that account for deformation dynamics during tissue manipulation. Previous works in vision-based perception can capture the geometric changes within the scene; however, model-based controllers integrated with dynamic properties, a more accurate and safe approach, have not been studied before. Considering the mechanical coupling between the robot and the environment, it is crucial to develop a registered, simulated dynamical model. In this work, we propose an online, continuous, real-to-sim registration method to bridge 3D visual perception with position-based dynamics (PBD) modeling of tissues. The PBD method is employed to simulate soft tissue dynamics as well as rigid tool interactions for model-based control. Meanwhile, a vision-based strategy is used to generate 3D reconstructed point cloud surfaces based on real-world manipulation, so as to register and update the simulation. To verify this real-to-sim approach, tissue experiments have been conducted on the da Vinci Research Kit. Our real-to-sim approach successfully reduces registration error online, which is especially important for safety during autonomous control. Moreover, it achieves higher accuracy in occluded areas than fusion-based reconstruction. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 204,381
1409.0083 | Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences | This paper introduces sparse coding and dictionary learning for Symmetric Positive Definite (SPD) matrices, which are often used in machine learning, computer vision and related areas. Unlike traditional sparse coding schemes that work in vector spaces, in this paper we discuss how SPD matrices can be described by sparse combination of dictionary atoms, where the atoms are also SPD matrices. We propose to seek sparse coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing sparse coding, but also an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification and texture categorization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 35,698
2309.03440 | Punctate White Matter Lesion Segmentation in Preterm Infants Powered by Counterfactually Generative Learning | Accurate segmentation of punctate white matter lesions (PWMLs) is fundamental for the timely diagnosis and treatment of related developmental disorders. Automated PWMLs segmentation from infant brain MR images is challenging, considering that the lesions are typically small and low-contrast, and the number of lesions may dramatically change across subjects. Existing learning-based methods directly apply general network architectures to this challenging task, which may fail to capture detailed positional information of PWMLs, potentially leading to severe under-segmentations. In this paper, we propose to leverage the idea of counterfactual reasoning coupled with the auxiliary task of brain tissue segmentation to learn fine-grained positional and morphological representations of PWMLs for accurate localization and segmentation. A simple and easy-to-implement deep-learning framework (i.e., DeepPWML) is accordingly designed. It combines the lesion counterfactual map with the tissue probability map to train a lightweight PWML segmentation network, demonstrating state-of-the-art performance on a real-clinical dataset of infant T1w MR images. The code is available at \href{https://github.com/ladderlab-xjtu/DeepPWML}{https://github.com/ladderlab-xjtu/DeepPWML}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 390,377
2104.01772 | Convolutional Neural Opacity Radiance Fields | Photo-realistic modeling and rendering of fuzzy objects with complex opacity are critical for numerous immersive VR/AR applications, but they suffer from strong view-dependent brightness and color. In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects, which is the first to combine both explicit opacity supervision and convolutional mechanism into the neural radiance field framework so as to enable high-quality appearance and global consistent alpha mattes generation in arbitrary novel views. More specifically, we propose an efficient sampling strategy along with both the camera rays and image plane, which enables efficient radiance field sampling and learning in a patch-wise manner, as well as a novel volumetric feature integration scheme that generates per-patch hybrid feature embeddings to reconstruct the view-consistent fine-detailed appearance and opacity output. We further adopt a patch-wise adversarial training scheme to preserve both high-frequency appearance and opacity details in a self-supervised framework. We also introduce an effective multi-view image capture system to capture high-quality color and alpha maps for challenging fuzzy objects. Extensive experiments on existing and our new challenging fuzzy object dataset demonstrate that our method achieves photo-realistic, globally consistent, and fine-detailed appearance and opacity free-viewpoint rendering for various fuzzy objects. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 228,473
0710.5333 | Neutrosophic Relational Data Model | In this paper, we present a generalization of the relational data model based on interval neutrosophic set. Our data model is capable of manipulating incomplete as well as inconsistent information. Fuzzy relation or intuitionistic fuzzy relation can only handle incomplete information. Associated with each relation are two membership functions: one is called the truth-membership function T, which keeps track of the extent to which we believe the tuple is in the relation; the other is called the falsity-membership function F, which keeps track of the extent to which we believe that it is not in the relation. A neutrosophic relation is inconsistent if there exists one tuple a such that T(a) + F(a) > 1. In order to handle inconsistent situations, we propose an operator called "split" to transform inconsistent neutrosophic relations into pseudo-consistent neutrosophic relations and do the set-theoretic and relation-theoretic operations on them, and finally use another operator called "combine" to transform the result back to a neutrosophic relation. For this data model, we define algebraic operators that are generalizations of the usual operators such as intersection, union, selection, join on fuzzy relations. Our data model can underlie any database and knowledge-base management system that deals with incomplete and inconsistent information. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 844
2401.11429 | Joint Downlink and Uplink Optimization for RIS-Aided FDD MIMO Communication Systems | This paper investigates reconfigurable intelligent surface (RIS)-aided frequency division duplexing (FDD) communication systems. Since the downlink and uplink signals are simultaneously transmitted in FDD, the phase shifts at the RIS should be designed to support both transmissions. Considering a single-user multiple-input multiple-output system, we formulate a weighted sum-rate maximization problem to jointly maximize the downlink and uplink system performance. To tackle the non-convex optimization problem, we adopt an alternating optimization (AO) algorithm, in which two phase shift optimization techniques are developed to handle the unit-modulus constraints induced by the reflection coefficients at the RIS. The first technique exploits the manifold optimization-based algorithm, while the second uses a lower-complexity AO approach. Numerical results verify that the proposed techniques rapidly converge to local optima and significantly improve the overall system performance compared to existing benchmark schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 422,997
2008.01531 | TOAD-GAN: Coherent Style Level Generation from a Single Example | In this work, we present TOAD-GAN (Token-based One-shot Arbitrary Dimension Generative Adversarial Network), a novel Procedural Content Generation (PCG) algorithm that generates token-based video game levels. TOAD-GAN follows the SinGAN architecture and can be trained using only one example. We demonstrate its application for Super Mario Bros. levels and are able to generate new levels of similar style in arbitrary sizes. We achieve state-of-the-art results in modeling the patterns of the training level and provide a comparison with different baselines under several metrics. Additionally, we present an extension of the method that allows the user to control the generation process of certain token structures to ensure a coherent global level layout. We provide this tool to the community to spur further research by publishing our source code. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 190,353 |
1807.05527 | Learning Probabilistic Logic Programs in Continuous Domains | The field of statistical relational learning aims at unifying logic and probability to reason and learn from data. Perhaps the most successful paradigm in the field is probabilistic logic programming: the enabling of stochastic primitives in logic programming, which is now increasingly seen to provide a declarative background to complex machine learning applications. While many systems offer inference capabilities, the more significant challenge is that of learning meaningful and interpretable symbolic representations from data. In that regard, inductive logic programming and related techniques have paved much of the way for the last few decades. Unfortunately, a major limitation of this exciting landscape is that much of the work is limited to finite-domain discrete probability distributions. Recently, a handful of systems have been extended to represent and perform inference with continuous distributions. The problem, of course, is that classical solutions for inference are either restricted to well-known parametric families (e.g., Gaussians) or resort to sampling strategies that provide correct answers only in the limit. When it comes to learning, moreover, inducing representations remains entirely open, other than "data-fitting" solutions that force-fit points to aforementioned parametric families. In this paper, we take the first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data, without being pigeon-holed to a fixed set of distribution families. Our key insight is to leverage techniques from piecewise polynomial function approximation theory, yielding a principled way to learn and compositionally construct density functions. We test the framework and discuss the learned representations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 102,942
0910.3349 | b-Bit Minwise Hashing | This paper establishes the theoretical framework of b-bit minwise hashing. The original minwise hashing method has become a standard technique for estimating set similarity (e.g., resemblance) with applications in information retrieval, data management, social networks and computational advertising. By only storing the lowest $b$ bits of each (minwise) hashed value (e.g., b=1 or 2), one can gain substantial advantages in terms of computational efficiency and storage space. We prove the basic theoretical results and provide an unbiased estimator of the resemblance for any b. We demonstrate that, even in the least favorable scenario, using b=1 may reduce the storage space at least by a factor of 21.3 (or 10.7) compared to using b=64 (or b=32), if one is interested in resemblance > 0.5. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | true | 4,754 |
2112.03222 | On Complexity of 1-Center in Various Metrics | We consider the classic 1-center problem: Given a set $P$ of $n$ points in a metric space find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve 1-center problem in any of the $\ell_p$-metrics, or in edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for 1-center problem in edit metric (assuming Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in Ulam metric with running time $\tilde{O_{\varepsilon}}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms which list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 270,123 |
2204.08797 | Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation | Precise segmentation of teeth from intra-oral scanner images is an essential task in computer-aided orthodontic surgical planning. The state-of-the-art deep learning-based methods often simply concatenate the raw geometric attributes (i.e., coordinates and normal vectors) of mesh cells to train a single-stream network for automatic intra-oral scanner image segmentation. However, since different raw attributes reveal completely different geometric information, the naive concatenation of different raw attributes at the (low-level) input stage may bring unnecessary confusion in describing and differentiating between mesh cells, thus hampering the learning of high-level geometric representations for the segmentation task. To address this issue, we design a two-stream graph convolutional network (i.e., TSGCN), which can effectively handle inter-view confusion between different raw attributes to more effectively fuse their complementary information and learn discriminative multi-view geometric representations. Specifically, our TSGCN adopts two input-specific graph-learning streams to extract complementary high-level geometric representations from coordinates and normal vectors, respectively. Then, these single-view representations are further fused by a self-attention module to adaptively balance the contributions of different views in learning more discriminative multi-view representations for accurate and fully automatic tooth segmentation. We have evaluated our TSGCN on a real-patient dataset of dental (mesh) models acquired by 3D intraoral scanners. Experimental results show that our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation. Github: https://github.com/ZhangLingMing1/TSGCNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 292,214
2107.03337 | Samplets: A new paradigm for data compression | In this article, we introduce the concept of samplets by transferring the construction of Tausch-White wavelets to the realm of data. This way we obtain a multilevel representation of discrete data which directly enables data compression, detection of singularities and adaptivity. Applying samplets to represent kernel matrices, as they arise in kernel based learning or Gaussian process regression, we end up with quasi-sparse matrices. By thresholding small entries, these matrices are compressible to O(N log N) relevant entries, where N is the number of data points. This feature allows for the use of fill-in reducing reorderings to obtain a sparse factorization of the compressed matrices. Besides the comprehensive introduction to samplets and their properties, we present extensive numerical studies to benchmark the approach. Our results demonstrate that samplets mark a considerable step in the direction of making large data sets accessible for analysis. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 245,130 |
2107.06866 | Asynchronous games on Petri nets and ATL | We define a game on distributed Petri nets, where several players interact with each other, and with an environment. The players, or users, have perfect knowledge of the current state, and pursue a common goal. Such goal is expressed by Alternating-time Temporal Logic (ATL). The users have a winning strategy if they can cooperate to reach their goal, no matter how the environment behaves. We show that such a game can be translated into a game on concurrent game structures (introduced in order to give a semantics to ATL). We compare our game with the game on concurrent game structures and discuss the differences between the two approaches. Finally, we show that, when we consider memoryless strategies and a fragment of ATL, we can construct a concurrent game structure from the Petri net, such that an ATL formula is verified on the net if, and only if, it is verified on the game structure. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 246,220 |
1505.05286 | Measuring Visibility using Atmospheric Transmission and Digital Surface Model | Reliable and exact assessment of visibility is essential for safe air traffic. In order to overcome the drawbacks of the currently subjective reports from human observers, we present an approach to automatically derive visibility measures by means of image processing. It first exploits image based estimation of the atmospheric transmission describing the portion of the light that is not scattered by atmospheric phenomena (e.g., haze, fog, smoke) and reaches the camera. Once the atmospheric transmission is estimated, a 3D representation of the vicinity (digital surface model: DMS) is used to compute depth measurements for the haze-free pixels and then derive a global visibility estimation for the airport. Results on foggy images demonstrate the validity of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 43,285
2411.10845 | Automatic Discovery and Assessment of Interpretable Systematic Errors in Semantic Segmentation | This paper presents a novel method for discovering systematic errors in segmentation models. For instance, a systematic error in the segmentation model can be a sufficiently large number of misclassifications from the model as a parking meter for a target class of pedestrians. With the rapid deployment of these models in critical applications such as autonomous driving, it is vital to detect and interpret these systematic errors. However, the key challenge is automatically discovering such failures on unlabelled data and forming interpretable semantic sub-groups for intervention. For this, we leverage multimodal foundation models to retrieve errors and use conceptual linkage along with erroneous nature to study the systematic nature of these errors. We demonstrate that such errors are present in SOTA segmentation models (UperNet ConvNeXt and UperNet Swin) trained on the Berkeley Deep Drive and benchmark the approach qualitatively and quantitatively, showing its effectiveness by discovering coherent systematic errors for these models. Our work opens up the avenue to model analysis and intervention that have so far been underexplored in semantic segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 508,816
1008.2160 | An early warning method for crush | Fatal crush conditions occur in crowds with tragic frequency. Event organisers and architects are often criticised for failing to consider the causes and implications of crush, but the reality is that the prediction and mitigation of such conditions offers a significant technical challenge. Full treatment of physical force within crowd simulations is precise but computationally expensive; the more common method of human interpretation of results is computationally "cheap" but subjective and time-consuming. In this paper we propose an alternative method for the analysis of crowd behaviour, which uses information theory to measure crowd disorder. We show how this technique may be easily incorporated into an existing simulation framework, and validate it against an historical event. Our results show that this method offers an effective and efficient route towards automatic detection of crush. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 7,264 |
1401.3493 | Predicting the Performance of IDA* using Conditional Distributions | Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes IDA* will expand on a single iteration for a given consistent heuristic, and experimentally demonstrated that it could make very accurate predictions. In this paper we show that, in addition to requiring the heuristic to be consistent, their formula's predictions are accurate only at levels of the brute-force search tree where the heuristic values obey the unconditional distribution that they defined and then used in their formula. We then propose a new formula that works well without these requirements, i.e., it can make accurate predictions of IDA*'s performance for inconsistent heuristics and if the heuristic values in any level do not obey the unconditional distribution. In order to achieve this we introduce the conditional distribution of heuristic values, which is a generalization of their unconditional heuristic distribution. We also provide extensions of our formula that handle individual start states and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for propagating heuristic values when inconsistent heuristics are used. Experimental results demonstrate the accuracy of our new method and all its variations. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,897
0905.4369 | Automating Quantified Multimodal Logics in Simple Type Theory -- A Case Study | In a case study we investigate whether off the shelf higher-order theorem provers and model generators can be employed to automate reasoning in and about quantified multimodal logics. In our experiments we exploit the new TPTP infrastructure for classical higher-order logic. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 3,777
1611.01974 | Item-to-item recommendation based on Contextual Fisher Information | Web recommendation services bear great importance in e-commerce, as they aid the user in navigating through the items that are most relevant to her needs. In a typical Web site, long history of previous activities or purchases by the user is rarely available. Hence in most cases, recommenders propose items that are similar to the most recent ones viewed in the current user session. The corresponding task is called session based item-to-item recommendation. For frequent items, it is easy to present item-to-item recommendations by "people who viewed this, also viewed" lists. However, most of the items belong to the long tail, where previous actions are sparsely available. Another difficulty is the so-called cold start problem, when the item has recently appeared and had no time yet to accumulate sufficient number of transactions. In order to recommend a next item in a session in sparse or cold start situations, we also have to incorporate item similarity models. In this paper we describe a probabilistic similarity model based on Random Fields to approximate item-to-item transition probabilities. We give a generative model for the item interactions based on arbitrary distance measures over the items including explicit, implicit ratings and external metadata. The model may change in time to fit better recent events and recommend the next item based on the updated Fisher Information. Our new model outperforms both simple similarity baseline methods and recent item-to-item recommenders, under several different performance metrics and publicly available data sets. We reach significant gains in particular for recommending a new item following a rare item. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 63,483 |
1301.0560 | Generalized Instrumental Variables | This paper concerns the assessment of direct causal effects from a combination of: (i) non-experimental data, and (ii) qualitative domain knowledge. Domain knowledge is encoded in the form of a directed acyclic graph (DAG), in which all interactions are assumed linear, and some variables are presumed to be unobserved. We provide a generalization of the well-known method of Instrumental Variables, which allows its application to models with few conditional independences. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 20,742
2108.01624 | Large-Scale Differentially Private BERT | In this work, we study the large-scale pretraining of BERT-Large with differentially private SGD (DP-SGD). We show that combined with a careful implementation, scaling up the batch size to millions (i.e., mega-batches) improves the utility of the DP-SGD step for BERT; we also enhance its efficiency by using an increasing batch size schedule. Our implementation builds on the recent work of [SVK20], who demonstrated that the overhead of a DP-SGD step is minimized with effective use of JAX [BFH+18, FJL18] primitives in conjunction with the XLA compiler [XLA17]. Our implementation achieves a masked language model accuracy of 60.5% at a batch size of 2M, for $\epsilon = 5.36$. To put this number in perspective, non-private BERT models achieve an accuracy of $\sim$70%. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 249,082 |
2310.12072 | SPEED: Speculative Pipelined Execution for Efficient Decoding | Generative Large Language Models (LLMs) based on the Transformer architecture have recently emerged as a dominant foundation model for a wide range of Natural Language Processing tasks. Nevertheless, their application in real-time scenarios has been highly restricted due to the significant inference latency associated with these models. This is particularly pronounced due to the autoregressive nature of generative LLM inference, where tokens are generated sequentially since each token depends on all previous output tokens. It is therefore challenging to achieve any token-level parallelism, making inference extremely memory-bound. In this work, we propose SPEED, which improves inference efficiency by speculatively executing multiple future tokens in parallel with the current token using predicted values based on early-layer hidden states. For Transformer decoders that employ parameter sharing, the memory operations for the tokens executing in parallel can be amortized, which allows us to accelerate generative LLM inference. We demonstrate the efficiency of our method in terms of latency reduction relative to model accuracy and demonstrate how speculation allows for training deeper decoders with parameter sharing with minimal runtime overhead. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 400,893 |
2411.17042 | Conformalised Conditional Normalising Flows for Joint Prediction Regions in time series | Conformal Prediction offers a powerful framework for quantifying uncertainty in machine learning models, enabling the construction of prediction sets with finite-sample validity guarantees. While easily adaptable to non-probabilistic models, applying conformal prediction to probabilistic generative models, such as Normalising Flows is not straightforward. This work proposes a novel method to conformalise conditional normalising flows, specifically addressing the problem of obtaining prediction regions for multi-step time series forecasting. Our approach leverages the flexibility of normalising flows to generate potentially disjoint prediction regions, leading to improved predictive efficiency in the presence of potential multimodal predictive distributions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 511,280
2011.08816 | Answering Regular Path Queries Over SQ Ontologies | We study query answering in the description logic $\mathcal{SQ}$ supporting qualified number restrictions on both transitive and non-transitive roles. Our main contributions are a tree-like model property for $\mathcal{SQ}$ knowledge bases and, building upon this, an optimal automata-based algorithm for answering positive existential regular path queries in 2ExpTime. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 207,003 |
2110.03174 | Transferring Voice Knowledge for Acoustic Event Detection: An Empirical Study | Detection of common events and scenes from audio is useful for extracting and understanding human contexts in daily life. Prior studies have shown that leveraging knowledge from a relevant domain is beneficial for a target acoustic event detection (AED) process. Inspired by the observation that many human-centered acoustic events in daily life involve voice elements, this paper investigates the potential of transferring high-level voice representations extracted from a public speaker dataset to enrich an AED pipeline. Towards this end, we develop a dual-branch neural network architecture for the joint learning of voice and acoustic features during an AED process and conduct thorough empirical studies to examine the performance on the public AudioSet [1] with different types of inputs. Our main observations are that: 1) Joint learning of audio and voice inputs improves the AED performance (mean average precision) for both a CNN baseline (0.292 vs 0.134 mAP) and a TALNet [2] baseline (0.361 vs 0.351 mAP); 2) Augmenting the extra voice features is critical to maximize the model performance with dual inputs. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 259,396
0805.4620 | Uplink Macro Diversity of Limited Backhaul Cellular Network | In this work new achievable rates are derived, for the uplink channel of a cellular network with joint multicell processing, where unlike previous results, the ideal backhaul network has finite capacity per-cell. Namely, the cell sites are linked to the central joint processor via lossless links with finite capacity. The cellular network is abstracted by symmetric models, which render analytical treatment plausible. For this idealistic model family, achievable rates are presented for cell-sites that use compress-and-forward schemes combined with local decoding, for both Gaussian and fading channels. The rates are given in closed form for the classical Wyner model and the soft-handover model. These rates are then demonstrated to be rather close to the optimal unlimited backhaul joint processing rates, already for modest backhaul capacities, supporting the potential gain offered by the joint multicell processing approach. Particular attention is also given to the low-SNR characterization of these rates through which the effect of the limited backhaul network is explicitly revealed. In addition, the rate at which the backhaul capacity should scale in order to maintain the original high-SNR characterization of an unlimited backhaul capacity system is found. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,849 |
2211.11576 | Imputation of Missing Streamflow Data at Multiple Gauging Stations in Benin Republic | Streamflow observation data is vital for flood monitoring, agricultural, and settlement planning. However, such streamflow data are commonly plagued with missing observations due to various causes such as harsh environmental conditions and constrained operational resources. This problem is often more pervasive in under-resourced areas such as Sub-Saharan Africa. In this work, we reconstruct streamflow time series data through bias correction of the GEOGloWS ECMWF streamflow service (GESS) forecasts at ten river gauging stations in Benin Republic. We perform bias correction by fitting Quantile Mapping, Gaussian Process, and Elastic Net regression in a constrained training period. We show by simulating missingness in a testing period that GESS forecasts have a significant bias that results in low predictive skill over the ten Beninese stations. Our findings suggest that overall bias correction by Elastic Net and Gaussian Process regression achieves superior skill relative to traditional imputation by Random Forest, k-Nearest Neighbour, and GESS lookup. The findings of this work provide a basis for integrating global GESS streamflow data into operational early-warning decision-making systems (e.g., flood alert) in countries vulnerable to drought and flooding due to extreme weather events. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 331,788
1406.2125 | From XML Schema to JSON Schema: Translation with CHR | Despite its rising popularity as data format especially for web services, the software ecosystem around the JavaScript Object Notation (JSON) is not as widely distributed as that of XML. For both data formats there exist schema languages to specify the structure of instance documents, but there is currently no opportunity to translate already existing XML Schema documents into equivalent JSON Schemas. In this paper we introduce an implementation of a language translator. It takes an XML Schema and creates its equivalent JSON Schema document. Our approach is based on Prolog and CHR. By unfolding the XML Schema document into CHR constraints, it is possible to specify the concrete translation rules in a declarative way. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 33,714 |
2002.07897 | LocoGAN -- Locally Convolutional GAN | In the paper we construct a fully convolutional GAN model: LocoGAN, whose latent space is given by noise-like images of possibly different resolutions. The learning is local, i.e. we process not the whole noise-like image, but sub-images of a fixed size. As a consequence LocoGAN can produce images of arbitrary dimensions, e.g. for the LSUN bedroom data set. Another advantage of our approach comes from the fact that we use position channels, which allows the generation of fully periodic (e.g. cylindrical panoramic images) or almost periodic "infinitely long" images (e.g. wall-papers). | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 164,596
1602.02868 | Data-Driven Evaluation of Building Demand Response Capacity | Before a building can participate in a demand response program, its facility managers must characterize the site's ability to reduce load. Today, this is often done through manual audit processes and prototypical control strategies. In this paper, we propose a new approach to estimate a building's demand response capacity using detailed data from various sensors installed in a building. We derive a formula for a probabilistic measure that characterizes various tradeoffs between the available demand response capacity and the confidence level associated with that curtailment under the constraints of building occupant comfort level (or utility). Then, we develop a data-driven framework to associate observed or projected building energy consumption with a particular set of rules learned from a large sensor dataset. We apply this methodology using testbeds in two buildings in Singapore: a unique net-zero energy building and a modern commercial office building. Our experimental results identify key control parameters and provide insight into the available demand response strategies at each site. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 51,925 |
1507.04271 | Hybrid access control with modified SINR association for future heterogeneous networks | The heterogeneity in cellular networks that comprise multiple base stations types imposes new challenges in network planning and deployment of future generation of cellular networks. The Radio Resource Management (RRM) techniques, such as dynamic sharing of the available resources and advanced user association strategies, determine the overall network capacity and efficiency. This paper evaluates the downlink performance of a two tier heterogeneous network (consisting of macro and femto tiers) in terms of rate distribution, i.e. the percentage of users that achieve certain rate in the system. The paper specifically addresses the femto tier RRM by randomization of the allocated resources and the user association process by introducing a modified SINR association strategy with bias factor for load balancing. Also, the paper introduces hybrid access control mechanism at the femto tier that allows the authorized users of the femtocell, which are part of the Closed Subscriber Group (CSG) on the femtocell, to achieve higher data rates up to 10 times compared to the other regular users associated in the access. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 45,157
1901.08783 | Scalable solvers for complex electromagnetics problems | In this work, we present scalable balancing domain decomposition by constraints methods for linear systems arising from arbitrary order edge finite element discretizations of multi-material and heterogeneous 3D problems. In order to enforce the continuity across subdomains of the method, we use a partition of the interface objects (edges and faces) into sub-objects determined by the variation of the physical coefficients of the problem. For multi-material problems, a constant coefficient condition is enough to define this sub-partition of the objects. For arbitrarily heterogeneous problems, a relaxed version of the method is defined, where we only require that the maximal contrast of the physical coefficient in each object is smaller than a predefined threshold. Besides, the addition of perturbation terms to the preconditioner is empirically shown to be effective in order to deal with the case where the two coefficients of the model problem jump simultaneously across the interface. The new method, in contrast to existing approaches for problems in curl-conforming spaces does not require spectral information whilst providing robustness with regard to coefficient jumps and heterogeneous materials. A detailed set of numerical experiments, which includes the application of the preconditioner to 3D realistic cases, shows excellent weak scalability properties of the implementation of the proposed algorithms. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 119,579 |
2112.10735 | Successive Cancellation Ordered Search Decoding of Modified $\boldsymbol{G}_N$-Coset Codes | A tree search algorithm called successive cancellation ordered search (SCOS) is proposed for $\boldsymbol{G}_N$-coset codes that implements maximum-likelihood (ML) decoding with adaptive complexity for transmission over binary-input AWGN channels. Unlike bit-flip decoders, no outer code is needed to terminate decoding; therefore, SCOS also applies to $\boldsymbol{G}_N$-coset codes modified with dynamic frozen bits. The average complexity is close to that of successive cancellation (SC) decoding at practical frame error rates (FERs) for codes with wide ranges of rate and lengths up to $512$ bits, which perform within $0.25$ dB or less from the random coding union bound and outperform Reed--Muller codes under ML decoding by up to $0.5$ dB. Simulations illustrate simultaneous gains for SCOS over SC-Fano, SC stack (SCS) and SC list (SCL) decoding in FER and the average complexity at various SNR regimes. SCOS is further extended by forcing it to look for candidates satisfying a threshold, thereby outperforming basic SCOS under complexity constraints. The modified SCOS enables strong error-detection capability without the need for an outer code. In particular, the $(128, 64)$ polarization-adjusted convolutional code under modified SCOS provides gains in overall and undetected FER compared to CRC-aided polar codes under SCL/dynamic SC flip decoding at high SNR. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 272,513
2311.04449 | Recursion in Recursion: Two-Level Nested Recursion for Length Generalization with Scalability | Binary Balanced Tree RvNNs (BBT-RvNNs) enforce sequence composition according to a preset balanced binary tree structure. Thus, their non-linear recursion depth is just $\log_2 n$ ($n$ being the sequence length). Such logarithmic scaling makes BBT-RvNNs efficient and scalable on long sequence tasks such as Long Range Arena (LRA). However, such computational efficiency comes at a cost because BBT-RvNNs cannot solve simple arithmetic tasks like ListOps. On the flip side, RvNNs (e.g., Beam Tree RvNN) that do succeed on ListOps (and other structure-sensitive tasks like formal logical inference) are generally several times more expensive than even RNNs. In this paper, we introduce a novel framework -- Recursion in Recursion (RIR) to strike a balance between the two sides - getting some of the benefits from both worlds. In RIR, we use a form of two-level nested recursion - where the outer recursion is a $k$-ary balanced tree model with another recursive model (inner recursion) implementing its cell function. For the inner recursion, we choose Beam Tree RvNNs (BT-RvNN). To adjust BT-RvNNs within RIR we also propose a novel strategy of beam alignment. Overall, this entails that the total recursive depth in RIR is upper-bounded by $k \log_k n$. Our best RIR-based model is the first model that demonstrates high ($\geq 90\%$) length-generalization performance on ListOps while at the same time being scalable enough to be trainable on long sequence inputs from LRA. Moreover, in terms of accuracy in the LRA language tasks, it performs competitively with Structured State Space Models (SSMs) without any special initialization - outperforming Transformers by a large margin. On the other hand, while SSMs can marginally outperform RIR on LRA, they (SSMs) fail to length-generalize on ListOps. Our code is available at: \url{https://github.com/JRC1995/BeamRecursionFamily/}. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 406,224
2301.06223 | Learning-based Intelligent Surface Configuration, User Selection, Channel Allocation, and Modulation Adaptation for Jamming-resisting Multiuser OFDMA Systems | Reconfigurable intelligent surfaces (RISs) can potentially combat jamming attacks by diffusing jamming signals. This paper jointly optimizes user selection, channel allocation, modulation-coding, and RIS configuration in a multiuser OFDMA system under a jamming attack. This problem is non-trivial and has never been addressed, because of its mixed-integer programming nature and difficulties in acquiring channel state information (CSI) involving the RIS and jammer. We propose a new deep reinforcement learning (DRL)-based approach, which learns only through changes in the received data rates of the users to reject the jamming signals and maximize the sum rate of the system. The key idea is that we decouple the discrete selection of users, channels, and modulation-coding from the continuous RIS configuration, hence facilitating the RIS configuration with the latest twin delayed deep deterministic policy gradient (TD3) model. Another important aspect is that we show a winner-takes-all strategy is almost surely optimal for selecting the users, channels, and modulation-coding, given a learned RIS configuration. Simulations show that the new approach converges fast to fulfill the benefit of the RIS, due to its substantially small state and action spaces. Without the need of the CSI, the approach is promising and offers practical value. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 340,588
2309.08754 | Reproducible Domain-Specific Knowledge Graphs in the Life Sciences: a Systematic Literature Review | Knowledge graphs (KGs) are widely used for representing and organizing structured knowledge in diverse domains. However, the creation and upkeep of KGs pose substantial challenges. Developing a KG demands extensive expertise in data modeling, ontology design, and data curation. Furthermore, KGs are dynamic, requiring continuous updates and quality control to ensure accuracy and relevance. These intricacies contribute to the considerable effort required for their development and maintenance. One critical dimension of KGs that warrants attention is reproducibility. The ability to replicate and validate KGs is fundamental for ensuring the trustworthiness and sustainability of the knowledge they represent. Reproducible KGs not only support open science by allowing others to build upon existing knowledge but also enhance transparency and reliability in disseminating information. Despite the growing number of domain-specific KGs, a comprehensive analysis concerning their reproducibility has been lacking. This paper addresses this gap by offering a general overview of domain-specific KGs and comparing them based on various reproducibility criteria. Our study over 19 different domains shows only eight out of 250 domain-specific KGs (3.2%) provide publicly available source code. Among these, only one system could successfully pass our reproducibility assessment (14.3%). These findings highlight the challenges and gaps in achieving reproducibility across domain-specific KGs. Our finding that only 0.4% of published domain-specific KGs are reproducible shows a clear need for further research and a shift in cultural practices. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 392,308
1905.06292 | 3D Point Cloud Generative Adversarial Network Based on Tree Structured Graph Convolutions | In this paper, we propose a novel generative adversarial network (GAN) for 3D point clouds generation, which is called tree-GAN. To achieve state-of-the-art performance for multi-class 3D point cloud generation, a tree-structured graph convolution network (TreeGCN) is introduced as a generator for tree-GAN. Because TreeGCN performs graph convolutions within a tree, it can use ancestor information to boost the representation power for features. To evaluate GANs for 3D point clouds accurately, we develop a novel evaluation metric called Frechet point cloud distance (FPD). Experimental results demonstrate that the proposed tree-GAN outperforms state-of-the-art GANs in terms of both conventional metrics and FPD, and can generate point clouds for different semantic parts without prior knowledge. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 130,949
2501.06467 | Retrieval-Augmented Dialogue Knowledge Aggregation for Expressive Conversational Speech Synthesis | Conversational speech synthesis (CSS) aims to take the current dialogue (CD) history as a reference to synthesize expressive speech that aligns with the conversational style. Unlike CD, stored dialogue (SD) contains preserved dialogue fragments from earlier stages of user-agent interaction, which include style expression knowledge relevant to scenarios similar to those in CD. Note that this knowledge plays a significant role in enabling the agent to synthesize expressive conversational speech that generates empathetic feedback. However, prior research has overlooked this aspect. To address this issue, we propose a novel Retrieval-Augmented Dialogue Knowledge Aggregation scheme for expressive CSS, termed RADKA-CSS, which includes three main components: 1) To effectively retrieve dialogues from SD that are similar to CD in terms of both semantic and style. First, we build a stored dialogue semantic-style database (SDSSD) which includes the text and audio samples. Then, we design a multi-attribute retrieval scheme to match the dialogue semantic and style vectors of the CD with the stored dialogue semantic and style vectors in the SDSSD, retrieving the most similar dialogues. 2) To effectively utilize the style knowledge from CD and SD, we propose adopting the multi-granularity graph structure to encode the dialogue and introducing a multi-source style knowledge aggregation mechanism. 3) Finally, the aggregated style knowledge are fed into the speech synthesizer to help the agent synthesize expressive speech that aligns with the conversational style. We conducted a comprehensive and in-depth experiment based on the DailyTalk dataset, which is a benchmarking dataset for the CSS task. Both objective and subjective evaluations demonstrate that RADKA-CSS outperforms baseline models in expressiveness rendering. Code and audio samples can be found at: https://github.com/Coder-jzq/RADKA-CSS. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 523,994
2202.09171 | Linearization and Identification of Multiple-Attractor Dynamical Systems through Laplacian Eigenmaps | Dynamical Systems (DS) are fundamental to the modeling and understanding time evolving phenomena, and have application in physics, biology and control. As determining an analytical description of the dynamics is often difficult, data-driven approaches are preferred for identifying and controlling nonlinear DS with multiple equilibrium points. Identification of such DS has been treated largely as a supervised learning problem. Instead, we focus on an unsupervised learning scenario where we know neither the number nor the type of dynamics. We propose a Graph-based spectral clustering method that takes advantage of a velocity-augmented kernel to connect data points belonging to the same dynamics, while preserving the natural temporal evolution. We study the eigenvectors and eigenvalues of the Graph Laplacian and show that they form a set of orthogonal embedding spaces, one for each sub-dynamics. We prove that there always exist a set of 2-dimensional embedding spaces in which the sub-dynamics are linear and n-dimensional embedding spaces where they are quasi-linear. We compare the clustering performance of our algorithm to Kernel K-Means, Spectral Clustering and Gaussian Mixtures and show that, even when these algorithms are provided with the correct number of sub-dynamics, they fail to cluster them correctly. We learn a diffeomorphism from the Laplacian embedding space to the original space and show that the Laplacian embedding leads to good reconstruction accuracy and a faster training time through an exponential decaying loss compared to the state-of-the-art diffeomorphism-based approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 281,111
2412.17707 | SMAC-Hard: Enabling Mixed Opponent Strategy Script and Self-play on SMAC | The availability of challenging simulation environments is pivotal for advancing the field of Multi-Agent Reinforcement Learning (MARL). In cooperative MARL settings, the StarCraft Multi-Agent Challenge (SMAC) has gained prominence as a benchmark for algorithms following centralized training with decentralized execution paradigm. However, with continual advancements in SMAC, many algorithms now exhibit near-optimal performance, complicating the evaluation of their true effectiveness. To alleviate this problem, in this work, we highlight a critical issue: the default opponent policy in these environments lacks sufficient diversity, leading MARL algorithms to overfit and exploit unintended vulnerabilities rather than learning robust strategies. To overcome these limitations, we propose SMAC-HARD, a novel benchmark designed to enhance training robustness and evaluation comprehensiveness. SMAC-HARD supports customizable opponent strategies, randomization of adversarial policies, and interfaces for MARL self-play, enabling agents to generalize to varying opponent behaviors and improve model stability. Furthermore, we introduce a black-box testing framework wherein agents are trained without exposure to the edited opponent scripts but are tested against these scripts to evaluate the policy coverage and adaptability of MARL algorithms. We conduct extensive evaluations of widely used and state-of-the-art algorithms on SMAC-HARD, revealing the substantial challenges posed by edited and mixed strategy opponents. Additionally, the black-box strategy tests illustrate the difficulty of transferring learned policies to unseen adversaries. We envision SMAC-HARD as a critical step toward benchmarking the next generation of MARL algorithms, fostering progress in self-play methods for multi-agent systems. Our code is available at https://github.com/devindeng94/smac-hard. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 520,084
2408.15417 | Implicit Geometry of Next-token Prediction: From Language Sparsity Patterns to Model Representations | Next-token prediction (NTP) over large text corpora has become the go-to paradigm to train large language models. Yet, it remains unclear how NTP influences the mapping of linguistic patterns to geometric properties of the resulting model representations. We frame training of large language models as soft-label classification over sparse probabilistic label vectors, coupled with an analytical approximation that allows unrestricted generation of context embeddings. This approach links NTP training to rank-constrained, nuclear-norm regularized optimization in the logit domain, offering a framework for analyzing the geometry of word and context embeddings. In large embedding spaces, we find that NTP implicitly favors learning logits with a sparse plus low-rank structure. While the sparse component captures the co-occurrence frequency of context-word pairs, the orthogonal low-rank component, which becomes dominant as training progresses, depends solely on the sparsity pattern of the co-occurrence matrix. Consequently, when projected onto an appropriate subspace, representations of contexts that are followed by the same set of next-tokens collapse, a phenomenon we term subspace-collapse. We validate our findings on synthetic and small-scale real language datasets. Finally, we outline potential research directions aimed at deepening the understanding of NTP's influence on the learning of linguistic patterns and regularities. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 483,930
2404.04559 | Spectral GNN via Two-dimensional (2-D) Graph Convolution | Spectral Graph Neural Networks (GNNs) have achieved tremendous success in graph learning. As an essential part of spectral GNNs, spectral graph convolution extracts crucial frequency information in graph data, leading to superior performance of spectral GNNs in downstream tasks. However, in this paper, we show that existing spectral GNNs remain critical drawbacks in performing the spectral graph convolution. Specifically, considering the spectral graph convolution as a construction operation towards target output, we prove that existing popular convolution paradigms cannot construct the target output with mild conditions on input graph signals, causing spectral GNNs to fall into suboptimal solutions. To address the issues, we rethink the spectral graph convolution from a more general two-dimensional (2-D) signal convolution perspective and propose a new convolution paradigm, named 2-D graph convolution. We prove that 2-D graph convolution unifies existing graph convolution paradigms, and is capable to construct arbitrary target output. Based on the proposed 2-D graph convolution, we further propose ChebNet2D, an efficient and effective GNN implementation of 2-D graph convolution through applying Chebyshev interpolation. Extensive experiments on benchmark datasets demonstrate both effectiveness and efficiency of the ChebNet2D. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 444,701 |
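The rows above follow the schema given in the file header: an arXiv id, a title, an abstract, eighteen boolean category columns (cs.HC through cs.DB, plus Other), and a trailing index. As a minimal sketch of how one might consume such a row — the `parse_row` helper, the sample row, and the assumption that abstracts contain no literal `|` characters are illustrative and not part of the dataset release:

```python
# Column order taken from the dataset header; the parsing helper itself
# is hypothetical and assumes no unescaped '|' inside the abstract field.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def parse_row(line: str) -> dict:
    """Split one pipe-delimited row into id, title, abstract, labels, index."""
    parts = [p.strip() for p in line.split("|")]
    flags = [p == "true" for p in parts[3:3 + len(LABEL_COLUMNS)]]
    return {
        "id": parts[0],
        "title": parts[1],
        "abstract": parts[2],
        "labels": [col for col, on in zip(LABEL_COLUMNS, flags) if on],
        "index": int(parts[-1].replace(",", "")),  # e.g. "130,949" -> 130949
    }

# A shortened sample row in the same shape as the table above
# (true in the 12th label slot, i.e. cs.CV).
flags_str = " | ".join(["false"] * 11 + ["true"] + ["false"] * 6)
row = "1905.06292 | 3D Point Cloud GAN | A sample abstract. | " + flags_str + " | 130,949"
parsed = parse_row(row)  # parsed["labels"] == ["cs.CV"]
```

Since rows are multi-label in principle, `labels` is a list rather than a single category; a row with every flag false would simply yield an empty list.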