id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1312.4314 | Learning Factored Representations in a Deep Mixture of Experts | Mixtures of Experts combine the outputs of several "expert" networks, each of which specializes in a different part of the input space. This is achieved by training a "gating" network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent ("where") experts at the first layer, and class-specific ("what") experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 29,121 |
2305.00365 | A Transfer Learning Approach to Minimize Reinforcement Learning Risks in Energy Optimization for Smart Buildings | Energy optimization leveraging artificially intelligent algorithms has been proven effective. However, when buildings are commissioned, there is no historical data that could be used to train these algorithms. On-line Reinforcement Learning (RL) algorithms have shown significant promise, but their deployment carries a significant risk, because as the RL agent initially explores its action space it could cause significant discomfort to the building residents. In this paper we present ReLBOT - a new technique that uses transfer learning in conjunction with deep RL to transfer knowledge from an existing, optimized and instrumented building, to the newly commissioned smart building, to reduce the adverse impact of the reinforcement learning agent's warm-up period. We demonstrate improvements of up to 6.2 times in the duration, and up to 132 times in prediction variance, for the reinforcement learning agent's warm-up period. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 361,311 |
0903.1680 | Faceted Exploration of Emerging Resource Spaces | Humans have the ability to recognize the real world from different facets. Faceted exploration is a mechanism for browsing and understanding large-scale resources in an information network through multiple facets. This paper proposes an Emerging Resource Space Model, whose schema is a partially ordered set of concepts with a subclassOf relation, and in which each resource is categorized by multiple concepts. An Emerging Resource Space (ERS) is a class of resources characterized by a concept set. ERSes compose a lattice (ERSL) via concept association. A series of exploration operations is proposed to guide users to explore through ERSL with more demanding and richer semantics than current faceted navigation. To fulfill instant response during faceted exploration, we devise an efficient algorithm for mining and indexing ERSL. The proposed model can effectively support faceted exploration in various applications, from personal information management to large-scale information sharing. | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 3,320 |
2303.14884 | A large-scale dataset for end-to-end table recognition in the wild | Table recognition (TR) is one of the research hotspots in pattern recognition, which aims to extract information from tables in an image. Common table recognition tasks include table detection (TD), table structure recognition (TSR) and table content recognition (TCR). TD locates tables in the image, TCR recognizes text content, and TSR recognizes the spatial logical structure. Currently, end-to-end TR in real scenarios, accomplishing the three sub-tasks simultaneously, is yet an unexplored research area. One major factor that inhibits researchers is the lack of a benchmark dataset. To this end, we propose a new large-scale dataset named Table Recognition Set (TabRecSet) with diverse table forms sourced from multiple scenarios in the wild, providing complete annotation dedicated to end-to-end TR research. It is the first and largest bi-lingual dataset for end-to-end TR, with 38.1K tables, of which 20.4K are in English and 17.7K are in Chinese. The samples have diverse forms, such as border-complete and -incomplete tables, and regular and irregular tables (rotated, distorted, etc.). The scenarios are varied, ranging from scanned to camera-taken images, documents to Excel tables, and educational test papers to financial invoices. The annotations are complete, consisting of the table body spatial annotation, cell spatial and logical annotation, and text content for TD, TSR and TCR, respectively. The spatial annotation utilizes polygons instead of the bounding boxes or quadrilaterals adopted by most datasets. Polygon spatial annotation is more suitable for the irregular tables that are common in wild scenarios. Additionally, we propose a visualized and interactive annotation tool named TableMe to improve the efficiency and quality of table annotation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,279 |
1512.06697 | Random Tessellations, Restricted Isometric Embeddings, and One Bit Sensing | We obtain improved bounds for one bit sensing. For instance, let $ K_s$ denote the set of $ s$-sparse unit vectors in the sphere $ \mathbb S ^{n}$ in dimension $ n+1$ with sparsity parameter $ 0 < s < n+1$ and assume that $ 0 < \delta < 1$. We show that for $ m \gtrsim \delta ^{-2} s \log \frac ns$, the one-bit map $$ x \mapsto \bigl[ {sgn} \langle x,g_j \rangle \bigr] _{j=1} ^{m}, $$ where $ g_j$ are iid gaussian vectors on $ \mathbb R ^{n+1}$, with high probability has $ \delta $-RIP from $ K_s$ into the $ m$-dimensional Hamming cube. These bounds match the bounds for the {linear} $ \delta $-RIP given by $ x \mapsto \frac 1m[\langle x,g_j \rangle ] _{j=1} ^{m} $, from the sparse vectors in $ \mathbb R ^{n}$ into $ \ell ^{1}$. In other words, the one bit and linear RIPs are equally effective. There are corresponding improvements for other one-bit properties, such as the sign-product RIP property. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 50,345 |
1712.02903 | Blind Multiclass Ensemble Classification | The rising interest in pattern recognition and data analytics has spurred the development of innovative machine learning algorithms and tools. However, as each algorithm has its strengths and limitations, one is motivated to judiciously fuse multiple algorithms in order to find the "best"-performing one for a given dataset. Ensemble learning aims at such a high-performance meta-algorithm by combining the outputs from multiple algorithms. The present work introduces a blind scheme for learning from ensembles of classifiers, using a moment matching method that leverages joint tensor and matrix factorization. Blind refers to the combiner having no knowledge of the ground-truth labels that each classifier has been trained on. A rigorous performance analysis is derived and the proposed scheme is evaluated on synthetic and real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 86,363 |
2206.03942 | Structure-Preserving Model Order Reduction for Index Two Port-Hamiltonian Descriptor Systems | We present a new optimization-based structure-preserving model order reduction (MOR) method for port-Hamiltonian descriptor systems (pH-DAEs) with differentiation index two. Our method is based on a novel parameterization that allows us to represent any linear time-invariant pH-DAE with a minimal number of parameters, which makes it well-suited to model reduction. We propose two algorithms which directly optimize the parameters of a reduced model to approximate a given large-scale model with respect to either the H-infinity or the H-2 norm. This approach has several benefits. Our parameterization ensures that the reduced model is again a pH-DAE system and enables a compact representation of the algebraic part of the large-scale model, which in projection-based methods often requires a more involved treatment. The direct optimization is entirely based on transfer function evaluations of the large-scale model and is therefore independent of the system matrices' structure. Numerical experiments are conducted to illustrate the high accuracy and small reduced model orders in comparison to other structure-preserving MOR methods. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 301,460 |
1912.04617 | Second-Order Non-Convex Optimization for Constrained Fixed-Structure Static Output Feedback Controller Synthesis | For linear time-invariant (LTI) systems, the design of an optimal controller is a commonly encountered problem in many applications. Among all the optimization approaches available, the linear quadratic regulator (LQR) methodology certainly garners much attention and interest. As is well-known, standard numerical tools in linear algebra are readily available which enable the determination of the optimal static LQR feedback gain matrix when all the system state variables are measurable. However, in certain scenarios where some of the system state variables are not measurable and prescribed structural constraints on the controller structure consequently arise, the optimization problem can become intractable due to the non-convexity characteristics that can then be present. In such cases, there have been some first-order methods proposed to cater to these problems, but all of these first-order optimization methods, if at all successful, are limited to only linear convergence. To speed up the convergence, a second-order approach in the matrix space is essential, with appropriate methodology to solve the linear equality constrained static output feedback (SOF) problem with a suitably defined linear quadratic cost function. Thus along this line, in this work, an efficient method is proposed in the matrix space to calculate the Hessian matrix by solving several Lyapunov equations. Then a new optimization technique is applied to deal with the indefiniteness of the Hessian matrix. Subsequently, through Newton's method with linear equality constraints, a second-order optimization algorithm is developed to effectively solve the constrained SOF LQR problem. Finally, two numerical examples are described which demonstrate the applicability and effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 156,890 |
2502.11949 | Massively Scaling Explicit Policy-conditioned Value Functions | We introduce a scaling strategy for Explicit Policy-Conditioned Value Functions (EPVFs) that significantly improves performance on challenging continuous-control tasks. EPVFs learn a value function V({\theta}) that is explicitly conditioned on the policy parameters, enabling direct gradient-based updates to the parameters of any policy. However, EPVFs at scale struggle with unrestricted parameter growth and efficient exploration in the policy parameter space. To address these issues, we utilize massive parallelization with GPU-based simulators, large batch sizes, weight clipping and scaled perturbations. Our results show that EPVFs can be scaled to solve complex tasks, such as a custom Ant environment, and can compete with state-of-the-art Deep Reinforcement Learning (DRL) baselines like Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC). We further explore action-based policy parameter representations from previous work and specialized neural network architectures to efficiently handle weight-space features, which have not been used in the context of DRL before. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 534,617 |
2403.17104 | Attribute First, then Generate: Locally-attributable Grounded Text Generation | Recent efforts to address hallucinations in Large Language Models (LLMs) have focused on attributed text generation, which supplements generated texts with citations of supporting sources for post-generation fact-checking and corrections. Yet, these citations often point to entire documents or paragraphs, burdening users with extensive verification work. In this paper, we introduce a locally-attributable text generation approach, prioritizing concise attributions. Our method, named "Attribute First, then Generate", breaks down the conventional end-to-end generation process into three intuitive steps: content selection, sentence planning, and sequential sentence generation. By initially identifying relevant source segments ("select first") and then conditioning the generation process on them ("then generate"), we ensure these segments also act as the output's fine-grained attributions ("select" becomes "attribute"). Tested on Multi-document Summarization and Long-form Question-answering, our method not only yields more concise citations than the baselines but also maintains - and in some cases enhances - both generation quality and attribution accuracy. Furthermore, it significantly reduces the time required for fact verification by human assessors. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 441,313 |
2410.02078 | Posterior sampling via Langevin dynamics based on generative priors | Posterior sampling in high-dimensional spaces using generative models holds significant promise for various applications, including but not limited to inverse problems and guided generation tasks. Despite many recent developments, generating diverse posterior samples remains a challenge, as existing methods require restarting the entire generative process for each new sample, making the procedure computationally expensive. In this work, we propose efficient posterior sampling by simulating Langevin dynamics in the noise space of a pre-trained generative model. By exploiting the mapping between the noise and data spaces which can be provided by distilled flows or consistency models, our method enables seamless exploration of the posterior without the need to re-run the full sampling chain, drastically reducing computational overhead. Theoretically, we prove a guarantee for the proposed noise-space Langevin dynamics to approximate the posterior, assuming that the generative model sufficiently approximates the prior distribution. Our framework is experimentally validated on image restoration tasks involving noisy linear and nonlinear forward operators applied to LSUN-Bedroom (256 x 256) and ImageNet (64 x 64) datasets. The results demonstrate that our approach generates high-fidelity samples with enhanced semantic diversity even under a limited number of function evaluations, offering superior efficiency and performance compared to existing diffusion-based posterior sampling techniques. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 494,089 |
2501.12728 | A Call for Critically Rethinking and Reforming Data Analysis in Empirical Software Engineering | Context: Empirical Software Engineering (ESE) drives innovation in SE through qualitative and quantitative studies. However, concerns about the correct application of empirical methodologies have existed since the 2006 Dagstuhl seminar on SE. Objective: To analyze three decades of SE research, identify mistakes in statistical methods, and evaluate experts' ability to detect and address these issues. Methods: We conducted a literature survey of ~27,000 empirical studies, using LLMs to classify statistical methodologies as adequate or inadequate. Additionally, we selected 30 primary studies and held a workshop with 33 ESE experts to assess their ability to identify and resolve statistical issues. Results: Significant statistical issues were found in the primary studies, and experts showed limited ability to detect and correct these methodological problems, raising concerns about the broader ESE community's proficiency in this area. Conclusions: Despite its limitations, our study sheds light on recurring issues stemming from the copy-and-paste reuse of past authors' methods and the continued publication of inadequate approaches, which promote dubious results and hinder the spread of correct statistical strategies among researchers. It also justifies further investigation into empirical rigor in software engineering, to expose these recurring issues and to establish a framework for reassessing our field's foundation of statistical methodology application. Therefore, this work calls for critically rethinking and reforming data analysis in empirical software engineering, paving the way for future work. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 526,415 |
1904.11024 | Phonetically-Oriented Word Error Alignment for Speech Recognition Error Analysis in Speech Translation | We propose a variation to the commonly used Word Error Rate (WER) metric for speech recognition evaluation which incorporates the alignment of phonemes, in the absence of time boundary information. After computing the Levenshtein alignment on words in the reference and hypothesis transcripts, spans of adjacent errors are converted into phonemes with word and syllable boundaries and a phonetic Levenshtein alignment is performed. The aligned phonemes are recombined into aligned words that adjust the word alignment labels in each error region. We demonstrate that our Phonetically-Oriented Word Error Rate (POWER) yields similar scores to WER with the added advantages of better word alignments and the ability to capture one-to-many word alignments corresponding to homophonic errors in speech recognition hypotheses. These improved alignments allow us to better trace the impact of Levenshtein error types on downstream tasks such as speech translation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 128,761 |
1805.03081 | Active Object Reconstruction Using a Guided View Planner | Inspired by the recent advance of image-based object reconstruction using deep learning, we present an active reconstruction model using a guided view planner. We aim to reconstruct a 3D model using images observed from a planned sequence of informative and discriminative views. But where are such informative and discriminative views around an object? To address this we propose a unified model for view planning and object reconstruction, which is utilized to learn a guided information acquisition model and to aggregate information from a sequence of images for reconstruction. Experiments show that our model (1) increases reconstruction accuracy with an increasing number of views and (2) generally predicts a more informative sequence of views for object reconstruction compared to other alternative methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 96,978 |
0808.0235 | DMT of Multi-hop Cooperative Networks - Part II: Half-Duplex Networks with Full-Duplex Performance | We consider single-source single-sink (ss-ss) multi-hop relay networks, with slow-fading links and single-antenna half-duplex relay nodes. In a companion paper, we established some basic results which laid the foundation for the results presented here. In the present paper, we consider two families of half-duplex networks. KPP networks may be viewed as the union of K node-disjoint parallel relaying paths. Generalizations of these networks include KPP(I) networks, which permit interference between paths, and KPP(D) networks, which possess a direct link between source and sink. We characterize the DMT of these families of networks completely and show that they can achieve the cut-set bound, thus proving that full-duplex performance can be obtained even in the presence of the half-duplex constraint. We then consider layered networks, and prove that a linear DMT between maximum diversity and maximum multiplexing gain is achievable. All protocols in this paper are explicit and use only amplify-and-forward relaying. We also construct codes that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks and that AF protocols are often optimal. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,153 |
2205.04718 | Integrating Parcel Deliveries into a Ride-Pooling Service -- An Agent-Based Simulation Study | This paper examines the integration of freight delivery into the passenger transport of an on-demand ride-pooling service. The goal of this research is to use existing passenger trips for logistics services and thus reduce additional vehicle kilometers for freight delivery and the total number of vehicles on the road network. This is achieved by merging the need for two separate fleets into a single one by combining the services. To evaluate the potential of such a mobility-on-demand service, this paper uses an agent-based simulation framework and integrates three heuristic parcel assignment strategies into a ride-pooling fleet control algorithm. Two integration scenarios (moderate and full) are set up. While in both scenarios passengers and parcels share rides in one vehicle, in the moderate scenario no stops for parcel pick-up and delivery are allowed during a passenger ride, to decrease customer inconvenience. Using real-world demand data for a case study of Munich, Germany, the two integration scenarios together with the three assignment strategies are compared to the status quo, which uses two separate vehicle fleets for passenger and logistics transport. The results indicate that the integration of logistics services into a ride-pooling service is possible and can exploit unused system capacities without deteriorating passenger transport. Depending on the assignment strategy, nearly all parcels can be served up to a parcel-to-passenger demand ratio of 1:10, while the overall fleet kilometers can be decreased compared to the status quo. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 295,729 |
2407.07630 | A Review of the Challenges with Massive Web-mined Corpora Used in Large Language Models Pre-Training | This article presents a comprehensive review of the challenges associated with using massive web-mined corpora for the pre-training of large language models (LLMs). This review identifies key challenges in this domain, including noise (irrelevant or misleading information), duplication of content, the presence of low-quality or incorrect information, biases, and the inclusion of sensitive or personal information in web-mined corpora. Addressing these issues is crucial for the development of accurate, reliable, and ethically responsible language models. Through an examination of current methodologies for data cleaning, pre-processing, bias detection and mitigation, we highlight the gaps in existing approaches and suggest directions for future research. Our discussion aims to catalyze advancements in developing more sophisticated and ethically responsible LLMs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 471,840 |
2412.18981 | HAND: Hierarchical Attention Network for Multi-Scale Handwritten Document Recognition and Layout Analysis | Handwritten document recognition (HDR) is one of the most challenging tasks in the field of computer vision, due to the various writing styles and complex layouts inherent in handwritten texts. Traditionally, this problem has been approached as two separate tasks, handwritten text recognition and layout analysis, and approaches have struggled to integrate the two processes effectively. This paper introduces HAND (Hierarchical Attention Network for Multi-Scale Document), a novel end-to-end and segmentation-free architecture for simultaneous text recognition and layout analysis tasks. Our model's key components include an advanced convolutional encoder integrating Gated Depth-wise Separable and Octave Convolutions for robust feature extraction, a Multi-Scale Adaptive Processing (MSAP) framework that dynamically adjusts to document complexity, and a hierarchical attention decoder with memory-augmented and sparse attention mechanisms. These components enable our model to scale effectively from single-line to triple-column pages while maintaining computational efficiency. Additionally, HAND adopts curriculum learning across five complexity levels. To improve the recognition accuracy of complex ancient manuscripts, we fine-tune and integrate a Domain-Adaptive Pre-trained mT5 model for post-processing refinement. Extensive evaluations on the READ 2016 dataset demonstrate the superior performance of HAND, achieving up to 59.8% reduction in CER for line-level recognition and 31.2% for page-level recognition compared to state-of-the-art methods. The model also maintains a compact size of 5.60M parameters while establishing new benchmarks in both text recognition and layout analysis. Source code and pre-trained models are available at: https://github.com/MHHamdan/HAND. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 520,663 |
2501.14587 | Visual Localization via Semantic Structures in Autonomous Photovoltaic Power Plant Inspection | Inspection systems utilizing unmanned aerial vehicles (UAVs) equipped with thermal cameras are increasingly popular for the maintenance of photovoltaic (PV) power plants. However, automation of the inspection task is a challenging problem as it requires precise navigation to capture images from optimal distances and viewing angles. This paper presents a novel localization pipeline that directly integrates PV module detection with UAV navigation, allowing precise positioning during inspection. Detections are used to identify the power plant structures in the image and associate these with the power plant model. We define visually recognizable anchor points for the initial association and use object tracking to discern global associations. We present three distinct methods for visual segmentation of PV modules based on traditional computer vision, deep learning, and their fusion, and we evaluate their performance in relation to the proposed localization pipeline. The presented methods were verified and evaluated using custom aerial inspection data sets, demonstrating their robustness and applicability for real-time navigation. Additionally, we evaluate the influence of the power plant model's precision on the localization methods. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 527,176 |
2405.12774 | Blind Separation of Vibration Sources using Deep Learning and Deconvolution | Vibrations of rotating machinery primarily originate from two sources, both of which are distorted by the machine's transfer function on their way to the sensor: the dominant gear-related vibrations and a low-energy signal linked to bearing faults. The proposed method facilitates the blind separation of vibration sources, eliminating the need for any information about the monitored equipment or external measurements. This method estimates both sources in two stages: initially, the gear signal is isolated using a dilated CNN, followed by the estimation of the bearing fault signal using the squared log envelope of the residual. The effect of the transfer function is removed from both sources using a novel whitening-based deconvolution method (WBD). Both simulation and experimental results demonstrate the method's ability to detect bearing failures early when no additional information is available. This study considers both local and distributed bearing faults, assuming that the vibrations are recorded under stable operating conditions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 455,647 |
2102.02436 | Lower Bound on the Optimal Access Bandwidth of ($K+2,K,2$)-MDS Array Code with Degraded Read Friendly | Accessing the data in a failed disk (degraded read) with low latency is crucial for an erasure-coded storage system. In this work, maximum distance separable (MDS) array codes with the degraded-read friendly (DRF) property are discussed. For DRF MDS array codes with 2 redundant nodes and a sub-packetization level of 2, a lower bound on the access bandwidth is derived. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 218,409 |
2309.03317 | Sub-Array Selection in Full-Duplex Massive MIMO for Enhanced Self-Interference Suppression | This study considers a novel full-duplex (FD) massive multiple-input multiple-output (mMIMO) system using a hybrid beamforming (HBF) architecture, which allows for simultaneous uplink (UL) and downlink (DL) transmission over the same frequency band. Particularly, our objective is to mitigate the strong self-interference (SI) solely through the design of the UL and DL RF beamforming stages, jointly with sub-array selection (SAS) for the transmit (Tx) and receive (Rx) sub-arrays at the base station (BS). Based on the measured SI channel in an anechoic chamber, we propose a min-SI beamforming scheme with SAS, which applies perturbations to the beam directivity to enhance SI suppression in UL and DL beam directions. To solve this challenging nonconvex optimization problem, we propose a swarm intelligence-based algorithmic solution to find the optimal perturbations as well as the Tx and Rx sub-arrays that minimize SI subject to directivity degradation constraints for the UL and DL beams. The results show that the proposed min-SI BF scheme can achieve SI suppression as high as 78 dB in FD mMIMO systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 390,341 |
1412.6060 | Combinatorial Structure of the Deterministic Seriation Method with
Multiple Subset Solutions | Seriation methods order a set of descriptions given some criterion (e.g., unimodality or minimum distance between similarity scores). Seriation is thus inherently a problem of finding the optimal solution among a set of permutations of objects. In this short technical note, we review the combinatorial structure of the classical seriation problem, which seeks a single solution out of a set of objects. We then extend those results to the iterative frequency seriation approach introduced by Lipo (1997), which finds optimal subsets of objects, each of which satisfies the unimodality criterion. The number of possible solutions across multiple solution subsets is larger than $n!$, which underscores the need to find new algorithms and heuristics to assist in the deterministic frequency seriation problem. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 38,556 |
1810.04058 | A Distributed Reinforcement Learning Solution With Knowledge Transfer
Capability for A Bike Rebalancing Problem | Rebalancing is a critical service bottleneck for many transportation services, such as Citi Bike. Citi Bike relies on manual orchestrations of rebalancing bikes between dispatchers and field agents. Motivated by this problem and the lack of smart autonomous solutions in this area, this project explored a new RL architecture called Distributed RL (DiRL) with Transfer Learning (TL) capability. The DiRL solution is adaptive to changing traffic dynamics while keeping bike stock under control at the minimum cost. DiRL achieved a 350% improvement in bike rebalancing autonomously and TL offered a 62.4% performance boost in managing an entire bike network. Lastly, a field trip to the dispatch office of Chariot, a ride-sharing service, provided insights to overcome challenges of deploying an RL solution in the real world. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 109,960 |
2502.03670 | Chaos into Order: Neural Framework for Expected Value Estimation of
Stochastic Partial Differential Equations | Stochastic Partial Differential Equations (SPDEs) are fundamental to modeling complex systems in physics, finance, and engineering, yet their numerical estimation remains a formidable challenge. Traditional methods rely on discretization, introducing computational inefficiencies, and limiting applicability in high-dimensional settings. In this work, we introduce a novel neural framework for SPDE estimation that eliminates the need for discretization, enabling direct estimation of expected values across arbitrary spatio-temporal points. We develop and compare two distinct neural architectures: Loss Enforced Conditions (LEC), which integrates physical constraints into the loss function, and Model Enforced Conditions (MEC), which embeds these constraints directly into the network structure. Through extensive experiments on the stochastic heat equation, Burgers' equation, and Kardar-Parisi-Zhang (KPZ) equation, we reveal a trade-off: While LEC achieves superior residual minimization and generalization, MEC enforces initial conditions with absolute precision and exceptionally high accuracy in boundary condition enforcement. Our findings highlight the immense potential of neural-based SPDE solvers, particularly for high-dimensional problems where conventional techniques falter. By circumventing discretization and explicitly modeling uncertainty, our approach opens new avenues for solving SPDEs in fields ranging from quantitative finance to turbulence modeling. To the best of our knowledge, this is the first neural framework capable of directly estimating the expected values of SPDEs in an entirely non-discretized manner, offering a step forward in scientific computing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 530,808 |
2204.02246 | Continuously Discovering Novel Strategies via Reward-Switching Policy
Optimization | We present Reward-Switching Policy Optimization (RSPO), a paradigm to discover diverse strategies in complex RL environments by iteratively finding novel policies that are both locally optimal and sufficiently different from existing ones. To encourage the learning policy to consistently converge towards a previously undiscovered local optimum, RSPO switches between extrinsic and intrinsic rewards via a trajectory-based novelty measurement during the optimization process. When a sampled trajectory is sufficiently distinct, RSPO performs standard policy optimization with extrinsic rewards. For trajectories with high likelihood under existing policies, RSPO utilizes an intrinsic diversity reward to promote exploration. Experiments show that RSPO is able to discover a wide spectrum of strategies in a variety of domains, ranging from single-agent particle-world tasks and MuJoCo continuous control to multi-agent stag-hunt games and StarCraftII challenges. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 289,870 |
2403.09069 | Dyadic Interaction Modeling for Social Behavior Generation | Human-human communication is like a delicate dance where listeners and speakers concurrently interact to maintain conversational dynamics. Hence, an effective model for generating listener nonverbal behaviors requires understanding the dyadic context and interaction. In this paper, we present an effective framework for creating 3D facial motions in dyadic interactions. Existing work considers a listener as a reactive agent with reflexive behaviors to the speaker's voice and facial motions. The heart of our framework is Dyadic Interaction Modeling (DIM), a pre-training approach that jointly models speakers' and listeners' motions through masking and contrastive learning to learn representations that capture the dyadic context. To enable the generation of non-deterministic behaviors, we encode both listener and speaker motions into discrete latent representations through VQ-VAE. The pre-trained model is further fine-tuned for motion generation. Extensive experiments demonstrate the superiority of our framework in generating listener motions, establishing a new state-of-the-art according to the quantitative measures capturing the diversity and realism of generated motions. Qualitative results demonstrate the superior capabilities of the proposed approach in generating diverse and realistic expressions, eye blinks and head gestures. The code is available at https://github.com/Boese0601/Dyadic-Interaction-Modeling | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 437,616 |
2101.06224 | Multi-point dimensionality reduction to improve projection layout
reliability | In ordinary Dimensionality Reduction (DR), each data instance in a high dimensional space (original space), or on a distance matrix denoting original space distances, is mapped to (projected onto) one point in a low dimensional space (visual space), building a layout of projected points trying to preserve as much as possible some property of data such as distances, neighbourhood relationships, and/or topology structures, with the ultimate goal of approximating semantic properties of data with preserved geometric properties or topology structures in visual space. In this paper, the concept of Multi-point Dimensionality Reduction is elaborated on where each data instance can be mapped to (projected onto) possibly more than one point in visual space by providing the first general solution (algorithm) for it as a move in the direction of improving reliability, usability and interpretability of dimensionality reduction. Furthermore, by allowing the points in visual space to be split into two layers while maintaining the possibility of having more than one projection (mapping) per data instance, the benefit of separating more reliable points from less reliable points is discussed notwithstanding the effort to improve less reliable points. The proposed solution (algorithm) in this paper, named Layered Vertex Splitting Data Embedding (LVSDE), is built upon and extends a combination of ordinary DR and graph drawing techniques. Based on the experiments of this paper on some data sets, the particular proposed algorithm (LVSDE) practically outperforms popular ordinary DR methods visually (semantics, group separation, subgroup detection or combinational group detection) in a way that is easily explainable. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 215,647 |
1910.14539 | Naver Labs Europe's Systems for the Document-Level Generation and
Translation Task at WNGT 2019 | Recently, neural models have led to significant improvements in both machine translation (MT) and natural language generation tasks (NLG). However, generation of long descriptive summaries conditioned on structured data remains an open challenge. Likewise, MT that goes beyond sentence-level context is still an open issue (e.g., document-level MT or MT with metadata). To address these challenges, we propose to leverage data from both tasks and do transfer learning between MT, NLG, and MT with source-side metadata (MT+NLG). First, we train document-based MT systems with large amounts of parallel data. Then, we adapt these models to pure NLG and MT+NLG tasks by fine-tuning with smaller amounts of domain-specific data. This end-to-end NLG approach, without data selection and planning, outperforms the previous state of the art on the Rotowire NLG task. We participated in the "Document Generation and Translation" task at WNGT 2019, and ranked first in all tracks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 151,675 |
2206.11000 | A Systematic Comparison of Phonetic Aware Techniques for Speech
Enhancement | Speech enhancement has seen great improvement in recent years using end-to-end neural networks. However, most models are agnostic to the spoken phonetic content. Recently, several studies suggested phonetic-aware speech enhancement, mostly using perceptual supervision. Yet, injecting phonetic features during model optimization can take additional forms (e.g., model conditioning). In this paper, we conduct a systematic comparison between different methods of incorporating phonetic information in a speech enhancement model. By conducting a series of controlled experiments, we observe the influence of different phonetic content models as well as various feature-injection techniques on enhancement performance, considering both causal and non-causal models. Specifically, we evaluate three settings for injecting phonetic information, namely: i) feature conditioning; ii) perceptual supervision; and iii) regularization. Phonetic features are obtained using an intermediate layer of either a supervised pre-trained Automatic Speech Recognition (ASR) model or by using a pre-trained Self-Supervised Learning (SSL) model. We further observe the effect of choosing different embedding layers on performance, considering both manual and learned configurations. Results suggest that using a SSL model as phonetic features outperforms the ASR one in most cases. Interestingly, the conditioning setting performs best among the evaluated configurations. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 304,116 |
1804.00240 | Missing Data as Part of the Social Behavior in Real-World Financial
Complex Systems | Many real-world networks are known to exhibit facts that counter our knowledge prescribed by the theories on network creation and communication patterns. A common prerequisite in network analysis is that information on nodes and links will be complete because network topologies are extremely sensitive to missing information of this kind. Therefore, many real-world networks that fail to meet this criterion under random sampling may be discarded. In this paper we offer a framework for interpreting the missing observations in network data under the hypothesis that these observations are not missing at random. We demonstrate the methodology with a case study of a financial trade network, where the awareness of agents to the data collection procedure by a self-interested observer may result in strategic revealing or withholding of information. The non-random missingness has been overlooked despite the possibility of this being an important feature of the processes by which the network is generated. The analysis demonstrates that strategic information withholding may be a valid general phenomenon in complex systems. The evidence is sufficient to support the existence of an influential observer and to offer a compelling dynamic mechanism for the creation of the network. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 93,972 |
2205.07347 | Wigner-Smith Time Delay Matrix for Acoustic Scattering: Theory and
Phenomenology | The Wigner-Smith (WS) time delay matrix relates a lossless system's scattering matrix to its frequency derivative. First proposed in the realm of quantum mechanics to characterize time delays experienced by particles during a collision, this article extends the use of WS time delay techniques to acoustic scattering problems governed by the Helmholtz equation. Expressions for the entries of the WS time delay matrix involving renormalized volume integrals of energy densities are derived, and shown to hold true independent of the scatterer's geometry, boundary condition (sound-soft or sound-hard), and excitation. Numerical examples show that the eigenmodes of the WS time delay matrix describe distinct scattering phenomena characterized by well-defined time delays. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 296,569 |
2310.14837 | Harnessing Attention Mechanisms: Efficient Sequence Reduction using
Attention-based Autoencoders | Many machine learning models use the manipulation of dimensions as a driving force to enable models to identify and learn important features in data. In the case of sequential data this manipulation usually happens on the token dimension level. Despite the fact that many tasks require a change in sequence length itself, the step of sequence length reduction usually happens out of necessity and in a single step. As far as we are aware, no model uses the sequence length reduction step as an additional opportunity to tune the model's performance. In fact, sequence length manipulation as a whole seems to be an overlooked direction. In this study we introduce a novel attention-based method that allows for the direct manipulation of sequence lengths. To explore the method's capabilities, we employ it in an autoencoder model. The autoencoder reduces the input sequence to a smaller sequence in latent space. It then aims to reproduce the original sequence from this reduced form. In this setting, we explore the method's reduction performance for different input and latent sequence lengths. We are able to show that the autoencoder retains all the significant information when reducing the original sequence to half its original size. When reducing down to as low as a quarter of its original size, the autoencoder is still able to reproduce the original sequence with an accuracy of around 90%. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 402,042 |
2312.05842 | Mutual Enhancement of Large and Small Language Models with Cross-Silo
Knowledge Transfer | While large language models (LLMs) are empowered with broad knowledge, their task-specific performance is often suboptimal. It necessitates fine-tuning LLMs with task-specific data, but such data may be inaccessible due to privacy concerns. In this paper, we propose a novel approach to enhance LLMs with smaller language models (SLMs) that are trained on clients using their private task-specific data. To enable mutual enhancement between LLMs and SLMs, we propose CrossLM, where the SLMs promote the LLM to generate task-specific high-quality data, and both the LLM and SLMs are enhanced with the generated data. We evaluate CrossLM using publicly accessible language models across a range of benchmark tasks. The results demonstrate that CrossLM significantly enhances the task-specific performance of SLMs on clients and the LLM on the cloud server simultaneously while preserving the LLM's generalization capability. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 414,266 |
2002.08310 | Online PMU-Based Method for Estimating Dynamic Load Parameters in
Ambient Conditions | This paper proposes a novel purely measurement-based method for estimating dynamic load parameters in near real-time when stochastic load fluctuations are present. By leveraging the regression theorem for the Ornstein-Uhlenbeck process, the proposed method can provide highly accurate estimation without requiring any information about the power system model. Furthermore, by exploiting recursion, an online algorithm is developed to estimate the dynamic load parameters in near real-time. Simulation studies on the IEEE 39-Bus system demonstrate the accuracy of the model-free method, its robustness against measurement noise, as well as its satisfactory performance in tracking real-time changes of the load parameters. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 164,715 |
1711.08716 | Prediction of the progression of subcortical brain structures in
Alzheimer's disease from baseline | We propose a method to predict the subject-specific longitudinal progression of brain structures extracted from baseline MRI, and evaluate its performance on Alzheimer's disease data. The disease progression is modeled as a trajectory on a group of diffeomorphisms in the context of large deformation diffeomorphic metric mapping (LDDMM). We first exhibit the limited predictive abilities of geodesic regression extrapolation on this group. Building on the recent concept of parallel curves in shape manifolds, we then introduce a second predictive protocol which personalizes previously learned trajectories to new subjects, and investigate the relative performances of two parallel shifting paradigms. This design only requires the baseline imaging data. Finally, coefficients encoding the disease dynamics are obtained from longitudinal cognitive measurements for each subject, and exploited to refine our methodology which is demonstrated to successfully predict the follow-up visits. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,255 |
0707.1912 | Throughput Scaling Laws for Wireless Networks with Fading Channels | A network of n communication links, operating over a shared wireless channel, is considered. Fading is assumed to be the dominant factor affecting the strength of the channels between transmitter and receiver terminals. It is assumed that each link can be active and transmit with a constant power P or remain silent. The objective is to maximize the throughput over the selection of active links. By deriving an upper bound and a lower bound, it is shown that in the case of Rayleigh fading (i) the maximum throughput scales like $\log n$ (ii) the maximum throughput is achievable in a distributed fashion. The upper bound is obtained using probabilistic methods, where the key point is to upper bound the throughput of any random set of active links by a chi-squared random variable. To obtain the lower bound, a decentralized link activation strategy is proposed and analyzed. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 423 |
2102.12318 | Set-valued classification -- overview via a unified framework | The multi-class classification problem is among the most popular and well-studied statistical frameworks. Modern multi-class datasets can be extremely ambiguous and single-output predictions fail to deliver satisfactory performance. By allowing predictors to predict a set of label candidates, set-valued classification offers a natural way to deal with this ambiguity. Several formulations of set-valued classification are available in the literature and each of them leads to different prediction strategies. The present survey aims to review popular formulations using a unified statistical framework. The proposed framework encompasses previously considered formulations, leads to new ones, and allows one to understand the underlying trade-offs of each formulation. We provide infinite-sample optimal set-valued classification strategies and review a general plug-in principle to construct data-driven algorithms. The exposition is supported by examples and pointers to both theoretical and practical contributions. Finally, we provide experiments on real-world datasets comparing these approaches in practice and providing general practical guidelines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 221,694 |
2310.03006 | COOLer: Class-Incremental Learning for Appearance-Based Multiple Object
Tracking | Continual learning allows a model to learn multiple tasks sequentially while retaining the old knowledge without the training data of the preceding tasks. This paper extends the scope of continual learning research to class-incremental learning for multiple object tracking (MOT), which is desirable to accommodate the continuously evolving needs of autonomous systems. Previous solutions for continual learning of object detectors do not address the data association stage of appearance-based trackers, leading to catastrophic forgetting of previous classes' re-identification features. We introduce COOLer, a COntrastive- and cOntinual-Learning-based tracker, which incrementally learns to track new categories while preserving past knowledge by training on a combination of currently available ground truth labels and pseudo-labels generated by the past tracker. To further enhance the disentanglement of instance representations, we introduce a novel contrastive class-incremental instance representation learning technique. Finally, we propose a practical evaluation protocol for continual learning for MOT and conduct experiments on the BDD100K and SHIFT datasets. Experimental results demonstrate that COOLer continually learns while effectively addressing catastrophic forgetting of both tracking and detection. The code is available at https://github.com/BoSmallEar/COOLer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 397,090 |
2410.04260 | Pareto Control Barrier Function for Inner Safe Set Maximization Under
Input Constraints | This article introduces the Pareto Control Barrier Function (PCBF) algorithm to maximize the inner safe set of dynamical systems under input constraints. Traditional Control Barrier Functions (CBFs) ensure safety by maintaining system trajectories within a safe set but often fail to account for realistic input constraints. To address this problem, we leverage the Pareto multi-task learning framework to balance competing objectives of safety and safe set volume. The PCBF algorithm is applicable to high-dimensional systems and is computationally efficient. We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system. Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 495,203 |
2009.00845 | Online system identification in a Duffing oscillator by free energy
minimisation | Online system identification is the estimation of parameters of a dynamical system, such as mass or friction coefficients, for each measurement of the input and output signals. Here, the nonlinear stochastic differential equation of a Duffing oscillator is cast to a generative model and dynamical parameters are inferred using variational message passing on a factor graph of the model. The approach is validated with an experiment on data from an electronic implementation of a Duffing oscillator. The proposed inference procedure performs as well as offline prediction error minimisation in a state-of-the-art nonlinear model. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | 194,158 |
2406.15007 | RouteFinder: Towards Foundation Models for Vehicle Routing Problems | This paper introduces RouteFinder, a comprehensive foundation model framework to tackle different Vehicle Routing Problem (VRP) variants. Our core idea is that a foundation model for VRPs should be able to represent variants by treating each as a subset of a generalized problem equipped with different attributes. We propose a unified VRP environment capable of efficiently handling any attribute combination. The RouteFinder model leverages a modern transformer-based encoder and global attribute embeddings to improve task representation. Additionally, we introduce two reinforcement learning techniques to enhance multi-task performance: mixed batch training, which enables training on different variants at once, and multi-variant reward normalization to balance different reward scales. Finally, we propose efficient adapter layers that enable fine-tuning for new variants with unseen attributes. Extensive experiments on 48 VRP variants show RouteFinder outperforms recent state-of-the-art learning methods. Code: https://github.com/ai4co/routefinder. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 466,582 |
2410.20046 | DQRM: Deep Quantized Recommendation Models | Large-scale recommendation models are currently the dominant workload for many large Internet companies. These recommenders are characterized by massive embedding tables that are sparsely accessed by the index for user and item features. The size of these 1TB+ tables imposes a severe memory bottleneck for the training and inference of recommendation models. In this work, we propose a novel recommendation framework that is small, powerful, and efficient to run and train, based on the state-of-the-art Deep Learning Recommendation Model (DLRM). The proposed framework makes inference more efficient on the cloud servers, explores the possibility of deploying powerful recommenders on smaller edge devices, and optimizes the workload of the communication overhead in distributed training under the data parallelism settings. Specifically, we show that quantization-aware training (QAT) can impose a strong regularization effect to mitigate the severe overfitting issues suffered by DLRMs. Consequently, we achieved INT4 quantization of DLRM models without any accuracy drop. We further propose two techniques that improve and accelerate the conventional QAT workload specifically for the embedding tables in the recommendation models. Furthermore, to achieve efficient training, we quantize the gradients of the embedding tables into INT8 on top of the well-supported specified sparsification. We show that combining gradient sparsification and quantization together significantly reduces the amount of communication. Briefly, DQRM models with INT4 can achieve 79.07% accuracy on Kaggle with 0.27 GB model size, and 81.21% accuracy on the Terabyte dataset with 1.57 GB, which even outperform FP32 DLRMs that have much larger model sizes (2.16 GB on Kaggle and 12.58 GB on Terabyte). | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 502,624 |
2311.08001 | A Comparative Analysis of the COVID-19 Infodemic in English and Chinese:
Insights from Social Media Textual Data | The COVID-19 infodemic, characterized by the rapid spread of misinformation and unverified claims related to the pandemic, presents a significant challenge. This paper presents a comparative analysis of the COVID-19 infodemic in the English and Chinese languages, utilizing textual data extracted from social media platforms. To ensure a balanced representation, two infodemic datasets were created by augmenting previously collected social media textual data. Through word frequency analysis, the thirty-five most frequently occurring infodemic words are identified, shedding light on prevalent discussions surrounding the infodemic. Moreover, topic clustering analysis uncovers thematic structures and provides a deeper understanding of primary topics within each language context. Additionally, sentiment analysis enables comprehension of the emotional tone associated with COVID-19 information on social media platforms in English and Chinese. This research contributes to a better understanding of the COVID-19 infodemic phenomenon and can guide the development of strategies to combat misinformation during public health crises across different languages. | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 407,555 |
2006.14662 | The State of AI Ethics Report (June 2020) | These past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast. Artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the COVID-19 pandemic to nudging our attention in different directions as we all spend increasingly larger amounts of time online. It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other. With this inaugural edition of the State of AI Ethics we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. This pulse-check for the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 184,290 |
2011.02208 | Crack Detection as a Weakly-Supervised Problem: Towards Achieving Less
Annotation-Intensive Crack Detectors | Automatic crack detection is a critical task that has the potential to drastically reduce labor-intensive building and road inspections currently being done manually. Recent studies in this field have significantly improved the detection accuracy. However, the methods often heavily rely on costly annotation processes. In addition, to handle a wide variety of target domains, new batches of annotations are usually required for each new environment. This makes the data annotation cost a significant bottleneck when deploying crack detection systems in real life. To resolve this issue, we formulate the crack detection problem as a weakly-supervised problem and propose a two-branched framework. By combining predictions of a supervised model trained on low quality annotations with predictions based on pixel brightness, our framework is less affected by the annotation quality. Experimental results show that the proposed framework retains high detection accuracy even when provided with low quality annotations. Implementation of the proposed framework is publicly available at https://github.com/hitachi-rd-cv/weakly-sup-crackdet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 204,861 |
2407.17790 | Exploring the Limitations of Kolmogorov-Arnold Networks in
Classification: Insights to Software Training and Hardware Implementation | Kolmogorov-Arnold Networks (KANs), a novel type of neural network, have recently gained popularity and attention due to the ability to substitute multi-layer perceptrons (MLPs) in artificial intelligence (AI) with higher accuracy and interpretability. However, KAN assessment is still limited and cannot provide an in-depth analysis of a specific domain. Furthermore, no study has been conducted on the implementation of KANs in hardware design, which would directly demonstrate whether KANs are truly superior to MLPs in practical applications. As a result, in this paper, we focus on verifying KANs for classification issues, which are a common but significant topic in AI, using four different types of datasets. Furthermore, the corresponding hardware implementation is considered using the Vitis high-level synthesis (HLS) tool. To the best of our knowledge, this is the first article to implement hardware for KAN. The results indicate that KANs cannot achieve higher accuracy than MLPs on highly complex datasets while utilizing substantially more hardware resources. Therefore, MLP remains an effective approach for achieving accuracy and efficiency in software and hardware implementation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 476,117
1810.04017 | Comparison of U-net-based Convolutional Neural Networks for Liver
Segmentation in CT | Various approaches for liver segmentation in CT have been proposed: Besides statistical shape models, which played a major role in this research area, novel approaches on the basis of convolutional neural networks have been introduced recently. Using a set of 219 liver CT datasets with reference segmentations from liver surgery planning, we evaluate the performance of several neural network classifiers based on 2D and 3D U-net architectures. An interesting observation is that slice-wise approaches perform surprisingly well, with mean and median Dice coefficients above 0.97, and may be preferable over 3D approaches given current hardware and software limitations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 109,947 |
1803.02623 | TRLG: Fragile blind quad watermarking for image tamper detection and
recovery by providing compact digests with quality optimized using LWT and GA | In this paper, an efficient fragile blind quad watermarking scheme for image tamper detection and recovery based on lifting wavelet transform and genetic algorithm is proposed. TRLG generates four compact digests with super quality based on lifting wavelet transform and halftoning technique by distinguishing the types of image blocks. In other words, for each 2×2 non-overlapping block, four chances for recovering destroyed blocks are considered. A special parameter estimation technique based on genetic algorithm is performed to improve and optimize the quality of the digests and the watermarked image. Furthermore, a CCS map is used to determine the mapping block for embedding information, encrypting and confusing the embedded information. In order to improve the recovery rate, Mirror-aside and Partner-block are proposed. The experiments conducted to evaluate the performance of TRLG proved its superiority in terms of quality of the watermarked and recovered images, tamper localization, and security compared with state-of-the-art methods. The results indicate that the PSNR and SSIM of the watermarked image are about 46 dB and approximately one, respectively. Also, the mean PSNR and SSIM of several recovered images that had been destroyed by about 90% reach 24 dB and 0.86, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | true | 92,104
2103.07162 | Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of
Pre-trained Models' Transferability | This paper investigates whether the power of the models pre-trained on text data, such as BERT, can be transferred to general token sequence classification applications. To verify the pre-trained models' transferability, we test the pre-trained models on text classification tasks in which the meanings of tokens are mismatched, and on real-world non-text token sequence classification data, including amino acid, DNA, and music. We find that even on non-text data, the models pre-trained on text converge faster, perform better than the randomly initialized models, and perform only slightly worse than the models using task-specific knowledge. We also find that the representations of the text and non-text pre-trained models share non-trivial similarities. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 224,516
2207.08894 | A Deep Reinforcement Learning Approach for Finding Non-Exploitable
Strategies in Two-Player Atari Games | This paper proposes new, end-to-end deep reinforcement learning algorithms for learning two-player zero-sum Markov games. Different from prior efforts on training agents to beat a fixed set of opponents, our objective is to find the Nash equilibrium policies that are free from exploitation by even the adversarial opponents. We propose (a) Nash-DQN algorithm, which integrates the deep learning techniques from single DQN into the classic Nash Q-learning algorithm for solving tabular Markov games; (b) Nash-DQN-Exploiter algorithm, which additionally adopts an exploiter to guide the exploration of the main agent. We conduct experimental evaluation on tabular examples as well as various two-player Atari games. Our empirical results demonstrate that (i) the policies found by many existing methods including Neural Fictitious Self Play and Policy Space Response Oracle can be prone to exploitation by adversarial opponents; (ii) the output policies of our algorithms are robust to exploitation, and thus outperform existing methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 308,719 |
1710.08518 | ContextVP: Fully Context-Aware Video Prediction | Video prediction models based on convolutional networks, recurrent networks, and their combinations often result in blurry predictions. We identify an important contributing factor for imprecise predictions that has not been studied adequately in the literature: blind spots, i.e., lack of access to all relevant past information for accurately predicting the future. To address this issue, we introduce a fully context-aware architecture that captures the entire available past context for each pixel using Parallel Multi-Dimensional LSTM units and aggregates it using blending units. Our model outperforms a strong baseline network of 20 recurrent convolutional layers and yields state-of-the-art performance for next step prediction on three challenging real-world video datasets: Human 3.6M, Caltech Pedestrian, and UCF-101. Moreover, it does so with fewer parameters than several recently proposed models, and does not rely on deep convolutional networks, multi-scale architectures, separation of background and foreground modeling, motion flow learning, or adversarial training. These results highlight that full awareness of past context is of crucial importance for video prediction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 83,089 |
2106.03220 | PYROBOCOP: Python-based Robotic Control & Optimization Package for
Manipulation and Collision Avoidance | PYROBOCOP is a lightweight Python-based package for control and optimization of robotic systems described by nonlinear Differential Algebraic Equations (DAEs). In particular, the package can handle systems with contacts that are described by complementarity constraints and provides a general framework for specifying obstacle avoidance constraints. The package performs direct transcription of the DAEs into a set of nonlinear equations by performing orthogonal collocation on finite elements. The resulting optimization problem belongs to the class of Mathematical Programs with Complementarity Constraints (MPCCs). MPCCs fail to satisfy commonly assumed constraint qualifications and require special handling of the complementarity constraints in order for NonLinear Program (NLP) solvers to solve them effectively. PYROBOCOP provides automatic reformulation of the complementarity constraints that enables NLP solvers to perform optimization of robotic systems. The package is interfaced with ADOL-C for obtaining sparse derivatives by automatic differentiation and IPOPT for performing optimization. We demonstrate the effectiveness of our approach in terms of speed and flexibility. We provide numerical examples for several robotic systems with collision avoidance as well as contact constraints represented using complementarity constraints. We provide comparisons with other open source optimization packages like CasADi and Pyomo. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 239,228
2006.11481 | Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision | Pseudo-LiDAR point cloud interpolation is a novel and challenging task in the field of autonomous driving, which aims to address the frequency mismatching problem between camera and LiDAR. Previous works represent the 3D spatial motion relationship induced by a coarse 2D optical flow, and the quality of interpolated point clouds only depends on the supervision of depth maps. As a result, the generated point clouds suffer from inferior global distributions and local appearances. To solve the above problems, we propose a Pseudo-LiDAR point cloud interpolation network to generate temporally and spatially high-quality point cloud sequences. By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship. For a more comprehensive perception of the point cloud distribution, we design a novel reconstruction loss function that implements the chamfer distance to supervise the generation of Pseudo-LiDAR point clouds in 3D space. In addition, we introduce a multi-modal deep aggregation module to facilitate the efficient fusion of texture and depth features. Benefiting from the improved motion representation, training loss function, and model structure, our approach gains significant improvements on the Pseudo-LiDAR point cloud interpolation task. The experimental results evaluated on the KITTI dataset demonstrate the state-of-the-art performance of the proposed network, quantitatively and qualitatively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 183,253
2012.05660 | Slimmable Generative Adversarial Networks | Generative adversarial networks (GANs) have achieved remarkable progress in recent years, but the continuously growing scale of models makes them challenging to deploy widely in practical applications. In particular, for real-time generation tasks, different devices require generators of different sizes due to varying computing power. In this paper, we introduce slimmable GANs (SlimGANs), which can flexibly switch the width of the generator to accommodate various quality-efficiency trade-offs at runtime. Specifically, we leverage multiple discriminators that share partial parameters to train the slimmable generator. To facilitate the consistency between generators of different widths, we present a stepwise inplace distillation technique that encourages narrow generators to learn from wide ones. As for class-conditional generation, we propose a sliceable conditional batch normalization that incorporates the label information into different widths. Our methods are validated, both quantitatively and qualitatively, by extensive experiments and a detailed ablation study. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 210,854
2306.15365 | Herb-Drug Interactions: A Holistic Decision Support System in Healthcare | Complementary and alternative medicine are commonly used concomitantly with conventional medications leading to adverse drug reactions and even fatality in some cases. Furthermore, the vast possibility of herb-drug interactions prevents health professionals from remembering or manually searching them in a database. Decision support systems are a powerful tool that can be used to assist clinicians in making diagnostic and therapeutic decisions in patient care. Therefore, an original and hybrid decision support system was designed to identify herb-drug interactions, applying artificial intelligence techniques to identify new possible interactions. Different machine learning models will be used to strengthen the typical rules engine used in these cases. Thus, using the proposed system, the pharmacy community, people's first line of contact within the Healthcare System, will be able to make better and more accurate therapeutic decisions and mitigate possible adverse events. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 376,001 |
2002.01883 | Deep Radial-Basis Value Functions for Continuous Control | A core operation in reinforcement learning (RL) is finding an action that is optimal with respect to a learned value function. This operation is often challenging when the learned value function takes continuous actions as input. We introduce deep radial-basis value functions (RBVFs): value functions learned using a deep network with a radial-basis function (RBF) output layer. We show that the maximum action-value with respect to a deep RBVF can be approximated easily and accurately. Moreover, deep RBVFs can represent any true value function owing to their support for universal function approximation. We extend the standard DQN algorithm to continuous control by endowing the agent with a deep RBVF. We show that the resultant agent, called RBF-DQN, significantly outperforms value-function-only baselines, and is competitive with state-of-the-art actor-critic algorithms. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 162,763 |
2302.05075 | BEST: BERT Pre-Training for Sign Language Recognition with Coupling
Tokenization | In this work, we are dedicated to leveraging the BERT pre-training success and modeling the domain-specific statistics to fertilize the sign language recognition (SLR) model. Considering the dominance of hand and body in sign language expression, we organize them as pose triplet units and feed them into the Transformer backbone in a frame-wise manner. Pre-training is performed via reconstructing the masked triplet unit from the corrupted input sequence, which learns the hierarchical correlation context cues among internal and external triplet units. Notably, different from the highly semantic word token in BERT, the pose unit is a low-level signal originally located in continuous space, which prevents the direct adoption of the BERT cross-entropy objective. To this end, we bridge this semantic gap via coupling tokenization of the triplet unit. It adaptively extracts the discrete pseudo label from the pose triplet unit, which represents the semantic gesture/body state. After pre-training, we fine-tune the pre-trained encoder on the downstream SLR task, jointly with the newly added task-specific layer. Extensive experiments are conducted to validate the effectiveness of our proposed method, achieving new state-of-the-art performance on all four benchmarks with a notable gain. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 344,921
2004.00568 | One-shot path planning for multi-agent systems using fully convolutional
neural network | Path planning plays a crucial role in robot action execution, since a path or a motion trajectory for a particular action has to be defined first before the action can be executed. Most of the current approaches are iterative methods where the trajectory is generated iteratively by predicting the next state based on the current state. Moreover, in case of multi-agent systems, paths are planned for each agent separately. In contrast to that, we propose a novel method by utilising a fully convolutional neural network, which allows generation of complete paths, even for more than one agent, in one-shot, i.e., with a single prediction step. We demonstrate that our method is able to successfully generate optimal or close to optimal paths in more than 98% of the cases for single path predictions. Moreover, we show that although the network has never been trained on multi-path planning, it is also able to generate optimal or close to optimal paths in 85.7% and 65.4% of the cases when generating two and three paths, respectively. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 170,680
q-bio/0603007 | Compression ratios based on the Universal Similarity Metric still yield
protein distances far from CATH distances | Kolmogorov complexity has inspired several alignment-free distance measures, based on the comparison of lengths of compressions, which have been applied successfully in many areas. One of these measures, the so-called Universal Similarity Metric (USM), has been used by Krasnogor and Pelta to compare simple protein contact maps, showing that it yielded good clustering on four small datasets. We report an extensive test of this metric using a much larger and representative protein dataset: the domain dataset used by Sierk and Pearson to evaluate seven protein structure comparison methods and two protein sequence comparison methods. One result is that the Krasnogor-Pelta method has less domain discriminant power than any of the methods considered by Sierk and Pearson when using these simple contact maps. In another test, we found that the USM-based distance has low agreement with the CATH tree structure for the same benchmark of Sierk and Pearson. In any case, its agreement is lower than that of a standard sequential alignment method, SSEARCH. Finally, we manually found many small subsets of the database that are better clustered using SSEARCH than USM, confirming that Krasnogor and Pelta's conclusions were based on datasets that were too small. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,853
2310.05370 | SocialCircle: Learning the Angle-based Social Interaction Representation
for Pedestrian Trajectory Prediction | Analyzing and forecasting trajectories of agents like pedestrians and cars in complex scenes has become more and more significant in many intelligent systems and applications. The diversity and uncertainty in socially interactive behaviors among a rich variety of agents make this task more challenging than other deterministic computer vision tasks. Researchers have made a lot of efforts to quantify the effects of these interactions on future trajectories through different mathematical models and network structures, but this problem has not been well solved. Inspired by marine animals that localize the positions of their companions underwater through echoes, we build a new angle-based trainable social interaction representation, named SocialCircle, for continuously reflecting the context of social interactions at different angular orientations relative to the target agent. We validate the effect of the proposed SocialCircle by training it along with several newly released trajectory prediction models, and experiments show that the SocialCircle not only quantitatively improves the prediction performance, but also qualitatively helps better simulate social interactions when forecasting pedestrian trajectories in a way that is consistent with human intuitions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 398,118
2110.09770 | AEFE: Automatic Embedded Feature Engineering for Categorical Features | The challenge of solving data mining problems in e-commerce applications such as recommendation system (RS) and click-through rate (CTR) prediction is how to make inferences by constructing combinatorial features from a large number of categorical features while preserving the interpretability of the method. In this paper, we propose Automatic Embedded Feature Engineering (AEFE), an automatic feature engineering framework for representing categorical features, which consists of various components including custom paradigm feature construction and multiple feature selection. By selecting the potential field pairs intelligently and generating a series of interpretable combinatorial features, our framework can provide a set of unseen generated features for enhancing model performance and then assist data analysts in discovering the feature importance for particular data mining tasks. Furthermore, AEFE is implemented in a distributed manner via task parallelism, data sampling, and a searching schema based on Matrix Factorization field combination, to optimize the performance and enhance the efficiency and scalability of the framework. Experiments conducted on some typical e-commerce datasets indicate that our method outperforms the classical machine learning models and state-of-the-art deep learning models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 261,920
2208.02005 | Gradient-based Uncertainty for Monocular Depth Estimation | In monocular depth estimation, disturbances in the image context, like moving objects or reflecting materials, can easily lead to erroneous predictions. For that reason, uncertainty estimates for each pixel are necessary, in particular for safety-critical applications such as automated driving. We propose a post hoc uncertainty estimation approach for an already trained and thus fixed depth estimation model, represented by a deep neural network. The uncertainty is estimated with the gradients which are extracted with an auxiliary loss function. To avoid relying on ground-truth information for the loss definition, we present an auxiliary loss function based on the correspondence of the depth prediction for an image and its horizontally flipped counterpart. Our approach achieves state-of-the-art uncertainty estimation results on the KITTI and NYU Depth V2 benchmarks without the need to retrain the neural network. Models and code are publicly available at https://github.com/jhornauer/GrUMoDepth. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,346 |
2402.17370 | An Efficient MLP-based Point-guided Segmentation Network for Ore Images
with Ambiguous Boundary | The precise segmentation of ore images is critical to the successful execution of the beneficiation process. Due to the homogeneous appearance of the ores, which leads to low contrast and unclear boundaries, accurate segmentation becomes challenging, and recognition becomes problematic. This paper proposes a lightweight framework based on Multi-Layer Perceptron (MLP), which focuses on solving the problem of edge blurring. Specifically, we introduce a lightweight backbone better suited for efficiently extracting low-level features. Besides, we design a feature pyramid network consisting of two MLP structures that balance local and global information, thus enhancing detection accuracy. Furthermore, we propose a novel loss function that guides the prediction points to match the instance edge points to achieve clear object boundaries. We have conducted extensive experiments to validate the efficacy of our proposed method. Our approach achieves a remarkable processing speed of over 27 frames per second (FPS) with a model size of only 73 MB. Moreover, our method delivers a consistently high level of accuracy, with impressive performance scores of 60.4 and 48.9 in $AP_{50}^{box}$ and $AP_{50}^{mask}$ respectively, as compared to the currently available state-of-the-art techniques, when tested on the ore image dataset. The source code will be released at https://github.com/MVME-HBUT/ORENEXT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 432,957
2103.07815 | Dynamically Switching Human Prediction Models for Efficient Planning | As environments involving both robots and humans become increasingly common, so does the need to account for people during planning. To plan effectively, robots must be able to respond to and sometimes influence what humans do. This requires a human model which predicts future human actions. A simple model may assume the human will continue what they did previously; a more complex one might predict that the human will act optimally, disregarding the robot; whereas an even more complex one might capture the robot's ability to influence the human. These models make different trade-offs between computational time and performance of the resulting robot plan. Using only one model of the human either wastes computational resources or is unable to handle critical situations. In this work, we give the robot access to a suite of human models and enable it to assess the performance-computation trade-off online. By estimating how an alternate model could improve human prediction and how that may translate to performance gain, the robot can dynamically switch human models whenever the additional computation is justified. Our experiments in a driving simulator showcase how the robot can achieve performance comparable to always using the best human model, but with greatly reduced computation. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 224,706 |
2109.05339 | Scaling and Acceleration of Three-dimensional Structure Determination
for Single-Particle Imaging Experiments with SpiniFEL | The Linac Coherent Light Source (LCLS) is an X-ray free electron laser (XFEL) facility enabling the study of the structure and dynamics of single macromolecules. A major upgrade will bring the repetition rate of the X-ray source from 120 to 1 million pulses per second. Exascale high performance computing (HPC) capabilities will be required to process the corresponding data rates. We present SpiniFEL, an application used for structure determination of proteins from single-particle imaging (SPI) experiments. An emerging technique for imaging individual proteins and other large molecular complexes by outrunning radiation damage, SPI breaks free from the need for crystallization (which is difficult for some proteins) and allows for imaging molecular dynamics at near ambient conditions. SpiniFEL is being developed to run on supercomputers in near real-time while an experiment is taking place, so that the feedback about the data can guide the data collection strategy. We describe here how we reformulated the mathematical framework for parallelizable implementation and accelerated the most compute intensive parts of the application. We also describe the use of Pygion, a Python interface for the Legion task-based programming model and compare to our existing MPI+GPU implementation. | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 254,750
1609.09196 | EXTRACT: Strong Examples from Weakly-Labeled Sensor Data | Thanks to the rise of wearable and connected devices, sensor-generated time series comprise a large and growing fraction of the world's data. Unfortunately, extracting value from this data can be challenging, since sensors report low-level signals (e.g., acceleration), not the high-level events that are typically of interest (e.g., gestures). We introduce a technique to bridge this gap by automatically extracting examples of real-world events in low-level data, given only a rough estimate of when these events have taken place. By identifying sets of features that repeat in the same temporal arrangement, we isolate examples of such diverse events as human actions, power consumption patterns, and spoken words with up to 96% precision and recall. Our method is fast enough to run in real time and assumes only minimal knowledge of which variables are relevant or the lengths of events. Our evaluation uses numerous publicly available datasets and over 1 million samples of manually labeled sensor data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 61,687 |
2210.09599 | Denoising Enhanced Distantly Supervised Ultrafine Entity Typing | Recently, the task of distantly supervised (DS) ultra-fine entity typing has received significant attention. However, DS data is noisy and often suffers from missing or wrong labeling issues, resulting in low precision and low recall. This paper proposes a novel ultra-fine entity typing model with denoising capability. Specifically, we build a noise model to estimate the unknown labeling noise distribution over input contexts and noisy type labels. With the noise model, more trustworthy labels can be recovered by subtracting the estimated noise from the input. Furthermore, we propose an entity typing model, which adopts a bi-encoder architecture and is trained on the denoised data. Finally, the noise model and entity typing model are trained iteratively to enhance each other. We conduct extensive experiments on the Ultra-Fine entity typing dataset as well as the OntoNotes dataset and demonstrate that our approach significantly outperforms other baseline methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 324,598
2210.07592 | TSP-Bot: Robotic TSP Pen Art using High-DoF Manipulators | TSP art is an art form for drawing an image using piecewise-continuous line segments. We present TSP-Bot, a robotic pen drawing system capable of creating complicated TSP pen art on a planar surface using multiple colors. The system begins by converting a colored raster image into a set of points that represent the image's tone, which can be controlled by adjusting the point density. Next, the system finds a piecewise-continuous linear path that visits each point exactly once, which is equivalent to solving a Traveling Salesman Problem (TSP). The path is simplified with fewer points using bounded approximation and smoothed and optimized using Bezier spline curves with bounded curvature. Our robotic drawing system consisting of single or dual manipulators with fingered grippers and a mobile platform performs the drawing task by following the resulting complex and sophisticated path composed of thousands of TSP sites. As a result, our system can draw complicated and visually pleasing TSP pen art. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 323,786 |
2405.17900 | Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning | The purpose of emotion recognition in conversation (ERC) is to identify the emotion category of an utterance based on contextual information. Previous ERC methods relied on simple connections for cross-modal fusion and ignored the information differences between modalities, resulting in the model being unable to focus on modality-specific emotional information. At the same time, the shared information between modalities was not processed to generate emotions, leading to an information redundancy problem. To overcome these limitations, we propose a cross-modal fusion emotion prediction network based on vector connections. The network mainly includes two stages: the multi-modal feature fusion stage based on connection vectors and the emotion classification stage based on fused features. Furthermore, we design a supervised inter-class contrastive learning module based on emotion labels. Experimental results confirm the effectiveness of the proposed method, demonstrating excellent performance on the IEMOCAP and MELD datasets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 458,177
2404.17053 | Agentive Permissions in Multiagent Systems | This paper proposes to distinguish four forms of agentive permissions in multiagent settings. The main technical results are the complexity analysis of model checking, the semantic undefinability of modalities that capture these forms of permissions through each other, and a complete logical system capturing the interplay between these modalities. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 449,719 |
1611.01655 | Twenty (simple) questions | A basic combinatorial interpretation of Shannon's entropy function is via the "20 questions" game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution $\pi$ over the numbers $\{1,\ldots,n\}$, and announces it to Bob. She then chooses a number $x$ according to $\pi$, and Bob attempts to identify $x$ using as few Yes/No queries as possible, on average. An optimal strategy for the "20 questions" game is given by a Huffman code for $\pi$: Bob's questions reveal the codeword for $x$ bit by bit. This strategy finds $x$ using fewer than $H(\pi)+1$ questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately? Our first main result shows that for every distribution $\pi$, Bob has a strategy that uses only questions of the form "$x < c$?" and "$x = c$?", and uncovers $x$ using at most $H(\pi)+1$ questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of $O(rn^{1/r})$ questions that achieve a performance of at most $H(\pi)+r$, and show that $\Omega(rn^{1/r})$ questions are required to achieve such a guarantee. Our second main result gives a set $\mathcal{Q}$ of $1.25^{n+o(n)}$ questions such that for every distribution $\pi$, Bob can implement an optimal strategy for $\pi$ using only questions from $\mathcal{Q}$. We also show that $1.25^{n-o(n)}$ questions are needed, for infinitely many $n$. If we allow a small slack of $r$ over the optimal strategy, then roughly $(rn)^{\Theta(1/r)}$ questions are necessary and sufficient. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 63,416 |
2001.05668 | The Chameleon Attack: Manipulating Content Display in Online Social Media | Online social networks (OSNs) are ubiquitous, attracting millions of users all over the world. Being a popular communication medium, OSNs are exploited in a variety of cyber attacks. In this article, we discuss the Chameleon attack technique, a new type of OSN-based trickery where malicious posts and profiles change the way they are displayed to OSN users to conceal themselves before the attack or avoid detection. Using this technique, adversaries can, for example, avoid censorship by concealing true content when it is about to be inspected; acquire social capital to promote new content while piggybacking a trending one; or cause embarrassment and serious reputation damage by tricking a victim into liking, retweeting, or commenting on a message that he wouldn't normally endorse, with no indication of the trickery within the OSN. An experiment performed with closed Facebook groups of sports fans shows that (1) Chameleon pages can pass by the moderation filters by changing the way their posts are displayed and (2) moderators do not distinguish between regular and Chameleon pages. We list the OSN weaknesses that facilitate the Chameleon attack and propose a set of mitigation guidelines. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 160,604
1503.03506 | Diverse Landmark Sampling from Determinantal Point Processes for Scalable Manifold Learning | High computational costs of manifold learning prohibit its application for large point sets. A common strategy to overcome this problem is to perform dimensionality reduction on selected landmarks and to successively embed the entire dataset with the Nystr\"om method. The two main challenges that arise are: (i) the landmarks selected in non-Euclidean geometries must result in a low reconstruction error, (ii) the graph constructed from sparsely sampled landmarks must approximate the manifold well. We propose the sampling of landmarks from determinantal distributions on non-Euclidean spaces. Since current determinantal sampling algorithms have the same complexity as those for manifold learning, we present an efficient approximation running in linear time. Further, we recover the local geometry after the sparsification by assigning each landmark a local covariance matrix, estimated from the original point set. The resulting neighborhood selection based on the Bhattacharyya distance improves the embedding of sparsely sampled manifolds. Our experiments show a significant performance improvement compared to state-of-the-art landmark selection techniques. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 41,055
1801.01704 | Artificial Intelligence (AI) Methods in Optical Networks: A Comprehensive Survey | Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 87,785
1404.3312 | Shrinkage Optimized Directed Information using Pictorial Structures for Action Recognition | In this paper, we propose a novel action recognition framework. The method uses pictorial structures and shrinkage optimized directed information assessment (SODA) coupled with Markov Random Fields called SODA+MRF to model the directional temporal dependency and bidirectional spatial dependency. As a variant of mutual information, directional information captures the directional information flow and temporal structure of video sequences across frames. Meanwhile, within each frame, Markov random fields are utilized to model the spatial relations among different parts of a human body and the body parts of different people. The proposed SODA+MRF model is robust to viewpoint transformations and detects complex interactions accurately. We compare the proposed method against several baseline methods to highlight the effectiveness of the SODA+MRF model. We demonstrate that our algorithm has superior action recognition performance on the UCF action recognition dataset, the Olympic sports dataset and the collective activity dataset over several state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 32,291
2003.07060 | Finding Fair and Efficient Allocations When Valuations Don't Add Up | In this paper, we present new results on the fair and efficient allocation of indivisible goods to agents whose preferences correspond to {\em matroid rank functions}. This is a versatile valuation class with several desirable properties (such as monotonicity and submodularity), which naturally lends itself to a number of real-world domains. We use these properties to our advantage; first, we show that when agent valuations are matroid rank functions, a socially optimal (i.e. utilitarian social welfare-maximizing) allocation that achieves envy-freeness up to one item (EF1) exists and is computationally tractable. We also prove that the Nash welfare-maximizing and the leximin allocations both exhibit this fairness/efficiency combination, by showing that they can be achieved by minimizing any symmetric strictly convex function over utilitarian optimal outcomes. To the best of our knowledge, this is the first valuation function class not subsumed by additive valuations for which it has been established that an allocation maximizing Nash welfare is EF1. Moreover, for a subclass of these valuation functions based on maximum (unweighted) bipartite matching, we show that a leximin allocation can be computed in polynomial time. Additionally, we explore possible extensions of our results to fairness criteria other than EF1 as well as to generalizations of the above valuation classes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 168,320 |
2311.10541 | Detection of Offensive and Threatening Online Content in a Low Resource Language | Hausa is a major Chadic language, spoken by over 100 million people in Africa. However, from a computational linguistic perspective, it is considered a low-resource language, with limited resources to support Natural Language Processing (NLP) tasks. Online platforms often facilitate social interactions that can lead to the use of offensive and threatening language, which can go undetected due to the lack of detection systems designed for Hausa. This study aimed to address this issue by (1) conducting two user studies (n=308) to investigate cyberbullying-related issues, (2) collecting and annotating the first set of offensive and threatening datasets to support relevant downstream tasks in Hausa, (3) developing a detection system to flag offensive and threatening content, and (4) evaluating the detection system and the efficacy of the Google-based translation engine in detecting offensive and threatening terms in Hausa. We found that offensive and threatening content is quite common, particularly when discussing religion and politics. Our detection system was able to detect more than 70% of offensive and threatening content, although many of these were mistranslated by Google's translation engine. We attribute this to the subtle relationship between offensive and threatening content and idiomatic expressions in the Hausa language. We recommend that diverse stakeholders participate in understanding local conventions and demographics in order to develop a more effective detection system. These insights are essential for implementing targeted moderation strategies to create a safe and inclusive online environment. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 408,562
1603.08538 | Hybrid Ant Colony Optimization in solving Multi-Skill Resource-Constrained Project Scheduling Problem | In this paper the Hybrid Ant Colony Optimization (HAntCO) approach to solving the Multi-Skill Resource-Constrained Project Scheduling Problem (MS-RCPSP) has been presented. We have proposed a hybrid approach that links classical heuristic priority rules for project scheduling with Ant Colony Optimization (ACO). Furthermore, a novel approach for updating the pheromone value has been proposed, based on both the best and worst solutions stored by ants. The objective of this paper is to research the usability and robustness of ACO and its hybrids with priority rules in solving MS-RCPSP. Experiments have been performed using artificially created dataset instances, based on real-world ones. We published those instances so that they can be used as a benchmark. Presented results show that the ACO-based hybrid method is an efficient approach. A more directed search process by hybrids makes this approach more stable and provides mostly better results than classical ACO. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 53,791
2309.14774 | BLIP-Adapter: Parameter-Efficient Transfer Learning for Mobile Screenshot Captioning | This study aims to explore efficient tuning methods for the screenshot captioning task. Recently, image captioning has seen significant advancements, but research in captioning tasks for mobile screens remains relatively scarce. Current datasets and use cases describing user behaviors within product screenshots are notably limited. Consequently, we sought to fine-tune pre-existing models for the screenshot captioning task. However, fine-tuning large pre-trained models can be resource-intensive, requiring considerable time, computational power, and storage due to the vast number of parameters in image captioning models. To tackle this challenge, this study proposes a combination of adapter methods, which necessitates tuning only the additional modules on the model. These methods are originally designed for vision or language tasks, and our intention is to apply them to address similar challenges in screenshot captioning. By freezing the parameters of the image caption models and training only the weights associated with the methods, performance comparable to fine-tuning the entire model can be achieved, while significantly reducing the number of parameters. This study represents the first comprehensive investigation into the effectiveness of combining adapters within the context of the screenshot captioning task. Through our experiments and analyses, this study aims to provide valuable insights into the application of adapters in vision-language models and contribute to the development of efficient tuning techniques for the screenshot captioning task. Our study is available at https://github.com/RainYuGG/BLIP-Adapter | true | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 394,735
2403.18092 | OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation | The scarcity of ground-truth labels poses one major challenge in developing optical flow estimation models that are both generalizable and robust. While current methods rely on data augmentation, they have yet to fully exploit the rich information available in labeled video sequences. We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between. Utilizing a forward warping approach, OCAI employs occlusion awareness to resolve ambiguities in pixel values and fills in missing values by leveraging the forward-backward consistency of optical flows. Additionally, we introduce a teacher-student style semi-supervised learning method on top of the interpolated frames. Using a pair of unlabeled frames and the teacher model's predicted optical flow, we generate interpolated frames and flows to train a student model. The teacher's weights are maintained using Exponential Moving Averaging of the student. Our evaluations demonstrate perceptually superior interpolation quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 441,754
2401.08865 | The Effect of Intrinsic Dataset Properties on Generalization: Unraveling Learning Differences Between Natural and Medical Images | This paper investigates discrepancies in how neural networks learn from different imaging domains, which are commonly overlooked when adopting computer vision techniques from the domain of natural images to other specialized domains such as medical images. Recent works have found that the generalization error of a trained network typically increases with the intrinsic dimension ($d_{data}$) of its training set. Yet, the steepness of this relationship varies significantly between medical (radiological) and natural imaging domains, with no existing theoretical explanation. We address this gap in knowledge by establishing and empirically validating a generalization scaling law with respect to $d_{data}$, and propose that the substantial scaling discrepancy between the two considered domains may be at least partially attributed to the higher intrinsic ``label sharpness'' ($K_\mathcal{F}$) of medical imaging datasets, a metric which we propose. Next, we demonstrate an additional benefit of measuring the label sharpness of a training set: it is negatively correlated with the trained model's adversarial robustness, which notably leads to models for medical images having a substantially higher vulnerability to adversarial attack. Finally, we extend our $d_{data}$ formalism to the related metric of learned representation intrinsic dimension ($d_{repr}$), derive a generalization scaling law with respect to $d_{repr}$, and show that $d_{data}$ serves as an upper bound for $d_{repr}$. Our theoretical results are supported by thorough experiments with six models and eleven natural and medical imaging datasets over a range of training set sizes. Our findings offer insights into the influence of intrinsic dataset properties on generalization, representation learning, and robustness in deep neural networks. Code link: https://github.com/mazurowski-lab/intrinsic-properties | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 422,052
2405.13337 | Semantic Equitable Clustering: A Simple and Effective Strategy for Clustering Vision Tokens | The Vision Transformer (ViT) has gained prominence for its superior relational modeling prowess. However, its global attention mechanism's quadratic complexity poses substantial computational burdens. A common remedy spatially groups tokens for self-attention, reducing computational requirements. Nonetheless, this strategy neglects semantic information in tokens, possibly scattering semantically-linked tokens across distinct groups, thus compromising the efficacy of self-attention intended for modeling inter-token dependencies. Motivated by these insights, we introduce a fast and balanced clustering method, named \textbf{S}emantic \textbf{E}quitable \textbf{C}lustering (SEC). SEC clusters tokens based on their global semantic relevance in an efficient, straightforward manner. In contrast to traditional clustering methods requiring multiple iterations, our method achieves token clustering in a single pass. Additionally, SEC regulates the number of tokens per cluster, ensuring a balanced distribution for effective parallel processing on current computational platforms without necessitating further optimization. Capitalizing on SEC, we propose a versatile vision backbone, SECViT. Comprehensive experiments in image classification, object detection, instance segmentation, and semantic segmentation validate the effectiveness of SECViT. Moreover, SEC can be conveniently and swiftly applied to multimodal large language models (MLLM), such as LLaVA, to serve as a vision language connector, effectively accelerating the model's efficiency while maintaining unchanged or better performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 455,898
2401.03637 | Inverse-like Antagonistic Scene Text Spotting via Reading-Order Estimation and Dynamic Sampling | Scene text spotting is a challenging task, especially for inverse-like scene text, which has complex layouts, e.g., mirrored, symmetrical, or retro-flexed. In this paper, we propose a unified end-to-end trainable inverse-like antagonistic text spotting framework dubbed IATS, which can effectively spot inverse-like scene texts without sacrificing general ones. Specifically, we propose an innovative reading-order estimation module (REM) that extracts reading-order information from the initial text boundary generated by an initial boundary module (IBM). To optimize and train REM, we propose a joint reading-order estimation loss consisting of a classification loss, an orthogonality loss, and a distribution loss. With the help of IBM, we can divide the initial text boundary into two symmetric control points and iteratively refine the new text boundary using a lightweight boundary refinement module (BRM) for adapting to various shapes and scales. To alleviate the incompatibility between text detection and recognition, we propose a dynamic sampling module (DSM) with a thin-plate spline that can dynamically sample appropriate features for recognition in the detected text region. Without extra supervision, the DSM can proactively learn to sample appropriate features for text recognition through the gradient returned by the recognition module. Extensive experiments on both challenging scene text and inverse-like scene text datasets demonstrate that our method achieves superior performance both on irregular and inverse-like text spotting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 420,178
2105.10213 | GAN pretraining for deep convolutional autoencoders applied to Software-based Fingerprint Presentation Attack Detection | The need for reliable systems to determine fingerprint presentation attacks grows with the rising use of the fingerprint for authentication. This work presents a new approach to single-class classification for software-based fingerprint presentation attack detection. The described method utilizes a Wasserstein GAN to apply transfer learning to a deep convolutional autoencoder. By doing so, the autoencoder could be pretrained and finetuned on the LivDet2021 Dermalog sensor dataset with only 1122 bona fide training samples. Without making use of any presentation attack samples, the model could achieve an average classification error rate of 16.79%. The Wasserstein GAN implemented to pretrain the autoencoder's weights can further be used to generate realistic-looking artificial fingerprint patches. Extensive testing of different autoencoder architectures and hyperparameters led to coarse architectural guidelines as well as multiple implementations which can be utilized for future work. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,316
1905.10692 | Mapping high-performance RNNs to in-memory neuromorphic chips | The increasing need for compact and low-power computing solutions for machine learning applications has triggered significant interest in energy-efficient neuromorphic systems. However, most of these architectures rely on spiking neural networks, which typically perform poorly compared to their non-spiking counterparts in terms of accuracy. In this paper, we propose a new adaptive spiking neuron model that can be abstracted as a low-pass filter. This abstraction enables faster and better training of spiking networks using back-propagation, without simulating spikes. We show that this model dramatically improves the inference performance of a recurrent neural network and validate it with three complex spatio-temporal learning tasks: the temporal addition task, the temporal copying task, and a spoken-phrase recognition task. We estimate at least 500x higher energy-efficiency using our models on compatible neuromorphic chips in comparison to Cortex-M4, a popular embedded microprocessor. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 132,153 |
2003.10585 | Input-to-State Representation in linear reservoirs dynamics | Reservoir computing is a popular approach to design recurrent neural networks, due to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves favourable results as verified by practitioners. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 169,378
2402.06137 | On the Privacy of Selection Mechanisms with Gaussian Noise | Report Noisy Max and Above Threshold are two classical differentially private (DP) selection mechanisms. Their output is obtained by adding noise to a sequence of low-sensitivity queries and reporting the identity of the query whose (noisy) answer satisfies a certain condition. Pure DP guarantees for these mechanisms are easy to obtain when Laplace noise is added to the queries. On the other hand, when instantiated using Gaussian noise, standard analyses only yield approximate DP guarantees despite the fact that the outputs of these mechanisms lie in a discrete space. In this work, we revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise and show that, under the additional assumption that the underlying queries are bounded, it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold. The resulting bounds are tight and depend on closed-form expressions that can be numerically evaluated using standard methods. Empirically we find these lead to tighter privacy accounting in the high privacy, low data regime. Further, we propose a simple privacy filter for composing pure ex-post DP guarantees, and use it to derive a fully adaptive Gaussian Sparse Vector Technique mechanism. Finally, we provide experiments on mobility and energy consumption datasets demonstrating that our Sparse Vector Technique is practically competitive with previous approaches and requires less hyper-parameter tuning. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 428,182 |
1809.09582 | Contextual Bandits with Cross-learning | In the classical contextual bandits problem, in each round $t$, a learner observes some context $c$, chooses some action $i$ to perform, and receives some reward $r_{i,t}(c)$. We consider the variant of this problem where in addition to receiving the reward $r_{i,t}(c)$, the learner also learns the values of $r_{i,t}(c')$ for some other contexts $c'$ in set $\mathcal{O}_i(c)$; i.e., the rewards that would have been achieved by performing that action under different contexts $c'\in \mathcal{O}_i(c)$. This variant arises in several strategic settings, such as learning how to bid in non-truthful repeated auctions, which has gained a lot of attention lately as many platforms have switched to running first-price auctions. We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classical contextual bandits problem achieve $\tilde{O}(\sqrt{CKT})$ regret against all stationary policies, where $C$ is the number of contexts, $K$ the number of actions, and $T$ the number of rounds. We design and analyze new algorithms for the contextual bandits problem with cross-learning and show that their regret has better dependence on the number of contexts. Under complete cross-learning where the rewards for all contexts are learned when choosing an action, i.e., set $\mathcal{O}_i(c)$ contains all contexts, we show that our algorithms achieve regret $\tilde{O}(\sqrt{KT})$, removing the dependence on $C$. For any other cases, i.e., under partial cross-learning where $|\mathcal{O}_i(c)|< C$ for some context-action pair of $(i,c)$, the regret bounds depend on how the sets $\mathcal O_i(c)$ impact the degree to which cross-learning between contexts is possible. We simulate our algorithms on real auction data from an ad exchange running first-price auctions and show that they outperform traditional contextual bandit algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 108,740
2110.04752 | How Robust are Limit Order Book Representations under Data Perturbation? | The success of machine learning models in the financial domain is highly reliant on the quality of the data representation. In this paper, we focus on the representation of limit order book data and discuss the opportunities and challenges for learning representations of such data. We also experimentally analyse the issues associated with existing representations and present a guideline for future research in this area. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,028 |
1710.11420 | Joint Cooperative Computation and Interactive Communication for Relay-Assisted Mobile Edge Computing | To realize cooperative computation and communication in a relay mobile edge computing system, we develop a hybrid relay forward protocol, where we seek to balance the execution delay and network energy consumption. The problem is formulated as a nondifferentiable optimization problem which is nonconvex with highly coupled constraints. By exploiting the problem structure, we propose a lightweight algorithm based on the inexact block coordinate descent method. Our results show that the proposed algorithm exhibits much faster convergence as compared with the popular concave-convex procedure based algorithm, while achieving good performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 83,593
2207.05409 | Knowledge Condensation Distillation | Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher network to strengthen a smaller student. Existing methods focus on excavating the knowledge hints and transferring the whole knowledge to the student. However, the knowledge redundancy arises since the knowledge shows different values to the student at different learning stages. In this paper, we propose Knowledge Condensation Distillation (KCD). Specifically, the knowledge value on each sample is dynamically estimated, based on which an Expectation-Maximization (EM) framework is forged to iteratively condense a compact knowledge set from the teacher to guide the student learning. Our approach is easy to build on top of the off-the-shelf KD methods, with no extra training parameters and negligible computation overhead. Thus, it presents one new perspective for KD, in which the student that actively identifies teacher's knowledge in line with its aptitude can learn to learn more effectively and efficiently. Experiments on standard benchmarks manifest that the proposed KCD can well boost the performance of student model with even higher distillation efficiency. Code is available at https://github.com/dzy3/KCD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 307,531 |
2406.00389 | Understanding the Convergence in Balanced Resonate-and-Fire Neurons | Resonate-and-Fire (RF) neurons are an interesting complementary model for integrator neurons in spiking neural networks (SNNs). Due to their resonating membrane dynamics they can extract frequency patterns within the time domain. While established RF variants suffer from intrinsic shortcomings, the recently proposed balanced resonate-and-fire (BRF) neuron marked a significant methodological advance in terms of task performance, spiking and parameter efficiency, as well as general stability and robustness, demonstrated for recurrent SNNs in various sequence learning tasks. One of the most intriguing results, however, was an immense improvement in training convergence speed and smoothness, overcoming the typical convergence dilemma in backprop-based SNN training. This paper aims at providing further intuitions about how and why these convergence advantages emerge. We show that BRF neurons, in contrast to well-established ALIF neurons, span a very clean and smooth - almost convex - error landscape. Furthermore, empirical results reveal that the convergence benefits are predominantly coupled with a divergence boundary-aware optimization, a major component of the BRF formulation that addresses the numerical stability of the time-discrete resonator approximation. These results are supported by a formal investigation of the membrane dynamics indicating that the gradient is transferred back through time without loss of magnitude. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 459,828
2211.06929 | Goal-Conditioned Reinforcement Learning in the Presence of an Adversary | Reinforcement learning has seen increasing applications in real-world contexts over the past few years. However, physical environments are often imperfect and policies that perform well in simulation might not achieve the same performance when applied elsewhere. A common approach to combat this is to train agents in the presence of an adversary. An adversary acts to destabilise the agent, which learns a more robust policy and can better handle realistic conditions. Many real-world applications of reinforcement learning also make use of goal-conditioning: this is particularly useful in the context of robotics, as it allows the agent to act differently, depending on which goal is selected. Here, we focus on the problem of goal-conditioned learning in the presence of an adversary. We first present DigitFlip and CLEVR-Play, two novel goal-conditioned environments that support acting against an adversary. Next, we propose EHER and CHER -- two HER-based algorithms for goal-conditioned learning -- and evaluate their performance. Finally, we unify the two threads and introduce IGOAL: a novel framework for goal-conditioned learning in the presence of an adversary. Experimental results show that combining IGOAL with EHER allows agents to significantly outperform existing approaches, when acting against both random and competent adversaries. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 330,074 |
2310.19641 | DistNet2D: Leveraging long-range temporal information for efficient segmentation and tracking | Extracting long tracks and lineages from videomicroscopy requires an extremely low error rate, which is challenging on complex datasets of dense or deforming cells. Leveraging temporal context is key to overcoming this challenge. We propose DistNet2D, a new deep neural network (DNN) architecture for 2D cell segmentation and tracking that leverages both mid- and long-term temporal information. DistNet2D considers seven frames at the input and uses a post-processing procedure that exploits information from the entire video to correct segmentation errors. DistNet2D outperforms two recent methods on two experimental datasets, one containing densely packed bacterial cells and the other containing eukaryotic cells. It is integrated into an ImageJ-based graphical user interface for 2D data visualization, curation, and training. Finally, we demonstrate the performance of DistNet2D on correlating the size and shape of cells with their transport properties over large statistics, for both bacterial and eukaryotic cells. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 404,066
2302.01133 | SceneScape: Text-Driven Consistent Scene Generation | We present a method for text-driven perpetual view generation -- synthesizing long-term videos of various scenes solely from an input text prompt describing the scene and camera poses. We introduce a novel framework that generates such videos in an online fashion by combining the generative power of a pre-trained text-to-image model with the geometric priors learned by a pre-trained monocular depth prediction model. To tackle the pivotal challenge of achieving 3D consistency, i.e., synthesizing videos that depict geometrically-plausible scenes, we deploy an online test-time training to encourage the predicted depth map of the current frame to be geometrically consistent with the synthesized scene. The depth maps are used to construct a unified mesh representation of the scene, which is progressively constructed along the video generation process. In contrast to previous works, which are applicable only to limited domains, our method generates diverse scenes, such as walkthroughs in spaceships, caves, or ice castles. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 343,494
1808.01491 | Non-locally Enhanced Encoder-Decoder Network for Single Image De-raining | Single image rain streaks removal has recently witnessed substantial progress due to the development of deep convolutional neural networks. However, existing deep learning based methods either focus on the entrance and exit of the network by decomposing the input image into high and low frequency information and employing residual learning to reduce the mapping range, or focus on the introduction of cascaded learning scheme to decompose the task of rain streaks removal into multi-stages. These methods treat the convolutional neural network as an encapsulated end-to-end mapping module without deepening into the rationality and superiority of neural network design. In this paper, we delve into an effective end-to-end neural network structure for stronger feature expression and spatial correlation learning. Specifically, we propose a non-locally enhanced encoder-decoder network framework, which consists of a pooling indices embedded encoder-decoder network to efficiently learn increasingly abstract feature representation for more accurate rain streaks modeling while perfectly preserving the image detail. The proposed encoder-decoder framework is composed of a series of non-locally enhanced dense blocks that are designed to not only fully exploit hierarchical features from all the convolutional layers but also well capture the long-distance dependencies and structural information. Extensive experiments on synthetic and real datasets demonstrate that the proposed method can effectively remove rain-streaks on rainy image of various densities while well preserving the image details, which achieves significant improvements over the recent state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,576 |
2102.11958 | The SpaceNet Multi-Temporal Urban Development Challenge | Building footprints provide a useful proxy for a great many humanitarian applications. For example, building footprints are useful for high fidelity population estimates, and quantifying population statistics is fundamental to ~1/4 of the United Nations Sustainable Development Goals Indicators. In this paper we (the SpaceNet Partners) discuss efforts to develop techniques for precise building footprint localization, tracking, and change detection via the SpaceNet Multi-Temporal Urban Development Challenge (also known as SpaceNet 7). In this NeurIPS 2020 competition, participants were asked to identify and track buildings in satellite imagery time series collected over rapidly urbanizing areas. The competition centered around a brand new open source dataset of Planet Labs satellite imagery mosaics at 4m resolution, which includes 24 images (one per month) covering ~100 unique geographies. Tracking individual buildings at this resolution is quite challenging, yet the winning participants demonstrated impressive performance with the newly developed SpaceNet Change and Object Tracking (SCOT) metric. This paper details the top-5 winning approaches, as well as analysis of results that yielded a handful of interesting anecdotes such as decreasing performance with latitude. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 221,573
2108.09468 | End2End Occluded Face Recognition by Masking Corrupted Features | With the recent advancement of deep convolutional neural networks, significant progress has been made in general face recognition. However, the state-of-the-art general face recognition models do not generalize well to occluded face images, which are exactly the common cases in real-world scenarios. The potential reasons are the absences of large-scale occluded face data for training and specific designs for tackling corrupted features brought by occlusions. This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network. Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks. In addition, we construct massive occluded face images to train FROM effectively and efficiently. FROM is simple yet powerful compared to the existing methods that either rely on external detectors to discover the occlusions or employ shallow models which are less discriminative. Experimental results on the LFW, Megaface challenge 1, RMF2, AR dataset and other simulated occluded/masked datasets confirm that FROM dramatically improves the accuracy under occlusions, and generalizes well on general face recognition. Code is available at https://github.com/haibo-qiu/FROM | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 251,611 |
1407.4334 | How network temporal dynamics shape a mutualistic system with invasive species? | Ecological networks allow us to study the structure and function of ecosystems and gain insights on species resilience/stability. The study of these ecological networks is usually a snapshot focused on a limited, specific range of space and time, preventing us from perceiving the real dynamics of ecological processes. By definition, an alien species has some ecological strategies and traits that permit it to compete better than the native species (e.g. absence of predators, different bloom period, high growth rate, etc.). Plant-pollinator networks provide valuable services to whole ecosystems, and the introduction of an alien species may have different effects on the native network (competitive facilitation, native species extinction, etc.). While scientists acknowledge the significance of network connectivity in driving ecosystem services, the inclusion of temporary networks in ecological models is still in its infancy. We propose to use existing data on seasonality to develop a simulation platform that shows inference between the temporality of networks and invasion traits. Our focus is only to pick a simple model showing that, theoretically, the temporal aspect plays a role (different extinction patterns), to encourage ecologists to get involved in temporal networks. Moreover, the derived simulations could be further extended and adjusted to other ecological questions. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 34,695