Dataset schema (one record per arXiv paper):
- id: string (9 to 16 characters)
- title: string (4 to 278 characters)
- abstract: string (3 to 4.08k characters)
- 18 binary label columns (bool, 2 classes each): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (0 to 541k)

2207.03337
Factorizing Knowledge in Neural Networks
In this paper, we explore a novel and ambitious knowledge-transfer task, termed Knowledge Factorization~(KF). The core idea of KF lies in the modularization and assemblability of knowledge: given a pretrained network model as input, KF aims to decompose it into several factor networks, each of which handles only a dedicated task and maintains task-specific knowledge factorized from the source network. Such factor networks are task-wise disentangled and can be directly assembled, without any fine-tuning, to produce the more competent combined-task networks. In other words, the factor networks serve as Lego-brick-like building blocks, allowing us to construct customized networks in a plug-and-play manner. Specifically, each factor network comprises two modules: a common-knowledge module that is task-agnostic and shared by all factor networks, along with a task-specific module dedicated to the factor network itself. We introduce an information-theoretic objective, InfoMax-Bottleneck~(IMB), to carry out KF by optimizing the mutual information between the learned representations and the input. Experiments across various benchmarks demonstrate that the derived factor networks yield gratifying performance not only on the dedicated tasks but also in disentanglement, while enjoying much better interpretability and modularity. Moreover, the learned common-knowledge representations give rise to impressive results on transfer learning. Our code is available at https://github.com/Adamdad/KnowledgeFactor.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 306,805

1006.3455
An External Description for MIMO Systems Sampled in an Aperiodic Way
An external description for aperiodically sampled MIMO linear systems has been developed. Emphasis is on the sampling period sequence, included among the variables to be handled. The computational procedure is simple and no use of polynomial matrix theory is required. This input/output description is believed to be a basic formulation for its later application to the problem of optimal control and/or identification of linear dynamical systems.
labels: cs.IT, Other
__index_level_0__: 6,822

2205.11987
Word-order typology in Multilingual BERT: A case study in subordinate-clause detection
The capabilities and limitations of BERT and similar models are still unclear when it comes to learning syntactic abstractions, in particular across languages. In this paper, we use the task of subordinate-clause detection within and across languages to probe these properties. We show that this task is deceptively simple, with easy gains offset by a long tail of harder cases, and that BERT's zero-shot performance is dominated by word-order effects, mirroring the SVO/VSO/SOV typology.
labels: cs.CL
__index_level_0__: 298,360

2303.14384
Reliability-Hierarchical Memory Network for Scribble-Supervised Video Object Segmentation
This paper aims to solve the video object segmentation (VOS) task in a scribble-supervised manner, in which VOS models are not only trained by the sparse scribble annotations but also initialized with the sparse target scribbles for inference. Thus, the annotation burdens for both training and initialization can be substantially lightened. The difficulties of scribble-supervised VOS lie in two aspects. On the one hand, it requires a strong ability to learn from the sparse scribble annotations during training. On the other hand, it demands strong reasoning capability during inference given only a sparse initial target scribble. In this work, we propose a Reliability-Hierarchical Memory Network (RHMNet) to predict the target mask in a step-wise expanding strategy w.r.t. the memory reliability level. To be specific, RHMNet first only uses the memory in the high-reliability level to locate the region with high reliability belonging to the target, which is highly similar to the initial target scribble. Then it expands the located high-reliability region to the entire target conditioned on the region itself and the memories in all reliability levels. In addition, we propose a scribble-supervised learning mechanism to facilitate the learning of our model to predict dense results. It mines the pixel-level relation within the single frame and the frame-level relation within the sequence to take full advantage of the scribble annotations in sequence training samples. The favorable performance on two popular benchmarks demonstrates that our method is promising.
labels: cs.CV
__index_level_0__: 354,065

2405.14555
Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models
Research on Large Language Models (LLMs) has often neglected subtle biases that, although less apparent, can significantly influence the models' outputs toward particular social narratives. This study addresses two such biases within LLMs: representative bias, which denotes a tendency of LLMs to generate outputs that mirror the experiences of certain identity groups, and affinity bias, reflecting the models' evaluative preferences for specific narratives or viewpoints. We introduce two novel metrics to measure these biases: the Representative Bias Score (RBS) and the Affinity Bias Score (ABS), and present the Creativity-Oriented Generation Suite (CoGS), a collection of open-ended tasks such as short story writing and poetry composition, designed with customized rubrics to detect these subtle biases. Our analysis uncovers marked representative biases in prominent LLMs, with a preference for identities associated with being white, straight, and men. Furthermore, our investigation of affinity bias reveals distinctive evaluative patterns within each model, akin to `bias fingerprints'. This trend is also seen in human evaluators, highlighting a complex interplay between human and machine bias perceptions.
labels: cs.AI, cs.LG, cs.CL, cs.CY
__index_level_0__: 456,488

2012.13489
Learning Robust Representation for Clustering through Locality Preserving Variational Discriminative Network
Clustering is one of the fundamental problems in unsupervised learning. Recent deep learning based methods focus on learning clustering oriented representations. Among those methods, Variational Deep Embedding (VaDE) achieves great success in various clustering tasks by placing a Gaussian mixture prior over the latent space. However, VaDE suffers from two problems: 1) it is fragile to input noise; 2) it ignores the locality information between neighboring data points. In this paper, we propose a joint learning framework that improves VaDE with a robust embedding discriminator and a local structure constraint, which are both helpful to improve the robustness of our model. Experimental results on various vision and textual datasets demonstrate that our method outperforms the state-of-the-art baseline models in all metrics. Further detailed analysis shows that our proposed model is very robust to adversarial inputs, which is a desirable property for practical applications.
labels: cs.LG
__index_level_0__: 213,231

2402.13147
SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning
We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While it may not be feasible to obtain numerous expert demonstrations, it is often possible to gather a larger set of sub-optimal demonstrations. For example, in treatment optimization problems, there are varying levels of doctor treatments available for different chronic conditions. These range from treatment specialists and experienced general practitioners to less experienced general practitioners. Similarly, when robots are trained to imitate humans in routine tasks, they might learn from individuals with different levels of expertise and efficiency. In this paper, we propose an offline IL approach that leverages the larger set of sub-optimal demonstrations while effectively mimicking expert trajectories. Existing offline IL methods based on behavior cloning or distribution matching often face issues such as overfitting to the limited set of expert demonstrations or inadvertently imitating sub-optimal trajectories from the larger dataset. Our approach, which is based on inverse soft-Q learning, learns from both expert and sub-optimal demonstrations. It assigns higher importance (through learned weights) to aligning with expert demonstrations and lower importance to aligning with sub-optimal ones. A key contribution of our approach, called SPRINQL, is transforming the offline IL problem into a convex optimization over the space of Q functions. Through comprehensive experimental evaluations, we demonstrate that the SPRINQL algorithm achieves state-of-the-art (SOTA) performance on offline IL benchmarks. Code is available at https://github.com/hmhuy0/SPRINQL.
labels: cs.AI, cs.LG
__index_level_0__: 431,132

2010.05662
End-to-End Deep Learning for Reliable Cardiac Activity Monitoring using Seismocardiograms
Continuous monitoring of cardiac activity is paramount to understanding the functioning of the heart in addition to identifying precursors to conditions such as Atrial Fibrillation. Through continuous cardiac monitoring, early indications of any potential disorder can be detected before the actual event, allowing timely preventive measures to be taken. Electrocardiography (ECG) is an established standard for monitoring the function of the heart for clinical and non-clinical applications, but its electrode-based implementation makes it cumbersome, especially for uninterrupted monitoring. Hence we propose SeismoNet, a Deep Convolutional Neural Network which aims to provide an end-to-end solution to robustly observe heart activity from Seismocardiogram (SCG) signals. These SCG signals are motion-based and can be acquired in an easy, user-friendly fashion. Furthermore, the use of deep learning enables the detection of R-peaks directly from SCG signals in spite of their noise-ridden morphology and obviates the need for extracting hand-crafted features. SeismoNet was modelled on the publicly available CEBS dataset and achieved a high overall Sensitivity and Positive Predictive Value of 0.98 and 0.98 respectively.
labels: cs.LG
__index_level_0__: 200,218

1009.6197
Secure Relay Beamforming over Cognitive Radio Channels
In this paper, a cognitive relay channel is considered, and amplify-and-forward (AF) relay beamforming designs in the presence of an eavesdropper and a primary user are studied. Our objective is to optimize the performance of the cognitive relay beamforming system while limiting the interference in the direction of the primary receiver and keeping the transmitted signal secret from the eavesdropper. We show that under both total and individual power constraints, the problem becomes a quasiconvex optimization problem which can be solved by interior point methods. We also propose two sub-optimal null space beamforming schemes which are obtained in a more computationally efficient way.
labels: cs.IT
__index_level_0__: 7,730

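The null-space beamforming idea this abstract mentions can be sketched in a few lines: constrain the relay weight vector to lie in the null space of the channel toward the primary receiver, so the relay contributes zero interference there. The channel model, dimensions, and random values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex channel from the relays to the primary receiver.
n_relays = 6
h_p = rng.standard_normal(n_relays) + 1j * rng.standard_normal(n_relays)

# Orthonormal basis of the null space of h_p^H, obtained from the SVD of
# the 1 x n row conj(h_p): rows 2..n of Vh span the orthogonal complement.
_, _, Vh = np.linalg.svd(h_p[None, :].conj())
null_basis = Vh[1:].conj().T          # shape (n_relays, n_relays - 1)

# Any combination of null-space basis vectors is a valid beamformer.
w = null_basis @ rng.standard_normal(n_relays - 1)

# Interference at the primary receiver: |h_p^H w|, which is (numerically) zero.
interference = abs(np.vdot(h_p, w))
```

This only reproduces the zero-interference constraint; the paper additionally optimizes secrecy rate within that null space, which this sketch does not attempt.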
2006.12485
On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances and Million-AID
The past years have witnessed great progress on remote sensing (RS) image interpretation and its wide applications. With RS images becoming more accessible than ever before, there is an increasing demand for the automatic interpretation of these images. In this context, benchmark datasets serve as essential prerequisites for developing and testing intelligent interpretation algorithms. After reviewing existing benchmark datasets in the research community of RS image interpretation, this article discusses the problem of how to efficiently prepare a suitable benchmark dataset for RS image interpretation. Specifically, we first analyze the current challenges of developing intelligent algorithms for RS image interpretation with bibliometric investigations. We then present general guidance on creating benchmark datasets efficiently. Following this guidance, we also provide an example of building an RS image dataset, i.e., Million-AID, a new large-scale benchmark dataset containing a million instances for RS image scene classification. Several challenges and perspectives in RS image annotation are finally discussed to facilitate the research in benchmark dataset construction. We hope this paper will provide the RS community with an overall perspective on constructing large-scale and practical image datasets for further research, especially data-driven research.
labels: cs.CV
__index_level_0__: 183,609

1211.6898
On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes
We consider infinite-horizon stationary $\gamma$-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. Using Value and Policy Iteration with some error $\epsilon$ at each iteration, it is well-known that one can compute stationary policies that are $\frac{2\gamma}{(1-\gamma)^2}\epsilon$-optimal. After arguing that this guarantee is tight, we develop variations of Value and Policy Iteration for computing non-stationary policies that can be up to $\frac{2\gamma}{1-\gamma}\epsilon$-optimal, which constitutes a significant improvement in the usual situation when $\gamma$ is close to 1. Surprisingly, this shows that the problem of "computing near-optimal non-stationary policies" is much simpler than that of "computing near-optimal stationary policies".
labels: cs.AI, cs.LG
__index_level_0__: 20,017

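The $\gamma$-discounted setting this abstract works in can be sketched with plain value iteration on a toy MDP. The transition probabilities and rewards below are hypothetical, and the sketch only illustrates computing a near-optimal stationary policy, not the paper's non-stationary variants:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """Standard value iteration for a gamma-discounted MDP.

    P: (A, S, S) transition matrices, R: (A, S) rewards.
    Returns the value function and a greedy (stationary) policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * P @ V          # (A, S): Q[a, s]
        V_new = Q.max(axis=0)          # Bellman optimality update
        if np.max(np.abs(V_new - V)) < eps:
            break
        V = V_new
    return V_new, Q.argmax(axis=0)

# Toy 2-state, 2-action MDP (numbers invented for illustration only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.6, 0.4]]])   # action 1
R = np.array([[1.0, 0.0],
              [0.5, 0.8]])
V, pi = value_iteration(P, R)
```

Because the Bellman operator is a $\gamma$-contraction, the loop terminates with a value function whose Bellman residual is below the tolerance; the paper's analysis concerns what happens when each such update carries an error $\epsilon$.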
2208.08287
Noisy Nonnegative Tucker Decomposition with Sparse Factors and Missing Data
Tensor decomposition is a powerful tool for extracting physically meaningful latent factors from multi-dimensional nonnegative data, and has attracted increasing interest in a variety of fields such as image processing, machine learning, and computer vision. In this paper, we propose a sparse nonnegative Tucker decomposition and completion method for the recovery of underlying nonnegative data under noisy observations. Here the underlying nonnegative data tensor is decomposed into a core tensor and several factor matrices with all entries being nonnegative and the factor matrices being sparse. The loss function is derived by the maximum likelihood estimation of the noisy observations, and the $\ell_0$ norm is employed to enhance the sparsity of the factor matrices. We establish the error bound of the estimator of the proposed model under generic noise scenarios, which is then specified to the observations with additive Gaussian noise, additive Laplace noise, and Poisson observations, respectively. Our theoretical results are better than those by existing tensor-based or matrix-based methods. Moreover, the minimax lower bounds are shown to be matched with the derived upper bounds up to logarithmic factors. Numerical examples on both synthetic and real-world data sets demonstrate the superiority of the proposed method for nonnegative tensor data completion.
labels: cs.LG
__index_level_0__: 313,330

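The Tucker model underlying this abstract has a compact algebraic form: a nonnegative core tensor multiplied by a (sparse) nonnegative factor matrix along each mode. A minimal sketch with made-up shapes and ranks, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tucker model: X = G x1 U1 x2 U2 x3 U3, all entries nonnegative.
# A 3-way tensor of shape (5, 6, 4) with multilinear rank (2, 3, 2).
G  = rng.random((2, 3, 2))     # nonnegative core tensor
U1 = rng.random((5, 2))        # nonnegative factor matrices
U2 = rng.random((6, 3))
U3 = rng.random((4, 2))
U1[U1 < 0.2] = 0.0             # crude sparsity in one factor, mimicking
                               # the sparse-factor assumption

# All three mode products written as a single einsum contraction.
X = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)
```

The recovery problem in the paper runs this in reverse: given noisy, partially observed entries of `X`, estimate `G` and the `U`s under nonnegativity and $\ell_0$ sparsity constraints.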
1801.03975
Timely Status Update in Massive IoT Systems: Decentralized Scheduling for Wireless Uplinks
In a typical Internet of Things (IoT) application where a central controller collects status updates from multiple terminals, e.g., sensors and monitors, through a wireless multiaccess uplink, an important problem is how to attain timely status updates autonomously. In this paper, the timeliness of the status is measured by the recently proposed age-of-information (AoI) metric; both the theoretical and practical aspects of the problem are investigated: we aim to obtain a scheduling policy with minimum AoI and, meanwhile, still suitable for decentralized implementation on account of signalling exchange overhead. Towards this end, we first consider the set of arrival-independent and renewal (AIR) policies; the optimal policy thereof to minimize the time-average AoI is proved to be a round-robin policy with one-packet (latest packet only and others are dropped) buffers (RR-ONE). The optimality is established based on a generalized Poisson-arrival-see-time-average (PASTA) theorem. It is further proved that RR-ONE is asymptotically optimal among all policies in the massive IoT regime. The AoI steady-state stationary distribution under RR-ONE is also derived. A fully decentralized implementation of RR-ONE is proposed which can accommodate dynamic terminal appearances. In addition, considering scenarios where packets cannot be dropped, the optimal AIR policy thereof is also found with parameters given by the solution to a convex problem.
labels: cs.IT
__index_level_0__: 88,188

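The round-robin intuition behind RR-ONE can be illustrated with a toy simulation. This sketch assumes generate-at-will sources (a fresh packet at every service), a simplification of the paper's random-arrival model; under it, the average age-of-information settles near (N+1)/2 slots for N terminals:

```python
import numpy as np

def simulate_rr_aoi(num_terminals=10, horizon=10_000):
    """Round-robin status updates with a fresh sample at each service.

    Each slot the scheduler polls the next terminal in fixed order; that
    terminal delivers a freshly generated packet, resetting its AoI to 1,
    while every other terminal's AoI grows by one slot.
    """
    age = np.ones(num_terminals)
    total = 0.0
    for t in range(horizon):
        served = t % num_terminals   # round-robin order
        age += 1
        age[served] = 1              # fresh update delivered
        total += age.mean()
    return total / horizon

avg_aoi = simulate_rr_aoi()
```

In steady state the ages at any slot are a permutation of 1..N, which is where the (N+1)/2 average comes from; the paper's contribution is showing a policy of this shape is AoI-optimal in a much richer stochastic-arrival setting.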
2001.02329
Emo-CNN for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Emotion plays a key role in many applications, such as healthcare, where it helps capture patients' emotional behavior. Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is defining the very meaning of stress and being able to categorize it in a precise manner. Supervised Machine Learning models, including state-of-the-art Deep Learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computation and emotion detection is the limited amount of annotated data of stress. The existing labelled stress emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC features in a Convolutional Neural Network. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. To tackle the second and the more significant problem of subjectivity in stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
labels: cs.HC, cs.SD, cs.AI
__index_level_0__: 159,706

cmp-lg/9408005
A Modular and Flexible Architecture for an Integrated Corpus Query System
The paper describes the architecture of an integrated and extensible corpus query system developed at the University of Stuttgart and gives examples of some of the modules realized within this architecture. The modules form the core of a corpus workbench. Within the proposed architecture, information required for the evaluation of queries may be derived from different knowledge sources (the corpus text, databases, on-line thesauri) and by different means: either through direct lookup in a database or by calling external tools which may infer the necessary information at the time of query evaluation. The information available and the method of information access can be stated declaratively and individually for each corpus, leading to a flexible, extensible and modular corpus workbench.
labels: cs.CL
__index_level_0__: 536,154

1905.01896
Distributional Semantics and Linguistic Theory
Distributional semantics provides multi-dimensional, graded, empirically induced word representations that successfully capture many aspects of meaning in natural languages, as shown in a large body of work in computational linguistics; yet, its impact in theoretical linguistics has so far been limited. This review provides a critical discussion of the literature on distributional semantics, with an emphasis on methods and results that are of relevance for theoretical linguistics, in three areas: semantic change, polysemy and composition, and the grammar-semantics interface (specifically, the interface of semantics with syntax and with derivational morphology). The review aims at fostering greater cross-fertilization of theoretical and computational approaches to language, as a means to advance our collective knowledge of how it works.
labels: cs.CL
__index_level_0__: 129,834

2304.06053
TextANIMAR: Text-based 3D Animal Fine-Grained Retrieval
3D object retrieval is an important yet challenging task that has drawn more and more attention in recent years. While existing approaches have made strides in addressing this issue, they are often limited to restricted settings such as image and sketch queries, which are often unfriendly interactions for common users. In order to overcome these limitations, this paper presents a novel SHREC challenge track focusing on text-based fine-grained retrieval of 3D animal models. Unlike previous SHREC challenge tracks, the proposed task is considerably more challenging, requiring participants to develop innovative approaches to tackle the problem of text-based retrieval. Despite the increased difficulty, we believe this task can potentially drive useful applications in practice and facilitate more intuitive interactions with 3D objects. Five groups participated in our competition, submitting a total of 114 runs. While the results obtained in our competition are satisfactory, we note that the challenges presented by this task are far from fully solved. As such, we provide insights into potential areas for future research and improvements. We believe we can help push the boundaries of 3D object retrieval and facilitate more user-friendly interactions via vision-language technologies. https://aichallenge.hcmus.edu.vn/textanimar
labels: cs.CV
__index_level_0__: 357,842

2403.02334
Gradient Correlation Subspace Learning against Catastrophic Forgetting
Efficient continual learning techniques have been a topic of significant research over the last few years. A fundamental problem with such learning is severe degradation of performance on previously learned tasks, also known as catastrophic forgetting. This paper introduces a novel method to reduce catastrophic forgetting in the context of incremental class learning called Gradient Correlation Subspace Learning (GCSL). The method detects a subspace of the weights that is least affected by previous tasks and projects the weights to train for the new task into said subspace. The method can be applied to one or more layers of a given network architecture, and the size of the subspace used can be altered from layer to layer and task to task. Code will be available at \href{https://github.com/vgthengane/GCSL}{https://github.com/vgthengane/GCSL}
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 434,773

2109.01954
An Exploration of Deep Learning Methods in Hungry Geese
Hungry Geese is an n-player variation of the popular game Snake. This paper looks at state-of-the-art Deep Reinforcement Learning value methods. The goal of the paper is to aggregate research of value-based methods and apply it as an exercise to other environments. A vanilla Deep Q Network, a Double Q-network and a Dueling Q-Network were all examined and tested with the Hungry Geese environment. The best performing model was the vanilla Deep Q Network due to its simple state representation and smaller network structure. Converging towards an optimal policy was found to be difficult due to random geese initialization and food generation. We therefore show that Deep Q Networks may not be the appropriate model for such a stochastic environment, and lastly we present improvements that can be made along with more suitable models for the environment.
labels: cs.AI
__index_level_0__: 253,590

2009.10947
Pose Imitation Constraints for Collaborative Robots
Achieving human-like motion in robots has been a fundamental goal in many areas of robotics research. Inverse kinematic (IK) solvers have been explored as a solution to provide kinematic structures with anthropomorphic movements. In particular, numeric solvers based on geometry, such as FABRIK, have shown potential for producing human-like motion at a low computational cost. Nevertheless, these methods have shown limitations when solving for robot kinematic constraints. This work proposes a framework inspired by FABRIK for human pose imitation in real-time. The goal is to mitigate the problems of the original algorithm while retaining the resulting human-like fluidity and low cost. We first propose a human constraint model for pose imitation. Then, we present a pose imitation algorithm (PIC) and its soft version (PICs), which can successfully imitate human poses using the proposed constraint system. PIC was tested on two collaborative robots (Baxter and YuMi). Fifty human demonstrations were collected for a bi-manual assembly and an incision task. Then, two performance metrics were obtained for both robots: pose accuracy with respect to the human and the percentage of environment occlusion/obstruction. The performance of PIC and PICs was compared against the numerical solver baseline (FABRIK). The proposed algorithms achieve a higher pose accuracy than FABRIK for both tasks (25%-FABRIK, 53%-PICs, 58%-PICs). In addition, PIC and its soft version achieve a lower percentage of occlusion during incision (10%-FABRIK, 4%-PICs, 9%-PICs). These results indicate that the PIC method can reproduce human poses and achieve key desired effects of human imitation.
labels: cs.HC, cs.RO
__index_level_0__: 197,027

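This abstract builds on FABRIK, a geometric IK solver that alternates backward and forward passes, repositioning each joint along the line to its neighbor. A minimal 2D version of the base algorithm (the chain geometry is invented, and the paper's PIC constraint system is not reproduced):

```python
import numpy as np

def fabrik(points, target, tol=1e-5, max_iter=100):
    """Plain 2D FABRIK for a serial chain.

    points: (n+1, 2) joint positions; points[0] is the fixed base.
    Returns joint positions whose end effector reaches the target.
    """
    points = points.astype(float)
    lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    base = points[0].copy()
    if np.linalg.norm(target - base) > lengths.sum():
        raise ValueError("target out of reach")
    for _ in range(max_iter):
        # Backward pass: pin the end effector to the target.
        points[-1] = target
        for i in range(len(points) - 2, -1, -1):
            lam = lengths[i] / np.linalg.norm(points[i + 1] - points[i])
            points[i] = (1 - lam) * points[i + 1] + lam * points[i]
        # Forward pass: pin the base back in place.
        points[0] = base
        for i in range(len(points) - 1):
            lam = lengths[i] / np.linalg.norm(points[i + 1] - points[i])
            points[i + 1] = (1 - lam) * points[i] + lam * points[i + 1]
        if np.linalg.norm(points[-1] - target) < tol:
            break
    return points

pose = fabrik(np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]]),
              np.array([2.0, 1.0]))
```

Each pass only moves joints along straight lines, which is why the method is cheap; the link lengths are preserved by construction, and the two passes reconcile the base constraint with the target.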
1803.08420
Incremental Color Quantization for Color-Vision-Deficient Observers Using Mobile Gaming Data
The sizes of compressed images depend on their spatial resolution (number of pixels) and on their color resolution (number of color quantization levels). We introduce DaltonQuant, a new color quantization technique for image compression that cloud services can apply to images destined for a specific user with known color vision deficiencies. DaltonQuant improves compression in a user-specific but reversible manner thereby improving a user's network bandwidth and data storage efficiency. DaltonQuant quantizes image data to account for user-specific color perception anomalies, using a new method for incremental color quantization based on a large corpus of color vision acuity data obtained from a popular mobile game. Servers that host images can revert DaltonQuant's image requantization and compression when those images must be transmitted to a different user, making the technique practical to deploy on a large scale. We evaluate DaltonQuant's compression performance on the Kodak PC reference image set and show that it improves compression by an additional 22%-29% over the state-of-the-art compressors TinyPNG and pngquant.
labels: cs.HC, cs.CV, cs.CY
__index_level_0__: 93,258

1501.01661
Provably Delay Efficient Data Retrieving in Storage Clouds
One key requirement for storage clouds is to be able to retrieve data quickly. Recent system measurements have shown that the data retrieving delay in storage clouds is highly variable, which may result in a long latency tail. One crucial idea to improve the delay performance is to retrieve multiple data copies by using parallel downloading threads. However, how to optimally schedule these downloading threads to minimize the data retrieving delay remains an important open problem. In this paper, we develop low-complexity thread scheduling policies for several important classes of data downloading time distributions, and prove that these policies are either delay-optimal or within a constant gap from the optimum delay performance. These theoretical results hold for an arbitrary arrival process of read requests that may contain finite or infinite read requests, and for heterogeneous MDS storage codes that can support diverse storage redundancy and reliability requirements for different data files. Our numerical results show that the delay performance of the proposed policies is significantly better than that of First-Come-First-Served (FCFS) policies considered in prior work.
labels: cs.IT, Other
__index_level_0__: 39,099

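The motivation for parallel redundant reads can be seen in a toy model: when per-copy download times are heavy-tailed, taking the first of k parallel copies sharply trims both the mean and the latency tail. The Pareto delay model and its parameters here are illustrative only, not the paper's distributions or policies:

```python
import numpy as np

rng = np.random.default_rng(42)

# Per-copy download time ~ Pareto(alpha=3, x_min=1): occasionally a single
# read stalls badly.  (NumPy's pareto() draws Lomax samples; adding 1
# shifts them to a classical Pareto with x_min = 1.)
n_requests, k = 100_000, 3
single = rng.pareto(3.0, size=n_requests) + 1.0
# Issue k redundant reads per request and keep the first to finish.
redundant = (rng.pareto(3.0, size=(n_requests, k)) + 1.0).min(axis=1)

p99_single = np.quantile(single, 0.99)
p99_redundant = np.quantile(redundant, 0.99)
```

The minimum of k Pareto(3) variables is Pareto(3k), so the mean drops from 1.5 toward 9/8 and the 99th percentile collapses; the scheduling question the paper studies is when and how to spend that extra download bandwidth optimally.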
2501.16471
SIM: Surface-based fMRI Analysis for Inter-Subject Multimodal Decoding from Movie-Watching Experiments
Current AI frameworks for brain decoding and encoding typically train and test models within the same datasets. This limits their utility for brain computer interfaces (BCI) or neurofeedback, for which it would be useful to pool experiences across individuals to better simulate stimuli not sampled during training. A key obstacle to model generalisation is the degree of variability of inter-subject cortical organisation, which makes it difficult to align or compare cortical signals across participants. In this paper, we address this through the use of surface vision transformers, which build a generalisable model of cortical functional dynamics, through encoding the topography of cortical networks and their interactions as a moving image across a surface. This is then combined with tri-modal self-supervised contrastive (CLIP) alignment of audio, video, and fMRI modalities to enable the retrieval of visual and auditory stimuli from patterns of cortical activity (and vice versa). We validate our approach on 7T task-fMRI data from 174 healthy participants engaged in the movie-watching experiment from the Human Connectome Project (HCP). Results show that it is possible to detect which movie clips an individual is watching purely from their brain activity, even for individuals and movies not seen during training. Further analysis of attention maps reveals that our model captures individual patterns of brain activity that reflect semantic and visual systems. This opens the door to future personalised simulations of brain function. Code & pre-trained models will be made available at https://github.com/metrics-lab/sim, processed data for training will be available upon request at https://gin.g-node.org/Sdahan30/sim.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
527,977
2403.01244
Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal
Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model's ability, which may not be feasible in real-world applications. When conducting continual learning based on a publicly-released LLM checkpoint, the availability of the original training data may be non-existent. To address this challenge, we propose a framework called Self-Synthesized Rehearsal (SSR) that uses the LLM to generate synthetic instances for rehearsal. Concretely, we first employ the base LLM for in-context learning to generate synthetic instances. Subsequently, we utilize the latest LLM to refine the instance outputs based on the synthetic inputs, preserving its acquired ability. Finally, we select diverse high-quality synthetic instances for rehearsal in future stages. Experimental results demonstrate that SSR achieves superior or comparable performance compared to conventional rehearsal-based approaches while being more data-efficient. Besides, SSR effectively preserves the generalization capabilities of LLMs in general domains.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
434,321
2204.04322
Iterative Depth-First Search for Fully Observable Non-Deterministic Planning
Fully Observable Non-Deterministic (FOND) planning models uncertainty through actions with non-deterministic effects. Existing FOND planning algorithms are effective and employ a wide range of techniques. However, most of the existing algorithms are not robust for dealing with both non-determinism and task size. In this paper, we develop a novel iterative depth-first search algorithm that solves FOND planning tasks and produces strong cyclic policies. Our algorithm is explicitly designed for FOND planning, addressing more directly the non-deterministic aspect of FOND planning, and it also exploits the benefits of heuristic functions to make the algorithm more effective during the iterative searching process. We compare our proposed algorithm to well-known FOND planners, and show that it has robust performance over several distinct types of FOND domains considering different metrics.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
290,608
1504.00822
Quantum Expander Codes
We present an efficient decoding algorithm for constant rate quantum hypergraph-product LDPC codes which provably corrects adversarial errors of weight $\Omega(\sqrt{n})$ for codes of length $n$. The algorithm runs in time linear in the number of qubits, which makes its performance the strongest to date for linear-time decoding of quantum codes. The algorithm relies on expanding properties, not of the quantum code's factor graph directly, but of the factor graph of the original classical code it is constructed from.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
41,733
2409.13919
Measuring Error Alignment for Decision-Making Systems
Given that AI systems are set to play a pivotal role in future decision-making processes, their trustworthiness and reliability are of critical concern. Due to their scale and complexity, modern AI systems resist direct interpretation, and alternative ways are needed to establish trust in those systems, and determine how well they align with human values. We argue that good measures of the information processing similarities between AI and humans may be able to achieve these same ends. While Representational alignment (RA) approaches measure similarity between the internal states of two systems, the associated data can be expensive and difficult to collect for human systems. In contrast, Behavioural alignment (BA) comparisons are cheaper and easier, but questions remain as to their sensitivity and reliability. We propose two new behavioural alignment metrics: misclassification agreement, which measures the similarity between the errors of two systems on the same instances, and class-level error similarity, which measures the similarity between the error distributions of two systems. We show that our metrics correlate well with RA metrics, and provide complementary information to another BA metric, within a range of domains, and set the scene for a new approach to value alignment.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
490,232
1807.01430
SGAD: Soft-Guided Adaptively-Dropped Neural Network
Deep neural networks (DNNs) have been proven to have many redundancies. Hence, many efforts have been made to compress DNNs. However, the existing model compression methods treat all the input samples equally while ignoring the fact that the difficulties of various input samples being correctly classified are different. To address this problem, DNNs with an adaptive dropping mechanism are well explored in this work. To inform the DNNs how difficult the input samples are to classify, a guideline that contains the information of input samples is introduced to improve the performance. Based on the developed guideline and adaptive dropping mechanism, an innovative soft-guided adaptively-dropped (SGAD) neural network is proposed in this paper. Compared with the 32-layer residual neural network, the presented SGAD can reduce the FLOPs by 77% with less than 1% drop in accuracy on CIFAR-10.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
102,059
2310.05366
Rotation Matters: Generalized Monocular 3D Object Detection for Various Camera Systems
Research on monocular 3D object detection is being actively studied, and as a result, performance has been steadily improving. However, 3D object detection performance is significantly reduced when applied to a camera system different from the system used to capture the training datasets. For example, a 3D detector trained on datasets from a passenger car mostly fails to regress accurate 3D bounding boxes for a camera mounted on a bus. In this paper, we conduct extensive experiments to analyze the factors that cause performance degradation. We find that changing the camera pose, especially camera orientation, relative to the road plane caused performance degradation. In addition, we propose a generalized 3D object detection method that can be universally applied to various camera systems. We newly design a compensation module that corrects the estimated 3D bounding box location and heading direction. The proposed module can be applied to most of the recent 3D object detection networks. It increases AP3D score (KITTI moderate, IoU $> 70\%$) about 6-to-10-times above the baselines without additional training. Both quantitative and qualitative results show the effectiveness of the proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
398,116
2412.14188
CogSimulator: A Model for Simulating User Cognition & Behavior with Minimal Data for Tailored Cognitive Enhancement
The interplay between cognition and gaming, notably through educational games enhancing cognitive skills, has garnered significant attention in recent years. This research introduces the CogSimulator, a novel algorithm for simulating user cognition in small-group settings with minimal data, as the educational game Wordle exemplifies. The CogSimulator employs Wasserstein-1 distance and coordinates search optimization for hyperparameter tuning, enabling precise few-shot predictions in new game scenarios. Comparative experiments with the Wordle dataset illustrate that our model surpasses most conventional machine learning models in mean Wasserstein-1 distance, mean squared error, and mean accuracy, showcasing its efficacy in cognitive enhancement through tailored game design.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
518,607
2407.16200
MCTS Based Dispatch of Autonomous Vehicles under Operational Constraints for Continuous Transportation
Continuous transportation of material in the mining industry is achieved by the dispatch of autonomous haul-trucks with discrete haulage capacities. Recently, Monte Carlo Tree Search (MCTS) was successfully deployed in tackling challenges of long-run optimality, scalability and adaptability in haul-truck dispatch. Typically, operational constraints imposed on the mine site are satisfied by heuristic controllers or human operators independent of the dispatch planning. This article incorporates operational constraint satisfaction into the dispatch planning by utilising the MCTS based dispatch planner Flow-Achieving Scheduling Tree (FAST). Operational constraint violation and satisfaction are modelled as opportunity costs in the combinatorial optimisation problem of dispatch. Explicit cost formulations are avoided by utilising MCTS generator models to derive opportunity costs. Experimental studies with four types of operational constraints demonstrate the success of utilising opportunity costs for constraint satisfaction, and the effectiveness of integrating constraints into dispatch planning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
475,503
1703.03340
Adaptive Non-uniform Compressive Sampling for Time-varying Signals
In this paper, adaptive non-uniform compressive sampling (ANCS) of time-varying signals, which are sparse in a proper basis, is introduced. ANCS employs the measurements of previous time steps to distribute the sensing energy among coefficients more intelligently. To this aim, a Bayesian inference method is proposed that does not require any prior knowledge of importance levels of coefficients or sparsity of the signal. Our numerical simulations show that ANCS is able to achieve the desired non-uniform recovery of the signal. Moreover, if the signal is sparse in canonical basis, ANCS can reduce the number of required measurements significantly.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
69,716
1602.00487
Towards a Cognitive Routing Engine for Software Defined Networks
Most Software Defined Networks (SDN) traffic engineering applications use excessive and frequent global monitoring in order to find the optimal Quality-of-Service (QoS) paths for the current state of the network. In this work, we present the motivations, architecture and initial evaluation of a SDN application called Cognitive Routing Engine (CRE) which is able to find near-optimal paths for a user-specified QoS while using a very small monitoring overhead compared to global monitoring which is required to guarantee that optimal paths are found. Smaller monitoring overheads bring the advantage of smaller response time for the SDN controllers and switches. The initial evaluation of CRE on a SDN representation of the GEANT academic network shows that it is possible to find near-optimal paths with a small optimality gap of 1.65% while using 9.5 times less monitoring.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
51,587
1908.05762
Entity-aware ELMo: Learning Contextual Entity Representation for Entity Disambiguation
We present a new local entity disambiguation system. The key to our system is a novel approach for learning entity representations. In our approach we learn an entity-aware extension of Embedding for Language Model (ELMo) which we call Entity-ELMo (E-ELMo). Given a paragraph containing one or more named entity mentions, each mention is first defined as a function of the entire paragraph (including other mentions), and the model then predicts the referent entities. Utilizing E-ELMo for local entity disambiguation, we outperform all of the state-of-the-art local and global models on the popular benchmarks by improving about 0.5\% on micro average accuracy for AIDA test-b with the Yago candidate set. The evaluation setup of the training data and candidate set are the same as our baselines for fair comparison.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
141,803
2009.08864
Classification and Region Analysis of COVID-19 Infection using Lung CT Images and Deep Convolutional Neural Networks
COVID-19 is a global health problem. Consequently, early detection and analysis of the infection patterns are crucial for controlling infection spread as well as devising a treatment plan. This work proposes a two-stage deep Convolutional Neural Networks (CNNs) based framework for delineation of COVID-19 infected regions in Lung CT images. In the first stage, initially, COVID-19 specific CT image features are enhanced using a two-level discrete wavelet transformation. These enhanced CT images are then classified using the proposed custom-made deep CoV-CTNet. In the second stage, the CT images classified as infectious images are provided to the segmentation models for the identification and analysis of COVID-19 infectious regions. In this regard, we propose a novel semantic segmentation model CoV-RASeg, which systematically uses average and max pooling operations in the encoder and decoder blocks. This systematic utilization of max and average pooling operations helps the proposed CoV-RASeg in simultaneously learning both the boundaries and region homogeneity. Moreover, the idea of attention is incorporated to deal with mildly infected regions. The proposed two-stage framework is evaluated on a standard Lung CT image dataset, and its performance is compared with the existing deep CNN models. The performance of the proposed CoV-CTNet is evaluated using Mathew Correlation Coefficient (MCC) measure (0.98) and that of proposed CoV-RASeg using Dice Similarity (DS) score (0.95). The promising results on an unseen test set suggest that the proposed framework has the potential to help the radiologists in the identification and analysis of COVID-19 infected regions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
196,371
2407.19625
LoginMEA: Local-to-Global Interaction Network for Multi-modal Entity Alignment
Multi-modal entity alignment (MMEA) aims to identify equivalent entities between two multi-modal knowledge graphs (MMKGs), whose entities can be associated with relational triples and related images. Most previous studies treat the graph structure as a special modality, and fuse different modality information with separate uni-modal encoders, neglecting valuable relational associations in modalities. Other studies refine each uni-modal information with graph structures, but may introduce unnecessary relations in specific modalities. To this end, we propose a novel local-to-global interaction network for MMEA, termed as LoginMEA. Particularly, we first fuse local multi-modal interactions to generate holistic entity semantics and then refine them with global relational interactions of entity neighbors. In this design, the uni-modal information is fused adaptively, and can be refined with relations accordingly. To enrich local interactions of multi-modal entity information, we devise modality weights and low-rank interactive fusion, allowing diverse impacts and element-level interactions among modalities. To capture global interactions of graph structures, we adopt relation reflection graph attention networks, which fully capture relational associations between entities. Extensive experiments demonstrate superior results of our method over 5 cross-KG or bilingual benchmark datasets, indicating the effectiveness of capturing local and global interactions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
476,856
1906.02739
Mesh R-CNN
Rapid advances in 2D perception have led to systems that accurately detect objects in real-world images. However, these systems make predictions in 2D, ignoring the 3D structure of the world. Concurrently, advances in 3D shape prediction have mostly focused on synthetic benchmarks and isolated objects. We unify advances in these two areas. We propose a system that detects objects in real-world images and produces a triangle mesh giving the full 3D shape of each detected object. Our system, called Mesh R-CNN, augments Mask R-CNN with a mesh prediction branch that outputs meshes with varying topological structure by first predicting coarse voxel representations which are converted to meshes and refined with a graph convolution network operating over the mesh's vertices and edges. We validate our mesh prediction branch on ShapeNet, where we outperform prior work on single-image shape prediction. We then deploy our full Mesh R-CNN system on Pix3D, where we jointly detect objects and predict their 3D shapes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
134,161
1710.10295
Frame-Induced Group Polarization in Small Discussion Networks
We present a novel explanation for the group polarization effect whereby discussion among like-minded individuals induces shifts toward the extreme. Our theory distinguishes between a quantitative policy under debate and the discussion's rhetorical frame, such as the likelihood of an outcome. If policy and frame position are mathematically related so that frame position increases more slowly as the policy becomes more extreme, majority formation at the extreme is favored, thereby shifting consensus formation toward the extreme. Additionally, use of a heuristic frame can shift the frame reference point away from the policy reference, yielding differential polarization on opposing policy sides. We present a mathematical model that predicts consensus policy given group member initial preferences and network structure. Our online group discussion experiment manipulated policy side, disagreement level, and network structure. The results, which challenge existing polarization theory, are in qualitative and quantitative accord with our theory and model.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
83,343
2309.08134
AnyOKP: One-Shot and Instance-Aware Object Keypoint Extraction with Pretrained ViT
Towards flexible object-centric visual perception, we propose a one-shot instance-aware object keypoint (OKP) extraction approach, AnyOKP, which leverages the powerful representation ability of pretrained vision transformers (ViT), and can obtain keypoints on multiple object instances of arbitrary category after learning from a support image. An off-the-shelf pretrained ViT is directly deployed for generalizable and transferable feature extraction, which is followed by training-free feature enhancement. The best-prototype pairs (BPPs) are searched for in support and query images based on appearance similarity, to yield instance-unaware candidate keypoints. Then, the entire graph with all candidate keypoints as vertices is divided into sub-graphs according to the feature distributions on the graph edges. Finally, each sub-graph represents an object instance. AnyOKP is evaluated on real object images collected with the cameras of a robot arm, a mobile robot, and a surgical robot, which not only demonstrates the cross-category flexibility and instance awareness, but also shows remarkable robustness to domain shift and viewpoint change.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
392,042
2006.08880
On SCC-recursiveness in Quantitative Argumentation
Abstract argumentation is a reasoning model for evaluating arguments based on various semantics. SCC-recursiveness is a sophisticated property of semantics that provides a general schema for characterizing semantics through the decomposition along strongly connected components (SCCs). While this property has been extensively explored in various qualitative frameworks, it has been relatively neglected in quantitative argumentation. To fill this gap, we demonstrate that this property is well-suited to fuzzy extension semantics, which is a quantitative generalization of classical semantics in fuzzy argumentation frameworks (FAF). We tailor the SCC-recursive schema to enable the characterization of fuzzy extension semantics through the recursive decomposition of an FAF along its SCCs. Our contributions are twofold. Theoretically, we show that SCC-recursiveness provides an alternative approach to characterize fuzzy extension semantics, offering a deep understanding and better insight into these semantics. Practically, our schema provides a sound and complete algorithm for computing fuzzy extension semantics, which naturally reduces computational efforts when dealing with a large number of SCCs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
182,343
2106.08548
Mining Interpretable Spatio-temporal Logic Properties for Spatially Distributed Systems
The Internet-of-Things, complex sensor networks, multi-agent cyber-physical systems are all examples of spatially distributed systems that continuously evolve in time. Such systems generate huge amounts of spatio-temporal data, and system designers are often interested in analyzing and discovering structure within the data. There has been considerable interest in learning causal and logical properties of temporal data using logics such as Signal Temporal Logic (STL); however, there is limited work on discovering such relations on spatio-temporal data. We propose the first set of algorithms for unsupervised learning for spatio-temporal data. Our method does automatic feature extraction from the spatio-temporal data by projecting it onto the parameter space of a parametric spatio-temporal reach and escape logic (PSTREL). We propose an agglomerative hierarchical clustering technique that guarantees that each cluster satisfies a distinct STREL formula. We show that our method generates STREL formulas of bounded description complexity using a novel decision-tree approach which generalizes previous unsupervised learning techniques for Signal Temporal Logic. We demonstrate the effectiveness of our approach on case studies from diverse domains such as urban transportation, epidemiology, green infrastructure, and air quality monitoring.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
241,330
2310.03752
A Deep Learning Sequential Decoder for Transient High-Density Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer Learning
Hand gesture recognition (HGR) has gained significant attention due to the increasing use of AI-powered human-computer interfaces that can interpret the deep spatiotemporal dynamics of biosignals from the peripheral nervous system, such as surface electromyography (sEMG). These interfaces have a range of applications, including the control of extended reality, agile prosthetics, and exoskeletons. However, the natural variability of sEMG among individuals has led researchers to focus on subject-specific solutions. Deep learning methods, which often have complex structures, are particularly data-hungry and can be time-consuming to train, making them less practical for subject-specific applications. In this paper, we propose and develop a generalizable, sequential decoder of transient high-density sEMG (HD-sEMG) that achieves 73% average accuracy on 65 gestures for partially-observed subjects through subject-embedded transfer learning, leveraging pre-knowledge of HGR acquired during pre-training. The use of transient HD-sEMG before gesture stabilization allows us to predict gestures with the ultimate goal of counterbalancing system control delays. The results show that the proposed generalized models significantly outperform subject-specific approaches, especially when the training data is limited, and there is a significant number of gesture classes. By building on pre-knowledge and incorporating a multiplicative subject-embedded structure, our method comparatively achieves more than 13% average accuracy across partially observed subjects with minimal data availability. This work highlights the potential of HD-sEMG and demonstrates the benefits of modeling common patterns across users to reduce the need for large amounts of data for new users, enhancing practicality.
true
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
397,405
2012.00483
ClimaText: A Dataset for Climate Change Topic Detection
Climate change communication in the mass media and other textual sources may affect and shape public perception. Extracting climate change information from these sources is an important task, e.g., for filtering content and e-discovery, sentiment analysis, automatic summarization, question-answering, and fact-checking. However, automating this process is a challenge, as climate change is a complex, fast-moving, and often ambiguous topic with scarce resources for popular text-based AI tasks. In this paper, we introduce \textsc{ClimaText}, a dataset for sentence-based climate change topic detection, which we make publicly available. We explore different approaches to identify the climate change topic in various text sources. We find that popular keyword-based models are not adequate for such a complex and evolving task. Context-based algorithms like BERT \cite{devlin2018bert} can detect, in addition to many trivial cases, a variety of complex and implicit topic patterns. Nevertheless, our analysis reveals a great potential for improvement in several directions, such as, e.g., capturing the discussion on indirect effects of climate change. Hence, we hope this work can serve as a good starting point for further research on this topic.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
209,154
2304.02688
Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability
Transferability is the property of adversarial examples to be misclassified by models other than the surrogate model for which they were crafted. Previous research has shown that early stopping the training of the surrogate model substantially increases transferability. A common hypothesis to explain this is that deep neural networks (DNNs) first learn robust features, which are more generic, thus a better surrogate. Then, at later epochs, DNNs learn non-robust features, which are more brittle, hence a worse surrogate. First, we tend to refute this hypothesis, using transferability as a proxy for representation similarity. We then establish links between transferability and the exploration of the loss landscape in parameter space, focusing on sharpness, which is affected by early stopping. This leads us to evaluate surrogate models trained with seven minimizers that minimize both loss value and loss sharpness. Among them, SAM consistently outperforms early stopping by up to 28.8 percentage points. We discover that the strong SAM regularization from large flat neighborhoods tightly links to transferability. Finally, the best sharpness-aware minimizers prove competitive with other training methods and complement existing transferability techniques.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
356,504
2312.01409
Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models
Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path. Creating computer-generated videos, however, is a tedious manual process, which can be automated by emerging text-to-video diffusion models. Despite great promise, video diffusion models are difficult to control, hindering a user to apply their own creativity rather than amplifying it. To address this challenge, we present a novel approach that combines the controllability of dynamic 3D meshes with the expressivity and editability of emerging diffusion models. For this purpose, our approach takes an animated, low-fidelity rendered mesh as input and injects the ground truth correspondence information obtained from the dynamic mesh into various stages of a pre-trained text-to-image generation model to output high-quality and temporally consistent frames. We demonstrate our approach on various examples where motion can be obtained by animating rigged assets or changing the camera path.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
412,432
2210.00518
High Precision Differentiation Techniques for Data-Driven Solution of Nonlinear PDEs by Physics-Informed Neural Networks
Time-dependent Partial Differential Equations with given initial conditions are considered in this paper. New techniques for differentiating the unknown solution with respect to the time variable are proposed. It is shown that the proposed techniques allow generating accurate higher order derivatives simultaneously for a set of spatial points. The calculated derivatives can then be used for data-driven solution in different ways. An application to Physics-Informed Neural Networks via the well-known DeepXDE software solution in Python, with Tensorflow as the background framework, has been presented for three real-life PDEs: the Burgers', Allen-Cahn and Schrodinger equations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
320,894
2003.13254
Environmental Adaptation of Robot Morphology and Control through Real-world Evolution
Robots operating in the real world will experience a range of different environments and tasks. It is essential for the robot to have the ability to adapt to its surroundings to work efficiently in changing conditions. Evolutionary robotics aims to solve this by optimizing both the control and body (morphology) of a robot, allowing adaptation to internal, as well as external factors. Most work in this field has been done in physics simulators, which are relatively simple and not able to replicate the richness of interactions found in the real world. Solutions that rely on the complex interplay between control, body, and environment are therefore rarely found. In this paper, we rely solely on real-world evaluations and apply evolutionary search to yield combinations of morphology and control for our mechanically self-reconfiguring quadruped robot. We evolve solutions on two distinct physical surfaces and analyze the results in terms of both control and morphology. We then transition to two previously unseen surfaces to demonstrate the generality of our method. We find that the evolutionary search finds high-performing and diverse morphology-controller configurations by adapting both control and body to the different properties of the physical environments. We additionally find that morphology and control vary with statistical significance between the environments. Moreover, we observe that our method allows for morphology and control parameters to transfer to previously-unseen terrains, demonstrating the generality of our approach.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
170,153
1707.09695
Recurrent 3D Pose Sequence Machines
3D human articulated pose recovery from monocular image sequences is very challenging due to the diverse appearances, viewpoints, occlusions, and also the human 3D pose is inherently ambiguous from the monocular imagery. It is thus critical to exploit rich spatial and temporal long-range dependencies among body joints for accurate 3D pose sequence prediction. Existing approaches usually manually design some elaborate prior terms and human body kinematic constraints for capturing structures, which are often insufficient to exploit all intrinsic structures and not scalable for all scenarios. In contrast, this paper presents a Recurrent 3D Pose Sequence Machine(RPSM) to automatically learn the image-dependent structural constraint and sequence-dependent temporal context by using a multi-stage sequential refinement. At each stage, our RPSM is composed of three modules to predict the 3D pose sequences based on the previously learned 2D pose representations and 3D poses: (i) a 2D pose module extracting the image-dependent pose representations, (ii) a 3D pose recurrent module regressing 3D poses and (iii) a feature adaption module serving as a bridge between module (i) and (ii) to enable the representation transformation from 2D to 3D domain. These three modules are then assembled into a sequential prediction framework to refine the predicted poses with multiple recurrent stages. Extensive evaluations on the Human3.6M dataset and HumanEva-I dataset show that our RPSM outperforms all state-of-the-art approaches for 3D pose estimation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
78,051
1609.09582
Digitizing Municipal Street Inspections Using Computer Vision
"People want an authority to tell them how to value things. But they chose this authority not based on facts or results. They chose it because it seems authoritative and familiar." - The Big Short. The pavement condition index is one such familiar measure used by many US cities to measure street quality and justify billions of dollars spent every year on street repair. These billion-dollar decisions are based on evaluation criteria that are subjective and not representative. In this paper, we build upon our initial submission to D4GX 2015 that approaches this problem of information asymmetry in municipal decision-making. We describe a process to identify street defects using computer vision techniques on data collected using the Street Quality Identification Device (SQUID). A user interface to host a large quantity of image data, towards digitizing the street inspection process and enabling actionable intelligence for a core public service, is also described. This approach of combining device, data and decision-making around street repair enables cities to make targeted decisions about street repair and could lead to an anticipatory response which can result in significant cost savings. Lastly, we share lessons learnt from the deployment of SQUID in the city of Syracuse, NY.
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
61,747
2205.13226
Censor-aware Semi-supervised Learning for Survival Time Prediction from Medical Images
Survival time prediction from medical images is important for treatment planning, where accurate estimations can improve healthcare quality. One issue affecting the training of survival models is censored data. Most of the current survival prediction approaches are based on Cox models that can deal with censored data, but their application scope is limited because they output a hazard function instead of a survival time. On the other hand, methods that predict survival time usually ignore censored data, resulting in an under-utilization of the training set. In this work, we propose a new training method that predicts survival time using all censored and uncensored data. We propose to treat censored data as samples with a lower-bound time to death and estimate pseudo labels to semi-supervise a censor-aware survival time regressor. We evaluate our method on pathology and x-ray images from the TCGA-GM and NLST datasets. Our results establish the state-of-the-art survival prediction accuracy on both datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
298,858
2401.01542
Adversarial Machine Learning-Enabled Anonymization of OpenWiFi Data
Data privacy and protection through anonymization is a critical issue for network operators or data owners before data is forwarded for other possible uses. With the adoption of Artificial Intelligence (AI), data anonymization increases the likelihood of concealing necessary sensitive information, preventing data leakage and information loss. OpenWiFi networks are vulnerable to any adversary trying to gain access to or knowledge of traffic, regardless of the knowledge possessed by data owners. The odds of discovering actual traffic information are addressed by applying a conditional tabular generative adversarial network (CTGAN). CTGAN yields synthetic data that disguises itself as actual data while concealing the sensitive information of the actual data. In this paper, the similarity of synthetic to actual data is assessed in terms of clustering algorithms, followed by a comparison of performance on unsupervised cluster validation metrics. A well-known algorithm, K-means, outperforms other algorithms in the similarity assessment of synthetic data over real data, achieving scores of 0.634, 23714.57, and 0.598 on the Silhouette, Calinski-Harabasz, and Davies-Bouldin metrics, respectively. In a comparative analysis of validation scores among several algorithms, K-means emerges as the best of the unsupervised clustering algorithms, enabling explicit usage of synthetic data as a replacement for real data. Hence, the experimental results aim to show the viability of using CTGAN-generated synthetic data in lieu of publishing anonymized data to be utilized in various applications.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
419,399
2402.16392
Placing Objects in Context via Inpainting for Out-of-distribution Segmentation
When deploying a semantic segmentation model into the real world, it will inevitably encounter semantic classes that were not seen during training. To ensure a safe deployment of such systems, it is crucial to accurately evaluate and improve their anomaly segmentation capabilities. However, acquiring and labelling semantic segmentation data is expensive and unanticipated conditions are long-tail and potentially hazardous. Indeed, existing anomaly segmentation datasets capture a limited number of anomalies, lack realism or have strong domain shifts. In this paper, we propose the Placing Objects in Context (POC) pipeline to realistically add any object into any image via diffusion models. POC can be used to easily extend any dataset with an arbitrary number of objects. In our experiments, we present different anomaly segmentation datasets based on POC-generated data and show that POC can improve the performance of recent state-of-the-art anomaly fine-tuning methods across several standardized benchmarks. POC is also effective for learning new classes. For example, we utilize it to augment Cityscapes samples by incorporating a subset of Pascal classes and demonstrate that models trained on such data achieve comparable performance to the Pascal-trained baseline. This corroborates the low synth2real gap of models trained on POC-generated images. Code: https://github.com/naver/poc
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
432,553
2307.15128
End-to-end Remote Sensing Change Detection of Unregistered Bi-temporal Images for Natural Disasters
Change detection based on remote sensing images has been a prominent area of interest in the field of remote sensing. Deep networks have demonstrated significant success in detecting changes in bi-temporal remote sensing images and have found applications in various fields. Given the degradation of natural environments and the frequent occurrence of natural disasters, accurately and swiftly identifying damaged buildings in disaster-stricken areas through remote sensing images holds immense significance. This paper aims to investigate change detection specifically for natural disasters. Considering that existing public datasets used in change detection research are registered, which does not align with the practical scenario where bi-temporal images are not matched, this paper introduces an unregistered end-to-end change detection synthetic dataset called xBD-E2ECD. Furthermore, we propose an end-to-end change detection network named E2ECDNet, which takes an unregistered bi-temporal image pair as input and simultaneously generates the flow field prediction result and the change detection prediction result. It is worth noting that our E2ECDNet also supports change detection for registered image pairs, as registration can be seen as a special case of non-registration. Additionally, this paper redefines the criteria for correctly predicting a positive case and introduces neighborhood-based change detection evaluation metrics. The experimental results have demonstrated significant improvements.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
382,167
2101.05324
Multi-robot Symmetric Rendezvous Search on the Line
We study the Symmetric Rendezvous Search Problem for a multi-robot system. There are $n>2$ robots arbitrarily located on a line. Their goal is to meet somewhere on the line as quickly as possible. The robots do not know the initial location of any of the other robots or their own positions on the line. The symmetric version of the problem requires the robots to execute the same search strategy to achieve rendezvous. Therefore, we solve the problem in an online fashion with a randomized strategy. In this paper, we present a symmetric rendezvous algorithm which achieves a constant competitive ratio for the total distance traveled by the robots. We validate our theoretical results through simulations.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
215,390
2101.02668
Towards Meaningful Statements in IR Evaluation. Mapping Evaluation Measures to Interval Scales
Recently, it was shown that most popular IR measures are not interval-scaled, implying that decades of experimental IR research used potentially improper methods, which may have produced questionable results. However, it was unclear if and to what extent these findings apply to actual evaluations and this opened a debate in the community with researchers standing on opposite positions about whether this should be considered an issue (or not) and to what extent. In this paper, we first give an introduction to the representational measurement theory explaining why certain operations and significance tests are permissible only with scales of a certain level. For that, we introduce the notion of meaningfulness specifying the conditions under which the truth (or falsity) of a statement is invariant under permissible transformations of a scale. Furthermore, we show how the recall base and the length of the run may make comparison and aggregation across topics problematic. Then we propose a straightforward and powerful approach for turning an evaluation measure into an interval scale, and describe an experimental evaluation of the differences between using the original measures and the interval-scaled ones. For all the regarded measures - namely Precision, Recall, Average Precision, (Normalized) Discounted Cumulative Gain, Rank-Biased Precision and Reciprocal Rank - we observe substantial effects, both on the order of average values and on the outcome of significance tests. For the latter, previously significant differences turn out to be insignificant, while insignificant ones become significant. The effect varies remarkably between the tests considered but overall, on average, we observed a 25% change in the decision about which systems are significantly different and which are not.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
214,701
2402.14988
Verifiable Boosted Tree Ensembles
Verifiable learning advocates for training machine learning models amenable to efficient security verification. Prior research demonstrated that specific classes of decision tree ensembles -- called large-spread ensembles -- allow for robustness verification in polynomial time against any norm-based attacker. This study expands prior work on verifiable learning from basic ensemble methods (i.e., hard majority voting) to advanced boosted tree ensembles, such as those trained using XGBoost or LightGBM. Our formal results indicate that robustness verification is achievable in polynomial time when considering attackers based on the $L_\infty$-norm, but remains NP-hard for other norm-based attackers. Nevertheless, we present a pseudo-polynomial time algorithm to verify robustness against attackers based on the $L_p$-norm for any $p \in \mathbb{N} \cup \{0\}$, which in practice grants excellent performance. Our experimental evaluation shows that large-spread boosted ensembles are accurate enough for practical adoption, while being amenable to efficient security verification.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
431,942
2308.08259
Graph Relation Aware Continual Learning
Continual graph learning (CGL) studies the problem of learning from an infinite stream of graph data, consolidating historical knowledge, and generalizing it to future tasks. At any point in time, only the current graph data are available. Although some recent attempts have been made to handle this task, we still face two potential challenges: 1) most existing works only manipulate the intermediate graph embedding and ignore intrinsic properties of graphs, making it non-trivial to differentiate the transferred information across graphs; 2) recent attempts take a parameter-sharing policy to transfer knowledge across time steps or progressively expand a new architecture given a shifted graph distribution. Learning a single model could lose discriminative information for each graph task, while the model expansion scheme suffers from high model complexity. In this paper, we point out that latent relations behind graph edges can be attributed as an invariant factor for the evolving graphs, while the statistical information of these latent relations evolves. Motivated by this, we design a relation-aware adaptive model, dubbed RAM-CG, that consists of a relation-discovery module to explore latent relations behind edges and a task-awareness masking classifier to account for the shifted distribution. Extensive experiments show that RAM-CG provides significant 2.2%, 6.9% and 6.6% accuracy improvements over the state-of-the-art results on the CitationNet, OGBN-arxiv and TWITCH datasets, respectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
385,837
2410.01535
GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians
Recently, with the development of Neural Radiance Fields and Gaussian Splatting, 3D reconstruction techniques have achieved remarkably high fidelity. However, the latent representations learnt by these methods are highly entangled and lack interpretability. In this paper, we propose a novel part-aware compositional reconstruction method, called GaussianBlock, that enables semantically coherent and disentangled representations, allowing for precise and physical editing akin to building blocks, while simultaneously maintaining high fidelity. Our GaussianBlock introduces a hybrid representation that leverages the advantages of both primitives, known for their flexible actionability and editability, and 3D Gaussians, which excel in reconstruction quality. Specifically, we achieve semantically coherent primitives through a novel attention-guided centering loss derived from 2D semantic priors, complemented by a dynamic splitting and fusion strategy. Furthermore, we utilize 3D Gaussians that hybridize with primitives to refine structural details and enhance fidelity. Additionally, a binding inheritance strategy is employed to strengthen and maintain the connection between the two. Our reconstructed scenes are evidenced to be disentangled, compositional, and compact across diverse benchmarks, enabling seamless, direct and precise editing while maintaining high quality.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
493,812
1611.07206
Learning to Distill: The Essence Vector Modeling Framework
In the context of natural language processing, representation learning has emerged as a newly active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this school of research. However, paragraph (or sentence and document) embedding learning is more suitable/reasonable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there is relatively less work focusing on the development of unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, those stop or function words that occur frequently may mislead the embedding learning process to produce a misty paragraph representation. Motivated by these observations, our major contributions in this paper are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims at not only distilling the most representative information from a paragraph but also excluding the general background information to produce a more informative low-dimensional vector representation for the paragraph. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph against imperfect speech recognition.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
64,322
2402.02070
HotRAP: Hot Record Retention and Promotion for LSM-trees with Tiered Storage
The multi-level design of Log-Structured Merge-trees (LSM-trees) naturally fits the tiered storage architecture: the upper levels (recently inserted/updated records) are kept in fast storage to guarantee performance while the lower levels (the majority of records) are placed in slower but cheaper storage to reduce cost. However, frequently accessed records may have been compacted and reside in slow storage. Existing algorithms are inefficient in promoting these ``hot'' records to fast storage, leading to compromised read performance. We present HotRAP, a key-value store based on RocksDB that can timely promote hot records individually from slow to fast storage and keep them in fast storage while they are hot. HotRAP uses an on-disk data structure (a specially-made LSM-tree) to track the hotness of keys and includes three pathways to ensure that hot records reach fast storage with short delays. Our experiments show that HotRAP outperforms state-of-the-art LSM-trees on tiered storage by up to 5.4$\times$ compared to the second best under read-only and read-write-balanced YCSB workloads with common access skew patterns, and by up to 1.9$\times$ compared to the second best under Twitter production workloads.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
426,368
1405.7254
Data-Driven Stochastic Models and Policies for Energy Harvesting Sensor Communications
Energy harvesting from the surroundings is a promising solution to perpetually power-up wireless sensor communications. This paper presents a data-driven approach of finding optimal transmission policies for a solar-powered sensor node that attempts to maximize net bit rates by adapting its transmission parameters, power levels and modulation types, to the changes of channel fading and battery recharge. We formulate this problem as a discounted Markov decision process (MDP) framework, whereby the energy harvesting process is stochastically quantized into several representative solar states with distinct energy arrivals and is totally driven by historical data records at a sensor node. With the observed solar irradiance at each time epoch, a mixed strategy is developed to compute the belief information of the underlying solar states for the choice of transmission parameters. In addition, a theoretical analysis is conducted for a simple on-off policy, in which a predetermined transmission parameter is utilized whenever a sensor node is active. We prove that such an optimal policy has a threshold structure with respect to battery states and evaluate the performance of an energy harvesting node by analyzing the expected net bit rate. The design framework is exemplified with real solar data records, and the results are useful in characterizing the interplay that occurs between energy harvesting and expenditure under various system configurations. Computer simulations show that the proposed policies significantly outperform other schemes with or without the knowledge of short-term energy harvesting and channel fading patterns.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
33,446
2111.02865
Testing using Privileged Information by Adapting Features with Statistical Dependence
Given an imperfect predictor, we exploit additional features at test time to improve the predictions made, without retraining and without knowledge of the prediction function. This scenario arises if training labels or data are proprietary, restricted, or no longer available, or if training itself is prohibitively expensive. We assume that the additional features are useful if they exhibit strong statistical dependence to the underlying perfect predictor. Then, we empirically estimate and strengthen the statistical dependence between the initial noisy predictor and the additional features via manifold denoising. As an example, we show that this approach leads to improvement in real-world visual attribute ranking. Project webpage: http://www.jamestompkin.com/tupi
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
264,996
2406.19301
MCNC: Manifold Constrained Network Compression
The outstanding performance of large foundational models across diverse tasks, from computer vision to speech and natural language processing, has significantly increased their demand. However, storing and transmitting these models pose significant challenges due to their massive size (e.g., 350GB for GPT-3). Recent literature has focused on compressing the original weights or reducing the number of parameters required for fine-tuning these models. These compression methods typically involve constraining the parameter space, for example, through low-rank reparametrization (e.g., LoRA) or quantization (e.g., QLoRA) during model training. In this paper, we present MCNC as a novel model compression method that constrains the parameter space to low-dimensional pre-defined and frozen nonlinear manifolds, which effectively cover this space. Given the prevalence of good solutions in over-parameterized deep neural networks, we show that by constraining the parameter space to our proposed manifold, we can identify high-quality solutions while achieving unprecedented compression rates across a wide variety of tasks. Through extensive experiments in computer vision and natural language processing tasks, we demonstrate that our method, MCNC, significantly outperforms state-of-the-art baselines in terms of compression, accuracy, and/or model reconstruction time.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
468,372
2104.13742
MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
GANs largely increase the potential impact of generative models. Therefore, we propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs. This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain. Mining effectively steers GAN sampling towards suitable regions of the latent space, which facilitates the subsequent finetuning and avoids pathologies of other methods, such as mode collapse and lack of flexibility. Furthermore, to prevent overfitting on small target domains, we introduce sparse subnetwork selection, which restricts the set of trainable neurons to those that are relevant for the target dataset. We perform comprehensive experiments on several challenging datasets using various GAN architectures (BigGAN, Progressive GAN, and StyleGAN) and show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods. In addition, MineGAN can successfully transfer knowledge from multiple pretrained GANs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
232,598
1608.06420
A Harmonic Potential Field Approach for Joint Planning & Control of a Rigid, Separable Nonholonomic, Mobile Robot
The main objective of this paper is to provide a tool for performing path planning at the servo level of a mobile robot. The ability to perform, in a provably correct manner, such a complex task at the servo level can lead to a large increase in the speed of operation, low energy consumption and high quality of response. Planning has been traditionally limited to the high level controller of a robot. The guidance velocity signal from this stage is usually converted to a control signal using what is known as an electronic speed controller (ESC). This paper demonstrates the ability of the harmonic potential field (HPF) approach to generate a provably correct, constrained, well behaved trajectory and control signal for a rigid, nonholonomic robot in a stationary, cluttered environment. It is shown that the HPF based, servo level planner can address a large number of challenges facing planning in a realistic situation. The suggested approach migrates the rich and provably correct properties of the solution trajectories from an HPF planner to those of the robot. This is achieved using a synchronizing control signal whose aim is to align the velocity of the robot in its local coordinates, with that of the gradient of the HPF. The link between the two is made possible by representing the robot using what the paper terms separable form. The context-sensitive and goal-oriented control signal used to steer the robot is demonstrated to be well behaved and robust in the presence of actuator noise, saturation and uncertainty in the parameters. The approach is developed, proofs of correctness are provided and the capabilities of the scheme are demonstrated using simulation results.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
60,113
2305.05477
Entanglement-Assisted Covert Communication via Qubit Depolarizing Channels
We consider entanglement-assisted communication over the qubit depolarizing channel under the security requirement of covert communication, where not only is the information kept secret, but the transmission itself must be concealed from detection by an adversary. Previous work showed that $O(\sqrt{n})$ information bits can be reliably and covertly transmitted in $n$ channel uses without entanglement assistance. However, Gagatsos et al. (2020) showed that entanglement assistance can increase this scaling to $O(\sqrt{n}\log(n))$ for continuous-variable bosonic channels. Here, we present a finite-dimensional parallel of this result, showing that $O(\sqrt{n}\log(n))$ covert bits can be transmitted reliably over $n$ uses of a qubit depolarizing channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
363,165
2502.06258
Emergent Response Planning in LLM
In this work, we argue that large language models (LLMs), though trained to predict only the next token, exhibit emergent planning behaviors: $\textbf{their hidden representations encode future outputs beyond the next token}$. Through simple probing, we demonstrate that LLM prompt representations encode global attributes of their entire responses, including $\textit{structural attributes}$ (response length, reasoning steps), $\textit{content attributes}$ (character choices in storywriting, multiple-choice answers at the end of a response), and $\textit{behavioral attributes}$ (answer confidence, factual consistency). In addition to identifying response planning, we explore how it scales with model size across tasks and how it evolves during generation. The finding that LLMs plan ahead for the future in their hidden representations suggests potential applications for improving transparency and generation control.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
532,008
2109.01761
An empirical evaluation of attention-based multi-head models for improved turbofan engine remaining useful life prediction
A single unit (head) is the conventional input feature extractor in deep learning architectures trained on multivariate time series signals. The importance of the fixed-dimensional vector representation generated by the single-head network has been demonstrated for industrial machinery condition monitoring and predictive maintenance. However, processing heterogeneous sensor signals with a single-head may result in a model that cannot explicitly account for the diversity in time-varying multivariate inputs. This work extends the conventional single-head deep learning models to a more robust form by developing context-specific heads to independently capture the inherent pattern in each sensor reading. Using the turbofan aircraft engine benchmark dataset (CMAPSS), an extensive experiment is performed to verify the effectiveness and benefits of multi-head multilayer perceptron, recurrent networks, convolution network, the transformer-style stand-alone attention network, and their variants for remaining useful life estimation. Moreover, the effect of different attention mechanisms on the multi-head models is also evaluated. In addition, each architecture's relative advantage and computational overhead are analyzed. Results show that utilizing the attention layer is task-sensitive and model dependent, as it does not provide consistent improvement across the models investigated. The best model is further compared with five state-of-the-art models, and the comparison shows that a relatively simple multi-head architecture performs better than the state-of-the-art models. The results presented in this study demonstrate the importance of multi-head models and attention mechanisms to an improved understanding of the remaining useful life of industrial assets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
253,524
2403.01607
Respiratory motion forecasting with online learning of recurrent neural networks for safety enhancement in externally guided radiotherapy
In lung radiotherapy, infrared cameras can record the location of reflective objects on the chest to infer the position of the tumor moving due to breathing, but treatment system latencies hinder radiation beam precision. Real-time recurrent learning (RTRL), is a potential solution as it can learn patterns within non-stationary respiratory data but has high complexity. This study assesses the capabilities of resource-efficient online RNN algorithms, namely unbiased online recurrent optimization (UORO), sparse-1 step approximation (SnAp-1), and decoupled neural interfaces (DNI) to forecast respiratory motion during radiotherapy treatment accurately. We use time series containing the 3D position of external markers on the chest of healthy subjects. We propose efficient implementations for SnAp-1 and DNI based on compression of the influence and immediate Jacobian matrices and an accurate update of the linear coefficients used in credit assignment estimation, respectively. The original sampling frequency was 10Hz; we performed resampling at 3.33Hz and 30Hz. We use UORO, SnAp-1, and DNI to forecast each marker's 3D position with horizons (the time interval in advance for which the prediction is made) h<=2.1s and compare them with RTRL, least mean squares, and linear regression. RNNs trained online achieved similar or better accuracy than most previous works using larger training databases and deep learning, even though we used only the first minute of each sequence to predict motion within that exact sequence. SnAp-1 had the lowest normalized root mean square errors (nRMSE) averaged over the horizon values considered, equal to 0.335 and 0.157, at 3.33Hz and 10.0Hz, respectively. Similarly, UORO had the highest accuracy at 30Hz, with an nRMSE of 0.0897. DNI's inference time, equal to 6.8ms per time step at 30Hz (Intel Core i7-13700 CPU), was the lowest among the RNN methods examined.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
434,498
2105.11301
DaN+: Danish Nested Named Entities and Lexical Normalization
This paper introduces DaN+, a new multi-domain corpus and annotation guidelines for Danish nested named entities (NEs) and lexical normalization to support research on cross-lingual cross-domain learning for a less-resourced language. We empirically assess three strategies to model the two-layer Named Entity Recognition (NER) task. We compare transfer capabilities from German versus in-language annotation from scratch. We examine language-specific versus multilingual BERT, and study the effect of lexical normalization on NER. Our results show that 1) the most robust strategy is multi-task learning which is rivaled by multi-label decoding, 2) BERT-based NER models are sensitive to domain shifts, and 3) in-language BERT and lexical normalization are the most beneficial on the least canonical data. Our results also show that an out-of-domain setup remains challenging, while performance on news plateaus quickly. This highlights the importance of cross-domain evaluation of cross-lingual transfer.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
236,656
2406.07096
Fast Context-Biasing for CTC and Transducer ASR models with CTC-based Word Spotter
Accurate recognition of rare and new words remains a pressing problem for contextualized Automatic Speech Recognition (ASR) systems. Most context-biasing methods involve modification of the ASR model or the beam-search decoding algorithm, complicating model reuse and slowing down inference. This work presents a new approach to fast context-biasing with CTC-based Word Spotter (CTC-WS) for CTC and Transducer (RNN-T) ASR models. The proposed method matches CTC log-probabilities against a compact context graph to detect potential context-biasing candidates. The valid candidates then replace their greedy recognition counterparts in corresponding frame intervals. A Hybrid Transducer-CTC model enables the CTC-WS application for the Transducer model. The results demonstrate a significant acceleration of the context-biasing recognition with a simultaneous improvement in F-score and WER compared to baseline methods. The proposed method is publicly available in the NVIDIA NeMo toolkit.
false
false
true
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
462,904
2301.12519
3D Object Detection in LiDAR Point Clouds using Graph Neural Networks
LiDAR (Light Detection and Ranging) is an advanced active remote sensing technique working on the principle of time of flight (ToF) for capturing highly accurate 3D information of the surroundings. LiDAR has gained wide attention in research and development, with the LiDAR industry expected to reach $2.8 billion by 2025. Although LiDAR datasets are of rich density and high spatial resolution, LiDAR data are challenging to process due to their inherent 3D geometry and massive volume. Such high-resolution datasets nevertheless possess immense potential in many applications, particularly 3D object detection and recognition. In this research, we propose a Graph Neural Network (GNN)-based framework to learn and identify objects in 3D LiDAR point clouds. GNNs are a class of deep learning models that learn patterns and objects based on the principle of graph learning and have shown success in various 3D computer vision tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
342,559
2409.03749
Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron
The ability of a brain or a neural network to efficiently learn depends crucially on both the task structure and the learning rule. Previous works have analyzed the dynamical equations describing learning in the relatively simplified context of the perceptron under assumptions of a student-teacher framework or a linearized output. While these assumptions have facilitated theoretical understanding, they have precluded a detailed understanding of the roles of the nonlinearity and input-data distribution in determining the learning dynamics, limiting the applicability of the theories to real biological or artificial neural networks. Here, we use a stochastic-process approach to derive flow equations describing learning, applying this framework to the case of a nonlinear perceptron performing binary classification. We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve and the forgetting curve as subsequent tasks are learned. In particular, we find that the input-data noise differently affects the learning speed under SL vs. RL, as well as determines how quickly learning of a task is overwritten by subsequent learning. Additionally, we verify our approach with real data using the MNIST dataset. This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
486,153
2308.14994
ICARUS: An Android-Based Unmanned Aerial Vehicle (UAV) Search and Rescue Eye in the Sky
The purpose of this paper is to develop an unmanned aerial vehicle (UAV) using a quadcopter with the capability of video surveillance, map coordinates, a deployable parachute with a medicine kit or a food pack as a payload, and a collision warning system, remotely controlled and integrated with an Android application to assist in search and rescue operations. Applied research was used for the development of the functional prototype, with quantitative and descriptive statistics employed to summarize the data by describing the relationship between variables in a sample or population. The quadcopter underwent an evaluation using a survey instrument to test its acceptability, using predefined variables to select respondents within Caloocan City and Quezon City, Philippines. Demographic profiles and known issues and concerns were answered by 30 respondents. The results are summarized in Tables 1 and 2. In terms of demographic profiles, the number of SAR operators within the specified areas is distributed equally; most are male, single, and within the age bracket of 31 and above. Among issues and concerns, the most common type of search and rescue was ground search and rescue, and human error is the primary cause of most injuries in operating units. All respondents agreed that the prototype was useful and that, in terms of acceptability, drone technology will improve search and rescue operations. The innovative use of Android and drone technology is a new step toward the improvement of SAR operations in the Philippines. The LiPo battery must be replaced with one of higher capacity, and the drone operator should undergo a training course and secure a permit from the Civil Aviation Authority of the Philippines (CAAP).
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
388,524
2208.02613
Semantic Interleaving Global Channel Attention for Multilabel Remote Sensing Image Classification
Multi-Label Remote Sensing Image Classification (MLRSIC) has received increasing research interest. Taking the co-occurrence relationship of multiple labels as additional information helps to improve the performance of this task. Current methods focus on using it to constrain the final feature output of a Convolutional Neural Network (CNN). On the one hand, these methods do not make full use of label correlation to form feature representations. On the other hand, they increase the label noise sensitivity of the system, resulting in poor robustness. In this paper, a novel method called Semantic Interleaving Global Channel Attention (SIGNA) is proposed for MLRSIC. First, the label co-occurrence graph is obtained according to the statistical information of the dataset. The label co-occurrence graph is used as the input of a Graph Neural Network (GNN) to generate optimal feature representations. Then, the semantic features and visual features are interleaved to guide the feature expression of the image from the original feature space to a semantic feature space with embedded label relations. SIGNA triggers global attention of feature map channels in the new semantic feature space to extract more important visual features. Multi-head SIGNA-based feature adaptive weighting networks are proposed to act on any layer of a CNN in a plug-and-play manner. For remote sensing images, better classification performance can be achieved by inserting SIGNA into the shallow layers of the CNN. We conduct extensive experimental comparisons on three datasets: the UCM, AID, and DFC15 datasets. Experimental results demonstrate that the proposed SIGNA achieves superior classification performance compared to state-of-the-art (SOTA) methods. It is worth mentioning that the code for this paper will be opened to the community for reproducibility research. Our codes are available at https://github.com/kyle-one/SIGNA.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
311,518
2109.00150
Federated Reconnaissance: Efficient, Distributed, Class-Incremental Learning
We describe federated reconnaissance, a class of learning problems in which distributed clients learn new concepts independently and communicate that knowledge efficiently. In particular, we propose an evaluation framework and methodological baseline for a system in which each client is expected to learn a growing set of classes and communicate knowledge of those classes efficiently with other clients, such that, after knowledge merging, the clients should be able to accurately discriminate between classes in the superset of classes observed by the set of clients. We compare a range of learning algorithms for this problem and find that prototypical networks are a strong approach in that they are robust to catastrophic forgetting while incorporating new information efficiently. Furthermore, we show that the online averaging of prototype vectors is effective for client model merging and requires only a small amount of communication overhead, memory, and update time per class with no gradient-based learning or hyperparameter tuning. Additionally, to put our results in context, we find that a simple, prototypical network with four convolutional layers significantly outperforms complex, state of the art continual learning algorithms, increasing the accuracy by over 22% after learning 600 Omniglot classes and over 33% after learning 20 mini-ImageNet classes incrementally. These results have important implications for federated reconnaissance and continual learning more generally by demonstrating that communicating feature vectors is an efficient, robust, and effective means for distributed, continual learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
253,020
2211.15621
A Boosting Approach to Constructing an Ensemble Stack
An approach to evolutionary ensemble learning for classification is proposed in which boosting is used to construct a stack of programs. Each application of boosting identifies a single champion and a residual dataset, i.e. the training records that thus far were not correctly classified. The next program is only trained against the residual, with the process iterating until some maximum ensemble size or no further residual remains. Training against a residual dataset actively reduces the cost of training. Deploying the ensemble as a stack also means that only one classifier might be necessary to make a prediction, so improving interpretability. Benchmarking studies are conducted to illustrate competitiveness with the prediction accuracy of current state-of-the-art evolutionary ensemble learning algorithms, while providing solutions that are orders of magnitude simpler. Further benchmarking with a high cardinality dataset indicates that the proposed method is also more accurate and efficient than XGBoost.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
333,341
2106.10876
Total Generate: Cycle in Cycle Generative Adversarial Networks for Generating Human Faces, Hands, Bodies, and Natural Scenes
We propose a novel and unified Cycle in Cycle Generative Adversarial Network (C2GAN) for generating human faces, hands, bodies, and natural scenes. Our proposed C2GAN is a cross-modal model exploring the joint exploitation of the input image data and guidance data in an interactive manner. C2GAN contains two different generators, i.e., an image-generation generator and a guidance-generation generator. Both generators are mutually connected and trained in an end-to-end fashion and explicitly form three cycled subnets, i.e., one image generation cycle and two guidance generation cycles. Each cycle aims at reconstructing the input domain and simultaneously produces a useful output involved in the generation of another cycle. In this way, the cycles constrain each other implicitly providing complementary information from both image and guidance modalities and bringing an extra supervision gradient across the cycles, facilitating a more robust optimization of the whole model. Extensive results on four guided image-to-image translation subtasks demonstrate that the proposed C2GAN is effective in generating more realistic images compared with state-of-the-art models. The code is available at https://github.com/Ha0Tang/C2GAN.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
242,192
2107.07335
Towards Natural Brain-Machine Interaction using Endogenous Potentials based on Deep Neural Networks
Human-robot collaboration has the potential to maximize the efficiency of the operation of autonomous robots. Brain-machine interface (BMI) would be a desirable technology to collaborate with robots since the intention or state of users can be translated from the neural activities. However, the electroencephalogram (EEG), which is one of the most popularly used non-invasive BMI modalities, has low accuracy and a limited degree of freedom (DoF) due to a low signal-to-noise ratio. Thus, improving the performance of multi-class EEG classification is crucial to develop more flexible BMI-based human-robot collaboration. In this study, we investigated the possibility for inter-paradigm classification of multiple endogenous BMI paradigms, such as motor imagery (MI), visual imagery (VI), and speech imagery (SI), to enhance the limited DoF while maintaining robust accuracy. We conducted the statistical and neurophysiological analyses on MI, VI, and SI and classified three paradigms using the proposed temporal information-based neural network (TINN). We confirmed that statistically significant features could be extracted on different brain regions when classifying three endogenous paradigms. Moreover, our proposed TINN showed the highest accuracy of 0.93 compared to the previous methods for classifying three different types of mental imagery tasks (MI, VI, and SI).
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
246,383
2408.01923
Scalable Signal Temporal Logic Guided Reinforcement Learning via Value Function Space Optimization
The integration of reinforcement learning (RL) and formal methods has emerged as a promising framework for solving long-horizon planning problems. Conventional approaches typically involve abstraction of the state and action spaces and manually created labeling functions or predicates. However, the efficiency of these approaches deteriorates as the tasks become increasingly complex, which results in exponential growth in the size of labeling functions or predicates. To address these issues, we propose a scalable model-based RL framework, called VFSTL, which schedules pre-trained skills to follow unseen STL specifications without using hand-crafted predicates. Given a set of value functions obtained by goal-conditioned RL, we formulate an optimization problem to maximize the robustness value of Signal Temporal Logic (STL) defined specifications, which is computed using value functions as predicates. To further reduce the computation burden, we abstract the environment state space into the value function space (VFS). Then the optimization problem is solved by Model-Based Reinforcement Learning. Simulation results show that STL with value functions as predicates approximates the ground truth robustness and the planning in VFS directly achieves unseen specifications using data from sensors.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
478,420
2409.16948
The Power-Oriented Graphs Modeling Technique: From the Fundamental Principles to the Systematic, Step-by-Step Modeling of Complex Physical Systems
Modeling physical systems is an essential skill for a control engineer, since it enables a deep understanding of their dynamic behavior and, consequently, the development of effective control strategies. The first part of this article provides a tutorial description of the fundamental principles and properties of the Power-Oriented Graphs (POG) modeling technique. Various case studies in different energetic domains are then presented to consolidate the fundamental principles, each highlighting different features of the POG modeling technique. The latter is then compared with the other two main graphical modeling techniques available in the literature, namely Bond Graph (BG) and Energetic Macroscopic Representation (EMR). The second part of this article assumes once again a tutorial nature, in order to introduce the new Fast Modeling POG (FMPOG) procedure. The FMPOG, which operates in the POG framework, is a methodical step-by-step procedure that enables readers to quickly derive the power-oriented graphical model of physical systems starting from their schematics. From the power-oriented graphical model, the state-space model can then be directly determined. To ensure the FMPOG procedure is easily usable by the entire community, we apply it to three examples in different energetic domains in this article, guiding the reader step-by-step through the derivation of the physical system models. A freely available Matlab/Simulink program is provided in a repository, allowing users to automatically apply the FMPOG procedure to various classes of physical systems. This program allows users to convert the physical system schematics into the corresponding POG block schemes and, ultimately, into the state-space mathematical models.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
491,589
2104.05980
Experiments of ASR-based mispronunciation detection for children and adult English learners
Pronunciation is one of the fundamentals of language learning, and it is considered a primary factor of spoken language when it comes to understanding and being understood by others. The persistent presence of high error rates in speech recognition domains resulting from mispronunciations motivates us to find alternative techniques for handling mispronunciations. In this study, we develop a mispronunciation assessment system that checks the pronunciation of non-native English speakers, identifies the commonly mispronounced phonemes of Italian learners of English, and presents an evaluation of the non-native pronunciation observed in phonetically annotated speech corpora. In this work, to detect mispronunciations, we used a phone-based ASR implemented using Kaldi. We used two non-native English labeled corpora: (i) a corpus of Italian adults containing 5,867 utterances from 46 speakers, and (ii) a corpus of Italian children consisting of 5,268 utterances from 78 children. Our results show that the selected error model can discriminate correct sounds from incorrect sounds in both native and non-native speech, and can therefore be used to detect pronunciation errors in non-native speech. The phone error rates show improvement when using the error language model. The ASR system shows better accuracy after applying the error model on our selected corpora.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
229,919
2204.01681
End-to-end multi-particle reconstruction in high occupancy imaging calorimeters with graph neural networks
We present an end-to-end reconstruction algorithm to build particle candidates from detector hits in next-generation granular calorimeters similar to that foreseen for the high-luminosity upgrade of the CMS detector. The algorithm exploits a distance-weighted graph neural network, trained with object condensation, a graph segmentation technique. Through a single-shot approach, the reconstruction task is paired with energy regression. We describe the reconstruction performance in terms of efficiency as well as in terms of energy resolution. In addition, we show the jet reconstruction performance of our method and discuss its inference computational cost. To our knowledge, this work is the first-ever example of single-shot calorimetric reconstruction of ${\cal O}(1000)$ particles in high-luminosity conditions with 200 pileup.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
289,682
1806.09321
Kick control: using the attracting states arising within the sensorimotor loop of self-organized robots as motor primitives
Self-organized robots may develop attracting states within the sensorimotor loop, that is within the phase space of neural activity, body, and environmental variables. Fixpoints, limit cycles, and chaotic attractors correspond in this setting to a non-moving robot, to directed, and to irregular locomotion respectively. Short higher-order control commands may hence be used to kick the system from one self-organized attractor robustly into the basin of attraction of a different attractor, a concept termed here as kick control. The individual sensorimotor states serve in this context as highly compliant motor primitives. We study different implementations of kick control for the case of simulated and real-world wheeled robots, for which the dynamics of the distinct wheels is generated independently by local feedback loops. The feedback loops are mediated by rate-encoding neurons disposing exclusively of propriosensoric inputs in terms of projections of the actual rotational angle of the wheel. The changes of the neural activity are then transmitted into a rotational motion by a simulated transmission rod akin to the transmission rods used for steam locomotives. We find that the self-organized attractor landscape may be morphed both by higher-level control signals, in the spirit of kick control, and by interacting with the environment. Bumping against a wall destroys the limit cycle corresponding to forward motion, with the consequence that the dynamical variables are then attracted in phase space by the limit cycle corresponding to backward moving. The robot, which does not dispose of any distance or contact sensors, hence reverses direction autonomously.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
101,334
2103.10626
Cluster-to-Conquer: A Framework for End-to-End Multi-Instance Learning for Whole Slide Image Classification
In recent years, the availability of digitized Whole Slide Images (WSIs) has enabled the use of deep learning-based computer vision techniques for automated disease diagnosis. However, WSIs present unique computational and algorithmic challenges. WSIs are gigapixel-sized ($\sim$100K pixels), making them infeasible to be used directly for training deep neural networks. Also, often only slide-level labels are available for training as detailed annotations are tedious and can be time-consuming for experts. Approaches using multiple-instance learning (MIL) frameworks have been shown to overcome these challenges. Current state-of-the-art approaches divide the learning framework into two decoupled parts: a convolutional neural network (CNN) for encoding the patches followed by an independent aggregation approach for slide-level prediction. In this approach, the aggregation step has no bearing on the representations learned by the CNN encoder. We have proposed an end-to-end framework that clusters the patches from a WSI into ${k}$-groups, samples ${k}'$ patches from each group for training, and uses an adaptive attention mechanism for slide level prediction; Cluster-to-Conquer (C2C). We have demonstrated that dividing a WSI into clusters can improve the model training by exposing it to diverse discriminative features extracted from the patches. We regularized the clustering mechanism by introducing a KL-divergence loss between the attention weights of patches in a cluster and the uniform distribution. The framework is optimized end-to-end on slide-level cross-entropy, patch-level cross-entropy, and KL-divergence loss (Implementation: https://github.com/YashSharma/C2C).
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
225,517
1712.00003
Modeling Information Flow Through Deep Neural Networks
This paper proposes a principled information theoretic analysis of classification for deep neural network structures, e.g. convolutional neural networks (CNN). The output of convolutional filters is modeled as a random variable Y conditioned on the object class C and network filter bank F. The conditional entropy (CENT) H(Y |C,F) is shown in theory and experiments to be a highly compact and class-informative code, that can be computed from the filter outputs throughout an existing CNN and used to obtain higher classification results than the original CNN itself. Experiments demonstrate the effectiveness of CENT feature analysis in two separate CNN classification contexts. 1) In the classification of neurodegeneration due to Alzheimer's disease (AD) and natural aging from 3D magnetic resonance image (MRI) volumes, 3 CENT features result in an AUC=94.6% for whole-brain AD classification, the highest reported accuracy on the public OASIS dataset used and 12% higher than the softmax output of the original CNN trained for the task. 2) In the context of visual object classification from 2D photographs, transfer learning based on a small set of CENT features identified throughout an existing CNN leads to AUC values comparable to the 1000-feature softmax output of the original network when classifying previously unseen object categories. The general information theoretical analysis explains various recent CNN design successes, e.g. densely connected CNN architectures, and provides insights for future research directions in deep learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
85,810
2204.06699
SNP2Vec: Scalable Self-Supervised Pre-Training for Genome-Wide Association Study
Self-supervised pre-training methods have brought remarkable breakthroughs in the understanding of text, image, and speech. Recent developments in genomics have also adopted these pre-training methods for genome understanding. However, they focus only on understanding haploid sequences, which hinders their applicability towards understanding genetic variations, also known as single nucleotide polymorphisms (SNPs), which are crucial for genome-wide association studies. In this paper, we introduce SNP2Vec, a scalable self-supervised pre-training approach for understanding SNPs. We apply SNP2Vec to perform long-sequence genomics modeling, and we evaluate the effectiveness of our approach on predicting Alzheimer's disease risk in a Chinese cohort. Our approach significantly outperforms existing polygenic risk score methods and all other baselines, including the model that is trained entirely with haploid sequences. We release our code and dataset on https://github.com/HLTCHKUST/snp2vec.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
291,428
2404.03380
On the Theoretical Expressive Power and the Design Space of Higher-Order Graph Transformers
Graph transformers have recently received significant attention in graph learning, partly due to their ability to capture more global interaction via self-attention. Nevertheless, while higher-order graph neural networks have been reasonably well studied, the exploration of extending graph transformers to higher-order variants is just starting. Both theoretical understanding and empirical results are limited. In this paper, we provide a systematic study of the theoretical expressive power of order-$k$ graph transformers and sparse variants. We first show that, an order-$k$ graph transformer without additional structural information is less expressive than the $k$-Weisfeiler Lehman ($k$-WL) test despite its high computational cost. We then explore strategies to both sparsify and enhance the higher-order graph transformers, aiming to improve both their efficiency and expressiveness. Indeed, sparsification based on neighborhood information can enhance the expressive power, as it provides additional information about input graph structures. In particular, we show that a natural neighborhood-based sparse order-$k$ transformer model is not only computationally efficient, but also expressive -- as expressive as $k$-WL test. We further study several other sparse graph attention models that are computationally efficient and provide their expressiveness analysis. Finally, we provide experimental results to show the effectiveness of the different sparsification strategies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
444,232
2101.10734
Consistent Mesh Colors for Multi-View Reconstructed 3D Scenes
We address the issue of creating consistent mesh texture maps captured from scenes without color calibration. We find that the method for aggregation of the multiple views is crucial for creating spatially consistent meshes without the need to explicitly optimize for spatial consistency. We compute a color prior from the cross-correlation of observable view faces and the faces per view to identify an optimal per-face color. We then use this color in a re-weighting ratio for the best-view texture, which is identified by prior mesh texturing work, to create a spatial consistent texture map. Despite our method not explicitly handling spatial consistency, our results show qualitatively more consistent results than other state-of-the-art techniques while being computationally more efficient. We evaluate on prior datasets and additionally Matterport3D showing qualitative improvements.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
217,031
2205.04362
FC$^3$: Feasibility-Based Control Chain Coordination
Hierarchical coordination of controllers often uses symbolic state representations that fully abstract their underlying low-level controllers, treating them as "black boxes" to the symbolic action abstraction. This paper proposes a framework to realize robust behavior, which we call Feasibility-based Control Chain Coordination (FC$^3$). Our controllers expose the geometric features and constraints they operate on. Based on this, FC$^3$ can reason over the controllers' feasibility and their sequence feasibility. For a given task, FC$^3$ first automatically constructs a library of potential controller chains using a symbolic action tree, which is then used to coordinate controllers in a chain, evaluate task feasibility, as well as switching between controller chains if necessary. In several real-world experiments we demonstrate FC$^3$'s robustness and awareness of the task's feasibility through its own actions and gradual responses to different interferences.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
295,620
2301.13591
Zero3D: Semantic-Driven Multi-Category 3D Shape Generation
Semantic-driven 3D shape generation aims to generate 3D objects conditioned on text. Previous works face problems with single-category generation, low-frequency 3D details, and the need for large paired datasets for training. To tackle these challenges, we propose a multi-category conditional diffusion model. Specifically, 1) to alleviate the lack of large-scale paired data, we bridge text, 2D images, and 3D shapes via the pre-trained CLIP model; 2) to obtain multi-category 3D shape features, we apply a conditional flow model to generate a 3D shape vector conditioned on the CLIP embedding; and 3) to generate multi-category 3D shapes, we employ a hidden-layer diffusion model conditioned on the multi-category shape vector, which greatly reduces training time and memory consumption.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
342,967
2311.07355
ADAMM: Anomaly Detection of Attributed Multi-graphs with Metadata: A Unified Neural Network Approach
Given a complex graph database of node- and edge-attributed multi-graphs as well as associated metadata for each graph, how can we spot the anomalous instances? Many real-world problems can be cast as graph inference tasks where the graph representation could capture complex relational phenomena (e.g., transactions among financial accounts in a journal entry), along with metadata reflecting tabular features (e.g. approver, effective date, etc.). While numerous anomaly detectors based on Graph Neural Networks (GNNs) have been proposed, none are capable of directly handling directed graphs with multi-edges and self-loops. Furthermore, the simultaneous handling of relational and tabular features remains an unexplored area. In this work we propose ADAMM, a novel graph neural network model that handles directed multi-graphs, providing a unified end-to-end architecture that fuses metadata and graph-level representation learning through an unsupervised anomaly detection objective. Experiments on datasets from two different domains, namely, general-ledger journal entries from different firms (accounting) as well as human GPS trajectories from thousands of individuals (urban mobility) validate ADAMM's generality and detection effectiveness of expert-guided and ground-truth anomalies. Notably, ADAMM outperforms existing baselines that handle the two data modalities (graph and metadata) separately with post hoc synthesis efforts.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
407,292
2404.16743
Automatic Speech Recognition System-Independent Word Error Rate Estimation
Word error rate (WER) is a metric used to evaluate the quality of transcriptions produced by Automatic Speech Recognition (ASR) systems. In many applications, it is of interest to estimate WER given a pair of a speech utterance and a transcript. Previous work on WER estimation focused on building models that are trained with a specific ASR system in mind (referred to as ASR system-dependent). These are also domain-dependent and inflexible in real-world applications. In this paper, a hypothesis generation method for ASR System-Independent WER estimation (SIWE) is proposed. In contrast to prior work, the WER estimators are trained using data that simulates ASR system output. Hypotheses are generated using phonetically similar or linguistically more likely alternative words. In WER estimation experiments, the proposed method reaches a similar performance to ASR system-dependent WER estimators on in-domain data and achieves state-of-the-art performance on out-of-domain data. On the out-of-domain data, the SIWE model outperformed the baseline estimators in root mean square error and Pearson correlation coefficient by relative 17.58% and 18.21%, respectively, on Switchboard and CALLHOME. The performance was further improved when the WER of the training set was close to the WER of the evaluation dataset.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
449,604
2210.07037
Self-Supervised Learning of Linear Precoders under Non-Linear PA Distortion for Energy-Efficient Massive MIMO Systems
Massive multiple input multiple output (MIMO) systems are typically designed under the assumption of linear power amplifiers (PAs). However, PAs are typically most energy-efficient when operating close to their saturation point, where they cause non-linear distortion. Moreover, when using conventional precoders, this distortion coherently combines at the user locations, limiting performance. As such, when designing an energy-efficient massive MIMO system, this distortion has to be managed. In this work, we propose the use of a neural network (NN) to learn the mapping between the channel matrix and the precoding matrix which maximizes the sum rate in the presence of this non-linear distortion. This is done for a third-order polynomial PA model in both the single- and multi-user case. By learning this mapping, a significant increase in energy efficiency is achieved compared to conventional precoders, and even compared to perfect digital pre-distortion (DPD), in the saturation regime.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
323,552
1704.07766
A lower bound on the differential entropy of log-concave random vectors with applications
We derive a lower bound on the differential entropy of a log-concave random variable $X$ in terms of the $p$-th absolute moment of $X$. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure $| x - \hat x|^r$, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most $\log(\sqrt{\pi e}) \approx 1.5$ bits, independently of $r$ and the target distortion $d$. For mean-square error distortion, the difference is at most $\log (\sqrt{\frac{\pi e}{2}}) \approx 1$ bit, regardless of $d$. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most $\log (\sqrt{\frac{\pi e}{2}}) \approx 1$ bit. Our results generalize to the case of a vector $X$ with possibly dependent coordinates, and to $\gamma$-concave random variables. Our proof technique leverages tools from convex geometry.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
72,419
2412.04738
DHIL-GT: Scalable Graph Transformer with Decoupled Hierarchy Labeling
Graph Transformer (GT) has recently emerged as a promising neural network architecture for learning graph-structured data. However, its global attention mechanism with quadratic complexity concerning the graph scale prevents wider application to large graphs. While current methods attempt to enhance GT scalability by altering model architecture or encoding hierarchical graph data, our analysis reveals that these models still suffer from the computational bottleneck related to graph-scale operations. In this work, we target the GT scalability issue and propose DHIL-GT, a scalable Graph Transformer that simplifies network learning by fully decoupling the graph computation to a separate stage in advance. DHIL-GT effectively retrieves hierarchical information by exploiting the graph labeling technique, as we show that the graph label hierarchy is more informative than plain adjacency by offering global connections while promoting locality, and is particularly suitable for handling complex graph patterns such as heterophily. We further design subgraph sampling and positional encoding schemes for precomputing model input on top of graph labels in an end-to-end manner. The training stage thus favorably removes graph-related computations, leading to ideal mini-batch capability and GPU utilization. Notably, the precomputation and training processes of DHIL-GT achieve complexities linear to the number of graph edges and nodes, respectively. Extensive experiments demonstrate that DHIL-GT is efficient in terms of computational boost and mini-batch capability over existing scalable Graph Transformer designs on large-scale benchmarks, while achieving top-tier effectiveness on both homophilous and heterophilous graphs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
514,541
2204.08848
I still have Time(s): Extending HeidelTime for German Texts
HeidelTime is one of the most widespread and successful tools for detecting temporal expressions in texts. Since HeidelTime's pattern-matching system is based on regular expressions, it can be extended in a convenient way. We present such an extension for the German resources of HeidelTime: HeidelTime-EXT. The extension was brought about by observing false negatives within real-world texts and various time banks. The gain in coverage is 2.7% or 8.5%, depending on the admitted degree of potential overgeneralization. We describe the development of HeidelTime-EXT, its evaluation on text samples from various genres, and share some linguistic observations. HeidelTime-EXT can be obtained from https://github.com/texttechnologylab/heideltime.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
292,235
2210.10231
Speaker- and Age-Invariant Training for Child Acoustic Modeling Using Adversarial Multi-Task Learning
One of the major challenges in acoustic modelling of child speech is the rapid changes that occur in the children's articulators as they grow up, their differing growth rates and the subsequent high variability in the same age group. These high acoustic variations along with the scarcity of child speech corpora have impeded the development of a reliable speech recognition system for children. In this paper, a speaker- and age-invariant training approach based on adversarial multi-task learning is proposed. The system consists of one generator shared network that learns to generate speaker- and age-invariant features connected to three discrimination networks, for phoneme, age, and speaker. The generator network is trained to minimize the phoneme-discrimination loss and maximize the speaker- and age-discrimination losses in an adversarial multi-task learning fashion. The generator network is a Time Delay Neural Network (TDNN) architecture while the three discriminators are feed-forward networks. The system was applied to the OGI speech corpora and achieved a 13% reduction in the WER of the ASR.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
324,831
2204.09271
A Survey of Video-based Action Quality Assessment
Human action recognition and analysis are in great demand and have important applications in video surveillance, video retrieval, and human-computer interaction. The task of human action quality assessment requires an intelligent system to automatically and objectively evaluate actions completed by a human. Action quality assessment models can reduce the human and material resources spent on action evaluation and reduce subjectivity. In this paper, we provide a comprehensive survey of existing papers on video-based action quality assessment. Different from human action recognition, the application scenarios of action quality assessment are relatively narrow; most existing work focuses on sports and medical care. We first introduce the definition and challenges of human action quality assessment. Then we present the existing datasets and evaluation metrics. In addition, we summarize the methods for sports and for medical care by model category and publishing institution, according to the characteristics of the two fields. Finally, combined with recent work, promising development directions in action quality assessment are discussed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
292,374
2501.09893
Sparse Binary Representation Learning for Knowledge Tracing
Knowledge tracing (KT) models aim to predict students' future performance based on their historical interactions. Most existing KT models rely exclusively on human-defined knowledge concepts (KCs) associated with exercises. As a result, the effectiveness of these models is highly dependent on the quality and completeness of the predefined KCs. Human errors in labeling and the cost of covering all potential underlying KCs can limit model performance. In this paper, we propose a KT model, Sparse Binary Representation KT (SBRKT), that generates new KC labels, referred to as auxiliary KCs, which can augment the predefined KCs to address the limitations of relying solely on human-defined KCs. These are learned through a binary vector representation, where each bit indicates the presence (one) or absence (zero) of an auxiliary KC. The resulting discrete representation allows these auxiliary KCs to be utilized in training any KT model that incorporates KCs. Unlike pre-trained dense embeddings, which are limited to models designed to accept such vectors, our discrete representations are compatible with both classical models, such as Bayesian Knowledge Tracing (BKT), and modern deep learning approaches. To generate this discrete representation, SBRKT employs a binarization method that learns a sparse representation, fully trainable via stochastic gradient descent. Additionally, SBRKT incorporates a recurrent neural network (RNN) to capture temporal dynamics and predict future student responses by effectively combining the auxiliary and predefined KCs. Experimental results demonstrate that SBRKT outperforms the tested baselines on several datasets and achieves competitive performance on others. Furthermore, incorporating the learned auxiliary KCs consistently enhances the performance of BKT across all tested datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
525,309