id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1509.01822 | The MIMO Wiretap Channel Decomposed | The problem of sending a secret message over the Gaussian multiple-input multiple-output (MIMO) wiretap channel is studied. While the capacity of this channel is known, it is not clear how to construct optimal coding schemes that achieve this capacity. In this work, we use linear operations along with successive interference cancellation to attain effective parallel single-antenna wiretap channels. By using independent scalar Gaussian wiretap codebooks over the resulting parallel channels, the capacity of the MIMO wiretap channel is achieved. The derivation of the schemes is based upon joint triangularization of the channel matrices. We find that the same technique can be used to re-derive capacity expressions for the MIMO wiretap channel in a way that is simple and closely connected to a transmission scheme. This technique allows us to extend the previously proven strong security for scalar Gaussian channels to the MIMO case. We further consider the problem of transmitting confidential messages over a two-user broadcast MIMO channel. For that problem, we find that derivation of both the capacity and a transmission scheme is a direct corollary of the proposed analysis for the MIMO wiretap channel. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 46,659 |
1905.07710 | A 2D dilated residual U-Net for multi-organ segmentation in thoracic CT | Automatic segmentation of organs-at-risk (OAR) in computed tomography (CT) is an essential part of planning effective treatment strategies to combat lung and esophageal cancer. Accurate segmentation of organs surrounding tumours helps account for the variation in position and morphology inherent across patients, thereby facilitating adaptive and computer-assisted radiotherapy. Although manual delineation of OARs is still highly prevalent, it is prone to errors due to complex variations in the shape and position of organs across patients, and low soft tissue contrast between neighbouring organs in CT images. Recently, deep convolutional neural networks (CNNs) have gained tremendous traction and achieved state-of-the-art results in medical image segmentation. In this paper, we propose a deep learning framework to segment OARs in thoracic CT images, specifically the heart, esophagus, trachea and aorta. Our approach employs dilated convolutions and aggregated residual connections in the bottleneck of a U-Net-style network, which incorporates global context and dense information. Our method achieved an overall Dice score of 91.57% on 20 unseen test samples from the ISBI 2019 SegTHOR challenge. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 131,310 |
1308.0702 | Universal Empathy and Ethical Bias for Artificial General Intelligence | Rational agents are usually built to maximize rewards. However, AGI agents can find undesirable ways of maximizing any prior reward function. Therefore value learning is crucial for safe AGI. We assume that generalized states of the world are valuable - not rewards themselves - and propose an extension of AIXI in which rewards are used only to bootstrap hierarchical value learning. The modified AIXI agent is considered in a multi-agent environment, where other agents can be either humans or other "mature" agents, whose values should be revealed and adopted by the "infant" AGI agent. A general framework for designing such an empathic agent with ethical bias is also proposed as an extension of the universal intelligence model. Moreover, we perform experiments in a simple Markov environment, which demonstrate the feasibility of our approach to value learning in safe AGI. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 26,245 |
1608.04185 | Learning to Rank Questions for Community Question Answering with Ranking SVM | This paper presents our method to retrieve relevant queries given a new question in the context of the Discovery Challenge: Learning to Re-Rank Questions for Community Question Answering competition. In order to do that, a set of learning-to-rank methods was investigated to select an appropriate method. The selected method was optimized on training data by using a search strategy. After optimizing, the method was applied to the development and test sets. Results from the competition indicate that our method outperforms almost all participants and show that Ranking SVM is efficient for retrieving relevant queries in community question answering. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 59,790 |
2111.12849 | Particle Graph Autoencoders and Differentiable, Learned Energy Mover's Distance | Autoencoders have useful applications in high energy physics in anomaly detection, particularly for jets - collimated showers of particles produced in collisions such as those at the CERN Large Hadron Collider. We explore the use of graph-based autoencoders, which operate on jets in their "particle cloud" representations and can leverage the interdependencies among the particles within a jet, for such tasks. Additionally, we develop a differentiable approximation to the energy mover's distance via a graph neural network, which may subsequently be used as a reconstruction loss function for autoencoders. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 268,092 |
2011.05867 | DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs | Image-to-image translation has recently achieved remarkable results. But despite current success, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks which are used by current state-of-the-art image-to-image methods. Therefore, in this work, we propose a novel deep hierarchical Image-to-Image Translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the shallow layers and (b) semantic information extracted from the deep layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs. Specifically, we leverage the discriminator of a pre-trained GAN (i.e., BigGAN or StyleGAN) to initialize both the encoder and the discriminator, and the pre-trained generator to initialize the generator of our model. Applying knowledge transfer leads to an alignment problem between the encoder and generator. We introduce an adaptor network to address this. On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods) we decrease mFID by at least 35% when compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets. Finally, we are the first to perform I2I translations for domains with over 100 classes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 206,068 |
2405.08263 | Palette-based Color Transfer between Images | As an important subtopic of image enhancement, color transfer aims to enhance the color scheme of a source image according to a reference one while preserving the semantic context. To implement color transfer, the palette-based color mapping framework was proposed. It is a classical solution that does not depend on complex semantic analysis to generate a new color scheme. However, the framework usually requires manual settings, reducing its practicality. The quality of traditional palette generation depends on the degree of color separation. In this paper, we propose a new palette-based color transfer method that can automatically generate a new color scheme. With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution with better applicability. By combining deep learning-based image segmentation and a new color mapping strategy, color transfer can be implemented on foreground and background parts independently while maintaining semantic consistency. The experimental results indicate that our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 454,036 |
2406.19081 | Unsupervised Latent Stain Adaptation for Computational Pathology | In computational pathology, deep learning (DL) models for tasks such as segmentation or tissue classification are known to suffer from domain shifts due to different staining techniques. Stain adaptation aims to reduce the generalization error between different stains by training a model on source stains that generalizes to target stains. Despite the abundance of target stain data, a key challenge is the lack of annotations. To address this, we propose a joint training between artificially labeled and unlabeled data including all available stained images called Unsupervised Latent Stain Adaptation (ULSA). Our method uses stain translation to enrich labeled source images with synthetic target images in order to increase the supervised signals. Moreover, we leverage unlabeled target stain images using stain-invariant feature consistency learning. With ULSA we present a semi-supervised strategy for efficient stain adaptation without access to annotated target stain data. Remarkably, ULSA is task agnostic in patch-level analysis for whole slide images (WSIs). Through extensive evaluation on external datasets, we demonstrate that ULSA achieves state-of-the-art (SOTA) performance in kidney tissue segmentation and breast cancer classification across a spectrum of staining variations. Our findings suggest that ULSA is an important framework for stain adaptation in computational pathology. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 468,295 |
1809.03171 | The AAU Multimodal Annotation Toolboxes: Annotating Objects in Images and Videos | This tech report gives an introduction to two annotation toolboxes that enable the creation of pixel and polygon-based masks as well as bounding boxes around objects of interest. Both toolboxes support the annotation of sequential images in the RGB and thermal modalities. Each annotated object is assigned a classification tag, a unique ID, and one or more optional meta data tags. The toolboxes are written in C++ with the OpenCV and Qt libraries and are operated by using the visual interface and the extensive range of keyboard shortcuts. Pre-built binaries are available for Windows and MacOS and the tools can be built from source under Linux as well. So far, tens of thousands of frames have been annotated using the toolboxes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 107,255 |
1607.06667 | Inpainting of long audio segments with similarity graphs | We present a novel method for the compensation of long duration data loss in audio signals, in particular music. The concealment of such signal defects is based on a graph that encodes signal structure in terms of time-persistent spectral similarity. A suitable candidate segment for the substitution of the lost content is proposed by an intuitive optimization scheme and smoothly inserted into the gap, i.e. the lost or distorted signal region. Extensive listening tests show that the proposed algorithm provides highly promising results when applied to a variety of real-world music signals. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 58,914 |
1810.09720 | Color naming guided intrinsic image decomposition | Intrinsic image decomposition is a severely under-constrained problem. User interactions can help to reduce the ambiguity of the decomposition considerably. The traditional way of user interaction is to draw scribbles that indicate regions with constant reflectance or shading. However, the effective scope of each scribble is quite limited, so dozens of scribbles are often needed to rectify the whole decomposition, which is time-consuming. In this paper we propose an efficient form of user interaction in which users need only annotate the color composition of the image. Color composition reveals the global distribution of reflectance, so it can help to adapt the whole decomposition directly. We build a generative model of the process by which the albedo of the material produces both the reflectance through imaging and the color labels through color naming. Our model effectively fuses the physical properties of image formation and the top-down information from human color perception. Experimental results show that color naming can improve the performance of intrinsic image decomposition, especially in cleaning the shadows left in reflectance and solving the color constancy problem. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 111,115 |
1604.07088 | Effect of User Mobility on the Performance of Device-to-Device Networks with Distributed Caching | We consider a distributed caching device-to-device (D2D) network in which a user's file of interest is cached as several portions in the storage of other devices in the network. Assuming that the user needs to obtain all these file portions, the portions cached farther away naturally become the performance bottleneck. This is due to the fact that dominant interferers may be closer to the receiver than the serving device. Using a simple stochastic geometry model, we concretely demonstrate that this bottleneck can be loosened if the users are mobile. Gains obtained from mobility are quantified in terms of coverage probability. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 55,037 |
1804.07853 | What's Going On in Neural Constituency Parsers? An Analysis | A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and feature-rich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 95,619 |
2405.07943 | Decision Mamba Architectures | Recent advancements in imitation learning have been largely fueled by the integration of sequence models, which provide a structured flow of information to effectively mimic task behaviours. Decision Transformer (DT) and, subsequently, the Hierarchical Decision Transformer (HDT) introduced Transformer-based approaches to learning task policies. Recently, the Mamba architecture has been shown to outperform Transformers across various task domains. In this work, we introduce two novel methods, Decision Mamba (DM) and Hierarchical Decision Mamba (HDM), aimed at improving on the performance of the Transformer models. Through extensive experimentation across diverse environments such as OpenAI Gym and D4RL, leveraging varying demonstration data sets, we demonstrate the superiority of Mamba models over their Transformer counterparts in a majority of tasks. Results show that DM outperforms other methods in most settings. The code can be found at https://github.com/meowatthemoon/DecisionMamba. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 453,925 |
2001.10839 | A kind of quaternary sequences of period $2p^mq^n$ and their linear complexity | Sequences with high linear complexity have wide applications in cryptography. In this paper, a new class of quaternary sequences over $\mathbb{F}_4$ with period $2p^mq^n$ is constructed using generalized cyclotomic classes. Results show that the linear complexity of these sequences attains the maximum. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 161,919 |
1905.01395 | On the Difficulty of Evaluating Baselines: A Study on Recommender Systems | Numerical evaluations with comparisons to baselines play a central role when judging research in recommender systems. In this paper, we show that running baselines properly is difficult. We demonstrate this issue on two extensively studied datasets. First, we show that results for baselines that have been used in numerous publications over the past five years for the Movielens 10M benchmark are suboptimal. With a careful setup of a vanilla matrix factorization baseline, we are not only able to improve upon the reported results for this baseline but even outperform the reported results of any newly proposed method. Second, we recap the tremendous effort that was required by the community to obtain high quality results for simple methods on the Netflix Prize. Our results indicate that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 129,716 |
2211.05764 | DC-Check: A Data-Centric AI checklist to guide the development of reliable machine learning systems | While there have been a number of remarkable breakthroughs in machine learning (ML), much of the focus has been placed on model development. However, to truly realize the potential of machine learning in real-world settings, additional aspects must be considered across the ML pipeline. Data-centric AI is emerging as a unifying paradigm that could enable such reliable end-to-end pipelines. However, this remains a nascent area with no standardized framework to guide practitioners to the necessary data-centric considerations or to communicate the design of data-centric driven ML systems. To address this gap, we propose DC-Check, an actionable checklist-style framework to elicit data-centric considerations at different stages of the ML pipeline: Data, Training, Testing, and Deployment. This data-centric lens on development aims to promote thoughtfulness and transparency prior to system development. Additionally, we highlight specific data-centric AI challenges and research opportunities. DC-Check is aimed at both practitioners and researchers to guide day-to-day development. As such, to easily engage with and use DC-Check and associated resources, we provide a DC-Check companion website (https://www.vanderschaar-lab.com/dc-check/). The website will also serve as an updated resource as methods and tooling evolve over time. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | true | 329,669 |
2012.02593 | On Attitude Recovery of Spacecraft using Nonlinear Control | The general objective of this Ph.D. thesis is to study the dynamics and control of rigid and flexible spacecraft supported by a high-fidelity numerical simulation environment. The demand for greater attitude pointing precision and attitude maneuvering or recovery, together with the increased use of lightweight and flexible materials, necessitates the consideration of flexible dynamics in the control strategy. These highly nonlinear dynamics, which increase the order of the system, are extremely difficult to model with a high degree of accuracy. A general model for attitude and flexible dynamics for a class of spacecraft is hence derived in detail based on the so-called hybrid coordinates approach. The spacecraft considered has a star topology with a rigid central bus and flexible plate-type appendages. Given that the flexible spacecraft is under-actuated, the input-output feedback linearization technique is specifically used to partition the system into two distinct parts, namely an external linear system and an internal unobservable nonlinear system. A general internal/zero dynamics theorem for a class of nonlinear systems is proved and then applied to a flexible spacecraft, which results in linear asymptotically stable zero dynamics. The overall closed-loop stability of the flexible spacecraft is also analyzed rigorously and shown to be locally asymptotically stable using Lyapunov theory. The robustness of the controller against modeling and parametric uncertainties is examined through extensive numerical simulations. Overall, the feedback linearization control scheme has proven to be feasible and efficient for the attitude recovery of a spacecraft and has also become front and center in other application areas in recent years. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 209,820 |
1907.07433 | Towards Blockchain-based Multi-Agent Robotic Systems: Analysis, Classification and Applications | Decentralization, immutability and transparency make Blockchain one of the most innovative technologies of recent years. This paper presents an overview of solutions based on Blockchain technology for multi-agent robotic systems, and provides an analysis and classification of this emerging field. The reasons for implementing Blockchain in a multi-robot network may be to increase the interaction efficiency between agents by providing more trusted information exchange, reaching a consensus in trustless conditions, assessing robot productivity or detecting performance problems, identifying intruders, allocating plans and tasks, deploying distributed solutions and joint missions. Blockchain-based applications are discussed to demonstrate how distributed ledgers can be used to extend the number of research platforms and libraries for multi-agent robotic systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | true | 138,876 |
2106.09898 | Bad Characters: Imperceptible NLP Attacks | Several years of research have shown that machine-learning systems are vulnerable to adversarial examples, both in theory and in practice. Until now, such attacks have primarily targeted visual models, exploiting the gap between human and machine perception. Although text-based models have also been attacked with adversarial examples, such attacks struggled to preserve semantic meaning and indistinguishability. In this paper, we explore a large class of adversarial examples that can be used to attack text-based models in a black-box setting without making any human-perceptible visual modification to inputs. We use encoding-specific perturbations that are imperceptible to the human eye to manipulate the outputs of a wide range of Natural Language Processing (NLP) systems from neural machine-translation pipelines to web search engines. We find that with a single imperceptible encoding injection -- representing one invisible character, homoglyph, reordering, or deletion -- an attacker can significantly reduce the performance of vulnerable models, and with three injections most models can be functionally broken. Our attacks work against currently-deployed commercial systems, including those produced by Microsoft and Google, in addition to open source models published by Facebook, IBM, and HuggingFace. This novel series of attacks presents a significant threat to many language processing systems: an attacker can affect systems in a targeted manner without any assumptions about the underlying model. We conclude that text-based NLP systems require careful input sanitization, just like conventional applications, and that given such systems are now being deployed rapidly at scale, the urgent attention of architects and operators is required. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 241,830 |
1712.00866 | Raw Waveform-based Audio Classification Using Sample-level CNN Architectures | Music, speech, and acoustic scene sound are often handled separately in the audio domain because of their different signal characteristics. However, as the image domain grows rapidly by versatile image classification models, it is necessary to study extensible classification models in the audio domain as well. In this study, we approach this problem using two types of sample-level deep convolutional neural networks that take raw waveforms as input and use filters with small granularity. One is a basic model that consists of convolution and pooling layers. The other is an improved model that additionally has residual connections, squeeze-and-excitation modules and multi-level concatenation. We show that the sample-level models reach state-of-the-art performance levels for the three different categories of sound. Also, we visualize the filters along layers and compare the characteristics of learned filters. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 85,997 |
2409.13268 | JoyHallo: Digital human model for Mandarin | In audio-driven video generation, creating Mandarin videos presents significant challenges. Collecting comprehensive Mandarin datasets is difficult, and the complex lip movements in Mandarin further complicate model training compared to English. In this study, we collected 29 hours of Mandarin speech video from JD Health International Inc. employees, resulting in the jdh-Hallo dataset. This dataset includes a diverse range of ages and speaking styles, encompassing both conversational and specialized medical topics. To adapt the JoyHallo model for Mandarin, we employed the Chinese wav2vec2 model for audio feature embedding. A semi-decoupled structure is proposed to capture inter-feature relationships among lip, expression, and pose features. This integration not only improves information utilization efficiency but also accelerates inference speed by 14.3%. Notably, JoyHallo maintains its strong ability to generate English videos, demonstrating excellent cross-language generation capabilities. The code and models are available at https://jdh-algo.github.io/JoyHallo. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 489,922 |
2410.01774 | Trained Transformer Classifiers Generalize and Exhibit Benign Overfitting In-Context | Transformers have the capacity to act as supervised learning algorithms: by properly encoding a set of labeled training ("in-context") examples and an unlabeled test example into an input sequence of vectors of the same dimension, the forward pass of the transformer can produce predictions for that unlabeled test example. A line of recent work has shown that when linear transformers are pre-trained on random instances for linear regression tasks, these trained transformers make predictions using an algorithm similar to that of ordinary least squares. In this work, we investigate the behavior of linear transformers trained on random linear classification tasks. Via an analysis of the implicit regularization of gradient descent, we characterize how many pre-training tasks and in-context examples are needed for the trained transformer to generalize well at test-time. We further show that in some settings, these trained transformers can exhibit "benign overfitting in-context": when in-context examples are corrupted by label flipping noise, the transformer memorizes all of its in-context examples (including those with noisy labels) yet still generalizes near-optimally for clean test examples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 493,938 |
2202.11486 | Augmentation based unsupervised domain adaptation | The adoption of deep learning in medical image analysis has led to the development of state-of-the-art strategies in several applications, such as disease classification, as well as abnormality detection and segmentation. However, even the most advanced methods require a huge and diverse amount of data to generalize. Because data acquisition and annotation are expensive in realistic clinical scenarios, deep learning models trained on small and unrepresentative data tend to underperform when deployed on data that differs from the data used for training (e.g., data from different scanners). In this work, we propose a domain adaptation methodology to alleviate this problem in segmentation models. Our approach takes advantage of the properties of adversarial domain adaptation and consistency training to achieve more robust adaptation. Using two datasets with white matter hyperintensities (WMH) annotations, we demonstrate that the proposed method improves model generalization even in corner cases where individual strategies tend to fail. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 281,907 |
1905.01639 | Deep Video Inpainting | Video inpainting aims to fill spatio-temporal holes with plausible content in a video. Despite tremendous progress of deep neural networks for image inpainting, it is challenging to extend these methods to the video domain due to the additional time dimension. In this work, we propose a novel deep network architecture for fast video inpainting. Built upon an image-based encoder-decoder model, our framework is designed to collect and refine information from neighbor frames and synthesize still-unknown regions. At the same time, the output is enforced to be temporally consistent by a recurrent feedback and a temporal memory module. Compared with the state-of-the-art image inpainting algorithm, our method produces videos that are much more semantically correct and temporally smooth. In contrast to the prior video completion method which relies on time-consuming optimization, our method runs in near real-time while generating competitive video results. Finally, we apply our framework to the video retargeting task and obtain visually pleasing results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 129,774 |
1907.09014 | Learning Hybrid Object Kinematics for Efficient Hierarchical Planning Under Uncertainty | Sudden changes in the dynamics of robotic tasks, such as contact with an object or the latching of a door, are often viewed as inconvenient discontinuities that make manipulation difficult. However, when these transitions are well-understood, they can be leveraged to reduce uncertainty or aid manipulation---for example, wiggling a screw to determine if it is fully inserted or not. Current model-free reinforcement learning approaches require large amounts of data to learn to leverage such dynamics, scale poorly as problem complexity grows, and do not transfer well to significantly different problems. By contrast, hierarchical POMDP planning-based methods scale well via plan decomposition, work well on novel problems, and directly consider uncertainty, but often rely on precise hand-specified models and task decompositions. To combine the advantages of these opposing paradigms, we propose a new method, MICAH, which given unsegmented data of an object's motion under applied actions, (1) detects changepoints in the object motion model using action-conditional inference, (2) estimates the individual local motion models with their parameters, and (3) converts them into a hybrid automaton that is compatible with hierarchical POMDP planning. We show that model learning under MICAH is more accurate and robust to noise than prior approaches. Further, we combine MICAH with a hierarchical POMDP planner to demonstrate that the learned models are rich enough to be used for performing manipulation tasks under uncertainty that require the objects to be used in novel ways not encountered during training. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 139,244 |
1707.03872 | Independence, Conditionality and Structure of Dempster-Shafer Belief Functions | Several approaches to structuring (factorization, decomposition) of Dempster-Shafer joint belief functions from the literature are reviewed, with special emphasis on their capability to capture independence from the point of view of the claim that belief functions generalize Bayes' notion of probability. It is demonstrated that Zhu and Lee's [Zhu:93] logical networks and Smets' [Smets:93] directed acyclic graphs are unable to capture the statistical dependence/independence of Bayesian networks [Pearl:88]. On the other hand, though Shenoy and Shafer's hypergraphs can explicitly represent the Bayesian network factorization of Bayesian belief functions, they disclaim any need for representation of independence of variables in belief functions. Cano et al. [Cano:93] reject the hypergraph representation of Shenoy and Shafer precisely on the grounds of this missing representation of variable independence, but in their framework some belief functions factorizable in the Shenoy/Shafer framework cannot be factored. The approach in [Klopotek:93f], on the other hand, combines the merits of both the Cano et al. and the Shenoy/Shafer approaches: no factorization simpler than that of [Klopotek:93f] exists in the Shenoy/Shafer approach, while all independences among variables captured in the Cano et al. framework, and many more, are captured by [Klopotek:93f]. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 76,945 |
2306.17595 | RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution | Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images, which is conducive to enhancing the imaging effects of smartphones with limited sensors. The main challenge of BurstSR is to effectively combine the complementary information from input frames, while existing methods still struggle with it. In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network. In particular, we emphasize the role of the base-frame and utilize it as a key prompt to guide the knowledge acquisition from other frames in every recurrence. Moreover, we introduce an implicit weighting loss to improve the model's flexibility in facing input frames with variable numbers. Extensive experiments on both synthetic and real-world datasets demonstrate that our method achieves better results than state-of-the-art ones. Codes and pre-trained models are available at https://github.com/ZcsrenlongZ/RBSR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 376,756 |
2210.11238 | Analysis of Smooth Pursuit Assessment in Virtual Reality and Concussion Detection using BiLSTM | The sport-related concussion (SRC) battery relies heavily upon subjective symptom reporting in order to determine the diagnosis of a concussion. Unfortunately, athletes with SRC may return to play (RTP) too soon if they are untruthful about their symptoms. It is critical to provide accurate assessments that can overcome underreporting to prevent further injury. To lower the risk of injury, a more robust and precise method for detecting concussion is needed to produce reliable and objective results. In this paper, we propose a novel approach to detect SRC using long short-term memory (LSTM) recurrent neural network (RNN) architectures from oculomotor data. In particular, we propose a new error metric that incorporates mean squared error in different proportions. The experimental results on the smooth pursuit test of the VR-VOMS dataset suggest that the proposed approach can predict concussion symptoms with higher accuracy compared to symptom provocation on the vestibular ocular motor screening (VOMS). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 325,238 |
2209.03715 | Optimization-based framework for low-voltage grid reinforcement assessment under various levels of flexibility and coordination | The rapid electrification of the residential heating and mobility sectors is expected to drive the existing distribution grid assets beyond their planned operating conditions. This change will also reveal new potentials through sector coupling, flexibilities, and the local exchange of decentralized generation. This paper thus presents an optimization framework for multi-modal energy systems at the low voltage (LV) distribution grid level. Within this framework, we focus on the reinforcement requirements of the grid and the techno-economic assessment of flexibility components and coordination between agents. By employing a multi-level approach, computational complexity is reduced, and various levels of coordination and flexibilities are implemented. We conclude the work with a case study for a representative rural grid in Germany, in which we observe high economic potential in the flexible operation of buildings, largely thanks to better integration of photovoltaics. Across all paradigms barring a best-case benchmark, grid reinforcement based on a mix of passive and active measures was necessary. A synergy effect is observed between flexibilities and coordination, as their combination reduces the peaks to the extent of completely avoiding grid reinforcement. The presented framework can be applied with a wide range of grid and component types to outline the broad landscape of future LV distribution grids. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 316,575 |
2104.04295 | Transforming Feature Space to Interpret Machine Learning Models | Model-agnostic tools for interpreting machine-learning models struggle to summarize the joint effects of strongly dependent features in high-dimensional feature spaces, which play an important role in pattern recognition, for example in remote sensing of landcover. This contribution proposes a novel approach that interprets machine-learning models through the lens of feature space transformations. It can be used to enhance unconditional as well as conditional post-hoc diagnostic tools including partial dependence plots, accumulated local effects plots, or permutation feature importance assessments. While the approach can also be applied to nonlinear transformations, we focus on linear ones, including principal component analysis (PCA) and a partial orthogonalization technique. Structured PCA and diagnostics along paths offer opportunities for representing domain knowledge. The new approach is implemented in the R package `wiml`, which can be combined with existing explainable machine-learning packages. A case study on remote-sensing landcover classification with 46 features is used to demonstrate the potential of the proposed approach for model interpretation by domain experts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 229,351 |
2304.11267 | Speed Is All You Need: On-Device Acceleration of Large Diffusion Models via GPU-Aware Optimizations | The rapid development and application of foundation models have revolutionized the field of artificial intelligence. Large diffusion models have gained significant attention for their ability to generate photorealistic images and support various tasks. On-device deployment of these models provides benefits such as lower server costs, offline functionality, and improved user privacy. However, common large diffusion models have over 1 billion parameters and pose challenges due to restricted computational and memory resources on devices. We present a series of implementation optimizations for large diffusion models that achieve the fastest reported inference latency to-date (under 12 seconds for Stable Diffusion 1.4 without int8 quantization on Samsung S23 Ultra for a 512x512 image with 20 iterations) on GPU-equipped mobile devices. These enhancements broaden the applicability of generative AI and improve the overall user experience across a wide range of devices. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 359,743 |
1911.04580 | Supervised Initialization of LSTM Networks for Fundamental Frequency Detection in Noisy Speech Signals | Fundamental frequency is one of the most important parameters of human speech, of importance for the classification of accent, gender, and speaking style, and for speaker and age identification, among others. The proper detection of this parameter remains an important challenge for severely degraded signals. In previous work on detecting fundamental frequency in noisy speech using deep learning, networks such as Long Short-Term Memory (LSTM) have been initialized with random weights and then trained following a back-propagation-through-time algorithm. In this work, a proposal for a more efficient initialization, based on supervised training using an auto-associative network, is presented. This initialization is a better starting point for the detection of fundamental frequency in noisy speech. The advantages of this initialization are noticeable using objective measures for the accuracy of the detection and for the training of the networks, under the presence of additive white noise at different signal-to-noise levels. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 153,023 |
2208.08100 | CommitBART: A Large Pre-trained Model for GitHub Commits | GitHub commits, which record code changes with natural language messages for description, play a critical role for software developers to comprehend the software evolution. To promote the development of the open-source software community, we collect a commit benchmark including over 7.99 million commits across 7 programming languages. Based on this benchmark, we present CommitBART, a large pre-trained encoder-decoder Transformer model for GitHub commits. The model is pre-trained by three categories (i.e., denoising objectives, cross-modal generation and contrastive learning) for six pre-training tasks to learn commit fragment representations. Furthermore, we unify a "commit intelligence" framework with one understanding task and three generation tasks for commits. The comprehensive experiments on these tasks demonstrate that CommitBART significantly outperforms previous pre-trained works for code. Further analysis also reveals that each pre-training task enhances the model performance. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 313,260 |
2104.01231 | Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration | Deep neural networks achieve high prediction accuracy when the train and test distributions coincide. In practice though, various types of corruptions occur which deviate from this setup and cause severe performance degradations. Few methods have been proposed to address generalization in the presence of unforeseen domain shifts. In particular, digital noise corruptions arise commonly in practice during the image acquisition stage and present a significant challenge for current methods. In this paper, we propose a diverse Gaussian noise consistency regularization method for improving robustness of image classifiers under a variety of corruptions while still maintaining high clean accuracy. We derive bounds to motivate and understand the behavior of our Gaussian noise consistency regularization using a local loss landscape analysis. Our approach improves robustness against unforeseen noise corruptions by 4.2-18.4% over adversarial training and other strong diverse data augmentation baselines across several benchmarks. Furthermore, it improves robustness and uncertainty calibration by 3.7% and 5.5%, respectively, against all common corruptions (weather, digital, blur, noise) when combined with state-of-the-art diverse data augmentations. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 228,273 |
2310.05644 | Diagnosing Catastrophe: Large parts of accuracy loss in continual learning can be accounted for by readout misalignment | Unlike primates, training artificial neural networks on changing data distributions leads to a rapid decrease in performance on old tasks. This phenomenon is commonly referred to as catastrophic forgetting. In this paper, we investigate the representational changes that underlie this performance decrease and identify three distinct processes that together account for the phenomenon. The largest component is a misalignment between hidden representations and readout layers. Misalignment occurs due to learning on additional tasks and causes internal representations to shift. Representational geometry is partially conserved under this misalignment and only a small part of the information is irrecoverably lost. All types of representational changes scale with the dimensionality of hidden representations. These insights have implications for deep learning applications that need to be continuously updated, but may also aid aligning ANN models to the rather robust biological vision. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 398,231 |
1911.08196 | Defending with Shared Resources on a Network | In this paper we consider a defending problem on a network. In the model, the defender holds a total defending resource of R, which can be distributed to the nodes of the network. The defending resource allocated to a node can be shared by its neighbors. There is a weight associated with every edge that represents the efficiency with which defending resources are shared between neighboring nodes. We consider the setting when each attack can affect not only the target node, but its neighbors as well. Assuming that nodes in the network have different treasures to defend and different defending requirements, the defender aims at allocating the defending resource to the nodes to minimize the loss due to attack. We give polynomial time exact algorithms for two important special cases of the network defending problem. For the case when an attack can only affect the target node, we present an LP-based exact algorithm. For the case when defending resources cannot be shared, we present a max-flow-based exact algorithm. We show that the general problem is NP-hard, and we give a 2-approximation algorithm based on LP-rounding. Moreover, by giving a matching lower bound of 2 on the integrality gap on the LP relaxation, we show that our rounding is tight. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 154,125 |
2302.13711 | Internal-Coordinate Density Modelling of Protein Structure: Covariance Matters | After the recent ground-breaking advances in protein structure prediction, one of the remaining challenges in protein machine learning is to reliably predict distributions of structural states. Parametric models of fluctuations are difficult to fit due to complex covariance structures between degrees of freedom in the protein chain, often causing models to either violate local or global structural constraints. In this paper, we present a new strategy for modelling protein densities in internal coordinates, which uses constraints in 3D space to induce covariance structure between the internal degrees of freedom. We illustrate the potential of the procedure by constructing a variational autoencoder with full covariance output induced by the constraints implied by the conditional mean in 3D, and demonstrate that our approach makes it possible to scale density models of internal coordinates to full protein backbones in two settings: 1) a unimodal setting for proteins exhibiting small fluctuations and limited amounts of available data, and 2) a multimodal setting for larger conformational changes in a high data regime. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 348,039 |
2411.01348 | Optimizing Violence Detection in Video Classification Accuracy through 3D Convolutional Neural Networks | As violent crimes continue to happen, it becomes necessary to have security cameras that can rapidly identify moments of violence with excellent accuracy. The purpose of this study is to identify how many frames should be analyzed at a time in order to optimize a violence detection model's accuracy as a parameter of the depth of a 3D convolutional network. Previous violence classification models have been created, but their application to live footage may be flawed. In this project, a convolutional neural network was created to analyze optical flow frames of each video. The number of frames analyzed at a time would vary with one, two, three, ten, and twenty frames, and each model would be trained for 20 epochs. The greatest validation accuracy was 94.87% and occurred with the model that analyzed three frames at a time. This means that machine learning models to detect violence may function better when analyzing three frames at a time for this dataset. The methodology used to identify the optimal number of frames to analyze at a time could be used in other applications of video classification, especially those of complex or abstract actions, such as violence. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 505,027 |
2303.02423 | Estimating Age of Information Using Finite Order Moments | Age of information (AoI) has been proposed as a more suitable metric for characterizing the freshness of information than traditional metrics like delay and throughput. However, the calculation of AoI requires complex analysis and strict end-to-end synchronization. Most existing AoI-related works have assumed that the statistical characterizations of the arrival process and the service process are known. In fact, due to the randomness of the sources and the channel noise, these processes are often unavailable in reality. To this end, we propose a method to estimate the average AoI on a point-to-point wireless Rayleigh channel, which uses the available finite order statistical moments of the arrival process. Based on this method, we explicitly present the upper and lower bounds on the average AoI of the system. Our results show that 1) with the increase of the traffic intensity, the absolute error of the estimated average AoI bounds first increases and then decreases, while the average AoI is monotonically increasing; 2) the average AoI can be effectively approximated by using the first two order moment estimation bounds, especially when traffic intensity is small or approaches unity; 3) tighter bounds can be obtained by using more moments. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 349,352 |
2309.17182 | RECOMBINER: Robust and Enhanced Compression with Bayesian Implicit Neural Representations | COMpression with Bayesian Implicit NEural Representations (COMBINER) is a recent data compression method that addresses a key inefficiency of previous Implicit Neural Representation (INR)-based approaches: it avoids quantization and enables direct optimization of the rate-distortion performance. However, COMBINER still has significant limitations: 1) it uses factorized priors and posterior approximations that lack flexibility; 2) it cannot effectively adapt to local deviations from global patterns in the data; and 3) its performance can be susceptible to modeling choices and the variational parameters' initializations. Our proposed method, Robust and Enhanced COMBINER (RECOMBINER), addresses these issues by 1) enriching the variational approximation while retaining a low computational cost via a linear reparameterization of the INR weights, 2) augmenting our INRs with learnable positional encodings that enable them to adapt to local details and 3) splitting high-resolution data into patches to increase robustness and utilizing expressive hierarchical priors to capture dependency across patches. We conduct extensive experiments across several data modalities, showcasing that RECOMBINER achieves competitive results with the best INR-based methods and even outperforms autoencoder-based codecs on low-resolution images at low bitrates. Our PyTorch implementation is available at https://github.com/cambridge-mlg/RECOMBINER/. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 395,668 |
1907.05855 | DisCoRL: Continual Reinforcement Learning via Policy Distillation | In multi-task reinforcement learning there are two main challenges: at training time, the ability to learn different policies with a single model; at test time, inferring which of those policies to apply without an external signal. In the case of continual reinforcement learning, a third challenge arises: learning tasks sequentially without forgetting the previous ones. In this paper, we tackle these challenges by proposing DisCoRL, an approach combining state representation learning and policy distillation. We experiment on a sequence of three simulated 2D navigation tasks with a 3-wheel omni-directional robot. Moreover, we tested our approach's robustness by transferring the final policy into a real-life setting. The policy can solve all tasks and automatically infer which one to run. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 138,470 |
2411.15605 | GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers | Understanding deep models is crucial for deploying them in safety-critical applications. We introduce GIFT, a framework for deriving post-hoc, global, interpretable, and faithful textual explanations for vision classifiers. GIFT starts from local faithful visual counterfactual explanations and employs (vision) language models to translate those into global textual explanations. Crucially, GIFT provides a verification stage measuring the causal effect of the proposed explanations on the classifier decision. Through experiments across diverse datasets, including CLEVR, CelebA, and BDD, we demonstrate that GIFT effectively reveals meaningful insights, uncovering tasks, concepts, and biases used by deep vision classifiers. The framework is released at https://github.com/valeoai/GIFT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,684 |
2412.03527 | FANAL -- Financial Activity News Alerting Language Modeling Framework | In the rapidly evolving financial sector, the accurate and timely interpretation of market news is essential for stakeholders needing to navigate unpredictable events. This paper introduces FANAL (Financial Activity News Alerting Language Modeling Framework), a specialized BERT-based framework engineered for real-time financial event detection and analysis, categorizing news into twelve distinct financial categories. FANAL leverages silver-labeled data processed through XGBoost and employs advanced fine-tuning techniques, alongside ORBERT (Odds Ratio BERT), a novel variant of BERT fine-tuned with ORPO (Odds Ratio Preference Optimization) for superior class-wise probability calibration and alignment with financial event relevance. We evaluate FANAL's performance against leading large language models, including GPT-4o, Llama-3.1 8B, and Phi-3, demonstrating its superior accuracy and cost efficiency. This framework sets a new standard for financial intelligence and responsiveness, significantly outstripping existing models in both performance and affordability. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 513,993 |
2111.04525 | D-Flow: A Real Time Spatial Temporal Model for Target Area Segmentation | Semantic segmentation has attracted a large amount of attention in recent years. In robotics, segmentation can be used to identify a region of interest, or \emph{target area}. For example, in the RoboCup Standard Platform League (SPL), segmentation separates the soccer field from the background and from players on the field. For satellite or vehicle applications, it is often necessary to find certain regions such as roads, bodies of water or kinds of terrain. In this paper, we propose a novel approach to real-time target area segmentation based on a newly designed spatial temporal network. The method operates under domain constraints defined by both the robot's hardware and its operating environment. The proposed network is able to run in real-time, working within the constraints of limited run time and computing power. This work is compared against other real time segmentation methods on a dataset generated by a Nao V6 humanoid robot simulating the RoboCup SPL competition. In this case, the target area is defined as the artificial grass field. The method is also tested on a maritime dataset collected by a moving vessel, where the aim is to separate the ocean region from the rest of the image. This dataset demonstrates that the proposed model can generalise to a variety of vision problems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 265,508
2003.04774 | ENTMOOT: A Framework for Optimization over Ensemble Tree Models | Gradient boosted trees and other regression tree models perform well in a wide range of real-world, industrial applications. These tree models (i) offer insight into important prediction features, (ii) effectively manage sparse data, and (iii) have excellent prediction capabilities. Despite their advantages, they are generally unpopular for decision-making tasks and black-box optimization, which is due to their difficult-to-optimize structure and the lack of a reliable uncertainty measure. ENTMOOT is our new framework for integrating (already trained) tree models into larger optimization problems. The contributions of ENTMOOT include: (i) explicitly introducing a reliable uncertainty measure that is compatible with tree models, (ii) solving the larger optimization problems that incorporate these uncertainty-aware tree models, (iii) proving that the solutions are globally optimal, i.e. no better solution exists. In particular, we show how the ENTMOOT approach allows a simple integration of tree models into decision-making and black-box optimization, where it proves as a strong competitor to commonly-used frameworks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,659
2303.01258 | Domain-adapted large language models for classifying nuclear medicine
reports | With the growing use of transformer-based language models in medicine, it is unclear how well these models generalize to nuclear medicine, which has domain-specific vocabulary and unique reporting styles. In this study, we evaluated the value of domain adaptation in nuclear medicine by adapting language models for the purpose of 5-point Deauville score prediction based on clinical 18F-fluorodeoxyglucose (FDG) PET/CT reports. We retrospectively retrieved 4542 text reports and 1664 images for FDG PET/CT lymphoma exams from 2008-2018 in our clinical imaging database. Deauville scores were removed from the reports and then the remaining text in the reports was used as the model input. Multiple general-purpose transformer language models were used to classify the reports into Deauville scores 1-5. We then adapted the models to the nuclear medicine domain using masked language modeling and assessed its impact on classification performance. The language models were compared against vision models, a multimodal vision-language model, and a nuclear medicine physician with seven-fold Monte Carlo cross validation; we report means and standard deviations. Domain adaptation improved all language models. For example, BERT improved from 61.3% five-class accuracy to 65.7% following domain adaptation. The best performing model (domain-adapted RoBERTa) achieved a five-class accuracy of 77.4%, which was better than the physician's performance (66%), the best vision model's performance (48.1%), and was similar to the multimodal model's performance (77.2%). Domain adaptation improved the performance of large language models in interpreting nuclear medicine text reports. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 348,899
2102.09397 | Meta-Transfer Learning for Low-Resource Abstractive Summarization | Neural abstractive summarization has been widely studied and achieves great success with the aid of large corpora. However, when encountering novel tasks, one may not always benefit from transfer learning due to the domain shifting problem, and overfitting could happen without adequate labeled examples. Furthermore, the annotations of abstractive summarization are costly, which often demand domain knowledge to ensure the ground-truth quality. Thus, there is growing interest in Low-Resource Abstractive Summarization, which aims to leverage past experience to improve the performance with limited labeled examples of the target corpus. In this paper, we propose to utilize two knowledge-rich sources to tackle this problem: large pre-trained models and diverse existing corpora. The former can provide the primary ability to tackle summarization tasks; the latter can help discover common syntactic or semantic information to improve the generalization ability. We conduct extensive experiments on various summarization corpora with different writing styles and forms. The results demonstrate that our approach achieves the state-of-the-art on 6 corpora in low-resource scenarios, with only 0.7% of trainable parameters compared to previous work. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 220,767
2012.08645 | An anatomically-informed 3D CNN for brain aneurysm classification with
weak labels | A commonly adopted approach to carry out detection tasks in medical imaging is to rely on an initial segmentation. However, this approach strongly depends on voxel-wise annotations which are repetitive and time-consuming to draw for medical experts. An interesting alternative to voxel-wise masks are so-called "weak" labels: these can either be coarse or oversized annotations that are less precise, but noticeably faster to create. In this work, we address the task of brain aneurysm detection as a patch-wise binary classification with weak labels, in contrast to related studies that rather use supervised segmentation methods and voxel-wise delineations. Our approach comes with the non-trivial challenge of the data set creation: as for most focal diseases, anomalous patches (with aneurysm) are outnumbered by those showing no anomaly, and the two classes usually have different spatial distributions. To tackle this frequent scenario of inherently imbalanced, spatially skewed data sets, we propose a novel, anatomically-driven approach by using a multi-scale and multi-input 3D Convolutional Neural Network (CNN). We apply our model to 214 subjects (83 patients, 131 controls) who underwent Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) and presented a total of 111 unruptured cerebral aneurysms. We compare two strategies for negative patch sampling that have an increasing level of difficulty for the network and we show how this choice can strongly affect the results. To assess whether the added spatial information helps improving performances, we compare our anatomically-informed CNN with a baseline, spatially-agnostic CNN. When considering the more realistic and challenging scenario including vessel-like negative patches, the former model attains the highest classification results (accuracy$\simeq$95\%, AUROC$\simeq$0.95, AUPR$\simeq$0.71), thus outperforming the baseline. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 211,818 |
1508.03891 | REBA: A Refinement-Based Architecture for Knowledge Representation and
Reasoning in Robotics | This paper describes an architecture for robots that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with logic-based and probabilistic descriptions of uncertainty and domain knowledge. An action language is extended to support non-boolean fluents and non-deterministic causal laws. This action language is used to describe tightly-coupled transition diagrams at two levels of granularity, with a fine-resolution transition diagram defined as a refinement of a coarse-resolution transition diagram of the domain. The coarse-resolution system description, and a history that includes (prioritized) defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. A probabilistic representation of the uncertainty in sensing and actuation is then included in this zoomed fine-resolution system description, and used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions, with the corresponding observations being recorded in the coarse-resolution history and used for subsequent reasoning. The architecture is evaluated in simulation and on a mobile robot moving objects in an indoor domain, to show that it supports reasoning with violation of defaults, noisy observations and unreliable actions, in complex domains. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | true | 46,057 |
2002.07756 | Hierarchical Correlation Clustering and Tree Preserving Embedding | We propose a hierarchical correlation clustering method that extends the well-known correlation clustering to produce hierarchical clusters applicable to both positive and negative pairwise dissimilarities. We then study unsupervised representation learning with such hierarchical correlation clustering. For this purpose, we first investigate embedding the respective hierarchy to be used for tree-preserving embedding and feature extraction. Thereafter, we study the extension of minimax distance measures to correlation clustering, as another representation learning paradigm. Finally, we demonstrate the performance of our methods on several datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 164,562
2409.15680 | Distributed Online Bandit Nonconvex Optimization with One-Point Residual
Feedback via Dynamic Regret | This paper considers the distributed online bandit optimization problem with nonconvex loss functions over a time-varying digraph. This problem can be viewed as a repeated game between a group of online players and an adversary. At each round, each player selects a decision from the constraint set, and then the adversary assigns an arbitrary, possibly nonconvex, loss function to this player. Only the loss value at the current round, rather than the entire loss function or any other information (e.g. gradient), is privately revealed to the player. Players aim to minimize a sequence of global loss functions, which are the sum of local losses. We observe that traditional multi-point bandit algorithms are unsuitable for online optimization, where the data for the loss function are not all available a priori, while one-point bandit algorithms suffer from poor regret guarantees. To address these issues, we propose a novel one-point residual feedback distributed online algorithm. This algorithm estimates the gradient using residuals from two points, effectively reducing the regret bound while maintaining $\mathcal{O}(1)$ sampling complexity per iteration. We employ a rigorous metric, dynamic regret, to evaluate the algorithm's performance. By appropriately selecting the step size and smoothing parameters, we demonstrate that the expected dynamic regret of our algorithm is comparable to that of existing algorithms using two-point feedback, provided the deviation in the objective function sequence and the path length of the minimization grow sublinearly. Finally, we validate the effectiveness of the proposed algorithm through numerical simulations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 491,008
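The gradient estimator described in this abstract — one query per round, reusing the previous round's value as the second point — admits a compact sketch. A standard form of such a residual one-point estimator is shown below; the toy loss, step size, and smoothing radius are assumed values, and the projection step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta, eta, T = 5, 0.05, 0.01, 2000   # dimension, smoothing, step size, rounds (assumed)

def loss(x, t):
    # stand-in for the adversary's time-varying (possibly nonconvex) loss
    return np.sum((x - np.sin(0.001 * t)) ** 2)

x = np.zeros(d)
f_prev = 0.0                                 # value queried in the previous round
for t in range(T):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                   # random direction on the unit sphere
    f = loss(x + delta * u, t)               # the single function query this round
    g = (d / delta) * (f - f_prev) * u       # residual one-point gradient estimate
    x -= eta * g                             # descent step (projection omitted)
    f_prev = f
print("final loss:", loss(x, T))
```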
1711.08141 | Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions | Neural networks rely on convolutions to aggregate spatial information. However, spatial convolutions are expensive in terms of model size and computation, both of which grow quadratically with respect to kernel size. In this paper, we present a parameter-free, FLOP-free "shift" operation as an alternative to spatial convolutions. We fuse shifts and point-wise convolutions to construct end-to-end trainable shift-based modules, with a hyperparameter characterizing the tradeoff between accuracy and efficiency. To demonstrate the operation's efficacy, we replace ResNet's 3x3 convolutions with shift-based modules for improved CIFAR10 and CIFAR100 accuracy using 60% fewer parameters; we additionally demonstrate the operation's resilience to parameter reduction on ImageNet, outperforming ResNet family members. We finally show the shift operation's applicability across domains, achieving strong performance with fewer parameters on classification, face verification and style transfer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,141 |
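The shift operation lends itself to a very short sketch: channels are partitioned into groups and each group is displaced one pixel in a distinct direction with zero padding, after which a 1x1 convolution (not shown) mixes information across channels. The direction assignment and group layout below are illustrative assumptions, not the paper's exact module.

```python
import torch

def shift(x):
    """Minimal sketch of a shift op: split channels into groups and move
    each group one pixel in a distinct direction (zero padding at the
    borders). No parameters, no multiply-adds."""
    B, C, H, W = x.shape
    out = torch.zeros_like(x)
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0),
            (1, 1), (-1, -1), (1, -1), (-1, 1), (0, 0)]
    g = C // len(dirs)
    for i, (dh, dw) in enumerate(dirs):
        c0 = i * g
        c1 = C if i == len(dirs) - 1 else (i + 1) * g
        out[:, c0:c1,
            max(dh, 0):H + min(dh, 0),
            max(dw, 0):W + min(dw, 0)] = x[:, c0:c1,
                                           max(-dh, 0):H + min(-dh, 0),
                                           max(-dw, 0):W + min(-dw, 0)]
    return out

y = shift(torch.randn(2, 18, 8, 8))   # same shape, channels spatially displaced
print(y.shape)
```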
2411.16498 | Multi-Resolution Generative Modeling of Human Motion from Limited Data | We present a generative model that learns to synthesize human motion from limited training sequences. Our framework provides conditional generation and blending across multiple temporal resolutions. The model adeptly captures human motion patterns by integrating skeletal convolution layers and a multi-scale architecture. Our model contains a set of generative and adversarial networks, along with embedding modules, each tailored for generating motions at specific frame rates while exerting control over their content and details. Notably, our approach also extends to the synthesis of co-speech gestures, demonstrating its ability to generate synchronized gestures from speech inputs, even with limited paired data. Through direct synthesis of SMPL pose parameters, our approach avoids test-time adjustments to fit human body meshes. Experimental results showcase our model's ability to achieve extensive coverage of training examples, while generating diverse motions, as indicated by local and global diversity metrics. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 511,048 |
2307.09365 | An Evaluation of Zero-Cost Proxies -- from Neural Architecture
Performance to Model Robustness | Zero-cost proxies are nowadays frequently studied and used to search for neural architectures. They show an impressive ability to predict the performance of architectures by making use of their untrained weights. These techniques allow for immense search speed-ups. So far, the joint search for well-performing and robust architectures has received much less attention in the field of NAS. Accordingly, the main focus of zero-cost proxies has been the clean accuracy of architectures, whereas model robustness should play an equally important part. In this paper, we analyze the ability of common zero-cost proxies to serve as performance predictors for robustness in the popular NAS-Bench-201 search space. We are interested in the single prediction task for robustness and the joint multi-objective of clean and robust accuracy. We further analyze the feature importance of the proxies and show that predicting robustness makes the prediction task of existing zero-cost proxies more challenging. As a result, the joint consideration of several proxies becomes necessary to predict a model's robustness, while clean accuracy can be regressed from a single such feature. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 380,149
2008.02714 | Multi-source Heterogeneous Domain Adaptation with Conditional Weighting
Adversarial Network | Heterogeneous domain adaptation (HDA) tackles the learning of cross-domain samples with both different probability distributions and feature representations. Most of the existing HDA studies focus on the single-source scenario. In reality, however, it is not uncommon to obtain samples from multiple heterogeneous domains. In this article, we study the multisource HDA problem and propose a conditional weighting adversarial network (CWAN) to address it. The proposed CWAN adversarially learns a feature transformer, a label classifier, and a domain discriminator. To quantify the importance of different source domains, CWAN introduces a sophisticated conditional weighting scheme to calculate the weights of the source domains according to the conditional distribution divergence between the source and target domains. Different from existing weighting schemes, the proposed conditional weighting scheme not only weights the source domains but also implicitly aligns the conditional distributions during the optimization process. Experimental results clearly demonstrate that the proposed CWAN performs much better than several state-of-the-art methods on four real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 190,690 |
2206.03334 | Correlations of network trajectories | Temporal networks model how the interactions between elements in a complex system evolve over time. Just as complex systems display collective dynamics, here we interpret temporal networks as trajectories performing a collective motion in graph space, following a latent graph dynamical system. Under this paradigm, we propose a way to measure how the network pulsates and collectively fluctuates over time and space. To this aim, we extend the notion of linear correlation functions to sequences of network snapshots, i.e. a network trajectory. We construct stochastic and deterministic graph dynamical systems and show that the emergent collective correlations are well captured by simple measures, and illustrate how these patterns are revealed in empirical networks arising in different domains. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 301,239
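One plausible way to extend a linear correlation function to a network trajectory, in the spirit of this abstract: flatten each adjacency snapshot, subtract the time-averaged network, and correlate fluctuations at a given lag. The estimator and toy dynamics below are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def network_autocorrelation(snapshots, max_lag):
    """Lagged correlation of a network trajectory: treat each adjacency
    snapshot as a vector of edge weights and correlate fluctuations
    around the time-averaged graph (normalized so lag 0 gives 1)."""
    X = np.array([A.ravel() for A in snapshots], dtype=float)
    X -= X.mean(axis=0)                          # fluctuations around the mean graph
    var = (X * X).sum(axis=1).mean()
    return [(X[:len(X) - tau] * X[tau:]).sum(axis=1).mean() / var
            for tau in range(max_lag + 1)]

rng = np.random.default_rng(0)
# toy trajectory: edges flip slowly, inducing temporal correlations
A = (rng.random((20, 20)) < 0.2).astype(float)
traj = []
for _ in range(200):
    flip = rng.random((20, 20)) < 0.01
    A = np.where(flip, 1 - A, A)
    traj.append(A.copy())
print(np.round(network_autocorrelation(traj, 5), 3))
```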
1808.10524 | Total Recall: Understanding Traffic Signs using Deep Hierarchical
Convolutional Neural Networks | Recognizing traffic signs using intelligent systems can drastically reduce the number of accidents happening worldwide. With the arrival of self-driving cars, it has become a staple challenge to solve the automatic recognition of traffic and hand-held signs in major streets. Various machine learning techniques like Random Forest and SVM, as well as deep learning models, have been proposed for classifying traffic signs. Though they reach state-of-the-art performance on a particular data-set, they fall short of tackling multiple Traffic Sign Recognition benchmarks. In this paper, we propose a novel one-for-all architecture that aces multiple benchmarks with a better overall score than state-of-the-art architectures. Our model is made of residual convolutional blocks with hierarchical dilated skip connections joined in steps. With this, we achieve 99.33% accuracy on the German sign recognition benchmark and 99.17% accuracy on the Belgian traffic sign classification benchmark. Moreover, we propose a newly devised dilated residual learning representation technique which is very low in both memory and computational complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 106,393
2111.03536 | A Unified Game-Theoretic Interpretation of Adversarial Robustness | This paper provides a unified view to explain different adversarial attacks and defense methods, \emph{i.e.} the view of multi-order interactions between input variables of DNNs. Based on the multi-order interaction, we discover that adversarial attacks mainly affect high-order interactions to fool the DNN. Furthermore, we find that the robustness of adversarially trained DNNs comes from category-specific low-order interactions. Our findings provide a potential method to unify adversarial perturbations and robustness, which can explain the existing defense methods in a principled way. Besides, our findings also revise the previous inaccurate understanding of the shape bias of adversarially learned features. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 265,205
2111.13331 | Impact of classification difficulty on the weight matrices spectra in
Deep Learning and application to early-stopping | Much research effort has been devoted to explaining the success of deep learning. Random Matrix Theory (RMT) provides an emerging way to this end: spectral analysis of large random matrices involved in a trained deep neural network (DNN), such as weight matrices or Hessian matrices with respect to the stochastic gradient descent algorithm. To gain a more comprehensive understanding of weight matrices spectra, we conduct extensive experiments on weight matrices in different modules, e.g., layers, networks and data sets. Following the previous work of \cite{martin2018implicit}, we classify the spectra in the terminal stage into three main types: Light Tail (LT), Bulk Transition period (BT) and Heavy Tail (HT). These different types, especially HT, implicitly indicate some regularization in the DNNs. A main contribution of the paper is that we identify the difficulty of the classification problem as a driving factor for the appearance of heavy tails in weight matrices spectra: the higher the classification difficulty, the higher the chance for HT to appear. Moreover, the classification difficulty can be affected by the signal-to-noise ratio of the dataset, or by the complexity of the classification problem (complex features, large number of classes). Leveraging this finding, we further propose a spectral criterion to detect the appearance of heavy tails and use it to early-stop the training process without testing data. Such early-stopped DNNs avoid overfitting and unnecessary extra training while preserving comparable generalization ability. These findings are validated in several NNs, using Gaussian synthetic data and real data sets (MNIST and CIFAR10). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 268,267
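Hedged aside: a cheap way to probe whether a trained layer's spectrum is heavy-tailed is to look at the eigenvalues of the weight correlation matrix and estimate a tail exponent, e.g. with a Hill estimator. The sketch below is a generic diagnostic in that spirit — it does not reproduce the paper's actual spectral criterion, and the matrix sizes and cutoff k are assumed values.

```python
import numpy as np

def tail_exponent(W, k=50):
    """Hill estimate of the spectral tail exponent: eigenvalues of the
    correlation matrix W^T W / n, the k largest relative to the (k+1)-th.
    A generic heavy-tail diagnostic, not the paper's criterion."""
    n = W.shape[0]
    ev = np.sort(np.linalg.eigvalsh(W.T @ W / n))
    top = ev[-(k + 1):]                      # (k+1) largest eigenvalues
    return k / np.sum(np.log(top[1:] / top[0]))

rng = np.random.default_rng(0)
# i.i.d. Gaussian weights (Marchenko-Pastur bulk) vs heavy-tailed entries
print("light tail:", round(tail_exponent(rng.standard_normal((1000, 300))), 2))
print("heavy tail:", round(tail_exponent(rng.standard_t(df=2, size=(1000, 300))), 2))
```

A smaller estimate signals a heavier spectral tail; on the synthetic matrices above the heavy-tailed case comes out markedly lower.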
2209.04701 | Subdiffusive semantic evolution in Indo-European languages | How do words change their meaning? Although semantic evolution is driven by a variety of distinct factors, including linguistic, societal, and technological ones, we find that there is one law that holds universally across five major Indo-European languages: that semantic evolution is strongly subdiffusive. Using an automated pipeline of diachronic distributional semantic embedding that controls for underlying symmetries, we show that words follow stochastic trajectories in meaning space with an anomalous diffusion exponent $\alpha= 0.45\pm 0.05$ across languages, in contrast with diffusing particles that follow $\alpha=1$. Randomization methods indicate that preserving temporal correlations in semantic change directions is necessary to recover strongly subdiffusive behavior; however, correlations in change sizes play an important role too. We furthermore show that strong subdiffusion is a robust phenomenon under a wide variety of choices in data analysis and interpretation, such as the choice of fitting an ensemble average of displacements or averaging best-fit exponents of individual word trajectories. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 316,859 |
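For context, the anomalous diffusion exponent quoted in this abstract is conventionally read off a power-law fit of the mean-squared displacement against time lag. A minimal version of that fit is sketched below on synthetic Brownian trajectories (which should recover alpha close to 1); the trajectory counts and lag range are arbitrary choices, and the paper's full pipeline with its symmetry controls is not reproduced.

```python
import numpy as np

def msd_exponent(trajs, max_lag):
    """Fit alpha in MSD(tau) ~ tau^alpha by least squares on log-log axes,
    averaging displacements over time origins and trajectories."""
    lags = np.arange(1, max_lag + 1)
    msd = [np.mean([np.mean(np.sum((x[tau:] - x[:-tau]) ** 2, axis=-1))
                    for x in trajs]) for tau in lags]
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

rng = np.random.default_rng(0)
# ordinary Brownian motion in 3D should give alpha close to 1
walks = [np.cumsum(rng.standard_normal((500, 3)), axis=0) for _ in range(200)]
print(round(msd_exponent(walks, 50), 2))
```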
2101.12370 | An Automated Theorem Proving Framework for Information-Theoretic Results | We present a versatile automated theorem proving framework capable of automated discovery, simplification and proofs of inner and outer bounds in network information theory, deduction of properties of information-theoretic quantities (e.g. Wyner and G\'acs-K\"orner common information), and discovery of non-Shannon-type inequalities, under a unified framework. Our implementation successfully generated proofs for 32 out of 56 theorems in Chapters 1-14 of the book Network Information Theory by El Gamal and Kim. Our framework is based on the concept of existential information inequalities, which provides an axiomatic framework for a wide range of problems in information theory. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 217,544 |
2112.08351 | Database Search Results Disambiguation for Task-Oriented Dialog Systems | As task-oriented dialog systems are becoming increasingly popular in our lives, more realistic tasks have been proposed and explored. However, new practical challenges arise. For instance, current dialog systems cannot effectively handle multiple search results when querying a database, due to the lack of such scenarios in existing public datasets. In this paper, we propose Database Search Result (DSR) Disambiguation, a novel task that focuses on disambiguating database search results, which enhances user experience by allowing them to choose from multiple options instead of just one. To study this task, we augment the popular task-oriented dialog datasets (MultiWOZ and SGD) with turns that resolve ambiguities by (a) synthetically generating turns through a pre-defined grammar, and (b) collecting human paraphrases for a subset. We find that training on our augmented dialog data improves the model's ability to deal with ambiguous scenarios, without sacrificing performance on unmodified turns. Furthermore, pre-fine tuning and multi-task learning help our model to improve performance on DSR-disambiguation even in the absence of in-domain data, suggesting that it can be learned as a universal dialog skill. Our data and code will be made publicly available. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 271,767 |
1807.03083 | Evaluating Active Learning Heuristics for Sequential Diagnosis | Given a malfunctioning system, sequential diagnosis aims at identifying the root cause of the failure in terms of abnormally behaving system components. As initial system observations usually do not suffice to deterministically pin down just one explanation of the system's misbehavior, additional system measurements can help to differentiate between possible explanations. The goal is to restrict the space of explanations until there is only one (highly probable) explanation left. To achieve this with a minimal-cost set of measurements, various (active learning) heuristics for selecting the best next measurement have been proposed. We report preliminary results of extensive ongoing experiments with a set of selection heuristics on real-world diagnosis cases. In particular, we try to answer questions such as "Is some heuristic always superior to all others?", "On which factors does the (relative) performance of the particular heuristics depend?" or "Under which circumstances should I use which heuristic?" | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 102,420 |
2102.01197 | Common Randomness Generation over Slow Fading Channels | This paper analyzes the problem of common randomness (CR) generation from correlated discrete sources aided by unidirectional communication over Single-Input Single-Output (SISO) slow fading channels with additive white Gaussian noise (AWGN) and arbitrary state distribution. Slow fading channels are practically relevant for wireless communications. We completely solve the SISO slow fading case by establishing its corresponding outage CR capacity using our characterization of its channel outage capacity. The generated CR could be exploited to improve the performance gain in the identification scheme. The latter is known to be more efficient than the classical transmission scheme in many new applications, which demand ultra-reliable low latency communication. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 218,023 |
2208.06497 | SeeSaw: Interactive Ad-hoc Search Over Image Databases | As image datasets become ubiquitous, the problem of ad-hoc searches over image data is increasingly important. Many high-level data tasks in machine learning, such as constructing datasets for training and testing object detectors, imply finding ad-hoc objects or scenes within large image datasets as a key sub-problem. New foundational visual-semantic embeddings trained on massive web datasets such as Contrastive Language-Image Pre-Training (CLIP) can help users start searches on their own data, but we find there is a long tail of queries where these models fall short in practice. SeeSaw is a system for interactive ad-hoc searches on image datasets that integrates state-of-the-art embeddings like CLIP with user feedback in the form of box annotations to help users quickly locate images of interest in their data, even in the long tail of harder queries. One key challenge for SeeSaw is that, in practice, many sensible approaches to incorporating feedback into future results, including state-of-the-art active-learning algorithms, can worsen results compared to introducing no feedback, partly due to CLIP's high average performance. Therefore, SeeSaw includes several algorithms that empirically result in larger and also more consistent improvements. We compare SeeSaw's accuracy to both using CLIP alone and to a state-of-the-art active-learning baseline, and find SeeSaw consistently helps improve results for users across four datasets and more than a thousand queries. SeeSaw increases Average Precision (AP) on search tasks by an average of .08 on a wide benchmark (from a base of .72), and by .27 on a subset of more difficult queries where CLIP alone performs poorly. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 312,732
1910.06514 | Target-Oriented Deformation of Visual-Semantic Embedding Space | Multimodal embedding is a crucial research topic for cross-modal understanding, data mining, and translation. Many studies have attempted to extract representations from given entities and align them in a shared embedding space. However, because entities in different modalities exhibit different abstraction levels and modality-specific information, it is insufficient to embed related entities close to each other. In this study, we propose the Target-Oriented Deformation Network (TOD-Net), a novel module that continuously deforms the embedding space into a new space under a given condition, thereby adjusting similarities between entities. Unlike methods based on cross-modal attention, TOD-Net is a post-process applied to the embedding space learned by existing embedding systems and improves their retrieval performance. In particular, when combined with cutting-edge models, TOD-Net achieves state-of-the-art cross-modal retrieval on the MSCOCO dataset. Qualitative analysis reveals that TOD-Net successfully emphasizes entity-specific concepts and retrieves diverse targets, handling higher levels of diversity than existing models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 149,360
2204.03428 | Detecting Vocal Fatigue with Neural Embeddings | Vocal fatigue refers to the feeling of tiredness and weakness of voice due to extended utilization. This paper investigates the effectiveness of neural embeddings for the detection of vocal fatigue. We compare x-vectors, ECAPA-TDNN, and wav2vec 2.0 embeddings on a corpus of academic spoken English. Low-dimensional mappings of the data reveal that neural embeddings capture information about the change in vocal characteristics of a speaker during prolonged voice usage. We show that vocal fatigue can be reliably predicted using all three kinds of neural embeddings after only 50 minutes of continuous speaking when temporal smoothing and normalization are applied to the extracted embeddings. We employ support vector machines for classification and achieve accuracy scores of 81% using x-vectors, 85% using ECAPA-TDNN embeddings, and 82% using wav2vec 2.0 embeddings as input features. We obtain an accuracy score of 76% when the trained system is applied to a different speaker and recording environment without any adaptation. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 290,299
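The classification recipe named in this abstract — smooth the embedding sequence over time, normalize, then train an SVM — can be sketched in a few lines. Everything below (window size, embedding dimension, the synthetic drift, and the toy labels marking the second half of a session as fatigued) is an assumption for illustration, not the paper's data or protocol.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def smooth(embeddings, win=20):
    """Moving-average temporal smoothing over consecutive utterance
    embeddings (window size is an assumed value)."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, embeddings)

rng = np.random.default_rng(0)
drift = np.linspace(0.0, 1.0, 600)[:, None]        # toy fatigue-related drift
X = smooth(rng.standard_normal((600, 192)) + drift)  # stand-in for x-vectors
y = (np.arange(600) > 300).astype(int)             # toy labels: fatigued second half

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:500], y[:500])
print("held-out accuracy:", clf.score(X[500:], y[500:]))
```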
2302.07027 | AdapterSoup: Weight Averaging to Improve Generalization of Pretrained
Language Models | Pretrained language models (PLMs) are trained on massive corpora, but often need to specialize to specific domains. A parameter-efficient adaptation method suggests training an adapter for each domain on the task of language modeling. This leads to good in-domain scores but can be impractical for domain- or resource-restricted settings. A solution is to use a related-domain adapter for the novel domain at test time. In this paper, we introduce AdapterSoup, an approach that performs weight-space averaging of adapters trained on different domains. Our approach is embarrassingly parallel: first, we train a set of domain-specific adapters; then, for each novel domain, we determine which adapters should be averaged at test time. We present extensive experiments showing that AdapterSoup consistently improves performance to new domains without extra training. We also explore weight averaging of adapters trained on the same domain with different hyper-parameters, and show that it preserves the performance of a PLM on new domains while obtaining strong in-domain results. We explore various approaches for choosing which adapters to combine, such as text clustering and semantic similarity. We find that using clustering leads to the most competitive results on novel domains. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 345,607 |
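The core AdapterSoup operation is plain weight-space averaging of adapters with identical architecture; the choice of which adapters enter the soup (e.g. via text clustering) happens beforehand. A minimal sketch with made-up adapter shapes:

```python
import torch

def adapter_soup(state_dicts):
    """Uniform weight-space average of adapters sharing one architecture.
    Assumes all state dicts have identical keys and tensor shapes;
    parameter names below are illustrative."""
    return {key: torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
            for key in state_dicts[0]}

# toy adapters: same shapes, different domain-specific weights
adapters = [{"down.weight": torch.randn(16, 768), "up.weight": torch.randn(768, 16)}
            for _ in range(3)]
soup = adapter_soup(adapters)
print(soup["down.weight"].shape)  # torch.Size([16, 768])
```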
2206.06221 | A Unified Approach for Dynamic Analysis of Tensegrity Structures with
Arbitrary Rigid Bodies and Rigid Bars | This paper proposes a unified approach for dynamic modeling and simulations of general tensegrity structures with rigid bars and rigid bodies of arbitrary shapes. The natural coordinates are adopted as a non-minimal description in terms of different combinations of basic points and base vectors to resolve the heterogeneity between rigid bodies and rigid bars in three-dimensional space. This leads to a set of differential-algebraic equations with a constant mass matrix and free from trigonometric functions. Formulations for linearized dynamics are derived to enable modal analysis around static equilibrium. For numerical analysis of nonlinear dynamics, we derive a modified symplectic integration scheme which yields realistic results for long-time simulations, and accommodates non-conservative forces as well as boundary conditions. Numerical examples demonstrate the efficacy of the proposed approach for dynamic simulations of Class-1-to-$k$ general tensegrity structures under complex situations, including dynamic external loads, cable-based deployments, and moving boundaries. The novel tensegrity structures also exemplify new ways to create multi-functional structures. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 302,295 |
2407.08513 | Fine-Tuning Stable Diffusion XL for Stylistic Icon Generation: A
Comparison of Caption Size | In this paper, we show different fine-tuning methods for Stable Diffusion XL; this includes inference steps and caption customization for each image, to align generated images with the style of a commercial 2D icon training set. We also show how important it is to properly define what "high-quality" really is, especially for a commercial-use environment. As generative AI models continue to gain widespread acceptance and usage, there emerge many different ways to optimize and evaluate them for various applications. Specifically, text-to-image models such as Stable Diffusion XL and DALL-E 3 require distinct evaluation practices to effectively generate high-quality icons according to a specific style. Although some images that are generated based on a certain style may have a lower (better) FID score, we show how this is not absolute in and of itself, even for rasterized icons. While FID scores reflect the similarity of generated images to the overall training set, CLIP scores measure the alignment between generated images and their textual descriptions. We show how FID scores miss significant aspects, such as the minority of pixel differences that matter most in an icon, while CLIP scores result in misjudging the quality of icons. The CLIP model's understanding of "similarity" is shaped by its own training data, which does not account for feature variation in our style of choice. Our findings highlight the need for specialized evaluation metrics and fine-tuning approaches when generating high-quality commercial icons, potentially leading to more effective and tailored applications of text-to-image models in professional design contexts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 472,194
2005.08343 | Facial Action Unit Detection using 3D Facial Landmarks | In this paper, we propose to detect facial action units (AU) using 3D facial landmarks. Specifically, we train a 2D convolutional neural network (CNN) on 3D facial landmarks, tracked using a shape index-based statistical shape model, for binary and multi-class AU detection. We show that the proposed approach is able to accurately model AU occurrences, as the movement of the facial landmarks corresponds directly to the movement of the AUs. By training a CNN on 3D landmarks, we can achieve accurate AU detection on two state-of-the-art emotion datasets, namely BP4D and BP4D+. Using the proposed method, we detect multiple AUs on over 330,000 frames, reporting improved results over state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 177,587 |
1805.05426 | O.D.E.S. : An Online Dynamic Examination System based on a CMS Wordpress
plugin | This paper describes the online dynamic examination application plugin named O.D.E.S., developed according to the open source software philosophy on top of the CMS Wordpress, which gives programmers/coders the ability to develop applications from scratch with safety and ease. In the ODES application there exist two types of users: admin/teacher and student. The admin/teacher can create/edit/delete/view questions, categories of questions and examination papers. The questions are divided into two types: multiple choice questions (including true/false) and long answer questions (essays). The teacher can create exams by choosing the number and types of questions from the pool of previously created questions. The selection is done randomly by the application; the teacher just determines the total number of multiple choice and long answer questions, as well as the importance (weight) of each of them (negative grades included). The student takes the randomly generated exam and receives his/her grades. Multiple choice questions are graded automatically, whereas for long answer questions the teacher is responsible for grading. After the completion of the exam, the teacher can view the student's final score via the control panel or a report. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 97,421
1505.00835 | A novel plasticity rule can explain the development of sensorimotor
intelligence | Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development raises more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule but arise rather from the underlying mechanism of spontaneous symmetry breaking due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 42,771
1603.00939 | Routing Autonomous Vehicles in Congested Transportation Networks:
Structural Properties and Coordination Algorithms | This paper considers the problem of routing and rebalancing a shared fleet of autonomous (i.e., self-driving) vehicles providing on-demand mobility within a capacitated transportation network, where congestion might disrupt throughput. We model the problem within a network flow framework and show that under relatively mild assumptions the rebalancing vehicles, if properly coordinated, do not lead to an increase in congestion (in stark contrast to common belief). From an algorithmic standpoint, such theoretical insight suggests that the problem of routing customers and rebalancing vehicles can be decoupled, which leads to a computationally-efficient routing and rebalancing algorithm for the autonomous vehicles. Numerical experiments and case studies corroborate our theoretical insights and show that the proposed algorithm outperforms state-of-the-art point-to-point methods by avoiding excess congestion on the road. Collectively, this paper provides a rigorous approach to the problem of congestion-aware, system-wide coordination of autonomously driving vehicles, and to the characterization of the sustainability of such robotic systems. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 52,832 |
2109.05385 | On the Initial Behavior Monitoring Issues in Federated Learning | In Federated Learning (FL), a group of workers participates in building a global model under the coordination of one node, the chief. Regarding the cybersecurity of FL, some attacks aim at injecting fabricated local model updates into the system. Some defenses are based on malicious worker detection and behavioral pattern analysis. In this context, without timely and dynamic monitoring methods, the chief cannot detect and remove malicious or unreliable workers from the system. Our work emphasizes the urgency of preparing the federated learning process for monitoring and, eventually, behavioral pattern analysis. We study the information inside the learning process in the early stages of training, propose a monitoring process, and evaluate the required monitoring period. The aim is to analyse when it is appropriate to start the detection algorithm in order to remove malicious or unreliable workers from the system and optimise the deployment of the defense mechanism. We tested our strategy on a behavioral pattern analysis defense applied to the FL process of different benchmark systems for text and image classification. Our results show that the monitoring process lowers false positives and false negatives and consequently increases system efficiency by enabling the distributed learning system to achieve better performance in the early stage of training. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 254,770
2501.12524 | Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature
Extractor | With the advent of the COVID-19 pandemic, ultrasound imaging has emerged as a promising technique for COVID-19 detection, due to its non-invasive nature, affordability, and portability. In response, researchers have focused on developing AI-based scoring systems to provide real-time diagnostic support. However, the limited size and lack of proper annotation in publicly available ultrasound datasets pose significant challenges for training a robust AI model. This paper proposes MeDiVLAD, a novel pipeline to address the above issue for multi-level lung-ultrasound (LUS) severity scoring. In particular, we leverage self-knowledge distillation to pretrain a vision transformer (ViT) without labels and aggregate frame-level features via dual-level VLAD aggregation. We show that with minimal finetuning, MeDiVLAD outperforms conventional fully-supervised methods in both frame- and video-level scoring, while offering classification reasoning of exceptional quality. This superior performance enables key applications such as the automatic identification of critical lung pathology areas and provides a robust solution for broader medical video classification tasks. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 526,344
2411.13975 | Transforming Static Images Using Generative Models for Video Salient
Object Detection | In many video processing tasks, leveraging large-scale image datasets is a common strategy, as image data is more abundant and facilitates comprehensive knowledge transfer. A typical approach for simulating video from static images involves applying spatial transformations, such as affine transformations and spline warping, to create sequences that mimic temporal progression. However, in tasks like video salient object detection, where both appearance and motion cues are critical, these basic image-to-video techniques fail to produce realistic optical flows that capture the independent motion properties of each object. In this study, we show that image-to-video diffusion models can generate realistic transformations of static images while understanding the contextual relationships between image components. This ability allows the model to generate plausible optical flows, preserving semantic integrity while reflecting the independent motion of scene elements. By augmenting individual images in this way, we create large-scale image-flow pairs that significantly enhance model training. Our approach achieves state-of-the-art performance across all public benchmark datasets, outperforming existing approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 509,993 |
2309.10324 | Metastatic Breast Cancer Prognostication Through Multimodal Integration
of Dimensionality Reduction Algorithms and Classification Algorithms | Machine learning (ML) is a branch of Artificial Intelligence (AI) where computers analyze data and find patterns in the data. The study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the point where the cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying whether tumors are benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, saving thousands of lives, and can also improve the speed and efficiency of the process, thereby taking less time and fewer resources. So far, deep learning methods have been used in research to detect cancer. This study is a novel approach to determine the potential of using preprocessing algorithms combined with classification algorithms in detecting metastatic cancer. The study used two preprocessing algorithms: principal component analysis (PCA) and the genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms: logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 392,963
1304.5213 | Carbon Dating The Web: Estimating the Age of Web Resources | In the course of web research it is often necessary to estimate the creation datetime for web resources (in the general case, this value can only be estimated). While it is feasible to manually establish likely datetime values for small numbers of resources, this becomes infeasible if the collection is large. We present "carbon date", a simple web application that estimates the creation date for a URI by polling a number of sources of evidence and returning a machine-readable structure with their respective values. To establish a likely datetime, we poll bitly for the first time someone shortened the URI, topsy for the first time someone tweeted the URI, a Memento aggregator for the first time it appeared in a public web archive, Google's time of last crawl, and the Last-Modified HTTP response header of the resource itself. We also examine the backlinks of the URI as reported by Google and apply the same techniques for the resources that link to the URI. We evaluated our tool on a gold-standard data set of 1200 URIs in which the creation date was manually verified. We were able to estimate a creation date for 75.90% of the resources, with 32.78% having the correct value. Given the different nature of the URIs, the union of the various methods produces the best results. While the Google last crawl date and topsy account for nearly 66% of the closest answers, eliminating the web archives or Last-Modified from the results produces the largest overall negative impact on the results. The carbon date application is available for download or use via a web API. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 24,067
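The aggregation logic described here reduces to: query each evidence source, collect whatever datetimes come back, and report the earliest as the likely creation date. A minimal sketch follows; the fetcher functions are hypothetical stand-ins for the bitly/topsy/Memento/Google/Last-Modified lookups, and the output structure is an assumed shape, not the tool's actual schema.

```python
from datetime import datetime

def carbon_date(uri, sources):
    """Poll each evidence source for a datetime and keep the earliest.
    `sources` maps a source name to a callable returning datetime or None;
    the callables here are hypothetical stand-ins for the real lookups."""
    estimates = {}
    for name, fetch in sources.items():
        try:
            dt = fetch(uri)
            if dt is not None:
                estimates[name] = dt
        except Exception:
            pass          # a failing source simply contributes no evidence
    earliest = min(estimates.values(), default=None)
    return {"uri": uri, "estimated-creation-date": earliest, "sources": estimates}

# usage with dummy fetchers (illustrative only)
sources = {"archive": lambda u: datetime(2009, 5, 1),
           "last-modified": lambda u: datetime(2011, 3, 2)}
print(carbon_date("http://example.com", sources))
```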
2107.03663 | Graph and Recurrent Neural Network-based Vehicle Trajectory Prediction
For Highway Driving | Integrating trajectory prediction to the decision-making and planning modules of modular autonomous driving systems is expected to improve the safety and efficiency of self-driving vehicles. However, a vehicle's future trajectory prediction is a challenging task since it is affected by the social interactive behaviors of neighboring vehicles, and the number of neighboring vehicles can vary in different situations. This work proposes a GNN-RNN based Encoder-Decoder network for interaction-aware trajectory prediction, where vehicles' dynamics features are extracted from their historical tracks using RNN, and the inter-vehicular interaction is represented by a directed graph and encoded using a GNN. The parallelism of GNN implies the proposed method's potential to predict multi-vehicular trajectories simultaneously. Evaluation on the dataset extracted from the NGSIM US-101 dataset shows that the proposed model is able to predict a target vehicle's trajectory in situations with a variable number of surrounding vehicles. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 245,221 |
2203.02128 | Distributionally Robust Bayesian Optimization with $\varphi$-divergences | The study of robustness has received much attention due to its inevitability in data-driven settings where many systems face uncertainty. One such example of concern is Bayesian Optimization (BO), where uncertainty is multi-faceted, yet only a limited number of works are dedicated to this direction. In particular, there is the work of Kirschner et al. (2020), which bridges the existing literature of Distributionally Robust Optimization (DRO) by casting the BO problem from the lens of DRO. While this work is pioneering, it admittedly suffers from various practical shortcomings such as finite-context assumptions, leaving behind the main question: can one devise a computationally tractable algorithm for solving this DRO-BO problem? In this work, we tackle this question to a large degree of generality by considering robustness against data-shift in $\varphi$-divergences, which subsumes many popular choices, such as the $\chi^2$-divergence, Total Variation, and the extant Kullback-Leibler (KL) divergence. We show that the DRO-BO problem in this setting is equivalent to a finite-dimensional optimization problem which, even in the continuous context setting, can be easily implemented with provable sublinear regret bounds. We then show experimentally that our method surpasses existing methods, attesting to the theoretical results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,650
2111.12994 | NomMer: Nominate Synergistic Context in Vision Transformer for Visual Recognition | Recently, Vision Transformers (ViT), with self-attention (SA) as the de facto ingredient, have demonstrated great potential in the computer vision community. For the sake of a trade-off between efficiency and performance, a group of works merely perform the SA operation within local patches, whereas the global contextual information, which is indispensable for visual recognition tasks, is abandoned. To solve this issue, subsequent global-local ViTs take a stab at marrying local SA with the global one in a parallel or alternating way in the model. Nevertheless, the exhaustively combined local and global context may contain redundancy for various visual data, and the receptive field within each layer is fixed. A more graceful alternative is to let the global and local context adaptively contribute to accommodate different visual data. To achieve this goal, we propose in this paper a novel ViT architecture, termed NomMer, which can dynamically Nominate the synergistic global-local context in vision transforMer. By investigating the working pattern of our proposed NomMer, we further explore what context information is focused on. Benefiting from this "dynamic nomination" mechanism, without bells and whistles, NomMer can not only achieve 84.5% Top-1 classification accuracy on ImageNet with only 73M parameters, but also show promising performance on dense prediction tasks, i.e., object detection and semantic segmentation. The code and models will be made publicly available at https://github.com/TencentYoutuResearch/VisualRecognition-NomMer | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 268,159 |
1907.02911 | Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape | The permutation symmetry of neurons in each layer of a deep neural network gives rise not only to multiple equivalent global minima of the loss function, but also to first-order saddle points located on the path between the global minima. In a network of $d-1$ hidden layers with $n_k$ neurons in layers $k = 1, \ldots, d$, we construct smooth paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer $k$ collide and interchange. We show that such permutation points are critical points with at least $n_{k+1}$ vanishing eigenvalues of the Hessian matrix of second derivatives indicating a local plateau of the loss function. We find that a permutation point for the exchange of neurons $i$ and $j$ transits into a flat valley (or generally, an extended plateau of $n_{k+1}$ flat dimensions) that enables all $n_k!$ permutations of neurons in a given layer $k$ at the same loss value. Moreover, we introduce high-order permutation points by exploiting the recursive structure in neural network functions, and find that the number of $K^{\text{th}}$-order permutation points is at least by a factor $\sum_{k=1}^{d-1}\frac{1}{2!^K}{n_k-K \choose K}$ larger than the (already huge) number of equivalent global minima. In two tasks, we illustrate numerically that some of the permutation points correspond to first-order saddles (`permutation saddles'): first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous mathematical results and numerical observations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 137,719 |
2408.09162 | Zero-Shot Object-Centric Representation Learning | The goal of object-centric representation learning is to decompose visual scenes into a structured representation that isolates the entities. Recent successes have shown that object-centric representation learning can be scaled to real-world scenes by utilizing pre-trained self-supervised features. However, so far, object-centric methods have mostly been applied in-distribution, with models trained and evaluated on the same dataset. This is in contrast to the wider trend in machine learning towards general-purpose models directly applicable to unseen data and tasks. Thus, in this work, we study current object-centric methods through the lens of zero-shot generalization by introducing a benchmark comprising eight different synthetic and real-world datasets. We analyze the factors influencing zero-shot performance and find that training on diverse real-world images improves transferability to unseen scenarios. Furthermore, inspired by the success of task-specific fine-tuning in foundation models, we introduce a novel fine-tuning strategy to adapt pre-trained vision encoders for the task of object discovery. We find that the proposed approach results in state-of-the-art performance for unsupervised object discovery, exhibiting strong zero-shot transfer to unseen datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 481,312 |
2007.13876 | Semi-Supervised Learning with Data Augmentation for End-to-End ASR | In this paper, we apply Semi-Supervised Learning (SSL) along with Data Augmentation (DA) for improving the accuracy of End-to-End ASR. We focus on the consistency regularization principle, which has been successfully applied to image classification tasks, and present sequence-to-sequence (seq2seq) versions of the FixMatch and Noisy Student algorithms. Specifically, we generate the pseudo labels for the unlabeled data on-the-fly with a seq2seq model after perturbing the input features with DA. We also propose soft label variants of both algorithms to cope with pseudo label errors, showing further performance improvements. We conduct SSL experiments on a conversational speech data set with 1.9kh manually transcribed training data, using only 25% of the original labels (475h labeled data). As a result, the Noisy Student algorithm with soft labels and consistency regularization achieves 10.4% word error rate (WER) reduction when adding 475h of unlabeled data, corresponding to a recovery rate of 92%. Furthermore, when iteratively adding 950h more unlabeled data, our best SSL performance is within 5% WER increase compared to using the full labeled training set (recovery rate: 78%). | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 189,243 |
2103.09755 | Aggregated Multi-GANs for Controlled 3D Human Motion Prediction | Human motion prediction from a historical pose sequence is at the core of many applications in machine intelligence. However, in current state-of-the-art methods, the predicted future motion is confined within the same activity. One can neither generate predictions that differ from the current activity, nor manipulate the body parts to explore various future possibilities. Undoubtedly, this greatly limits the usefulness and applicability of motion prediction. In this paper, we propose a generalization of the human motion prediction task in which control parameters can be readily incorporated to adjust the forecasted motion. Our method is compelling in that it enables manipulable motion prediction across activity types and allows customization of the human movement in a variety of fine-grained ways. To this end, a simple yet effective composite GAN structure, consisting of local GANs for different body parts aggregated via a global GAN, is presented. The local GANs play their game in lower dimensions, while the global GAN adjusts in the high-dimensional space to avoid mode collapse. Extensive experiments show that our method outperforms the state of the art. The codes are available at https://github.com/herolvkd/AM-GAN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,253 |
1901.02787 | On Secure Network Coding for Multiple Unicast Traffic | This paper investigates the problem of secure communication in a wireline noiseless scenario where a source wishes to communicate to a number of destinations in the presence of a passive external adversary. Different from the multicast scenario, where all destinations are interested in receiving the same message, in this setting different destinations are interested in different messages. The main focus of this paper is on characterizing the secure capacity region when the adversary has unbounded computational capabilities, but limited network presence. First, an outer bound on the secure capacity region is derived for arbitrary network topologies and a general number of destinations. Then, secure transmission schemes are designed and analyzed in terms of achieved rate performance. In particular, for the case of two destinations, it is shown that the designed scheme matches the outer bound, hence characterizing the secure capacity region. It is also numerically verified that the designed scheme matches the outer bound for a special class of networks with a general number of destinations, referred to as the combination network. Finally, for an arbitrary network topology with a general number of destinations, a two-phase scheme with running time polynomial in the network size is designed, and its rate performance is compared with the capacity-achieving scheme for networks with two destinations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 118,278 |
2401.05882 | Extreme Value Theory Based Rate Selection for Ultra-Reliable Communications | Ultra-reliable low latency communication (URLLC) requires the packet error rate to be on the order of $10^{-9}$-$10^{-5}$. Determining the appropriate transmission rate to satisfy this ultra-reliability constraint requires deriving the statistics of the channel in the ultra-reliable region and then incorporating these statistics into the rate selection. In this paper, we propose a framework for determining the rate selection for ultra-reliable communications based on the extreme value theory (EVT). We first model the wireless channel at URLLC by estimating the parameters of the generalized Pareto distribution (GPD) best fitting to the tail distribution of the received powers, i.e., the power values below a certain threshold. Then, we determine the maximum transmission rate by incorporating the Pareto distribution into the rate selection function. Finally, we validate the selected rate by computing the resulting error probability. Based on the data collected within the engine compartment of Fiat Linea, we demonstrate the superior performance of the proposed methodology in determining the maximum transmission rate compared to the traditional extrapolation-based approaches. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 420,945 |
2204.10362 | Human Preferences as Dueling Bandits | The dramatic improvements in core information retrieval tasks engendered by neural rankers create a need for novel evaluation methods. If every ranker returns highly relevant items in the top ranks, it becomes difficult to recognize meaningful differences between them and to build reusable test collections. Several recent papers explore pairwise preference judgments as an alternative to traditional graded relevance assessments. Rather than viewing items one at a time, assessors view items side-by-side and indicate the one that provides the better response to a query, allowing fine-grained distinctions. If we employ preference judgments to identify the probably best items for each query, we can measure rankers by their ability to place these items as high as possible. We frame the problem of finding best items as a dueling bandits problem. While many papers explore dueling bandits for online ranker evaluation via interleaving, they have not been considered as a framework for offline evaluation via human preference judgments. We review the literature for possible solutions. For human preference judgments, any usable algorithm must tolerate ties, since two items may appear nearly equal to assessors, and it must minimize the number of judgments required for any specific pair, since each such comparison requires an independent assessor. Since the theoretical guarantees provided by most algorithms depend on assumptions that are not satisfied by human preference judgments, we simulate selected algorithms on representative test cases to provide insight into their practical utility. Based on these simulations, one algorithm stands out for its potential. Our simulations suggest modifications to further improve its performance. Using the modified algorithm, we collect over 10,000 preference judgments for submissions to the TREC 2021 Deep Learning Track, confirming its suitability. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 292,753 |
2406.12319 | The Comparative Trap: Pairwise Comparisons Amplifies Biased Preferences of LLM Evaluators | As large language models (LLMs) are increasingly used as evaluators for natural language generation tasks, ensuring unbiased assessments is essential. However, LLM evaluators often display biased preferences, such as favoring verbosity and authoritative tones. Our empirical analysis reveals that these biases are exacerbated in pairwise evaluation, where LLMs directly compare two outputs and easily prioritize superficial attributes. In contrast, pointwise evaluation, which assesses outputs independently, is less susceptible to such bias because each output is judged in isolation. To address the limitations of the pairwise evaluation, we introduce a novel evaluation method, PRePair, which integrates pointwise reasoning within a pairwise framework. PRePair effectively alleviates biased preference, improving performance on the adversarial benchmark (LLMBar) while outperforming pointwise evaluation on the standard benchmark (MT-Bench). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 465,342 |
1312.1020 | High-quality Image Restoration from Partial Mixed Adaptive-Random Measurements | A novel framework to construct an efficient sensing (measurement) matrix, called mixed adaptive-random (MAR) matrix, is introduced for directly acquiring a compressed image representation. The mixed sampling (sensing) procedure hybridizes adaptive edge measurements extracted from a low-resolution image with uniform random measurements predefined for the high-resolution image to be recovered. The mixed sensing matrix seamlessly captures important information of an image, and meanwhile approximately satisfies the restricted isometry property. To recover the high-resolution image from MAR measurements, the total variation algorithm based on the compressive sensing theory is employed for solving the Lagrangian regularization problem. Both peak signal-to-noise ratio and structural similarity results demonstrate that the MAR sensing framework shows much better recovery performance than the completely random sensing one. The work is particularly helpful for high-performance and low-cost data acquisition. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 28,830 |
1812.07909 | An Empirical Study of Generative Models with Encoders | Generative adversarial networks (GANs) are capable of producing high quality image samples. However, unlike variational autoencoders (VAEs), GANs lack encoders that provide the inverse mapping for the generators, i.e., encode images back to the latent space. In this work, we consider adversarially learned generative models that also have encoders. We evaluate models based on their ability to produce high quality samples and reconstructions of real images. Our main contributions are twofold: First, we find that the baseline Bidirectional GAN (BiGAN) can be improved upon with the addition of an autoencoder loss, at the expense of an extra hyper-parameter to tune. Second, we show that comparable performance to BiGAN can be obtained by simply training an encoder to invert the generator of a normal GAN. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,905 |
2411.17711 | AnyECG: Foundational Models for Electrocardiogram Analysis | Electrocardiogram (ECG), a non-invasive and affordable tool for cardiac monitoring, is highly sensitive in detecting acute heart attacks. However, due to the lengthy nature of ECG recordings, numerous machine learning methods have been developed for automated heart disease detection to reduce human workload. Despite these efforts, performance remains suboptimal. A key obstacle is the inherent complexity of ECG data, which includes heterogeneity (e.g., varying sampling rates), high levels of noise, demographic-related pattern shifts, and intricate rhythm-event associations. To overcome these challenges, this paper introduces AnyECG, a foundational model designed to extract robust representations from any real-world ECG data. Specifically, a tailored ECG Tokenizer encodes each fixed-duration ECG fragment into a token and, guided by proxy tasks, converts noisy, continuous ECG features into discrete, compact, and clinically meaningful local rhythm codes. These codes encapsulate basic morphological, frequency, and demographic information (e.g., sex), effectively mitigating signal noise. We further pre-train the AnyECG to learn rhythmic pattern associations across ECG tokens, enabling the capture of cardiac event semantics. By being jointly pre-trained on diverse ECG data sources, AnyECG is capable of generalizing across a wide range of downstream tasks where ECG signals are recorded from various devices and scenarios. Experimental results in anomaly detection, arrhythmia detection, corrupted lead generation, and ultra-long ECG signal analysis demonstrate that AnyECG learns common ECG knowledge from data and significantly outperforms cutting-edge methods in each respective task. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 511,559 |
2011.08018 | High-level Prior-based Loss Functions for Medical Image Segmentation: A Survey | Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for supervised medical image segmentation, across various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To mitigate this effect, recent research works have focused on incorporating spatial information or prior knowledge to enforce anatomically plausible segmentation. While the integration of prior knowledge in image segmentation is not a new topic in classical optimization approaches, it is today an increasing trend in CNN-based image segmentation, as shown by the growing literature on the topic. In this survey, we focus on high-level priors embedded at the loss-function level. We categorize the articles according to the nature of the prior: the object shape, size, topology, and the inter-region constraints. We highlight strengths and limitations of current approaches, discuss the challenges related to the design and integration of prior-based losses and the associated optimization strategies, and draw future research directions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 206,748 |
2004.08994 | Adversarial Training for Large Neural Language Models | Generalization and robustness are both key desiderata for designing machine learning methods. Adversarial training can enhance robustness, but past work often finds it hurts generalization. In natural language processing (NLP), pre-training large neural language models such as BERT have demonstrated impressive gain in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. However, these models are still vulnerable to adversarial attacks. In this paper, we show that adversarial pre-training can improve both generalization and robustness. We propose a general algorithm ALUM (Adversarial training for large neural LangUage Models), which regularizes the training objective by applying perturbations in the embedding space that maximizes the adversarial loss. We present the first comprehensive study of adversarial training in all stages, including pre-training from scratch, continual pre-training on a well-trained model, and task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide range of NLP tasks, in both regular and adversarial scenarios. Even for models that have been well trained on extremely large text corpora, such as RoBERTa, ALUM can still produce significant gains from continual pre-training, whereas conventional non-adversarial methods can not. ALUM can be further combined with task-specific fine-tuning to attain additional gains. The ALUM code is publicly available at https://github.com/namisan/mt-dnn. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 173,226 |
2212.09981 | Benchmarking person re-identification datasets and approaches for practical real-world implementations | Recently, Person Re-Identification (Re-ID) has received a lot of attention. Large datasets containing labeled images of various individuals have been released, allowing researchers to develop and test many successful approaches. However, when such Re-ID models are deployed in new cities or environments, the task of searching for people within a network of security cameras is likely to face an important domain shift, thus resulting in decreased performance. Indeed, while most public datasets were collected in a limited geographic area, images from a new city present different features (e.g., people's ethnicity and clothing style, weather, architecture, etc.). In addition, the whole frames of the video streams must be converted into cropped images of people using pedestrian detection models, which behave differently from the human annotators who created the dataset used for training. To better understand the extent of this issue, this paper introduces a complete methodology to evaluate Re-ID approaches and training datasets with respect to their suitability for unsupervised deployment for live operations. This method is used to benchmark four Re-ID approaches on three datasets, providing insight and guidelines that can help to design better Re-ID pipelines in the future. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 337,282 |
2109.04641 | Learning to Teach with Student Feedback | Knowledge distillation (KD) has gained much attention due to its effectiveness in compressing large-scale pre-trained models. In typical KD methods, the small student model is trained to match the soft targets generated by the big teacher model. However, the interaction between student and teacher is one-way. The teacher is usually fixed once trained, resulting in static soft targets to be distilled. This one-way interaction leaves the teacher unable to perceive the characteristics of the student and its training progress. To address this issue, we propose Interactive Knowledge Distillation (IKD), which also allows the teacher to learn to teach from the feedback of the student. In particular, IKD trains the teacher model to generate a specific soft target at each training step for a given student. Joint optimization for both teacher and student is achieved by two iterative steps: a course step that optimizes the student with the teacher's soft target, and an exam step that optimizes the teacher with the student's feedback. IKD is a general framework that is orthogonal to most existing knowledge distillation methods. Experimental results show that IKD outperforms traditional KD methods on various NLP tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 254,477 |
1906.05497 | Deep Network Approximation Characterized by Number of Neurons | This paper quantitatively characterizes the approximation power of deep feed-forward neural networks (FNNs) in terms of the number of neurons. It is shown by construction that ReLU FNNs with width $\mathcal{O}\big(\max\{d\lfloor N^{1/d}\rfloor,\, N+1\}\big)$ and depth $\mathcal{O}(L)$ can approximate an arbitrary H\"older continuous function of order $\alpha\in (0,1]$ on $[0,1]^d$ with a nearly tight approximation rate $\mathcal{O}\big(\sqrt{d} N^{-2\alpha/d}L^{-2\alpha/d}\big)$ measured in $L^p$-norm for any $N,L\in \mathbb{N}^+$ and $p\in[1,\infty]$. More generally for an arbitrary continuous function $f$ on $[0,1]^d$ with a modulus of continuity $\omega_f(\cdot)$, the constructive approximation rate is $\mathcal{O}\big(\sqrt{d}\,\omega_f( N^{-2/d}L^{-2/d})\big)$. We also extend our analysis to $f$ on irregular domains or those localized in an $\varepsilon$-neighborhood of a $d_{\mathcal{M}}$-dimensional smooth manifold $\mathcal{M}\subseteq [0,1]^d$ with $d_{\mathcal{M}}\ll d$. Especially, in the case of an essentially low-dimensional domain, we show an approximation rate $\mathcal{O}\big(\omega_f(\tfrac{\varepsilon}{1-\delta}\sqrt{\tfrac{d}{d_\delta}}+\varepsilon)+\sqrt{d}\,\omega_f(\tfrac{\sqrt{d}}{(1-\delta)\sqrt{d_\delta}}N^{-2/d_\delta}L^{-2/d_\delta})\big)$ for ReLU FNNs to approximate $f$ in the $\varepsilon$-neighborhood, where $d_\delta=\mathcal{O}\big(d_{\mathcal{M}}\tfrac{\ln (d/\delta)}{\delta^2}\big)$ for any $\delta\in(0,1)$ as a relative error for a projection to approximate an isometry when projecting $\mathcal{M}$ to a $d_{\delta}$-dimensional domain. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 135,047 |
1903.00710 | Lie-algebraic connections between two classes of risk-sensitive performance criteria for linear quantum stochastic systems | This paper is concerned with the original risk-sensitive performance criterion for quantum stochastic systems and its recent quadratic-exponential counterpart. These functionals are of different structure because of the noncommutativity of quantum variables and have their own useful features, such as tractability of evolution equations and robustness properties. We discuss a Lie-algebraic connection between these two classes of cost functionals for open quantum harmonic oscillators using an apparatus of complex Hamiltonian kernels and symplectic factorizations. These results are aimed at extending useful properties from one of the classes of risk-sensitive costs to the other, and at developing state-space equations for computation and optimization of these criteria in quantum robust control and filtering problems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 123,076 |